Columns: id, title, abstract, authors, published_date, link, markdown
2304.01042
DivClust: Controlling Diversity in Deep Clustering
Clustering has been a major research topic in the field of machine learning, one to which Deep Learning has recently been applied with significant success. However, an aspect of clustering that is not addressed by existing deep clustering methods, is that of efficiently producing multiple, diverse partitionings for a given dataset. This is particularly important, as a diverse set of base clusterings are necessary for consensus clustering, which has been found to produce better and more robust results than relying on a single clustering. To address this gap, we propose DivClust, a diversity controlling loss that can be incorporated into existing deep clustering frameworks to produce multiple clusterings with the desired degree of diversity. We conduct experiments with multiple datasets and deep clustering frameworks and show that: a) our method effectively controls diversity across frameworks and datasets with very small additional computational cost, b) the sets of clusterings learned by DivClust include solutions that significantly outperform single-clustering baselines, and c) using an off-the-shelf consensus clustering algorithm, DivClust produces consensus clustering solutions that consistently outperform single-clustering baselines, effectively improving the performance of the base deep clustering framework.
Ioannis Maniadis Metaxas, Georgios Tzimiropoulos, Ioannis Patras
2023-04-03T14:45:43Z
http://arxiv.org/abs/2304.01042v1
# DivClust: Controlling Diversity in Deep Clustering ###### Abstract Clustering has been a major research topic in the field of machine learning, one to which Deep Learning has recently been applied with significant success. However, an aspect of clustering that is not addressed by existing deep clustering methods, is that of efficiently producing multiple, diverse partitionings for a given dataset. This is particularly important, as a diverse set of base clusterings are necessary for consensus clustering, which has been found to produce better and more robust results than relying on a single clustering. To address this gap, we propose DivClust, a diversity controlling loss that can be incorporated into existing deep clustering frameworks to produce multiple clusterings with the desired degree of diversity. We conduct experiments with multiple datasets and deep clustering frameworks and show that: a) our method effectively controls diversity across frameworks and datasets with very small additional computational cost, b) the sets of clusterings learned by DivClust include solutions that significantly outperform single-clustering baselines, and c) using an off-the-shelf consensus clustering algorithm, DivClust produces consensus clustering solutions that consistently outperform single-clustering baselines, effectively improving the performance of the base deep clustering framework. Code is available at [https://github.com/ManiadisG/DivClust](https://github.com/ManiadisG/DivClust). ## 1 Introduction The exponentially increasing volume of visual data, along with advances in computing power and the development of powerful Deep Neural Network architectures, have revived the interest in unsupervised learning with visual data. Deep clustering in particular has been an area where significant progress has been made in the recent years. Existing works focus on producing a single clustering, which is evaluated in terms of how well that clustering matches the ground truth labels of the dataset in question. However, consensus, or ensemble, clustering remains under-studied in the context of deep clustering, despite the fact that it has been found to consistently improve performance over single clustering outcomes [4, 20, 50, 82]. Consensus clustering consists of two stages, specifically generating a set of base clusterings, and then applying a consensus algorithm to aggregate them. Identifying what properties ensembles should have in order to produce better outcomes in each setting has been an open problem [21]. However, research has found that inter-clustering diversity within the ensemble is an important, desirable factor [17, 28, 38, 57], along with individual clustering quality, and that diversity should be moderated [18, 26, 57]. Furthermore, several works suggest that controlling diversity in ensembles is important toward studying its impact and determining its optimal level in each setting [26, 57]. The typical way to produce diverse clusterings is to promote diversity by clustering the data multiple times with different initializations/hyperparameters or subsets of the data [4, 20]. This approach, however, does not guarantee or control the degree of diversity, and is computationally costly, particularly in the context of deep clustering, where it would require the training of multiple models. 
Some methods have been proposed that find diverse clusterings by including diversity-related objectives to the clustering process, but those methods have only been applied to clustering precomputed features and cannot be trivially incorporated into Deep Learning frameworks. Other methods tackle diverse clustering by creating and clustering diverse feature subspaces, including some that apply this approach in the context of deep clustering [54, 69]. Those methods, however, do not control inter-clustering diversity. Rather, they influence it indirectly through the properties of the subspaces they create. Furthermore, typically, existing methods have been focusing on producing orthogonal clusterings or identifying clusterings based on independent attributes of relatively simple visual data (e.g. color/shape). Consequently, they are oriented toward _maximizing_ inter-clustering diversity, which is not appropriate for consensus clustering [18, 26, 57]. To tackle this gap, namely generating multiple clusterings with deep clustering frameworks efficiently and with the desired degree of diversity, we propose DivClust. Our method can be straightforwardly incorporated into existing deep clustering frameworks to learn multiple clusterings whose diversity is _explicitly controlled_. Specifically, the proposed method uses a single backbone for feature extraction, followed by multiple projection heads, each producing cluster assignments for a corresponding clustering. Given a user defined diversity target, in this work expressed in terms of the average NMI between clusterings, DivClust restricts inter-clustering similarity to be below an appropriate, dynamically estimated threshold. This is achieved with a novel loss component, which estimates inter-clustering similarity based on soft cluster assignments produced by the model, and penalizes values exceeding the threshold. Importantly, DivClust introduces minimal computational cost and requires no hyperparameter tuning with respect to the base deep clustering framework, which makes its use simple and computationally efficient. Experiments on four datasets (CIFAR10, CIFAR100, Imagenet-10, Imagenet-Dogs) with three recent deep clustering methods (IIC [41], PICA [36], CC [49]) show that DivClust can effectively control inter-clustering diversity without reducing the quality of the clusterings. Furthermore, we demonstrate that, with the use of an off-the-shelf consensus clustering algorithm, the diverse base clusterings learned by DivClust produce consensus clustering solutions that outperform the base frameworks, effectively improving them with minimal computational cost. Notably, despite the sensitivity of consensus clustering to the properties of the ensemble, our method is robust across various diversity levels, outperforming baselines in most settings, often by large margins. Our work then provides a straightforward way for improving the performance of deep clustering frameworks, as well as a new tool for studying the impact of diversity in deep clustering ensembles [57]. In summary, DivClust: a) can be incorporated in existing deep clustering frameworks in a plug-and-play way with very small computational cost, b) can explicitly and effectively control inter-clustering diversity to satisfy user-defined targets, and c) learns clusterings that can improve the performance of deep clustering frameworks via consensus clustering. ## 2 Related Works ### Deep Clustering The term deep clustering refers to methods that cluster data while learning their features. 
They are generally divided into two categories, namely those that alternate training between clustering and feature learning and those that train both simultaneously. **Alternate learning**: Methods following this approach generally utilize a two-step training regime repeated in regular intervals (e.g. per-epoch or per-step). First, sample pseudo-labels are produced based on representations extracted by the model (e.g. by feature clustering). Second, those pseudo-labels are utilized to improve the learned representations, typically by training the feature extraction model as a classifier. Those methods include DEC [72], DAC [10], DCCM [71], DDC [9], JULE [74], SCAN [64], ProPos [37] and SPICE [55], as well as DSC-N [40], IDFD [73] and MIXEM [65], which propose ways to train models whose representations produce better outcomes when clustered. Other works in this area are DeepCluster [7], SeLa [1], PCL [47] and HCSC [25], though their primary focus is on representation learning. **Simultaneous learning**: These methods jointly learn features and cluster assignments. They include ADC [27], IIC [41] and PICA [36], which train clustering models end-to-end with loss functions that enforce desired properties on the clusters assignments, ConCURL [14, 59], which builds on BYOL [23] with a loss maximizing the agreement of clusterings from transformed embeddings, DCCS [80], which leverages an adversarial component in the clustering process, and GatCluster [56], which proposes an attention Figure 1: Overview of DivClust. Assuming clusterings \(A\) and \(B\), the proposed diversity loss \(L_{div}\) calculates their similarity matrix \(S_{AB}\) and restricts the similarity between cluster pairs to be lower than a similarity upper bound \(d\). In the figure, this is represented by the model adjusting the cluster boundaries to produce more diverse clusterings. Best seen in color. mechanism combined with four self-learning tasks. Finally, methods such as SCL [35], CC [49], GCC [81], TCC [60] and MiCE [63] leverage contrastive learning. Although some deep clustering methods [1, 41, 59] use multiple clusterings, most do not explore the prevalence and impact of inter-clustering diversity, and none proposes ways to control it. Our work is, to the best of our knowledge, the first that addresses both issues. ### Diverse Clustering The most straightforward way of producing multiple, diverse clusterings is clustering the data multiple times. Typical methods to increase diversity include varying the clustering algorithm or its hyperparameters, using different initializations, and clustering a subset of the samples or features [4]. This approach, however, is a) computationally costly, in that it requires clustering the data multiple times, b) unreliable, as some ways to increase diversity might decrease the quality of clusters (e.g. using a subset of the data), and c) ineffective, as there is no guarantee that the desired degree of diversity will be achieved. To tackle this, several methods have been proposed to create multiple, diverse clusterings [29]. We identify two main approaches of promoting inter-clustering diversity: a) explicitly, by optimizing for appropriate objectives, and b) implicitly, by optimizing for decorrelated/orthogonal feature subspaces, which, when clustered, lead to diverse clusterings. Methods in the first category include COALA [2], Meta Clustering [8], Dec-kmeans [39], MNMF [75], MSC [30], ADFT [12] and MultiCC [68]. 
Subspace clustering methods include MISC [67], ISAAC [77], NRkmeans [53], RAOSC [79] and ENRC [54]. Distinctly, diverse clustering has also been explored in the context of multi-view data by OSC [11], MVMC [76], DMSMF [52], DMClusts [70], DiMSC [6], and DiMVMC [69]. To the best of our knowledge, except for DiMVMC and ENRC, _none_ of the existing methods are compatible with Deep Learning, they require a learned feature space on which to be applied, and most have quadratic complexity relative to the number of samples. This restricts their use on real-life high dimensional data, where deep clustering produces better outcomes [63, 64]. Regarding DiMVMC and ENRC, they depend on autoencoder-based architectures and adapting them to more recent deep clustering frameworks, which perform significantly better, is not trivial. More importantly, they utilize subspace clustering, inheriting its limitations regarding controlling diversity. Specifically: a) no method has been proposed to infer _how_ different the subspaces must be in order to lead to a _specific_ degree of inter-clustering diversity, and b) subspace clustering methods inherit the randomness of the clustering algorithm applied to the subspaces (K-means for DiMVMC and ENRC), which further limits their control over the outcomes. ### Consensus Clustering The performance of clustering algorithms varies depending on the data and their properties, the algorithm itself, and its hyperparameters. This makes finding reliable clustering solutions particularly difficult. Consensus, or ensemble, clustering has emerged as a solution to this problem, specifically by combining the results of multiple, different clusterings, rather than relying on a single solution. This has been found to produce better and more robust outcomes than single-clustering approaches [4, 20, 21, 50]. The process of consensus clustering happens in two stages: a) multiple, diverse base clusterings are generated and b) those clusterings are aggregated using a consensus algorithm. **Generating diverse clusterings:** The properties of the set of clusterings used by the consensus algorithm is a key factor for obtaining good performance. Multiple works [28, 45, 57] have found that both the quality of individual base clusterings and their diversity is critical, and that, indeed, clustering ensembles with a moderate degree of diversity lead to better outcomes [24, 26, 18]. Typical methods for ensemble generation include using different clustering algorithms [16], using different initializations of the same clustering algorithm or different hyperparameters (e.g. the number of clusters) [19, 26, 46], clustering with different subsets of the features [61], using random projections to diversify the feature space [17], and clustering with different subsets of the dataset [15, 16]. However, concrete methods for identifying optimal hyperparameters, such as the degree of diversity, the number of clusterings in the ensemble, and the method by which the ensemble is generated, remain elusive. **Consensus algorithms:** Consensus algorithms aim to aggregate multiple, diverse clusterings to produce a single, robust solution. Various approaches to this problem have been proposed, such as using matrix factorization [48], distance minimization between clusterings [84], utilizing multiple views [62], graph learning [83, 32, 82] and matrix co-association [33, 42]. 
We note that, while improving consensus algorithms increases the robustness of consensus clustering overall, the stages of ensemble generation and its aggregation with consensus algorithms are largely independent. **Consensus Clustering & Deep Learning:** Despite the established advantages of consensus clustering over single-clustering approaches, it has not been explored in the context of deep clustering. A possible reason is the computational cost of generating multiple, diverse base clusterings, which would require training multiple models. The only work that has, to the best of our knowledge, applied consensus clustering in the deep clustering setting is DeepCluE [31]. Notably, however, the base clusterings used by DeepCluE are not all learned by the model. Rather, a single-clustering model is trained, and an ensemble is generated by clustering features from multiple layers of the model with U-SPEC [34]. Our work addresses this gap by proposing a way to train a single deep clustering model to generate multiple clusterings with controlled diversity and with minimal computational overhead. ## 3 Method **Overview:** Our method consists of two components: a) a novel loss function that can be incorporated in deep clustering frameworks to control inter-clustering diversity by applying a threshold to cluster-wise similarities, and b) a method for dynamically estimating that threshold so that the clusterings learned by the model are sufficiently diverse, according to a user-defined metric. More concretely, we assume a deep clustering model that learns \(K\) clusterings (typically a backbone encoder followed by \(K\) projection heads), a deep clustering framework and its loss function \(L_{main}\), and a diversity target \(D^{T}\) set by the user, expressed as an upper bound to inter-clustering similarity\({}^{1}\) (i.e. the maximum acceptable similarity). In order to control the inter-clustering similarity \(D^{R}\) of the learned clusterings so that \(D^{R}\leq D^{T}\), we propose a complementary loss \(L_{div}\). Specifically, given soft cluster assignments for a pair of clusterings \(A,B\in K\), we define the inter-clustering similarity matrix \(S_{AB}\in\mathbb{R}^{C_{A}\times C_{B}}\), where \(C_{A}\) and \(C_{B}\) are the numbers of clusters in the two clusterings, and \(S_{AB}(i,j)\in[0,1]\) measures the similarity between clusters \(i\in C_{A}\) and \(j\in C_{B}\). It follows that decreasing the values of \(S_{AB}\) reduces the similarity between the clusters of \(A\) and \(B\), and therefore increases their diversity. Accordingly, \(L_{div}\) utilizes \(S_{AB}\) in order to restrict inter-clustering similarity to be under an upper similarity bound \(d\). The value of \(d\) is dynamically adjusted during training, decreasing when \(D^{R}>D^{T}\) and increasing when \(D^{R}\leq D^{T}\), thereby tightening and relaxing the loss function so that, overall and throughout training, inter-clustering similarity \(D^{R}\) remains at or under the desired level \(D^{T}\). Footnote 1: It is trivial to modify our formulation to enforce a lower bound. However, experiments (see Sec. 4.2) showed that, when learning multiple clusterings, deep clustering frameworks inherently tend to converge to near-identical solutions, which made the lower bound scenario redundant. 
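To make the setup above concrete, the following is a minimal PyTorch sketch of the kind of model DivClust assumes: a shared backbone \(f\) followed by \(K\) projection heads producing soft cluster assignments. The toy backbone, layer sizes, and hyperparameter values are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class MultiHeadClusteringModel(nn.Module):
    """Backbone encoder f followed by K projection heads h_1..h_K.

    Head k maps the shared features f(x) to soft assignments over C clusters,
    i.e. p_k(x) is a probability vector of length C.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_clusterings: int = 20, num_clusters: int = 10):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_clusters) for _ in range(num_clusterings)]
        )

    def forward(self, x):
        z = self.backbone(x)                      # f(x): shared features
        # Soft cluster assignments per clustering: list of (batch, C) tensors
        return [torch.softmax(head(z), dim=1) for head in self.heads]

# Toy usage with a placeholder backbone (illustrative only)
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = MultiHeadClusteringModel(backbone, feat_dim=128)
assignments = model(torch.randn(8, 3, 32, 32))    # K tensors of shape (8, 10)
```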
**Defining the inter-clustering similarity matrix:** Our method assumes a standard deep clustering architecture, consisting of an encoder \(f\), followed by \(K\) projection heads \(h_{1},...,h_{K}\), each of which produces assignments for a clustering \(k\). Specifically, let \(X\) be a set of \(N\) unlabeled samples. The encoder maps each sample \(x\in X\) to a representation \(f(x)\), and each projection head \(h_{k}\) maps \(f(x)\) to \(C_{k}\) clusters, so that \(p_{k}(x)=h_{k}(f(x))\in\mathbb{R}^{C_{k}\times 1}\) represents a probability assignment vector mapping sample \(x\in X\) to \(C_{k}\) clusters in clustering \(k\). Without loss of generality, we assume that \(C=C_{k}\forall k\in K\). Each clustering can then be represented by a cluster assignment matrix \(P_{k}(X)=[p_{k}(x_{1}),p_{k}(x_{2}),...,p_{k}(x_{N})]\in\mathbb{R}^{C\times N}\). The column \(p_{k}(n)\), that is the probability assignment vector for the \(n\)-th sample, encodes the degrees to which sample \(x_{n}\) is assigned to different clusters. The row vector \(q_{k}(i)\ \in\mathbb{R}^{N}\) shows which samples are softly assigned to cluster \(i\in C\). We refer to \(q_{k}(i)\) as the cluster membership vector. To quantify the similarity between clusterings \(A\) and \(B\) we define the inter-clustering similarity matrix \(S_{AB}\in\mathbb{R}^{C\times C}\). We define each element \(S_{AB}(i,j)\) as the cosine similarity between the cluster membership vector \(q_{A}(i)\) of cluster \(i\in A\) and the cluster membership vector \(q_{B}(j)\) of cluster \(j\in B\): \[S_{AB}(i,j)=\frac{q_{A}(i)\cdot q_{B}(j)}{||q_{A}(i)||_{2}||q_{B}(j)||_{2}} \tag{1}\] This measure expresses the degree to which samples in the dataset are assigned similarly to clusters \(i\) and \(j\). Specifically, \(S_{AB}(i,j)=0\) if \(q_{A}(i)\perp q_{B}(j)\) and \(S_{AB}(i,j)=1\) if \(q_{A}(i)=q_{B}(j)\). It is, therefore, a differentiable measure of the similarity of clusters \(i\) and \(j\). **Defining the loss function:** Based on the inter-clustering similarity matrix \(S_{AB}\), we define DivClust's loss to softly enforce that a clustering \(A\) does not have an _aggregate_ cluster similarity with a clustering \(B\) greater than a similarity upper bound \(d\). The aggregate similarity \(S_{AB}^{aggr}\) is defined as the average similarity of clustering \(A\)'s clusters with their most similar cluster of clustering \(B\) (Eq. (2)). Using this metric, we propose \(L_{div}\) (Eq. (3)), a loss that regulates diversity between clusterings \(A\) and \(B\) by forcing that \(S_{AB}^{aggr}<d\), for \(d\in[0,1]\). It is clear from Eq. (3) that \(S_{AB}^{aggr}<d\Rightarrow L_{div}(A,B)=0\), in which case the diversity requirement is satisfied and the loss has no impact. Conversely, \(S_{AB}^{aggr}\geq d\Rightarrow L_{div}(A,B)>0\), in which case the loss requires that inter-clustering similarity decreases. \[S_{AB}^{aggr}=\frac{1}{C}\sum_{i=1}^{C}\underset{j}{max}(S_{AB}(i,j)) \tag{2}\] \[L_{div}(A,B)=\left[S_{AB}^{aggr}-d\right]_{+} \tag{3}\] Having defined the diversity loss \(L_{div}\) between two clusterings, we extend it to multiple clusterings \(K\) and combine it with the base deep clustering framework's objective. For a clustering \(k\in K\), we denote with \(L_{main}(k)\) the loss of the base deep clustering framework for that clustering, and with \(L_{div}(k,k^{\prime})\) the diversity controlling loss between clusterings \(k\) and \(k^{\prime}\). We present the joint loss \(L_{joint}(k)\) for each clustering \(k\) in Eq. 
(4), where \(L_{main}(k)\) depends on cluster assignment matrix \(P_{k}\), while \(L_{div}(k,k^{\prime})\) depends on \(P_{k}\) and \(P_{k^{\prime}}\). Accordingly, the model's training loss \(L_{total}\), seen in Eq. (5), is the average of \(L_{joint}\) over all clusterings. \[L_{joint}(k)=L_{main}(k)+\frac{1}{K-1}\sum_{k^{\prime}=1,k^{\prime}\neq k}^{K }L_{div}(k,k^{\prime}) \tag{4}\] \[L_{total}=\frac{1}{K}\sum_{k=1}^{K}L_{joint}(k) \tag{5}\] The loss \(L_{total}\) is therefore a combination of the base deep clustering framework's loss \(L_{main}\) for each clustering \(k\in K\) and the loss \(L_{div}\), which is used to control inter-clustering diversity. The proposed loss formulation is applicable to any deep clustering framework that produces cluster assignments through the model (as opposed to frameworks using offline methods such as MIX'EM [65]), which covers the majority of deep clustering frameworks outlined in Sec. 2. **Dynamic upper bound \(d\):** The proposed loss \(L_{div}\) controls inter-clustering diversity by restricting the values of \(S_{AB}\) according to the similarity upper bound \(d\). However, the values of \(S_{AB}\) are calculated based on the cosine similarity of _soft_ cluster assignments. This means that pairs of cluster assignment vectors \(i\), \(j\) will have different similarity values \(S_{AB}(i,j)\) depending on their sharpness, even if they point to the same cluster in terms of their corresponding hard assignment. It follows that \(S_{AB}\) and, accordingly, the impact of \(d\), are dependent on the confidence of cluster assignments and vary throughout training and between experiments (as factors like the number of clusters and model capacity influence the confidence of cluster assignments). Therefore, \(d\) is an ambiguous and unintuitive metric for users to define diversity targets with. To tackle this issue and to provide a reliable and intuitive method for defining diversity objectives, we propose dynamically determining the value of the threshold \(d\) during training. Concretely, let \(D\) be an inter-clustering similarity metric chosen by the user. In this work, we use avg. Normalized Mutual Information (NMI), a well established metric for estimating inter-clustering similarity. \[D=\frac{1}{(K-1)(K/2)}\sum_{k=1}^{K-1}\sum_{k^{\prime}=k+1}^{K}NMI(P_{k}^{h}, P_{k^{\prime}}^{h}) \tag{6}\] where \(P_{k}^{h}\in\mathbb{Z}^{N}\) is the hard cluster assignment vector for \(N\) samples in clustering \(k\in K\) and \(NMI(P_{k}^{h},P_{k^{\prime}}^{h})\) represents the NMI between \(k\) and \(k^{\prime}\). \(D\in[0,1]\), with higher values indicating more similar clusterings. Assuming a user-defined similarity target \(D^{T}\), expressed as a value of metric \(D\), we denote with \(D^{R}\) the measured inter-clustering similarity of the clusterings learned by the model, expressed in the same metric. DivClust's objective is to control inter-clustering diversity, which translates to learning clusterings such that \(D^{R}\leq D^{T}\). Accordingly, appropriate thresholds \(d\) must be used during training. Under the assumption that \(D^{R}\) decreases monotonically w.r.t. \(d\), we propose the following update rule for \(d\): \[d_{s+1}=\begin{cases}max(d_{s}(1-m),0),&\text{if }D^{R}>D^{T}\\ min(d_{s}(1+m),1),&\text{if }D^{R}\leq D^{T}\end{cases}, \tag{7}\] where \(d_{s}\) and \(d_{s+1}\) are the values of the threshold \(d\) for the current and the next steps, and \(m\in(0,1)\) regulates the magnitude of the update steps. 
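As a reference for Eqs. (1)-(5) and the update rule of Eq. (7), the sketch below computes the similarity matrix \(S_{AB}\), the aggregate similarity, and the losses \(L_{div}\), \(L_{joint}\) and \(L_{total}\) from batched soft assignments, together with the multiplicative adjustment of the bound \(d\). It is a simplified illustration (per-batch tensors of shape \((N, C)\), all clusterings sharing the same \(C\)), not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def diversity_loss(p_a: torch.Tensor, p_b: torch.Tensor, d: float) -> torch.Tensor:
    """Diversity loss between two clusterings, Eqs. (1)-(3).

    p_a, p_b: soft assignments of shape (N, C) for the same N samples.
    d: current similarity upper bound in [0, 1].
    """
    q_a = F.normalize(p_a.t(), dim=1)          # (C, N) cluster membership vectors
    q_b = F.normalize(p_b.t(), dim=1)
    s_ab = q_a @ q_b.t()                       # (C, C) cosine similarities, Eq. (1)
    s_aggr = s_ab.max(dim=1).values.mean()     # aggregate similarity, Eq. (2)
    return torch.clamp(s_aggr - d, min=0.0)    # hinge above the bound d, Eq. (3)

def total_loss(main_losses, assignments, d):
    """Average of the joint losses over the K clusterings, Eqs. (4)-(5)."""
    K = len(assignments)
    joint = []
    for k in range(K):
        div = sum(diversity_loss(assignments[k], assignments[j], d)
                  for j in range(K) if j != k) / (K - 1)
        joint.append(main_losses[k] + div)
    return sum(joint) / K

def update_threshold(d, D_R, D_T, m=0.01):
    """Multiplicative update of the similarity bound d, Eq. (7)."""
    return max(d * (1 - m), 0.0) if D_R > D_T else min(d * (1 + m), 1.0)
```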
Following this update rule, we decrease \(d\) when the measured inter-clustering similarity \(D^{R}\) needs to decrease, and increase it otherwise. For computational efficiency, instead of calculating \(D^{R}\) over the entire dataset in every training step, we do so every 20 iterations on a memory bank of \(M=10,000\) cluster assignments - the latter is updated at every step in a FIFO manner. We set the hyperparameter \(m\) to \(m=0.01\) in all experiments. ## 4 Experiments We conduct several experiments to evaluate DivClust's adaptability, its effectiveness in controlling diversity, and the quality of the resulting clusterings. First, to show that DivClust effectively controls inter-clustering diversity and produces high quality clusterings with various frameworks, we combine it with IIC [41], PICA [36] and CC [49], and apply it to CIFAR10 with various diversity targets \(D^{T}\). Subsequently, we focus on the best framework of the Figure 2: Examples of synthetic cluster assignments \(P_{A}\), \(P_{B}\) and similarity matrix \(S_{AB}\). Note that clusters \(i\in A\) and \(j\in B\) are softly assigned the same samples. Correspondingly, their similarity score \(S_{AB}(i,j)\) is high (highlighted with red in Fig. 1(c)). Best seen in color. three, namely CC, and conduct experiments on 4 datasets (CIFAR10, CIFAR100, Imagenet-10 and Imagenet-Dogs). Our findings demonstrate that, across frameworks and datasets, DivClust can: a) effectively control diversity and b) improve clustering outcomes over the base frameworks and alternative ensembling methods. ### Experiments setup **Datasets:** We conduct experiments with 4 standard datasets in deep clustering: CIFAR10, CIFAR100 [44] (evaluating on the 20 superclasses), ImageNet-10 and ImageNet-Dogs [10]. **Metrics:** Inter-clustering similarity is measured by averaging the NMI between clusterings to calculate the inter-clustering NMI metric \(D\) (Eq. (6)), with higher values indicating more similar clusterings. We denote with \(D^{T}\) the diversity target set by the user and with \(D^{R}\) the measured inter-clustering similarity after training. When DivClust is applied we want that \(D^{R}\leq D^{T}\). Clustering quality is evaluated based on overlap of the clusterings with the dataset's ground truth labels, using the Accuracy (ACC), Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI) metrics. We also report the avg. cluster assignment confidence (CNF), which measures cluster separability. For all four metrics greater values are better, \(1\) being optimal. **Implementation & Training:** DivClust is incorporated into the base frameworks as described in Sec. 3, by adding DivClust's loss to their objective and duplicating projection heads \(h\) to produce multiple clusterings. The models were trained following the configurations (model architecture, training duration, hyperparameters etc.) suggested in their respective papers [49, 36, 41], unless stated otherwise. PICA and IIC were trained without overclustering. We set the number of clusterings to \(K=20\), following convention in consensus clustering [82], and the number of clusters \(C\) to the number of classes for each dataset, following convention for deep clustering evaluation [49, 36, 41]. 
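For reference, the measurement side described above can be sketched as follows: the inter-clustering NMI metric \(D\) of Eq. (6), computed from hard assignments such as those held in the memory bank. The use of scikit-learn's `normalized_mutual_info_score` is an implementation assumption, not necessarily the authors' choice.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def inter_clustering_nmi(hard_assignments: np.ndarray) -> float:
    """Average pairwise NMI over K clusterings, Eq. (6).

    hard_assignments: integer labels of shape (K, N), e.g. the argmax of the
    soft assignments kept in the memory bank described above.
    """
    K = hard_assignments.shape[0]
    pairs = [(k, j) for k in range(K - 1) for j in range(k + 1, K)]
    scores = [normalized_mutual_info_score(hard_assignments[k], hard_assignments[j])
              for k, j in pairs]
    return float(np.mean(scores))
```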
**Consensus Clustering:** To extract single clustering solutions we examine three methods: a) selecting the clustering \(k\) with the lowest corresponding loss \(L_{main}(k)\) (**DivClust A**), b) using the consensus clustering algorithm SCCBG [82] to aggregate clusterings (**DivClust B**), and c) a combination of the two, where we select the 10 best clusterings with regard to their loss and then apply SCCBG (**DivClust C**). For clarity and space, we present in the paper \begin{table} \begin{tabular}{c|c c|c c c c} \hline \hline Framework & Clusterings & \(D^{T}\) & \(D^{R}\) & CNF & Mean Acc. & Max. Acc. & DivClust Acc. \\ \hline \multirow{6}{*}{IC} & 1 & - & - & 0.997 & 0.442 & 0.442 & 0.442 \\ & 20 & 1. & 0.983 & 0.996 & 0.526 & 0.526 & 0.526 \\ & 20 & 0.95 & 0.939 & **0.998** & 0.531 & 0.537 & 0.533 \\ & 20 & 0.9 & 0.888 & 0.997 & 0.568 & 0.59 & 0.578 \\ & 20 & 0.8 & 0.8 & 0.997 & **0.611** & **0.678** & 0.653 \\ & 20 & 0.7 & 0.694 & 0.996 & 0.566 & 0.637 & **0.685** \\ \hline \multirow{6}{*}{PICA} & 1 & - & - & **0.906** & 0.533 & 0.533 & 0.533 \\ & 20 & 1. & 0.991 & 0.814 & 0.597 & 0.597 & 0.596 \\ & 20 & 0.95 & 0.931 & 0.826 & 0.624 & 0.631 & 0.625 \\ & 20 & 0.9 & 0.891 & 0.841 & **0.648** & 0.665 & 0.652 \\ & 20 & 0.8 & 0.817 & 0.828 & 0.598 & 0.635 & 0.595 \\ & 20 & 0.7 & 0.703 & 0.824 & 0.625 & **0.691** & **0.671** \\ \hline \multirow{6}{*}{CC} & 1 & - & - & **0.936** & 0.764 & 0.764 & 0.764 \\ & 20 & 1. & 0.976 & 0.934 & 0.763 & 0.763 & 0.763 \\ \cline{1-1} & 20 & 0.95 & 0.946 & 0.934 & 0.762 & 0.773 & 0.76 \\ \cline{1-1} & 20 & 0.9 & 0.9 & 0.931 & **0.794** & 0.818 & 0.789 \\ \cline{1-1} & 20 & 0.8 & 0.814 & 0.93 & 0.762 & **0.847** & **0.819** \\ \cline{1-1} & 20 & 0.7 & 0.699 & 0.927 & 0.703 & 0.818 & 0.815 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for IIC, PICA and CC applied on CIFAR10 with DivClust. CNF and Mean Acc. are calculated by averaging the corresponding metrics over all clusterings, while Max Acc. refers to the best performing base clustering. The DivClust Acc. metric measures the accuracy of a consensus clustering produced with the _DivClust C_ method. \begin{table} \begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{\(D^{T}\)} & \multicolumn{4}{c}{\(D^{R}\)} \\ \cline{2-5} & CIFAR10 & CIFAR100 & ImageNet-10 & ImageNet-Dogs \\ \hline 1. & 0.976 & 0.939 & 0.987 & 0.941 \\ 0.95 & 0.946 & 0.926 & 0.948 & 0.945 \\ 0.9 & 0.9 & 0.848 & 0.897 & 0.87 \\ 0.8 & 0.814 & 0.806 & 0.807 & 0.795 \\ 0.7 & 0.699 & 0.705 & 0.696 & 0.702 \\ \hline \hline \end{tabular} \end{table} Table 2: Avg. inter-clustering similarity scores \(D^{R}\) for clustering sets produced by DivClust combined with CC for various diversity targets \(D^{T}\). The objective of DivClust is that \(D^{R}\leq D^{T}\). results only for the hybrid aggregation method **DivClust C**, which we found to be the most robust. Detailed results for all three approaches are provided in supplementary Tab. 5. ### Results Initially, we apply IIC, PICA and CC on CIFAR10, and present the outcomes in Tab. 1. We find that, for all three frameworks, DivClust effectively controls diversity, as \(D^{R}\) is consistently close to or lower than \(D^{T}\). Furthermore, results indicate that DivClust is _necessary_ to produce diverse clusterings in deep clustering frameworks, as, without it, they tend to converge to near identical solutions (when \(D^{T}=1\), \(D^{R}\to 1\)). 
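SCCBG itself is beyond the scope of a short snippet, but the hybrid selection-then-aggregation pipeline can be sketched as below, with a simple co-association consensus (average linkage on one minus the co-association matrix) standing in for the off-the-shelf consensus algorithm. This is an illustrative stand-in under those assumptions and practical only for modest \(N\); it is not the aggregation used to produce the reported numbers.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_from_ensemble(hard_assignments, per_clustering_losses,
                            n_keep=10, n_clusters=10):
    """Keep the n_keep clusterings with the lowest loss, then aggregate them.

    Note: the paper aggregates with SCCBG; here a basic co-association
    consensus is used purely as an illustrative stand-in.
    """
    order = np.argsort(per_clustering_losses)[:n_keep]
    kept = np.asarray(hard_assignments)[order]            # (n_keep, N)
    N = kept.shape[1]
    co = np.zeros((N, N))
    for labels in kept:                                   # co-association matrix
        co += (labels[:, None] == labels[None, :])
    co /= len(kept)
    dist = 1.0 - co                                       # turn agreement into distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust") - 1
```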
Regarding cluster separability, assignment confidence \(CNF\) remains high for various diversity targets \(D^{T}\), despite the increased complexity of optimizing both the main deep clustering loss and DivClust's objective. Finally, we observe that, for most diversity targets \(D^{T}\), the mean and max. accuracy, as well as the consensus clustering accuracy produced by the aggregation method **DivClust C**, increase relative to the single clustering model. Notably, consensus clustering accuracy is higher than the mean clustering accuracy for most cases, which highlights the effectiveness of our approach. We stress that identifying clusterings in the ensemble whose performance matches the mean or max. accuracy is not trivial, which is why consensus clustering is necessary to reach a single clustering solution. Having established that DivClust is effective across frameworks, we focus on CC and apply it on CIFAR10, CIFAR100, ImageNet-Dogs and Imagenet-10. We compare DivClust with the standard implementation of CC, which is trained to learn a single clustering (**CC**), as well as with alternative methods of ensemble clustering. Specifically, we apply the typical methods of ensemble generation by extracting the features learned by the single-clustering CC model and running K-means 20 times on the entire dataset (**CC-Kmeans**), on random subsets of the dataset (**CC-Kmeans/S**) and on random subsets of the feature space (**CC-Kmeans/F**), following [4]. In all three cases, SCCBG \begin{table} \begin{tabular}{c|c|c c c|c c c|c c c|c c} \hline \hline Dataset & \(D^{T}\) & \multicolumn{3}{c|}{CIFAR10} & \multicolumn{3}{c|}{CIFAR100} & \multicolumn{3}{c|}{ImageNet-10} & \multicolumn{3}{c}{ImageNet-Dogs} \\ \hline Metric & NMI & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI \\ \hline K-means [51] & - & 0.087 & 0.229 & 0.049 & 0.084 & 0.130 & 0.028 & 0.119 & 0.241 & 0.057 & 0.55 & 0.105 & 0.020 \\ AC [22] & - & 0.105 & 0.228 & 0.065 & 0.098 & 0.138 & 0.034 & 0.138 & 0.242 & 0.067 & 0.037 & 0.139 & 0.021 \\ NMF [5] & - & 0.081 & 0.190 & 0.034 & 0.079 & 0.118 & 0.026 & 0.132 & 0.230 & 0.065 & 0.044 & 0.118 & 0.016 \\ AE [3] & - & 0.237 & 0.314 & 0.169 & 0.100 & 0.165 & 0.048 & 0.210 & 0.317 & 0.152 & 0.104 & 0.185 & 0.073 \\ DAE [66] & - & 0.251 & 0.297 & 0.163 & 0.111 & 0.151 & 0.046 & 0.206 & 0.304 & 0.138 & 0.104 & 0.190 & 0.078 \\ DCGAN [58] & - & 0.265 & 0.315 & 0.176 & 0.120 & 0.151 & 0.045 & 0.225 & 0.346 & 0.157 & 0.121 & 0.174 & 0.078 \\ DeCNN [78] & - & 0.240 & 0.282 & 0.174 & 0.092 & 0.133 & 0.038 & 0.186 & 0.313 & 0.142 & 0.098 & 0.175 & 0.073 \\ VAE [43] & - & 0.245 & 0.291 & 0.167 & 0.108 & 0.152 & 0.040 & 0.193 & 0.334 & 0.168 & 0.107 & 0.179 & 0.079 \\ JULE [74] & - & 0.192 & 0.272 & 0.138 & 0.103 & 0.137 & 0.033 & 0.175 & 0.300 & 0.138 & 0.054 & 0.138 & 0.028 \\ DEC [72] & - & 0.257 & 0.301 & 0.161 & 0.136 & 0.185 & 0.050 & 0.282 & 0.381 & 0.203 & 0.122 & 0.195 & 0.079 \\ DAC [10] & - & 0.396 & 0.522 & 0.306 & 0.185 & 0.238 & 0.088 & 0.394 & 0.527 & 0.302 & 0.219 & 0.275 & 0.111 \\ ADC [27] & - & - & 0.325 & - & - & 0.160 & - & - & 0.530 & - & - & - & - \\ DDC [9] & - & 0.424 & 0.524 & 0.329 & - & - & - & 0.433 & 0.577 & 0.345 & - & - & - \\ DCCM [71] & - & 0.496 & 0.623 & 0.408 & 0.285 & 0.327 & 0.173 & 0.608 & 0.710 & 0.555 & 0.321 & 0.383 & 0.182 \\ IIC [41] & - & - & 0.617 & - & - & 0.257 & - & - & - & - & - & - \\ PICA [36] & - & 0.591 & 0.696 & 0.512 & 0.310 & 0.337 & 0.171 & 0.802 & 0.870 & 0.761 & 0.352 & 0.352 & 0.201 \\ CC [49] & - & 0.705 & 0.790 & 
0.637 & 0.431 & 0.429 & 0.266 & 0.859 & 0.893 & 0.822 & 0.445 & 0.429 & 0.274 \\ \hline CC-Kmeans & - & 0.654 & 0.698 & 0.523 & 0.429 & 0.405 & 0.235 & 0.792 & 0.841 & 0.669 & 0.457 & 0.444 & 0.284 \\ CC-Kmeans/S & - & 0.674 & 0.69 & 0.554 & 0.428 & 0.402 & 0.228 & 0.792 & 0.842 & 0.673 & 0.456 & 0.444 & 0.283 \\ CC-Kmeans/F & - & 0.684 & 0.762 & 0.599 & 0.438 & 0.409 & 0.210 & 0.797 & 0.847 & 0.685 & 0.458 & 0.444 & 0.285 \\ DeepClue [31] & - & **0.727** & 0.764 & 0.646 & **0.472** & **0.457** & **0.288** & 0.882 & 0.924 & 0.856 & 0.448 & 0.416 & 0.273 \\ \hline & 1. & 0.678 & 0.763 & 0.604 & 0.418 & 0.424 & 0.257 & 0.86 & 0.895 & 0.825 & 0.459 & 0.451 & 0.298 \\ & 0.95 & 0.677 & 0.76 & 0.602 & 0.431 & 0.434 & 0.276 & **0.891** & **0.936** & **0.878** & 0.461 & 0.451 & 0.297 \\ **DivClust C** & 0.9 & 0.678 & 0.789 & 0.641 & 0.422 & 0.426 & 0.258 & 0.879 & 0.92 & 0.859 & 0.48 & 0.487 & 0.332 \\ & 0.8 & 0.724 & **0.819** & **0.681** & 0.422 & 0.414 & 0.26 & 0.879 & 0.918 & 0.851 & 0.458 & 0.448 & 0.296 \\ & 0.7 & 0.71 & 0.815 & 0.675 & 0.44 & 0.437 & 0.283 & 0.85 & 0.90 & 0.819 & **0.516** & **0.529** & **0.376** \\ \hline \hline \end{tabular} \end{table} Table 3: Results combining DivClust with CC for various diversity targets \(D^{T}\). We underline DivClust results that outperform the single-clustering baseline CC, and note with **bold** the best results for each metric across all methods and diversity levels. We emphasize that the NMI in this table measures the similarity between the single clustering produced by each method and the ground truth classes. The NMI values representing inter-clustering similarity \(D^{R}\) in ensembles produced by DivClust for the same experiments are presented is used to aggregate the resulting clusterings. Furthermore, we compare with **DeepCluE**[31], to the best of our knowledge the only other work that examines consensus clustering in the context of deep clustering, and which is also built on top of CC, allowing for a fair comparison. We note that DeepCluE is not mutually exclusive with DivClust, and could be used jointly with it. Inter-clustering similarity scores \(D^{R}\) for this set of experiments are presented in Tab. 2, where it is seen that DivClust successfully controls diversity. Results for consensus clustering, the main task for which DivClust is intended, are presented in Tab. 3 for CC across 4 datasets, where we also include results from other deep clustering frameworks for reference. Detailed results, including aggregation methods DivClust A and DivClust B, as well as mean/max scores for DivClust's clustering ensembles, are provided in supplementary Tab. 5. Results in Tab. 3 demonstrate that, for most diversity targets \(D^{T}\), DivClust outperforms the single-clustering baseline CC and typical ensemble generation methods, and is competitive with the alternative consensus clustering method DeepCluE. Notably, DivClust is competitive with the baseline across diversity levels. This robustness is very significant, given that identifying what properties (including diversity) lead to optimal outcomes in clustering ensembles is an open problem [18, 26]. To summarize, the results of Tabs. 
1 to 3 demonstrate that DivClust: a) effectively controls inter-clustering diversity in deep clustering frameworks in accordance with user-defined objectives, b) does not degrade the quality of the clusterings and in fact produces better solutions than single-clustering models, and c), it can be used with consensus clustering to identify single-clustering solutions superior to those of the corresponding single-clustering frameworks. ## 5 Discussion ### Diversity Control & Consensus Clustering Performance Results presented in Sec. 4 demonstrate the effectiveness of DivClust both in controlling inter-clustering diversity and in producing clustering ensembles that lead to consensus clustering outcomes superior to single clustering baselines. Specifically, Tabs. 1 and 2 show that the inter-clustering similarity \(D^{R}\) of ensembles produced by DivClust is consistently lower than the targets \(D^{T}\). In the few cases where \(D^{R}\geq D^{T}\), it is by very small margins (the greatest deviation was +0.017), which may be attributed to our use of a memory bank to estimate \(D^{R}\) for the update rule of the threshold \(d\). Regarding consensus clustering accuracy, despite the sensitivity of consensus clustering to the properties of the ensembles and, specifically, to different inter-clustering diversity levels, our method proves particularly robust to varying the diversity targets \(D^{T}\), outperforming baselines for most settings. This indicates that DivClust learns clusterings with a good quality-diversity trade-off and can be reliably used for consensus clustering. Finally, we note that DivClust's ability to explicitly control inter-clustering diversity can facilitate future research on the impact of diversity in clustering ensembles and toward methodologies that determine desirable diversity levels for specific settings [57]. ### Complexity & Computational Cost The complexity of DivClust's objective is \(O(nK^{2}C^{2})\), where \(n\) is the batch size, \(K\) is the number of clusterings, and \(C\) is the number of clusters in each clustering. Importantly, the cost of DivClust relates only to the computation of the loss and the additional projection heads, and is therefore _fixed_ for fixed \(n\), \(K\), \(C\) values, regardless of model size and data dimensionality, which are generally the computational bottleneck in Deep Learning applications. Therefore, DivClust is scalable to large datasets. Finally, we note that, in practice, the computational overhead introduced by DivClust is minimal. Specifically, for experiments in this work, DivClust learned ensembles with \(K=20\) clusterings, with training time increasing between 10%-50% relative to the time it took to train the baseline single-clustering models. Contrasted with the alternative of training a single-clustering model 20 times (which would not allow for controlling diversity), DivClust provides an efficient approach for applying consensus clustering with deep clustering frameworks. A detailed analysis on complexity and runtimes is provided in supplementary Sec. C. ## 6 Conclusion We introduce DivClust, a method that can be incorporated into existing deep clustering frameworks to learn multiple clusterings while controlling inter-clustering diversity. To the best of our knowledge, this is the first method that can explicitly control inter-clustering diversity based on user-defined targets, and that is compatible with deep clustering frameworks that learn features and clusters end-to-end. 
Our experiments, conducted with multiple datasets and deep clustering frameworks, confirm the effectiveness of DivClust in controlling inter-clustering diversity and its adaptability, in terms of it being compatible with various frameworks without requiring modifications and/or hyperparameter tuning. Furthermore, results demonstrate that DivClust learns high quality clusterings, which, in the context of consensus clustering, lead to improved performance compared to single clustering baselines and alternative ensemble clustering methods. **Acknowledgments:** This work was supported by the EU H2020 AI4Media No. 951911 project.
2302.13910
Impact of reconstruction schemes on interpreting lattice Boltzmann results -- A study using the Taylor-Green vortex problem
In this note, we show how reconstruction schemes can have a significant impact on interpreting lattice Boltzmann simulation data. To reconstruct turbulence quantities, e.g., the kinetic energy dissipation rate and enstrophy, schemes higher than second-order for spatial derivatives can greatly improve the prediction of these quantities in the Taylor-Green vortex problem. In contrast, a second-order reconstruction of the time series data indicates very good accuracy for the kinetic energy dissipation rate. The present findings can be considered as further numerical evidence of the capability of the lattice Boltzmann method to simulate turbulent flows, which is consistent with its proven feature of low numerical diffusion.
Jianping Meng, Xiao-Jun Gu, David R. Emerson
2023-02-27T16:00:31Z
http://arxiv.org/abs/2302.13910v1
# Impact of reconstruction schemes on interpreting lattice Boltzmann results ###### Abstract In this note, we show how reconstruction schemes can have a significant impact on interpreting lattice Boltzmann simulation data. To reconstruct turbulence quantities, e.g., the kinetic energy dissipation rate and enstrophy, schemes higher than second-order for spatial derivatives can greatly improve the prediction of these quantities in the Taylor-Green vortex problem. In contrast, a second-order reconstruction of the time series data indicates very good accuracy for the kinetic energy dissipation rate. The present findings can be considered as further numerical evidence of the capability of the lattice Boltzmann method to simulate turbulent flows, which is consistent with its proven feature of low numerical diffusion. Lattice Boltzmann method Numerical accuracy Taylor-Green vortex ## 1 Introduction Numerical modeling is an important approach for studying complex fluid behavior occurring in both nature and industrial processes. For this purpose, a key step is to design accurate and simple numerical schemes. As an example, low numerical dissipation and dispersion are often desired features for many applications, e.g., direct numerical simulation of turbulence and aeroacoustics [1]. The lattice Boltzmann method (LBM) is an emerging tool suitable for modeling many types of fluid flow [2]. The method originates from lattice gas automata (LGA) [3, 4], and was subsequently identified as a special discrete velocity method for the Boltzmann transport equation (BTE) [5, 6, 7, 8]. Only a minimal set of discrete velocities are kept to retrieve part of the BTE solution domain, typically incompressible or weakly compressible flows at continuum and near-continuum regimes. In addition, it also inherits the simplicity of the LGA, i.e., its particle dynamics algorithm that only requires information from its near neighbors with fixed directions (i.e., no neighborhood search is required). Thus, the method provides an ideal platform for simulating complex fluid phenomena [9] and is ideal for high-performance computing without worrying about complex numerical and computational details. The Chapman-Enskog expansion proves that the incompressible Navier-Stokes (NS) equations are a first-order asymptotic solution, in terms of the Knudsen number (i.e., approaching the continuum limit), of the lattice Boltzmann equation. In particular, the numerical diffusion can be absorbed into the physical viscosity, which leads to very low numerical diffusion error. In [10], it was revealed that LBM demonstrates remarkable lower numerical diffusion and dispersion error, even compared with high-order numerical discretization of the NS equations. This finding indicates that the numerical accuracy of LBM at the macroscopic scale might not be strictly correlated or limited to its formal second-order accuracy measured relative to the BTE. Recently, its accuracy has been numerically assessed against the classical Taylor-Green vortex (TGV) and other benchmark problems associated with turbulent flow regimes [11, 12, 13]. In general, satisfactory accuracy is identified, although there are differences among various collision terms, such as the single-relaxation model, the multiple-relaxation model, and the entropic collision model etc. For the TGV problem, the kinetic energy can be predicted with fairly good accuracy using a mesh of \(128^{3}\). 
However, Mimeau _et al._ observed that a relatively high mesh resolution is required to predict the enstrophy [11], where spatial derivatives of velocity are needed. Interestingly, better accuracy, although with oscillations, is found for the kinetic energy dissipation rate reconstructed from the kinetic energy time series. However, there should be only a constant factor of difference between the enstrophy and the kinetic energy dissipation rate if they are evaluated using spatial derivatives for this particular problem. In LBM simulations, the evolutionary variables are distribution functions and not macroscopic quantities, like density and velocity. Thus, the kinetic energy dissipation rate and enstrophy are purely reconstructed from the calculated velocity data. More specifically, evaluating macroscopic velocity-gradient terms are not involved in LBM simulations, and the reconstruction process is totally independent from the simulation itself. Thus, in principle, there is no restriction on the choice of reconstruction scheme. In previous work, however, the reconstruction process is often conducted with second-order schemes by default e.g., in [11]. In this work, we investigate the impact of the reconstruction scheme on interpreting lattice Boltzmann simulation results, which may explain the reason for different accuracy observed in previous predictions of the kinetic energy dissipation rate and enstrophy. For this purpose, we employ the TGV problem and use various reconstruction schemes based on the same set of simulation data. In the following, we briefly introduce the lattice Boltzmann method and the Taylor-Green vortex problem, and discuss the impact of reconstruction schemes. ## 2 Lattice Boltzmann method To derive the LBM, we can start from the Boltzmann-BGK type kinetic equation [5, 6, 7, 8], \[\frac{\partial f}{\partial t}+\mathbf{C}\cdot\nabla f=-\frac{1}{\tau}\left(f-f^{ eq}\right), \tag{1}\] which describes fluid flows using the distribution function \(f(\mathbf{r},\mathbf{C},t)\) at position \(\mathbf{r}=(x,y,z)\), molecular velocity \(\mathbf{C}=(C_{x},C_{y},C_{z})\), and time, \(t\). The molecular collision is modeled by a relaxation term towards the equilibrium distribution function \[f^{eq}=\rho\left(\frac{1}{2\pi RT}\right)^{D/2}\exp\left[-\frac{(\mathbf{C}-\mathbf{U} )^{2}}{2RT}\right] \tag{2}\] which is determined by the fluid density, \(\rho\), the fluid velocity, \(\mathbf{U}=(u,v,w)\), and the temperature, \(T\). In the continuum and near-continuum limit, Eqs. (1) and (2) can recover the NS equations via the first-order Chapman-Enskog expansion, i.e., the NS equations are a type of asymptotic solution of the kinetic equation in terms of the Knudsen number. In particular, \(\tau\), is related to the fluid viscosity, \(\mu\), and the pressure, \(p\), via the Chapman-Enskog expansion, i.e., \(\mu=p\tau\). Naturally, the Reynolds number can be defined as \(Re=\rho_{0}U_{0}L/\mu_{0}=U_{0}L/(\tau_{0}RT_{0})\), where the subscript \(0\) denotes the reference value and \(L\) the characteristic length of the system. The ideal gas law, \(p_{0}=\rho_{0}RT_{0}\), is used for deriving the form of the Reynolds number. The Knudsen number can also be written as \(\mu_{0}\sqrt{RT_{0}}/(p_{0}L)\). If \(\sqrt{RT_{0}}\) is conveniently considered as the sound speed, although there is a constant factor difference, we have \(Kn=Re/Ma\) where \(Ma=U_{0}/\sqrt{RT_{0}}\). With Eq. 
(2), however, the thermal conductivity is not independent of the viscosity, which can be corrected by using other kinetic equations if necessary. Since there are \(6+1\) degrees of freedom, i.e., the physical space, \(\mathbf{r}\), the molecular velocity space, \(\mathbf{C}\), and the time, Eq. (1) can be very expensive to solve numerically. However, for a broad range of flow problems, it is possible to greatly reduce the complexity by truncating the Maxwellian and discretizing the molecular velocity space by using Gauss-Hermite quadrature. After the discretization in the molecular velocity space, Eq. (1) becomes \[\frac{\partial f_{\alpha}}{\partial t}+\mathbf{C}_{\alpha}\cdot\nabla f_{\alpha}=-\frac{1}{\tau}(f_{\alpha}-f_{\alpha}^{eq}), \tag{3}\] where \(\mathbf{C}_{\alpha}\) is an abscissa of a Gauss-Hermite quadrature. For simulating incompressible and isothermal flows, it is common to use a second-order truncation of the Maxwellian function, i.e., \[f_{\alpha}^{eq}=w_{\alpha}\rho\left[1+\frac{\mathbf{U}\cdot\mathbf{C}_{\alpha}}{RT_{0}}+\frac{1}{2}\frac{(\mathbf{U}\cdot\mathbf{C}_{\alpha})^{2}}{(RT_{0})^{2}}-\frac{\mathbf{U}\cdot\mathbf{U}}{2RT_{0}}\right], \tag{4}\] where \(w_{\alpha}\) is the weight of the Gauss-Hermite quadrature. Moreover, the density and velocity are now calculated by summation operations, i.e., \[\rho=\sum_{\alpha}f_{\alpha}=\sum_{\alpha}f_{\alpha}^{eq}, \tag{5}\] and \[\rho\mathbf{U}=\sum_{\alpha}f_{\alpha}\mathbf{C}_{\alpha}=\sum_{\alpha}f_{\alpha}^{eq}\mathbf{C}_{\alpha}. \tag{6}\] With the obtained density, the pressure can be calculated using \(p=\rho RT_{0}\) in isothermal flows since the temperature is constant, i.e., \(T_{0}\). To numerically solve Eq. (3), a trapezoidal scheme in time [14] can be employed for the right-hand side, while the left-hand side can be integrated analytically over time. In particular, by introducing \[\tilde{f}_{\alpha}=f_{\alpha}+\frac{dt}{2\tau}(f_{\alpha}-f_{\alpha}^{eq}), \tag{7}\] the implicitness of the trapezoidal scheme can be eliminated and we obtain an explicit scheme \[\tilde{f}_{\alpha}(\mathbf{r}+\mathbf{C}_{\alpha}dt,t+dt)-\tilde{f}_{\alpha}(\mathbf{r},t)=-\frac{dt}{\tau+0.5dt}\left[\tilde{f}_{\alpha}(\mathbf{r},t)-f_{\alpha}^{eq}(\mathbf{r},t)\right], \tag{8}\] which is ready for implementing the stream-collision scheme. At the same time, the macroscopic quantities can be calculated as \[\rho=\sum_{\alpha}\tilde{f}_{\alpha},\text{ and }\rho\mathbf{U}=\sum_{\alpha}\mathbf{C}_{\alpha}\tilde{f}_{\alpha}. \tag{9}\] For three-dimensional flows, there are a few commonly used lattices, e.g., D3Q15, D3Q19 and D3Q27. Here, we choose the D3Q19 lattice where there are nineteen discrete velocities (\(\alpha=1..19\)), \[C_{\alpha,x}=\sqrt{3RT_{0}}[0,-1,0,0,-1,-1,-1,-1,0,0,1,0,0,1,1,1,1,0,0], \tag{10}\] \[C_{\alpha,y}=\sqrt{3RT_{0}}[0,0,-1,0,-1,1,0,0,-1,-1,0,1,0,1,-1,0,0,1,1], \tag{11}\] \[C_{\alpha,z}=\sqrt{3RT_{0}}[0,0,0,-1,0,0,-1,1,-1,1,0,0,1,0,0,1,-1,1,-1], \tag{12}\] and the corresponding weights \(w_{\alpha}\) are \[[\frac{1}{3},\frac{1}{18},\frac{1}{18},\frac{1}{18},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{18},\frac{1}{18},\frac{1}{18},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{36},\frac{1}{36}]. \tag{13}\] As previously discussed, we are now ready to implement the stream-collision algorithm. The only trick is to tie the space and time step together as \(d\mathbf{r}=\mathbf{C}_{\alpha}dt\) [2, 8]. 
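As an illustration of the scheme in Eqs. (4)-(13), the following is a minimal NumPy sketch of one stream-collide step of Eq. (8) on a fully periodic D3Q19 grid, written in lattice units (\(dx=dt=1\), \(RT_{0}=1/3\)). It is a toy reference implementation, not the OPS-based code used for the simulations reported here.

```python
import numpy as np

# D3Q19 velocities and weights, following the ordering of Eqs. (10)-(13),
# in lattice units where dx = dt = 1 and RT0 = 1/3.
C = np.array([
    [0, 0, 0], [-1, 0, 0], [0, -1, 0], [0, 0, -1], [-1, -1, 0], [-1, 1, 0],
    [-1, 0, -1], [-1, 0, 1], [0, -1, -1], [0, -1, 1], [1, 0, 0], [0, 1, 0],
    [0, 0, 1], [1, 1, 0], [1, -1, 0], [1, 0, 1], [1, 0, -1], [0, 1, 1], [0, 1, -1]])
W = np.array([1/3] + [1/18]*3 + [1/36]*6 + [1/18]*3 + [1/36]*6)

def equilibrium(rho, u):
    """Second-order truncated Maxwellian, Eq. (4), with RT0 = 1/3."""
    cu = 3.0 * np.einsum("ad,dxyz->axyz", C, u)
    usq = 1.5 * np.sum(u * u, axis=0)
    return W[:, None, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def stream_collide(f, tau):
    """One explicit step of Eq. (8); f holds the transformed populations of Eq. (7)."""
    rho = f.sum(axis=0)                                      # Eq. (9)
    u = np.einsum("ad,axyz->dxyz", C, f) / rho
    f = f - (1.0 / (tau + 0.5)) * (f - equilibrium(rho, u))  # collide
    for a in range(19):                                      # stream along C_alpha
        f[a] = np.roll(f[a], shift=tuple(C[a]), axis=(0, 1, 2))
    return f

# Toy usage: start from equilibrium at rest on an 8^3 grid and advance one step.
f0 = equilibrium(np.ones((8, 8, 8)), np.zeros((3, 8, 8, 8)))
f1 = stream_collide(f0, tau=0.6)
```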
For instance, assuming the system length is \(\mathcal{L}\), we may set the spatial step \(dx=\mathcal{L}/N\) and then \(dt=\mathcal{L}/(N\sqrt{3RT_{0}})\), where \(N\) is the number of computational cells. This ensures that the "particles" hop on a uniform grid. Although the LBM algorithm is very simple, it has two important features. First, mass and momentum conservation are satisfied exactly at the mesoscale (numerically within machine precision), cf. Eqs. (5) and (6), which might lead to highly accurate solutions of density and velocity. Second, Eq. (8) is a second-order scheme in both time and space for Eq. (1) or Eq. (3), which recovers the NS equations without error in viscosity through the Chapman-Enskog expansion. This feature can also be proved if the algorithm is derived from the lattice gas automata, cf. [2]. Thus, the LBM simulation tends to produce very low numerical diffusion error at the macroscopic level. As has been shown, macroscopic quantities such as density, \(\rho\), and velocity, \(\mathbf{U}\), are not primitive variables in LBM simulations. Moreover, the non-linear advection term in the NS equations is replaced with a molecular streaming process, and the momentum diffusivity automatically emerges from the collective behavior of molecular collisions [15]. Hence, no gradient terms are evaluated during simulations. The macroscopically defined turbulence quantities, like enstrophy, can only be reconstructed from the extracted macroscopic velocity data. As shown in [10], the second-order accuracy of Eq. (8) at the mesoscopic level might not be directly correlated with the accuracy at the macroscopic level. Consequently, the reconstruction scheme, which may have a great impact on the accuracy, need not follow the formal accuracy order of Eq. (8) in space and time. ## 3 3D Taylor-Green vortex and turbulence quantities The 3D Taylor-Green vortex is a classical problem for evaluating the performance of a numerical method, particularly the numerical diffusion caused by discretization schemes. It belongs to a class of decaying homogeneous isotropic turbulence, where the flow is defined in a three-dimensional periodic box of length \(2\pi L\) and initialized with \[\mathbf{U}(0)=\begin{pmatrix}U_{0}\sin(\frac{x}{L})\cos(\frac{y}{L})\cos(\frac{z}{L})\\ -U_{0}\cos(\frac{x}{L})\sin(\frac{y}{L})\cos(\frac{z}{L})\\ 0\end{pmatrix}, \tag{14}\] and \[p(0)=p_{\infty}+\frac{\rho_{0}U_{0}^{2}}{16}\left[\cos\left(\frac{2x}{L}\right)+\cos\left(\frac{2y}{L}\right)\right]\left[\cos\left(\frac{2z}{L}\right)+2\right], \tag{15}\] where \(L\) is a characteristic length scale. In LBM simulations, any macroscopic initial condition must be transformed into the form of the distribution function. For this purpose, we employ the first-order Chapman-Enskog solution, \[f_{\alpha}(0)=f_{\alpha}^{eq}\left[\rho(0),\mathbf{U}(0)\right]-\tau f_{\alpha}^{eq}\left[\rho(0),\mathbf{U}(0)\right]\left[\mathbf{C}_{\alpha}\mathbf{C}_{\alpha}:\mathbf{\nabla}\mathbf{U}\left(0\right)\right] \tag{16}\] where the symbol \(:\) denotes the full tensor contraction. This form of the solution is already simplified according to the assumptions of an isothermal and incompressible flow. For the full solution, one can refer to Chapter 4 in [16]. The initial density profile is related to the initial pressure profile via the gas state equation \(p(0)=\rho(0)RT_{0}\). The equilibrium term, \(f_{\alpha}^{eq}\), can be calculated using Eq. (4). 
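For reference, a short NumPy sketch of the macroscopic initial fields of Eqs. (14)-(15) on an \(N^{3}\) periodic box of size \(2\pi L\); the numerical values of \(U_{0}\), \(RT_{0}\) and \(p_{\infty}\) used here are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def taylor_green_init(N, L=1.0, U0=0.1, rho0=1.0, RT0=1.0 / 3.0):
    """Macroscopic initial fields of Eqs. (14)-(15) on an N^3 periodic box of size 2*pi*L."""
    p_inf = rho0 * RT0                          # reference pressure p0 = rho0 * R * T0
    coords = np.arange(N) * (2.0 * np.pi * L / N)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    u = U0 * np.sin(x / L) * np.cos(y / L) * np.cos(z / L)
    v = -U0 * np.cos(x / L) * np.sin(y / L) * np.cos(z / L)
    w = np.zeros_like(u)
    p = p_inf + rho0 * U0**2 / 16.0 * (np.cos(2 * x / L) + np.cos(2 * y / L)) \
        * (np.cos(2 * z / L) + 2.0)
    rho = p / RT0                               # ideal gas law at constant temperature T0
    return rho, np.stack([u, v, w])

rho_init, u_init = taylor_green_init(64)        # rho: (64, 64, 64), u: (3, 64, 64, 64)
```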
We also note that the "first-order" means the order in terms of the Knudsen number for the asymptotic Chapman-Enskog expansion. It has no bearing on the numerical accuracy in space and time. For this particular problem, three turbulence quantities are often of interest, i.e., the integral of kinetic energy, \[E=\frac{1}{|\Omega|}\int_{\Omega}\frac{1}{2}\left(u^{2}+v^{2}+w^{2}\right)\,d \mathbf{r}, \tag{17}\] the kinetic energy dissipation rate \(\epsilon=-dE/dt\), and the integral of enstrophy, \[\mathcal{Z}=\frac{1}{2|\Omega|}\int_{\Omega}\left[\left(v_{x}-u_{y}\right)^{2 }+\left(u_{z}-w_{x}\right)^{2}+\left(w_{y}-v_{z}\right)^{2}\right]d\mathbf{r}, \tag{18}\] where \(\Omega\) denotes the periodic box domain. For the present TGV problem, the kinetic energy dissipation rate can also be calculated as \[\epsilon=\frac{2\mu_{0}}{\rho_{0}|\Omega|}\int_{\Omega}\left[\frac{1}{2}\left(u_{y}+v_{x} \right)^{2}+\frac{1}{2}\left(u_{z}+w_{x}\right)^{2}+\frac{1}{2}\left(v_{z}+w_{ y}\right)^{2}+u_{x}^{2}+v_{y}^{2}+w_{z}^{2}\right]d\mathbf{r} \tag{19}\] by using spatial derivatives. To facilitate the comparison, we use the reference solution of the NS equations computed with a de-aliased pseudo-spectral code developed at the Université catholique de Louvain, which employs a low-storage three-step Runge-Kutta scheme for time integration. The data were generated using a mesh resolution of \(512^{3}\), and can be downloaded from the website of the 1st International Workshop on High-Order CFD Methods [17]. In the following figures, the data will be labeled as "Ref".

## 4 Numerical analysis

### Implementation

The lattice Boltzmann scheme Eq. (8) is implemented in a code based on the OPS library [18], which is designed following the domain-specific language approach [19]. The calculation of turbulence quantities is implemented as "user kernel" functions for the TGV application. The code can be found on GitHub [20]. During simulations, it is convenient to work with a non-dimensional system with \(\sqrt{RT_{0}}\) as the characteristic speed, and \(L\) as the reference length [21]. With such a non-dimensional system, the mesh size, \(h\), and the time step, \(dt\), for each mesh resolution are listed in Table 1. It is worth noting that the Mach number is kept constant when the mesh is refined.

### Reconstruction schemes

To reconstruct the turbulence quantities from the velocity data, we use various orders of central difference schemes to evaluate the first-order derivative of a function, \(\varphi\), at a grid point \((i,j,k)\). Using \(\varphi_{x}\) as an example, they are the second-order, \[\frac{\varphi_{i+1}-\varphi_{i-1}}{2h},\] the fourth-order \[\frac{-\varphi_{i+2}+8\varphi_{i+1}-8\varphi_{i-1}+\varphi_{i-2}}{12h},\] the sixth-order \[\frac{\varphi_{i+3}-9\varphi_{i+2}+45\varphi_{i+1}-45\varphi_{i-1}+9\varphi_{i -2}-\varphi_{i-3}}{60h},\] and the eighth-order \[\frac{1}{h}\left[\frac{1}{280}(\varphi_{i-4}-\varphi_{i+4})+\frac{4}{105}( \varphi_{i+3}-\varphi_{i-3})+\frac{1}{5}(\varphi_{i-2}-\varphi_{i+2})+\frac{4}{ 5}(\varphi_{i+1}-\varphi_{i-1})\right].\] In these formulas, the indices \(j\) and \(k\) are omitted for convenience, since they do not change when calculating the derivative in the \(x\) direction. Moreover, we do not need to specifically treat the boundary points for this periodic problem. During simulations, we do not store velocity data if they are not required for further analysis. Instead, we implement the reconstruction process into the code by "user kernel" functions, and only store the turbulence quantities. An illustrative sketch of such a reconstruction is given below.
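The following NumPy sketch is a stand-in, with our own function names, for the "user kernel" reconstruction described above: it evaluates the periodic central differences of selectable order and assembles the volume-averaged enstrophy of Eq. (18) and the dissipation rate of Eq. (19); it is meant only to illustrate the post-processing step, not to reproduce the OPS implementation.

```python
import numpy as np

# Central-difference coefficients c_m of  d(phi)/dx ~ sum_m c_m*(phi_{i+m}-phi_{i-m})/h
_COEF = {2: [1/2], 4: [2/3, -1/12], 6: [3/4, -3/20, 1/60], 8: [4/5, -1/5, 4/105, -1/280]}

def ddx(phi, h, axis, order=4):
    """First derivative of phi along `axis` on a periodic grid of spacing h."""
    d = np.zeros_like(phi)
    for m, c in enumerate(_COEF[order], start=1):
        d += c * (np.roll(phi, -m, axis=axis) - np.roll(phi, m, axis=axis))
    return d / h

def enstrophy(u, v, w, h, order=4):
    """Volume-averaged enstrophy, Eq. (18)."""
    om_x = ddx(w, h, 1, order) - ddx(v, h, 2, order)   # dw/dy - dv/dz
    om_y = ddx(u, h, 2, order) - ddx(w, h, 0, order)   # du/dz - dw/dx
    om_z = ddx(v, h, 0, order) - ddx(u, h, 1, order)   # dv/dx - du/dy
    return 0.5 * np.mean(om_x**2 + om_y**2 + om_z**2)

def dissipation(u, v, w, h, mu0, rho0, order=4):
    """Volume-averaged kinetic energy dissipation rate from spatial derivatives, Eq. (19)."""
    ux, uy, uz = (ddx(u, h, i, order) for i in range(3))
    vx, vy, vz = (ddx(v, h, i, order) for i in range(3))
    wx, wy, wz = (ddx(w, h, i, order) for i in range(3))
    s = 0.5*(uy + vx)**2 + 0.5*(uz + wx)**2 + 0.5*(vz + wy)**2 + ux**2 + vy**2 + wz**2
    return 2.0 * mu0 / rho0 * np.mean(s)
```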
The kinetic energy dissipation rate can also be reconstructed from the time series of kinetic energy. For this purpose, we use a third-order spline interpolation. This is implemented by using the "interp1d" function provided by the open-source SciPy package. Then, the time derivative is evaluated by a central difference scheme, which is also implemented by using the "derivative" function of SciPy. This procedure is particularly useful as we cannot align the LBM time steps with those of the pseudo-spectral data. Due to its tie to the space step, the LBM time step can only take the values shown in Table 1, whereas the time step of the pseudo-spectral data with a resolution of \(512^{3}\) is \(0.01\). To assess the accuracy, the \(L^{2}\) norm is utilized. Considering a vector \(S\) of results and a vector \(\mathcal{C}\) of "correct" values, the error \(\sigma\) is calculated as \[\sigma=\sqrt{\frac{(S-\mathcal{C})\cdot(S-\mathcal{C})}{\mathcal{C}\cdot \mathcal{C}}}. \tag{20}\] It is applied to calculate the error of the kinetic energy \(\sigma_{E}\), the error of the kinetic energy dissipation rate \(\sigma_{e}\), and the error of the enstrophy \(\sigma_{\mathcal{Z}}\). The "correct" solution vectors are those of the pseudo-spectral data.

### Kinetic energy

The simulations predict the kinetic energy with considerably high accuracy, see Fig. 1. The error is already less than \(2\%\) with a mesh resolution of \(128^{3}\). Interestingly, a higher resolution of \(256^{3}\) does not improve the accuracy. A similar observation was also made in [11], cf. Fig. 11 therein. However, it will be shown later that the resolution of \(256^{3}\) does improve the prediction of the kinetic energy dissipation rate.

\begin{table} \begin{tabular}{|l|c c c|} \hline & \(128^{3}\) & \(256^{3}\) & \(512^{3}\) \\ \hline \(h=2\pi/N\) & 0.0494739 & 0.0246399 & 0.0122959 \\ \(dt=h/\sqrt{3}\) & 0.0285638 & 0.0142259 & 0.00709902 \\ \hline \end{tabular} \end{table}
Table 1: Mesh size and time step

### Kinetic energy dissipation

We first focus on the reconstruction of the kinetic energy dissipation rate from the time series of kinetic energy. For this purpose, we use the second-order central difference scheme for \(-dE/dt\) on the kinetic energy data between \(t=0.1\) and \(t=19.9\) with a step \(dt=0.05\), which are passed to the "derivative" function of SciPy. The obtained data are plotted in Fig. 2. It can be seen that there are significant wiggles in the results of \(128^{3}\). This is also observed in [11] for both the LBM predictions and those produced by the semi-Lagrangian vortex method for solving the NS equations using a low-resolution mesh. The errors are also listed in Table 2. The results tend to agree well with the pseudo-spectral data. Moreover, it is found that increasing the order of the central difference scheme for the discretization of \(-dE/dt\) does not help to improve the accuracy. Next we consider the kinetic energy dissipation rate reconstructed using spatial derivatives, i.e., Eq. (19). For this purpose, we use four central difference schemes to reconstruct \(\epsilon\). The obtained profiles are plotted in Fig. 3 while the errors are presented in Table 3. It is shown that the high-order central difference schemes can significantly improve the accuracy when the mesh resolution is low. For both \(128^{3}\) and \(256^{3}\), the errors of the eighth-order scheme are nearly \(3\) times lower than those of the second-order scheme.
For \(512^{3}\), the second-order scheme can also achieve an error of less than \(5\%\), while the predictions of the other three higher-order schemes show a trend of convergence. Compared with the reconstruction from the time series, the results of \(512^{3}\) and \(256^{3}\) start to match that accuracy from the fourth-order and sixth-order schemes, respectively. For \(128^{3}\), the predictions using spatial derivatives appear unable to match that accuracy with any of the tested central difference schemes. The present finding also explains the inconsistent accuracy observed in [11] when predicting the kinetic energy dissipation rate and the enstrophy. To further evaluate the accuracy of LBM, we also include the results of a fourth-order discontinuous-Galerkin (DG) finite-element scheme for the compressible NS equations in Fig. 3. The DG results are obtained by digitizing Fig. 1 in [22]. Because the legend of Fig. 1 in [22] covers part of the curve, the data between \(t=13\) and \(t=15\) are missing, but this should not have a great impact. From the comparison, we can see that, with the higher-order reconstruction scheme, the LBM predicts the dissipation rate with accuracy comparable to the fourth-order DG scheme for the NS equations.

\begin{table} \begin{tabular}{|c|c c c|} \hline & \(128^{3}\) & \(256^{3}\) & \(512^{3}\) \\ \hline Error(\(\sigma_{e}\)) & \(6.36\%\) & \(4.70\%\) & \(2.97\%\) \\ \hline \end{tabular} \end{table}
Table 2: Error of kinetic energy dissipation rate reconstructed from time series

Figure 1: Predicted profiles of kinetic energy. The errors (\(\sigma_{E}\)) are \(1.3729\%\), \(1.3735\%\) and \(0.49827\%\) with the mesh resolutions of \(128^{3}\), \(256^{3}\) and \(512^{3}\), respectively.

We also calculate the order of accuracy (i.e., the rate of convergence to the "correct" solution) demonstrated by Eq. (8) in the numerical experiments with different mesh resolutions as \[\mathcal{O}=\frac{\log\left(\sigma_{N_{c}}/\sigma_{N_{r}}\right)}{\log\left(N_{r}/N_{c}\right)} \tag{21}\] based on the calculated errors \(\sigma\) with a finer mesh resolution \(N_{r}\) and a coarser mesh resolution \(N_{c}\). The values are listed in Table 4. With a second-order reconstruction, the lattice Boltzmann simulation demonstrates an average order of accuracy of \(1.46\), while the order is reduced to \(1.05\) if an eighth-order reconstruction is used. This is perhaps due to the fact that the results with the eighth-order reconstruction are already fairly accurate with a mesh resolution of \(256^{3}\). Overall, the LBM simulations tend to predict the kinetic energy dissipation rate very accurately. This should be due to the method's low numerical dissipation error and its exact mass and momentum conservation. The observations suggest that schemes of at least fourth-order accuracy should be used for evaluating spatial derivatives when processing LBM simulation results, despite the formal second-order accuracy of the scheme relative to Eq. (3).

### Enstrophy

The enstrophy can only be reconstructed from spatial derivatives; for this problem it is proportional to the kinetic energy dissipation rate, so we observe similar accuracy for this quantity, see Fig. 4 and Table 5.

Figure 2: Kinetic energy dissipation rate reconstructed from time series.

Figure 3: Kinetic energy dissipation rate reconstructed from spatial derivatives.

Figure 4: Enstrophy reconstructed from spatial derivatives.

In Fig. 4, there are comparisons with the results obtained from the multiple relaxation time model by Mimeau _et al._[11].
Apart from the larger difference for the case of \(128^{3}\), perhaps due to the different collision terms, the present results reconstructed using the second-order scheme agree in general with those of Mimeau _et al._, which were also reconstructed using a second-order scheme. In particular, they agree with each other well at \(512^{3}\). Thus, the present simulations also confirm previous validations on this problem. Moreover, this further underlines the importance of using higher-order reconstruction schemes, which significantly improve the results in comparison with those of the second-order reconstruction.

\begin{table} \begin{tabular}{|c|c c c|} \hline \hline Order & \(256^{3}\) & \(512^{3}\) & Average \\ \hline 2nd & 1.36 & 1.55 & 1.46 \\ 4th & 1.77 & 0.98 & 1.38 \\ 6th & 1.67 & 0.67 & 1.17 \\ 8th & 1.53 & 0.57 & 1.05 \\ \hline \hline \end{tabular} \end{table}
Table 4: Order of accuracy with various mesh resolutions and reconstruction schemes

## 5 Concluding remarks

We have studied the impact of reconstruction schemes on interpreting LBM simulation results. The classical Taylor-Green vortex problem is chosen for this purpose. Simulations are conducted for the well-documented case at a Reynolds number of \(1600\). It is found that using high-order central difference schemes can significantly improve the prediction of the kinetic energy dissipation rate and enstrophy using spatial derivatives. This is particularly true with a low-resolution mesh of \(128^{3}\), where the eighth-order scheme can still reduce the error by \(\sim 2\%\) in comparison to the sixth-order scheme. The most significant improvement happens between the second-order and fourth-order schemes, where the error is reduced by about half for all three meshes. However, there is no significant improvement among the fourth-order, sixth-order and eighth-order schemes for \(256^{3}\) and \(512^{3}\). We also studied the reconstruction of the kinetic energy dissipation rate from the kinetic energy time series. It is shown that a second-order scheme can already reconstruct the kinetic energy dissipation rate well. Overall, the LBM simulations demonstrate excellent accuracy for this particular problem, considering the scheme's second-order accuracy relative to the Boltzmann transport equation. In fact, they demonstrate accuracy similar to that of a fourth-order discontinuous-Galerkin finite-element scheme for the Navier-Stokes equations. This should be attributed to the method's very low numerical diffusion and its exact mass and momentum conservation at the mesoscopic level. This observation is very encouraging since the algorithm is very simple and requires only the computational cost of a typical first-order numerical scheme. Moreover, the Poisson equation is not needed for simulating incompressible flows. Our observations suggest that high-order reconstruction is necessary for extracting non-primitive flow quantities involving spatial derivatives from LBM simulations. Based on Tables 3 and 5, a fourth-order reconstruction can balance the accuracy and the computational expense if such quantities are needed dynamically, for example, when evaluating the fluid force on moving particles immersed in the fluid. On the other hand, the good accuracy of the reconstruction from time series favors extracting the fluid force through the momentum exchange method [23]. It is not only a convenient numerical technique but could also be more accurate than calculating spatial derivatives.
## Acknowledgments

This work was conducted with the support of the Computational Science Centre for Research Communities and the EPSRC grants EP/P022243/1, EP/T026170/1 and EP/X035875/1. The simulations were performed utilizing both the nodes of the STFC Cloud Service and the ARCHER2 machine. We also thank Dr. Jian Fang for providing and pointing out the source of the pseudo-spectral data.
2304.11251
Machine Learning and the Future of Bayesian Computation
Bayesian models are a powerful tool for studying complex data, allowing the analyst to encode rich hierarchical dependencies and leverage prior information. Most importantly, they facilitate a complete characterization of uncertainty through the posterior distribution. Practical posterior computation is commonly performed via MCMC, which can be computationally infeasible for high dimensional models with many observations. In this article we discuss the potential to improve posterior computation using ideas from machine learning. Concrete future directions are explored in vignettes on normalizing flows, Bayesian coresets, distributed Bayesian inference, and variational inference.
Steven Winter, Trevor Campbell, Lizhen Lin, Sanvesh Srivastava, David B. Dunson
2023-04-21T21:03:01Z
http://arxiv.org/abs/2304.11251v1
# Machine Learning and the Future of Bayesian Computation ###### Abstract Bayesian models are a powerful tool for studying complex data, allowing the analyst to encode rich hierarchical dependencies and leverage prior information. Most importantly, they facilitate a complete characterization of uncertainty through the posterior distribution. Practical posterior computation is commonly performed via MCMC, which can be computationally infeasible for high dimensional models with many observations. In this article we discuss the potential to improve posterior computation using ideas from machine learning. Concrete future directions are explored in vignettes on normalizing flows, Bayesian coresets, distributed Bayesian inference, and variational inference. C (2019) ## 1 Introduction There is immense interest in performing inference and prediction for complicated real-world processes within science, industry, and policy. Bayesian models are appealing because they allow specification of rich generative models encompassing hierarchical structures in the data, natural inclusion of information from experts and/or previous research via priors, and a complete characterization of uncertainty in learning/inference/prediction through posterior and predictive distributions. The primary hurdle in applying Bayesian statistics to complex real-world data is posterior computation. In practice, posterior computation - evaluating posterior probabilities/expectations, credible intervals for parameters, posterior inclusion probabilities for features, posterior predictive intervals, etc - is typically based on posterior samples using Markov chain Monte Carlo (MCMC). Standard MCMC approaches often fail to converge when the posterior has complicated geometry, such as multiple distant modes or geometric/manifold constraints. Even sampling from simple posteriors can be challenging when the data has tens or hundreds of millions of observations. This article focuses on the future of Bayesian computation, with emphasis on posterior inference for high dimensional, geometrically complicated targets with potentially millions of datapoints. The recent explosive success of machine learning is key in shaping our vision for the future of Bayesian computation. To make our vision concrete, we have prepared four vignettes covering disjoint cutting-edge computational techniques, all involving ideas from machine learning. The first vignette describes normalizing flows as a new tool for adaptive MCMC with complicated targets; the second describes Bayesian coresets as a method of data compression prior to sampling; the third describes distributed Bayesian inference for huge datasets; the fourth describes modern variational inference for settings where the previous techniques falter. All sections focus heavily on promising avenues for future research. ## 2 Sampling Using Deep Generative Models The Metropolis Hastings (MH) algorithm (often within Gibbs) is by far the most popular tool for sampling posterior distributions [33]. Good mixing is critically dependent on how closely the MH proposal distribution mimics the target distribution. Higher dimensional targets with increasingly complicated geometry require increasingly flexible proposal distributions which become difficult to tune. Consequently, it is routine to settle for simpler proposals which provide a good local approximation to the target, such as a multivariate Gaussian. Parameters are then tuned to encourage efficient exploration, e.g. 
by adaptively learning the posterior covariance [53, 139, 153] or by discretizing dynamics driven by the target [106, 86]. A major limitation of local methods is their practical inability to cross low-probability regions, resulting in poor convergence rates for multimodal distributions [90]. Many solutions have been proposed, ranging from slightly modifying local kernels to encourage crossing low probability regions [73, 114, 85] to constructing entirely new kernels which are mixtures of a local and global component [7, 2, 129]. Despite these advances, there is still no general solution for efficiently sampling complicated high dimensional distributions. We believe deep learning will play an integral role in developing better general solutions. Deep generative models have demonstrated remarkable success in estimating and approximately sampling complicated, high dimensional distributions, achieving state-of-the-art performance in image/audio/video synthesis, computer graphics, physical/engineering simulations, drug discovery, and other domains [56, 70]. In this vignette we discuss the use of deep generative models to design better proposal distributions for use in MH, both by augmenting existing kernels and by constructing entirely new distributions. Most deep generative models use a neural network (NN) to transform a simple base distribution to closely match a pre-specified empirical distribution. The setting of posterior computation via MH introduces two practical problems. First, samples from the target are not available prior to sampling, complicating the process of training the NN. Second, each iteration of MH requires computing the acceptance probability, hence evaluating the proposal density. If the proposal is a simple distribution transformed by a NN, then this requires inverting a NN, which is generally impossible, and computing the Jacobian, which can be numerically intractable in high dimensions. In this vignette we discuss adaptively tuning normalizing flow (NF) proposals as a means of resolving these challenges. Section 2.1 introduces NFs; Sections 2.2-2.3 cover applications to MH and straightforward generalizations; Section 2.4 discusses exciting future research. ### Introduction to normalizing flows In this section we provide a brief introduction to NFs and highlight useful properties. One method for generating a flexible class of proposal distributions is to transform a simple \(D\)-dimensional random variable \(Z\) (e.g., \(Z\sim N(0,I_{D})\)) with a diffeomorphism \(f\) parameterized by a NN. Carefully tuning \(f\) can result in proposals \(Y=f(Z)\) that closely conform to the target. Computing the acceptance probability in each iteration of MH requires evaluating the proposal density, \[\pi_{Y}(y)=\pi_{Z}(f^{-1}(y))|J_{f^{-1}}(f^{-1}(y))| \tag{1}\] where \(\pi_{Z}\) is the density of \(Z\) and \(J_{f^{-1}}\) is the Jacobian of \(f^{-1}\). Inverting NNs is generally intractable, and evaluating Jacobians is \(O(D^{3})\) in the worst case. NFs impose additional structure on \(f\) to resolve these problems. Specifically, discrete NFs (DNFs) decompose \(f\) as the composition of \(K\) simple component functions: \[f=f_{K}\circ\cdots\circ f_{1}. \tag{2}\] Component functions are constructed to facilitate fast inversion (either exactly or approximately) and fast Jacobian calculations (e.g., by ensuring Jacobians are upper/lower triangular). 
The change of variables rule becomes \[\pi_{Y}(y)=\pi_{Z}(f^{-1}(y))\prod_{i=1}^{K}|J_{f_{i}^{-1}}(z_{i})| \tag{3}\] where \(f^{-1}=f_{1}^{-1}\circ\cdots\circ f_{K}^{-1}\) and \(z_{i}=f_{i+1}^{-1}\circ\cdots\circ f_{K}^{-1}(y)\) with \(z_{K}=y\). By the inverse function theorem, \(J_{f_{i}^{-1}}=J_{f_{i}}^{-1}\), so it is sufficient to compute the Jacobian of \(f_{i}\) or \(f_{i}^{-1}\). For example, a _planar_ normalizing flow [137] uses component functions \[f_{i}(z)=z+a_{i}h(w_{i}^{T}z+b_{i}) \tag{4}\] where \(a_{i},w_{i}\in\mathbb{R}^{D}\), \(b_{i}\in\mathbb{R}\) are parameters to be tuned and \(h\) is an invertible, differentiable nonlinearity applied elementwise. The matrix determinant lemma allows one to express the Jacobian as \[|J_{f_{i}}(z)|=1+h^{\prime}(w_{i}^{T}z+b_{i})a_{i}^{T}w_{i} \tag{5}\] which is \(O(D)\) to compute. Planar flows are not invertible for all choices of parameters and nonlinearities; however, efficient constrained optimization algorithms are available which ensure invertibility [137]. Planar flows have relatively limited expressivity, and many layers may be needed to construct suitably complicated high dimensional proposals. Improved component functions have been proposed, including radial [137], spline [34], coupling [31], autoregressive [69], etc. See [70] for a review of NFs and [122] for theory on the expressivity of discrete flows. Continuous normalizing flows (CNFs) [25] are an extension of the discrete framework, potentially enhancing expressivity while requiring fewer parameters and lower memory complexity. The key insight is to reconceptualize DNFs as a method for computing the path \(x(t)\) of a particle at discrete times \(t\in\{0,1/K,2/K,...,1\}\). The initial location \(x(0)\) is drawn from \(Z\). At time \(1/K\), the location is updated to \(x(1/K)=f_{1}(x(0))\). This is repeated iteratively, moving from \(x(i/K)\) at time \(i/K\) to \(x((i+1)/K)=f_{i+1}(x(i/K))\) at time \((i+1)/K\). The result is a path \((x(0),...,x(1))\) where the final location is a sample from \(Y\). CNFs consider the limit \(K\to\infty\), with the intuition that one can obtain a more flexible distribution for \(Y\) by flowing samples of \(Z\) through continuous paths instead of discrete paths. This can be formalized as the initial value problem \[\frac{dx(t)}{dt}=f(x(t),t) \tag{6}\] where \(f\) is a function parameterized by a NN and \(x(0)\) is a sample from \(Z\). In practice, equation (6) cannot be solved analytically; however, approximate samples of \(Y\) can be generated using an ODE solver. Euler's method with a step size of \(1/K\) exactly recovers a DNF with \(K\) layers, but greater expressivity can be obtained using higher order solvers. This framework has a number of surprising technical advantages; see [25] for an exposition.

### Normalizing flow proposals

In this section we outline modern methods for constructing proposals with NFs. Throughout, we denote the \(D\)-dimensional target density by \[\pi(x)\propto\exp(-U(x)) \tag{7}\] with unknown normalizing constant and known potential \(U:\mathbb{R}^{D}\rightarrow\mathbb{R}\). A NF with parameters \(\phi\) will be denoted \(f_{\phi}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\); this yields a new density \(\hat{\pi}_{\phi}\) by pushing forward a simple random variable \(Z\) with density \(\pi_{Z}\).

_Independent proposals_ The simplest approach is to use a NF to generate proposals in independent MH [15]. At each iteration, a proposed state \(x^{\prime}\) is generated by pushing a sample of \(Z\) through the NF.
This state is accepted with probability \[\text{acc}(x,x^{\prime})=\min\left\{1,\frac{\pi(x^{\prime})\hat{\pi}_{\phi}(x )}{\pi(x)\hat{\pi}_{\phi}(x^{\prime})}\right\} \tag{8}\] where \(x\) is the current state. In high dimensions, almost all choices of \(\phi\) will result in low overlap between \(\hat{\pi}_{\phi}\) and \(\pi\), hence small acceptance ratios and poor mixing. Consequently, we focus our discussion on more elaborate proposals which result in better practical performance.

_Dependent proposals_ A more practical approach is to let proposals depend on the current state. This can be achieved by using a larger NF \(f_{\phi}:\mathbb{R}^{D}\times\mathbb{R}^{M}\rightarrow\mathbb{R}^{D}\times \mathbb{R}^{M}\) which maps the current state \(x\) and \(M\)-dimensional noise \(z\) to a proposal \(x^{\prime}\) and transformed noise \(z^{\prime}\). The \(M\)-dimensional noise can be thought of as an auxiliary parameter such as momentum or temperature in dynamics-based MCMC. [147] construct a dependent proposal which is symmetric, thus eliminating the ratio of proposal densities in equation (8) and reducing the problem of extremely low early acceptance rates. The proposal is constructed in two stages: first, sample \(u\sim\text{Uniform}[0,1]\) and \(z\) from \(Z\). If \(u>0.5\), propose \(x^{\prime}\) using \((x^{\prime},z^{\prime})=f_{\phi}(x,z)\). Otherwise propose \(x^{\prime}\) using \((x^{\prime},z^{\prime})=f_{\phi}^{-1}(x,z)\). Using a mixture of \(f_{\phi}\) and \(f_{\phi}^{-1}\) ensures that \(x^{\prime}\) is as likely to be proposed when starting at \(x\) as \(x\) is to be proposed when starting at \(x^{\prime}\). Key to the proof of symmetry is the assumption that the NF is volume preserving. This is a restrictive assumption: current volume preserving architectures are outperformed by non-volume preserving architectures.

_Mixture kernels_ Higher initial acceptance rates can be obtained by combining NF proposals with classical kernels, for example by alternating between proposing samples with HMC and with a conditional flow. Samples from the classical kernel provide data with which to tune the NF. Eventually, the NF becomes a good approximation to the posterior, proposing efficient global moves and resulting in better mixing than the classical kernel alone. [40] construct a proposal which deterministically alternates between approximately \(10\) MALA proposals for every one independent NF proposal. The resulting sampler efficiently explores multi-modal distributions: MALA locally explores each mode, and the NF teleports the chain between modes. It is critical to initialize the sampler with at least one particle in each mode, as the local dynamics are unlikely to discover new modes on their own. The algorithm is shown to converge with an exponential rate in the continuous time limit. Partial ergodic theory is available when the flow is adaptively learned by minimizing the KL divergence, although other loss functions remain unstudied.

_Augmenting existing kernels_ The previously discussed mixtures rely on classical kernels for local exploration until there is sufficient data to train the NF. An alternate approach is to use NFs to augment classical kernels - that is, to improve the classical kernel as the chain runs instead of tuning a separate, auxiliary kernel. We use HMC as an example, wherein a new state \(x^{\prime}\) is proposed by drawing a momentum \(\nu\sim N(0,I_{D})\) and approximating the resulting Hamiltonian dynamics (usually) with the leapfrog integrator.
One time step of the approximation proceeds by taking a half step of the momentum \[\nu_{1/2}=\nu-\frac{\varepsilon}{2}\nabla U(x) \tag{9}\] where \(x\) is the current state and \(\varepsilon\) is the step size of the integrator. This is used to update the position \[x^{\prime}=x+\varepsilon\nu_{1/2} \tag{10}\] which is then used to update the momentum, \[\nu^{\prime}=\nu-\frac{\varepsilon}{2}\nabla U(x^{\prime}) \tag{11}\] The process is repeated a prespecified number of times to generate a final proposal; the final momentum is disregarded. The resulting proposal is symmetric and volume preserving, resulting in a simple acceptance ratio. Crossing low-probability regions requires a large velocity, which is unlikely if the momentum is sampled from a Gaussian. [77] use NFs to learn a collection of maps which dynamically rescale the momentum and position to encourage exploration across low probability regions. Specifically, the momentum half step is replaced by \[\nu_{1/2}=\exp(S_{\nu}(x))\odot\nu-\frac{\varepsilon}{2}\exp(Q_{\nu}(x))\odot \nabla U(x)+T_{\nu}(x) \tag{12}\] where \(\odot\) is the elementwise product, \(S_{\nu}\) is a NF that rescales the momentum, \(Q_{\nu}\) is a NF that rescales the gradient, and \(T_{\nu}\) is a NF that translates the momentum. Similarly, the position update is replaced with (13) \[x^{\prime}=\exp(S_{x}(\nu_{1/2}))\odot x+\varepsilon\exp(Q_{x}(\nu_{1/2})) \odot\nu_{1/2}+T_{x}(\nu_{1/2})\] where \(S_{x}\), \(Q_{x}\), and \(T_{x}\) are NFs. The momentum is updated again with equation (12) using \(x^{\prime}\) in place of \(x\), and the entire procedure is iterated. When all of these NFs are zero, we exactly recover HMC. Allowing the NFs to be nonzero results in a very flexible family of proposal distributions which can be adaptively tuned to propel the sampler out of low probability regions by rescaling and translating the momentum/position. The invertibility and tractable Jacobians allows efficient calculation of the proposal density. This presentation has been simplified from [77], which also includes random directions, random masking, and conditions NFs on the leapfrog iteration. So far, the above augmentation technique has only been applied to HMC. However there is a broad class of dynamical systems that can be used to generate proposals, including Langevin dynamics, relativistic dynamics, Nose-Hoover thermostats, and others [86]. NFs can be used to augment all of these algorithms using the same recipe as above. ### Tuning proposals Appropriately tuning NF parameters is critical for good mixing. In practice, tuning is often performed by adaptively minimizing a loss. In this section we cover a variety of candidate loss functions, including measure-theoretic losses, summary statistics, and adversarial approaches. #### 2.3.1 Statistical deviance The simplest approach is to define a function \(d\) measuring how close the proposal is to the target and then to find NF parameters minimizing \(d(\hat{\pi}_{\phi},\pi)\). Let \(\mathcal{G}\) be a space of probability densities and \(d:\mathcal{G}\times\mathcal{G}\rightarrow\mathbb{R}\) be any function measuring the distance/discrepancy/deviance between two probability measures. We assume 1. \(d(\rho,\rho)=0\) for all \(\rho\in\mathcal{G}\). 2. \(d(\rho,\rho^{\prime})>0\) for \(\rho\neq\rho^{\prime}\in\mathcal{G}\), with equality interpreted as equality almost everywhere. 3. \(d(\hat{\pi}_{\phi},\pi)\) has a gradient with respect to \(\phi\), \(\nabla_{\phi}d(\pi_{\phi},\pi)\). 
Conditions \((1)\) and \((2)\) ensure \(d(\hat{\pi}_{\phi},\pi)=0\) if and only if \(\hat{\pi}_{\phi}=\pi\), hence minimizing \(d\) is a reasonable way to approximate the target. Condition \((3)\) allows optimization with gradient based methods. Weaker notions of differentiability are sufficient, such as having a tractable subgradient. For example, \(d\) may be the forward KL divergence, \[\mathrm{D}_{\mathrm{KL}}\left(\pi\|\hat{\pi}_{\phi}\right)=\int_{\mathbb{R}^{ D}}\pi(x)\log\left(\frac{\pi(x)}{\hat{\pi}_{\phi}(x)}\right)dx \tag{14}\] Adaptive estimation can be performed by alternating between generating a sample via MH and updating NF parameters using the gradient of (14) [15]. The gradient can be estimated via Monte Carlo using previous samples. Under technical assumptions on the NF and the target, the resulting Markov chain is ergodic with the correct limiting distribution [15]. Other viable choices for \(d\) include the Hellinger distance, the (sliced) Wasserstein distance, the total variation distance, etc. Many of these are as-of-yet unexplored as a means of adaptively estimating flows, and it is unclear which will result in the best performance. The main limitation with approaches in this class is that minimizing a difference only indirectly targets good mixing; in the following we consider directly targeting good mixing with MCMC diagnostics. #### 2.3.2 Mixing summary statistics A high quality global approximation of the target may not be required for sufficiently good mixing, especially if NFs are used in conjunction with local kernels such as HMC. Using distance based losses in these situations is unnecessarily ambitious and better practical performance may be attained by switching to a loss function which directly targets good mixing. Ideally one would maximize the effective sample size, but this depends on the entire history of the chain and is in general slow to compute. Instead [77] propose minimizing the lag-1 autocorrelation, which is equivalent to maximizing the expected squared jump distance [124]: \[\text{lag}(\hat{\pi}_{\phi},\pi)=E[||x-x^{\prime}||_{2}^{2}\text{acc}(x,x^{ \prime})] \tag{15}\] where the expectation is over the target and any auxiliary variables used to sample \(x^{\prime}\). This can be estimated using samples \(x_{i}\), \(i=1,...,S\) from the first \(S\) iterations of the chain by generating a proposal \(x^{\prime}_{i}\) starting at each \(x_{i}\) and averaging: \[\text{lag}(\hat{\pi}_{\phi},\pi)\approx\frac{1}{S}\sum_{i=1}^{S}||x_{i}-x^{ \prime}_{i}||_{2}^{2}\text{acc}(x_{i},x^{\prime}_{i}). \tag{16}\] This loss depends on \(\phi\) implicitly through the \(x^{\prime}_{i}\). Naively optimizing this loss does not guarantee good mixing across the entire space - for example, the chain may bounce between two distant modes. To solve these problems, [77] add a reciprocal term and instead optimize \[\ell_{\lambda}(\hat{\pi}_{\phi},\pi)=\frac{\lambda}{\text{lag}(\hat{\pi}_{\phi },\pi)}-\frac{\text{lag}(\hat{\pi}_{\phi},\pi)}{\lambda} \tag{17}\] where \(\lambda>0\) is a tuning parameter. The reciprocal term penalizes states where the expected squared jump distance is small. [77] add a term of the same form to encourage faster burn-in. The composite loss is used to train an augmented variant of HMC and results in a sampler which efficiently moves between well-separated modes. Other summary statistics can be integrated into this framework, possibly considering lag-\(k\) autocorrelations or multiple chain summaries such as the Gelman-Rubin statistic [42]. 
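To illustrate how such a summary-statistic loss interacts with an NF proposal, the sketch below shows one independence-MH step with acceptance probability as in Eq. (8) and a Monte Carlo estimate of the expected squared jump distance in Eq. (16). It assumes a generic `flow` object exposing `sample()` and `log_prob()` methods, a hypothetical interface standing in for any of the architectures above; in practice the loss would be differentiated through the flow's parameters with an automatic differentiation library rather than merely evaluated, so this is only a conceptual sketch.

```python
import numpy as np

def mh_step(x, log_target, flow, rng):
    """One independence-MH update with an NF proposal; acceptance as in Eq. (8)."""
    x_prop = flow.sample(rng)                       # x' ~ pi_hat_phi (assumed interface)
    log_acc = (log_target(x_prop) + flow.log_prob(x)
               - log_target(x) - flow.log_prob(x_prop))
    acc = np.exp(min(0.0, log_acc))
    return (x_prop if rng.uniform() < acc else x), acc

def esjd_estimate(samples, log_target, flow, rng):
    """Monte Carlo estimate of the expected squared jump distance, Eq. (16)."""
    total = 0.0
    for x in samples:
        x_prop = flow.sample(rng)
        log_acc = (log_target(x_prop) + flow.log_prob(x)
                   - log_target(x) - flow.log_prob(x_prop))
        total += np.sum((x - x_prop) ** 2) * np.exp(min(0.0, log_acc))
    return total / len(samples)

# An adaptive tuning loop would alternate: run a few MH steps to collect samples,
# then take a gradient step on a loss such as Eq. (17) built from esjd_estimate.
```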
One concern with this class of loss functions is that no single summary statistic can detect when a chain has mixed, and naively optimizing one statistic may result in pathological behaviour that is hard to detect. In the following we discuss a different strategy which may strike a middle ground between ambitious distance based methods and narrow summary statistic based methods. _Adversarial training_ Generative adversarial networks (GANs) [47, 51] pit two NNs against each other in a minimax game. The first player is a generator which transforms noise into samples that look like real data; the second player is a discriminator which tries to determine whether an arbitrary sample is synthetic or real. GANs may be applied to MCMC by taking the proposal distribution to be the generator and training a discriminator to distinguish between proposals and previous samples of the target. [147] use this idea to adaptively train a NF proposal which dramatically outperforms HMC on multimodal distributions. Many improvements are possible by leveraging modern ideas from the GAN literature. Conditional GANs [101] allow the discriminator and generator to condition on external variables. For example, one could construct a tempered adversarial algorithm by conditioning on a temperature variable, possibly accelerating the mixing of annealed MCMC. Complicated GAN structures are prone to mode collapse, hence these generalizations will likely require modified loss functions [152, 93, 64, 163] and regularization [126, 140, 52, 102]. ### Future directions We have introduced several different kernel structures and losses which can be combined to develop new adaptive MCMC algorithms. In this section we discuss shortcomings of the proposed approach, as well as avenues for exciting long-term research. _Theoretical guarantees_ So far, partial ergodic theory is only available in the simplest case of tuning an independent NF proposal by adaptively minimizing the KL divergence [15]. Dependent/conditional proposals and augmented kernels are not well studied, and no guarantees are available when adaptively minimizing summary statistic or adversarial based losses. This is particularly concerning for summary statistic based losses, as it is not clear that minimizing (e.g., lag-1 autocorrelation) is enough to guarantee ergodic averages converge to the correct values. Precise theoretical results will provide insights into when/why these methods succeed/fail, and are a necessary precursor to widespread adoption of NF sampling. _Constrained posteriors_ In this vignette we only consider the case where the target is supported over Euclidean space, however in some applications the target is supported over a Riemannian manifold (e.g., the sphere or positive semidefinite matrices). Most manifold sampling algorithms rely on approximating dynamics defined either intrinsically on the manifold or induced by projecting from ambient space. These dynamics based methods may be inferior to NF kernels for multimodal distributions. Recent work has successfully generalized NFs to Riemannian manifolds, although these constructions typically place significant restrictions on geometry (e.g., diffeomorphic to a cross product of spheres [138]) or rely on high-variance estimates of Jacobian terms [94]. Loss functions measuring the distance between a proposal and the target may be harder to define and compute over manifolds. 
New architectures for manifold valued NFs and improved estimation techniques could facilitate efficient sampling in a wide class of models with non-Euclidean supports. Our discussion also neglected to mention discrete parameters. Discrete parameters occur routinely in Bayesian applications, including clustering/discrete mixture models, latent class models, and variable selection. Specific NF architectures have been constructed to handle discrete data [151, 177], but current approaches are relatively inflexible and cannot be made more flexible by naively adding more NN layers, limiting their utility within MH. A more promising direction is to leverage the flexibility of continuous NFs by embedding discrete parameters in Euclidean space and sampling from an augmented posterior. Several variants of HMC have been proposed to accommodate piecewise discontinuous potential functions [121, 103, 32], with recent implementations such as discontinuous HMC (DHMC) [115] achieving excellent practical performance sampling ordinal variables. However, embedding based methods struggle to sample unordered variables - here the embedding order is arbitrary, with most embeddings introducing multimodality in the augmented posterior. NFs have successfully augmented continuous HMC [77] to handle multimodal distributions; the same strategy is promising for improving DHMC. _Automated proposal selection A priori_ it is unclear which NF architecture, kernel structure, and loss will result in the most efficient mixing for sampling a given posterior. Running many Markov chains with different choices can be time consuming, and a large amount of computational effort may be wasted if some chains mix poorly. Tools for automatic architecture/kernel/loss selection would greatly improve the accessibility of the proposed methodology. This goal is difficult in general given (1) the space of possible samplers is huge, (2) different architectures and kernels are not always comparable, and (3) good mixing is impossible to quantify with a single numerical summary. Ideas from reinforcement learning, sequential decision making, and control theory could provide principled algorithms for exploring the space of possible samplers. One could define a state space of kernel/loss pairs, \((\hat{\pi}_{\phi},L)\), which an agent interacts with by running adaptive MCMC. After each action, the agent observes sampler outputs such as trace plots and summary statistics. The goal is to develop a policy for choosing the next kernel/loss pair to run while maximizing some cumulative reward, such as cumulative effective sample sizes across all chains. As an initial attempt, one could restrict kernels to all have the same structure, such as HMC/NF mixtures where only the NF architecture is changing, and the loss function to be a simple parametric family, such as the lag-1 loss with different tuning parameters. This facilitates a parameterization of the state-space and allows application of existing continuous-armed bandit algorithms [1, 159]. Constructing a sequential decision making algorithm that can efficiently explore kernel/loss pairs with fundamentally different kernel structures and loss functions is an open challenge, which will likely require better understanding of the theoretical relationships between the different proposed kernel structures as well as the dynamics which result from minimizing different types of losses. 
We expect broad patterns to emerge with increasing use of NF, with certain architectures/kernels/losses performing consistently well in specific classes of problems. For example, the authors have observed that discrete spline flows work very well for sampling from Gaussian mixture models. These heuristics could be collected in a community reference manual, allowing statisticians to quickly find promising candidate algorithms for their model class, dimension, features of the data, etc. Crowd-sourcing the construction and maintenance of this manual could enable statisticians to stay up-to-date with NFs, despite the rapid pace of ML research. _Accelerated tuning_ The recipe presented in this vignette is to (1) choose a NF kernel structure, (2) choose a loss, and (3) adaptively estimate parameters starting from a random initialization. Starting from a random initialization in step (3) is inefficient. Transfer/meta learning may provide tools for accelerating tuning by avoiding random initialization. For example, iterative model development and sensitivity analysis often involve repeating the same inferences with slightly different prior specifications. NF parameters estimated for one prior specification could be used to initialize the sampler for other prior specifications, potentially eliminating the need for adaptive tuning. A more difficult task is handling targets with similar structures, but different dimensions. For example, consider a Bayesian sparse logistic regression model classifying Alzheimer's disease status using vectorized images of brains. Interest is in sampling coefficients \(\beta_{I}\) from the posterior \(\pi(\beta_{I}\mid A,I)\) where \(A=(A_{1},...,A_{n})\) is a set of disease indicators and \(I=(I_{1},...,I_{n})\) is a set of brain images. Perhaps additional covariates for each subject are collected at a later stage, such as gene expression vectors \(G=(G_{1},...,G_{n})\). Intuitively, there should be strong similarities between the updated posterior \(\pi(\beta_{I},\beta_{G}\mid A,I,G)\) and the original posterior \(\pi(\beta_{I}\mid A,I)\), however this is difficult to formalize because the posteriors have different dimensions. A promising approach is to parameterize the initial sampler in a dimension-free manner, for example by defining a kernel which proposes an update for the \(i\)th coefficient depending only on the potential \(U(\beta)\), the gradient in that direction \(\partial_{\beta_{i}}U(\beta)\), and auxillary variables in that direction. This kernel can be tuned while sampling \(\pi(\beta_{I}\mid A,I)\) with any of the aforementioned loss functions, and then automatically applied to sample \(\pi(\beta_{I},\beta_{G}\mid A,I,G)\). [46] introduce a related idea for stochastic gradient sampling of Bayesian neural networks with different activation functions. The general methodology remains unstudied for exact sampling. The proposed coordinate-wise strategy cannot leverage correlation between pairs of parameters to propose efficient block updates; solutions to this problem constitute ongoing research. ## 3 Bayesian Coresets Large-scale datasets--i.e., those where even a single pass over the complete data set is computationally costly--are now commonplace. MCMC typically requires many passes over the full data set; in the setting of large-scale data, this makes inference, iterative model development, tuning, and verification arduous and error-prone. 
To realize the full benefits of Bayesian methods in important modern applications, we need inference algorithms that handle the scale of modern datasets. In the past decade, there has been a flurry of work on approximate Bayesian inference methods that are computationally efficient in the large-scale data regime. One class of methods--including variational inference [66, 158, 11] and Laplace approximations [145, 54]--formulates inference as an optimization problem that can be solved via (scalable) stochastic gradient descent [57, 132]. Because the problem is generally nonconvex, these approaches come with little or no realizable guarantees, and tend to be sensitive to initialization, optimization hyperparameters, and stochasticity during optimization. Another class--subsampling MCMC [9, 71, 88, 166, 3]; see Quiroz et al. [130] for a recent survey--run a Markov chain whose transitions depend on a subset of data randomly chosen at each iteration. However, speed benefits can be outweighed by drawbacks, as uniformly subsampling at each step causes MCMC to either mix slowly or provide poor approximation [63, 104, 10, 130, 131]. It is possible to circumvent this restriction by design of an effective control variate for the log-likelihood (see Quiroz et al. [130], Nemeth and Fearnhead [108]), but this is in general model-specific. At its core, the problem of working with large-scale data efficiently is a question of how to exploit _redundancy_ in the data. To draw principled conclusions about a large data set based on a small fraction of examples, one must rule out the presence of unique or interesting additional information in the (vast) remainder of unexamined data. One approach incorporates redundancy directly into its formulation: _Bayesian coresets_[59]. The key idea is to represent the large-scale data by a small, weighted subset. The coreset can then be passed to any standard (automated) inference algorithm, providing posterior inference at a reduced computational cost. Coresets come with a number of compelling advantages. First, and perhaps most importantly, **coresets preserve important model structure**. If the original Bayesian posterior distribution exhibits symmetry, weak identifiability, discrete variables, heavy tails, low-dimensional subspace structure, or otherwise, the coreset posterior typically will exhibit that same structure, because it is constructed using the same likelihood and prior as the original model. This makes coresets appealing for use in complex models where, e.g., a Gaussian asymptotic assumption is inappropriate. Second, **coresets are composable**: coresets for two data sets can often be combined trivially to form a coreset for the union of data sets [37]. This makes coresets naturally applicable to streaming and distributed contexts [19, Section 4.3]. Third, **coresets are inference algorithm-agnostic**, in the sense that once a coreset is built, it can be passed to most downstream inference methods--in particular, exact MCMC methods with guarantees--with enhanced scalability. Finally, **coresets tend to come with guarantees** relating the size of the coreset to the quality of posterior approximation. In this vignette, we will cover the basics of Bayesian coresets as well as recent advances in Sections 3.1 and 3.2, and discuss open problems and exciting directions for future work in Section 3.3. 
### Introduction to Bayesian coresets

#### 3.1.1 Setup

We are given a target probability density \(\pi(\theta)\) for \(\theta\in\Theta\) that is comprised of \(N\) potentials \((f_{n}(\theta))_{n=1}^{N}\) and a base density \(\pi_{0}(\theta)\), \[\pi(\theta)=\frac{1}{Z}\exp\left(\sum_{n=1}^{N}f_{n}(\theta)\right)\pi_{0}( \theta), \tag{18}\] where the normalization constant \(Z\) is not known. This setup corresponds to a Bayesian statistical model with prior \(\pi_{0}\) and i.i.d. data \(X_{n}\) conditioned on \(\theta\), where \(f_{n}(\theta)=\log p(X_{n}|\theta)\). The goal is to compute or approximate expectations under \(\pi\); in the Bayesian scenario, \(\pi\) is the posterior distribution. A key challenge arises in the large \(N\) setting. Bayesian posterior computation algorithms tend to become intractable. For example, MCMC typically has computational complexity \(\Theta(NT)\) to obtain \(T\) draws, since \(\sum_{n}f_{n}(\theta)\) (and often its gradient) needs to be evaluated at each step. In order to reduce this \(\Theta(NT)\) cost, _Bayesian coresets_[59] replace the target with a surrogate density \[\pi_{w}(\theta)=\frac{1}{Z(w)}\exp\left(\sum_{n=1}^{N}w_{n}f_{n}(\theta) \right)\pi_{0}(\theta), \tag{19}\] where \(w\in\mathbb{R}^{N}\), \(w\geq 0\) are a set of weights, and \(Z(w)\) is the new normalizing constant. If \(w\) has at most \(M\ll N\) nonzeros, the \(\Theta(M)\) cost of evaluating \(\sum_{n}w_{n}f_{n}(\theta)\) (and its gradient) is a significant improvement upon the original \(\Theta(N)\) cost. The goal is then to develop an algorithm for coreset construction--i.e., selecting the weights \(w\)--that:

1. produces a small coreset with \(M\ll N\), so that computation with \(\pi_{w}\) is efficient;
2. produces a high-quality coreset with \(\pi_{w}\approx\pi\), so that draws from \(\pi_{w}\) are similar to those from \(\pi\); and
3. runs quickly, so that building the coreset is actually worth the effort for subsequent fast draws from \(\pi_{w}\).

These three desiderata are in tension with one another. The smaller a coreset is, the more "compressed" the data set becomes, and hence the worse the approximation \(\pi_{w}\approx\pi\) tends to be. Similarly, the more efficient the construction algorithm is, the less likely we are to find an optimal balance of coreset size and quality with guarantees.

#### 3.1.2 Approaches to coreset construction

There are three high-level strategies that have been used in the literature to construct Bayesian coresets.

_Subsampling_ The baseline method is to uniformly randomly pick a subset \(\mathcal{I}\subseteq\{1,\ldots,N\}\) of \(|\mathcal{I}|=M\) data points and give each a weight of \(N/M\), i.e., \[w_{n}=\frac{N}{M}\quad\text{if }n\in\mathcal{I},\quad w_{n}=0\text{ otherwise}, \tag{20}\] resulting in the unbiased potential function approximation \[\sum_{n=1}^{N}f_{n}(\theta)\approx N\left(\frac{1}{M}\sum_{m\in\mathcal{I}}f_ {m}(\theta)\right). \tag{21}\] This method is simple and fast, but typically generates poor posterior approximations. Constructing the subset by selecting data with nonuniform probabilities does not improve results significantly [59]. Empirical and theoretical results hint that in order to maintain a bounded approximation error, the subsampled coreset must grow in size proportional to \(N\), making it a poor candidate for efficient large-scale inference. Coresets therefore generally require more careful optimization.
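As a point of reference, the subsampling baseline of Eqs. (20)-(21) is only a few lines of code; the sketch below builds the weight vector and the corresponding coreset potential for a Bayesian logistic regression likelihood. The model choice and the function names are ours, purely for illustration.

```python
import numpy as np

def uniform_coreset_weights(N, M, rng):
    """Uniformly subsample M of N points and weight each by N/M, Eq. (20)."""
    w = np.zeros(N)
    idx = rng.choice(N, size=M, replace=False)
    w[idx] = N / M
    return w, idx

def log_potential(theta, X, y):
    """Per-datum log-likelihoods f_n(theta) for logistic regression (illustrative model)."""
    logits = X @ theta
    return y * logits - np.log1p(np.exp(logits))    # log p(y_n | x_n, theta), y_n in {0,1}

def coreset_potential(theta, X, y, w, idx):
    """Weighted approximation of sum_n f_n(theta), Eq. (21): only M terms are evaluated."""
    return np.dot(w[idx], log_potential(theta, X[idx], y[idx]))

# Example: rng = np.random.default_rng(0); w, idx = uniform_coreset_weights(10_000, 200, rng)
# The coreset posterior pi_w of Eq. (19) then uses coreset_potential in place of the full sum.
```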
_Sparse regression_ One can formulate coreset construction as a sparse regression problem [19, 18, 175], \[w^{\star}=\operatorname*{arg\,min}_{w\in\mathbb{R}_{+}^{N}}\left\|\sum_{n=1}^ {N}f_{n}-\sum_{n=1}^{N}w_{n}f_{n}\right\|^{2}\quad\text{s.t.}\quad\|w\|_{0} \leq M,\] where \(\|\cdot\|\) is some functional (semi)norm, and \(\|w\|_{0}\) is the number of nonzero entries in \(w\). This optimization problem can be solved using iterative greedy optimization strategies that provably, and empirically, provide a significant improvement in coreset quality over subsampling methods [19, 18, 175]. However, this approach requires the user to design--and tends to be quite sensitive to--the (semi)norm \(\|\cdot\|\), and so is not easy to use for the general practitioner. The (semi)norm also typically cannot be evaluated exactly, resulting in the need for Monte Carlo approximations with error that can dominate any improvement from more careful optimization.

_Variational inference_ Current state-of-the-art research formulates the coreset construction problem as variational inference in the family of coresets [17], \[w^{\star}=\operatorname*{arg\,min}_{w\in\mathbb{R}_{+}^{N}}\operatorname{D}_{ \mathrm{KL}}\left(\pi_{w}\|\pi\right)\quad\text{s.t.}\quad\|w\|_{0}\leq M. \tag{22}\] Unlike the sparse regression formulation, this optimization problem does not require expert user input. However, it is not straightforward to evaluate the KL objective, \[\log Z-\log Z(w)+\sum_{n=1}^{N}(w_{n}-1)\int\pi_{w}(\theta)f_{n}(\theta) \mathrm{d}\theta, \tag{23}\] even up to a constant in \(w\). The difficulty arises because Eq. (23) involves both the unknown normalization constant \(Z(w)\) and an expectation under \(\pi_{w}\), from which we cannot in general obtain exact draws. This is unlike a typical variational inference problem, where the normalization of the variational density is known and obtaining draws is straightforward. Current research on coreset construction is generally focused on addressing these issues; this is an active area of work, and a number of good solutions have been found [17, 92, 60, 105, 23, 91].

### Notable recent advances

The literature on Bayesian coresets is still in its early stages, and the field is developing quickly. We highlight some key recent developments here.

_Coreset data point selection_ Optimization-based coreset construction methods have tended to take a "one-at-a-time" greedy selection strategy for building a coreset, thus requiring a slow, difficult-to-tune inner-outer loop [19, 17]. Recent work [23, 105, 60] demonstrates that coresets can be built without sacrificing quality by first uniformly subsampling the data set to select coreset points, followed by batch optimization of the weights. This is both significantly simpler and faster than past one-at-a-time selection approaches, while providing theoretical guarantees: for models with a strongly log-concave or exponential family likelihood, after subsampling, the KL divergence of the _optimally-weighted_ coreset posterior converges to 0 as \(N\to\infty\) as long as the coreset size \(M\gtrsim\log N\)[105]. This guarantee does not say anything about whether one can _find_ the optimal weights, but just that selecting coreset data points by subsampling does not limit achievable quality.
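One concrete way to instantiate this subsample-then-optimize recipe is to fix the coreset support by uniform subsampling and then fit the weights with the sparse-regression objective above, using potential evaluations at a small set of parameter draws to define a finite-dimensional surrogate for the (semi)norm. The sketch below does this with a nonnegative least-squares solve; it is an illustration of the general idea rather than any of the cited constructions, and the parameter draws are assumed to come from a cheap approximate posterior (e.g., a Laplace approximation).

```python
import numpy as np
from scipy.optimize import nnls

def fit_coreset_weights(F, idx):
    """
    F: (S, N) matrix of potential evaluations, F[s, n] = f_n(theta_s), at S parameter
    draws from a cheap approximate posterior. idx: indices chosen by uniform subsampling.
    Minimizes || sum_n f_n - sum_{m in idx} w_m f_m ||^2 in this empirical, finite-dimensional
    norm, subject to w >= 0, i.e. the sparse-regression objective with a fixed support.
    """
    target = F.sum(axis=1)                 # sum_n f_n(theta_s) for each draw s
    w_sub, _ = nnls(F[:, idx], target)     # nonnegative least squares over the selected columns
    w = np.zeros(F.shape[1])
    w[idx] = w_sub
    return w

# Usage sketch: build F from the per-datum log-likelihoods of the previous example and
# take idx from uniform_coreset_weights(...); the result defines pi_w as in Eq. (19).
```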
Optimizing the KL divergenceGiven a selection of coreset points, there remains the problem of optimizing the KL objective over the coreset weights \(w\); this is challenging because one cannot obtain exact draws from \(\pi_{w}\), or compute its normalization constant. It is possible to use MCMC to draw from \(\pi_{w}\), and to circumvent the normalization constant issue by noting that derivatives are available via moments of the potential functions under \(\pi_{w}\), e.g., \[\frac{\partial}{\partial w_{n}}\operatorname{D}_{\mathrm{KL}}\left(\pi_{w}\| \pi\right)=-\operatorname{Cov}_{w}\left[f_{n}(\theta),\sum_{i=1}^{N}(1-w_{i}) f_{i}(\theta)\right], \tag{24}\] where \(\operatorname{Cov}_{w}\) denotes covariance under \(\pi_{w}\)[17, 105]. The key difficulty of this approach is that it requires tuning the MCMC method at each optimization iteration, as the weights \(w\) (and hence the target \(\pi_{w}\)) are changing. Second order methods reduce the number of optimization iterations required significantly [105], and hence the challenge posed by needing to tune MCMC. Another promising approach is to use a surrogate variational family that is parametrized by the coreset weights \(w\) but enables tractable draws and exact normalization constant evaluation [23, 60, 91]. For example, Chen, Xu and Campbell [23] propose using a variational surrogate family \(q_{w}\) such that for all \(w\), \(q_{w}\approx\pi_{w}\), and then optimizing the surrogate objective function \[w^{\star}=\operatorname*{arg\,min}_{w}\operatorname{D}_{\mathrm{KL}}\left(q_{w }\|\pi\right). \tag{25}\] Chen, Xu and Campbell [23] set \(q_{w}\) to be a normalizing flow based on sparse Hamiltonian dynamics targeting \(\pi_{w}\). Concurrent work by Jankowiak and Phan [60] proposes a similar idea, but based on variational annealed importance sampling [141] as opposed to normalizing flows. In either case, the optimization problem is then just a standard KL minimization over parameters \(w\). Manousakas, Ritter and Karaletsos [91], in contrast, propose using a generic variational family \(q_{\lambda}\) parametrized by some auxiliary parameter \(\lambda\) to take draws, and adds an additional penalty to the optimization objective to tune \(q_{\lambda}\) to approximate \(\pi_{w}\): \[w^{\star},\lambda^{\star}=\operatorname*{arg\,min}_{w,\lambda}\operatorname{D }_{\mathrm{KL}}\left(\pi_{w}\|\pi\right)+\operatorname{D}_{\mathrm{KL}}\left(q _{\lambda}\|\pi_{w}\right). \tag{26}\] The unknown normalization constant on \(\pi_{w}\) cancels in the two KL divergence terms, and the \(\operatorname{D}_{\mathrm{KL}}\left(\pi_{w}\|\pi\right)\) term is estimated using self-normalized importance sampling based on draws from \(q_{\lambda}\) (which should be close to \(\pi_{w}\), ideally, due to the additional penalty term). Manousakas, Ritter and Karaletsos [91] use a diagonal-covariance Gaussian family for \(q_{\lambda}\), and use an inner-outer loop optimization method in which the inner loop optimizes \(\lambda\) to help ensure that \(q_{\lambda}\) remains close to \(\pi_{w}\). These two approaches are strongly connected. Consider the optimal auxiliary parameter \[\lambda^{\star}(w)=\operatorname*{arg\,min}_{\lambda}\operatorname{D}_{ \mathrm{KL}}\left(q_{\lambda}\|\pi_{w}\right), \tag{27}\] and assume that the family \(q_{\lambda}\) is flexible enough such that \(q_{\lambda^{*}(w)}=\pi_{w}\) for all \(w\). 
Then the two approaches are equivalent if we define \(q_{w}=q_{\lambda^{\star}(w)}\): \[\mathrm{D}_{\mathrm{KL}}\left(\pi_{w}\|\pi\right)+\mathrm{D}_{\mathrm{KL}}\left(q_{\lambda^{\star}(w)}\|\pi_{w}\right)=\mathrm{D}_{\mathrm{KL}}\left(q_{w}\|\pi\right). \tag{28}\] The advantage of using a generic family \(q_{\lambda}\) is that it is much easier (and more flexible) than being forced to design a family \(q_{w}\) satisfying \(q_{w}\approx\pi_{w}\). But self-normalized importance sampling is well-known to often work poorly [22] even when the reverse KL divergence is small, and we still need to take draws from \(\pi_{w}\) once the coreset is built. The approach of directly designing \(q_{w}\) requires more up-front effort, but the optimization is well-behaved, and one can obtain i.i.d. draws directly from \(q_{w}\) afterward. The tradeoff between the three current state-of-the-art approaches--second-order methods with draws from \(\pi_{w}\) using MCMC [105], direct surrogate variational methods with \(q_{w}\approx\pi_{w}\) [23], and parametrized surrogate variational methods using \(q_{\lambda}\approx\pi_{w}\) [91]--has not yet been explored empirically, and is an open direction for future research.

_Optimization guarantees_ Although variational inference in general is nonconvex, the coreset variational inference problem Eq. (22) facilitates guarantees. In particular, Naik, Rousseau and Campbell [105] obtain geometric convergence to a point near the optimal coreset via a quasi-Newton optimization scheme: \[\|w_{k}-w_{k}^{\star}\|\leq\eta^{k}\|w_{0}-w_{0}^{\star}\|+C, \tag{29}\] where \(w_{k}\) is the \(k^{\text{th}}\) iterate, and \(w_{k}^{\star}\) is its projection onto a subset of optimal coreset weights (the optimum may not be unique). The constants \(\eta\) and \(C\) are related to how good of an approximation the _optimal_ coreset is. If the optimal coreset is exact, then \(0<\eta<1\) and \(C=0\).

### Open questions and future directions

Recent advances in coreset construction methods and theory have paved the way for a variety of new developments. In this section we highlight important open problems and areas for investigation.

_Complex model structure, data, and symmetry_ The coresets methodology and theory are now starting to coalesce for the basic model setup in Eq. (18) with a finite-dimensional parameter and conditionally i.i.d. data. Many popular models do not fit into this framework, such as certain network models [58], continuous time Markov chains [6], etc. Some of these models involve computational cost that scales poorly in \(N\)--e.g., Gaussian process regression with \(O(N^{3})\) complexity [167]--and would greatly benefit from a summarization approach. Even some models that technically fit in the framework of Eq. (18), such as certain hierarchical models [12], may be better summarized if more of their latent structure is exposed to the coreset construction algorithm. Moving beyond the conditionally i.i.d. data setup, we advocate thinking about this problem as _model and data summarization_, broadly construed, as opposed to just the specific case of coresets. At an abstract level, Bayesian coresets are just one particular example of how one can construct a computationally inexpensive parametrized variational family \(\pi_{w}\) that provably contains (a distribution near) the true posterior \(\pi\).
In general, there is no reason this has to be associated with a sparse, weighted subset of data; we could, e.g., summarize networks with subgraphs [118], summarize high-dimensional data with low-dimensional sketches [89], summarize expensive, complicated neural network structures with simpler ones [117], summarize expensive matrices with low-rank randomized approximations [168], etc. The major question to answer is: _What is the natural extension of coresets, or summarization more broadly, to more sophisticated models beyond Eq. (18)? Is there a common underlying principle, or is efficient summarization a problem that must be solved in a case-by-case manner?_ We believe that the key to answering these questions is to understand the connections between Bayesian coresets, subsampling, probabilistic symmetries, and sufficiency in statistical models; see, e.g., [29, 74, 119]. Indeed, the fact that Bayesian coresets work at all is a reflection of the fact that one can use a small subset of data potentials as "approximately sufficient statistics," combined with the symmetry of their generating process. Assuming a fruitful connection is made, we expect that current Bayesian coreset construction methods--which are based on subsampling to select a "dictionary" of potentials, followed by optimization to tune the approximation--will serve as a good template in more general models.

_Improved surrogates and optimization_ Early Bayesian coresets literature [59, 18, 17, 19] suffered from the requirement of taking draws from \(\pi_{w}\) both during and after construction. Sampling _during_ construction poses a particular challenge: if one intends to use MCMC to take draws from \(\pi_{w}\), one needs to continually adapt the MCMC kernel to a changing target \(\pi_{w}\) as the weights \(w\) are refined. Recent developments discussed in Section 3.2 suggest that an easier way to approach the problem is to construct a tractable variational family \(q_{w}\) such that \(q_{w}\approx\pi_{w}\) for all weights \(w\)--whether that is a normalizing flow [23], a variational annealed importance distribution [60], or an optimized parametric surrogate [91]--and then to tune the weights \(w\) so that \(q_{w}\approx\pi\). The benefit of this approach is the ability to take exact i.i.d. draws and evaluate the density, which circumvents challenges of adaptive in-the-loop MCMC tuning. This leads to the following question: _How should we construct a tractable, summarization-based variational family such that \(q_{w}\approx\pi_{w}\) for all \(w\)?_ For methods based on parametric surrogates [91] that set \(q_{w}=q_{\lambda^{\star}(w)}\), where \(\lambda^{\star}(w)=\arg\min_{\lambda}\mathrm{D}_{\mathrm{KL}}\left(q_{\lambda}\|\pi_{w}\right)\), there are two major avenues for improvement. The first--and more likely achievable--goal is in the optimization of the parametric surrogate. In particular, the methodology currently involves slow inner-loop optimization of the surrogate, as well as potentially high-variance gradient estimates based on self-normalized importance sampling. Handling these two issues would be a major step forward for this approach. The second important area for future work--which may be far more challenging--is to provide theoretical guarantees on the quality of the coreset that is constructed using this method. The primary difficulty is that the surrogate optimization is as hard to analyze as other generic variational inference problems.
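For concreteness, whichever scheme supplies the (approximate) draws from \(\pi_{w}\), the weight gradient in Eq. (24) reduces to a sample covariance over those draws. The sketch below illustrates this; the matrix `F` of potential evaluations \(f_{n}(\theta_{s})\) is assumed to be supplied by the user (e.g., computed from MCMC or surrogate draws), and the projected gradient step at the end is purely illustrative.

```python
import numpy as np

def kl_weight_gradient(F, w):
    """Monte Carlo estimate of Eq. (24): d/dw_n KL(pi_w || pi).

    F : (S, N) array with F[s, n] = f_n(theta_s), where theta_1..theta_S are
        (approximate) draws from pi_w (how they are produced is left abstract).
    w : (N,) array of current coreset weights (zeros off the coreset).
    Returns the (N,) estimate of -Cov_w[f_n, sum_i (1 - w_i) f_i].
    """
    S = F.shape[0]
    Fc = F - F.mean(axis=0)                  # center each potential
    r = F @ (1.0 - w)                        # residual potential sum at each draw
    rc = r - r.mean()
    return -(Fc.T @ rc) / (S - 1)            # one sample covariance per weight

# Purely synthetic illustration of a single projected gradient step.
rng = np.random.default_rng(1)
S, N = 500, 200
F = rng.normal(size=(S, N))                  # stand-in for real potential evaluations
w = np.zeros(N)
coreset = rng.choice(N, 20, replace=False)
w[coreset] = N / 20.0
grad = kl_weight_gradient(F, w)
w[coreset] = np.maximum(w[coreset] - 0.1 * grad[coreset], 0.0)  # step size is arbitrary
```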
For methods based on direct surrogates [60, 23] where \(q_{w}\approx\pi_{w}\) for all \(w\), there are again two major areas for improvement. First, current methods involve Hamiltonian dynamics, and so are limited in scope to models with multidimensional real-valued variables; future work should extend these methods to models with a wider class of latent variables. The second area is once again to obtain rigorous theoretical guarantees on the quality of the surrogate. This is likely to be much easier than in the general parametric surrogate case above, as \(q_{w}\) is designed to approximate \(\pi_{w}\) directly, as opposed to just being a stationary point of a nonconvex optimization problem. _Privacy, pseudo-data, and distributed learning_ Distributed (or federated) learning is a task in which data are stored in separate data centers, and the goal is to perform a global inference given all the data under the constraint that the data are not transmitted between centers. Both exact [27, 21] and approximate [143, 14] methods exist to perform Bayesian inference in this setting. A common additional constraint is that the data within each center are kept private, in some sense, from the other centers. Coresets provide a potentially very simple solution to the distributed learning problem (both standard and privacy-preserving). In particular, coresets are often _composable_: if one builds subcoresets (independently and without communication) for subsets of a data set, one can combine these trivially to obtain a coreset for the full data set [36]. Coresets have also been extended to the privacy-aware setting, where one either trains pseudopoints with a differentially private scheme [92] or appropriately noises the coreset before sharing [38]. Subsequently, the data centers can share their privatized summaries freely with one another, or with a centralized repository that performs inference. There is some initial work on distributed Bayesian coresets constructed via sparse regression techniques [19, Section 4.3], but this work was done prior to the advent of modern construction methods. Beyond this, there is no study in the literature dedicated to theory and methods for distributed Bayesian coresets, either privacy-preserving or otherwise. _How do we leverage recent advances in coreset construction to efficiently construct differentially-private Bayesian coresets suitable for distributed learning problems? What theoretical guarantees on communication cost and coreset quality are possible?_ _Amortized and minimax coreset construction_ Bayesian coresets are currently constructed in a model-specific manner by minimizing the KL objective in Eq. (22). In situations where multiple models are under consideration--in exploratory analysis or sensitivity analysis, for example--one would need to re-tune the coreset weights for each model under consideration. Given that these re-tuning problems all involve the same data, they should be closely related; but it is currently an open question how to construct multiple related coresets efficiently. In particular: _How generalizable are coresets? Is there a way to construct one optimized coreset that is appropriate for multiple models? Is there a way to amortize the cost of constructing multiple coresets for multiple models?_ One potential direction of future work is to formulate a minimax optimization problem that is similar to Eq. (22), but where there is an outer maximization over a set of candidate models. 
A major question along these lines is whether it is actually possible to summarize a data set with a single coreset of \(M\ll N\) data points such that the coreset provides a reasonable approximation for the worst-case model under consideration. Another possible way to tackle the problem is to amortize the cost of multiple coreset construction, in the spirit of _inference compilation_ [75]. Rather than constructing individual coresets, we train a "coreset construction artifact:" a function that takes as input a candidate model and data subsample, and outputs a set of coreset weights. In other words, we _learn how to construct coresets_. The most likely candidate for such an artifact is a recurrent deep neural network, as is commonly used in methods like inference compilation. A major question about this direction to consider is in which data analysis scenarios the cost of building such an artifact is worth the subsequent fast generation of coreset weights.

_High-dimensional data and models_ The coresets approach is designed with a focus on large-scale problems in the sense of the number of data points, \(N\). But in practice, modern large-scale problems tend to also involve high-dimensional data and latent model parameters; the dimension may even grow with \(N\). Empirical results have shown that coresets can be effective in problems with \(10\)-\(100\)-dimensional data and parameters, while _pseudocoresets_ [92, 91]--which involve summarizing data with synthetic pseudodata points--have been used successfully on larger problems with \(60{,}000\)-dimensional parameters and \(800\)-dimensional data. But results in this domain are limited, which leads to the following questions for future work: _When do we expect the coresets approach to work with high-dimensional data and high-dimensional model parameters in general? Is there any modification to the (pseudo)coresets approach required to achieve rigorous guarantees in this setting? How does the difficulty of the coreset weight optimization scale in high dimensions?_

We begin with a negative (albeit pathological) example. When a large fraction of the potential functions \((f_{n})_{n=1}^{N}\) encode unique information in the posterior, the coresets approach breaks down; it is not possible to maintain a good posterior approximation upon removing potentials. Manousakas et al. [92, Proposition 1] make this intuition precise with a simple example. In a \(d\)-dimensional Gaussian location model with prior \(\theta\sim\mathcal{N}(0,I)\), likelihood \(\mathcal{N}(\theta,I)\), and data generated via \(X_{n}\stackrel{\text{i.i.d.}}{\sim}\mathcal{N}(0,I)\), the _optimal_ coreset of any size \(M<d\) satisfies \[\mathrm{D}_{\mathrm{KL}}\left(\pi_{w^{*}}\|\pi\right)\gtrsim d\quad\text{as}\quad d\to\infty, \tag{30}\] with high probability.\(^{1}\) In some sense, this is unsurprising; the Gaussian location model with large \(d\), despite its mathematical simplicity, is a worst-case scenario for data summarization, as one needs at least \(d\) potential functions \(f_{n}\) to span a \(d\)-dimensional space.

Footnote 1: The result by Manousakas et al. [92] is stated in terms of the inverse CDF of a \(\chi^{2}\) distribution with \(d-M\) degrees of freedom. The \(\Omega(d)\) lower bound follows directly by noting that \(X\sim\chi^{2}(d-M)\) implies \[\frac{X-(d-M)}{\sqrt{2(d-M)}}\stackrel{d}{\to}\mathcal{N}(0,1)\qquad\text{as }d\to\infty. \tag{31}\]
But in practice, high-dimensional data do not typically exhibit this worst-case behaviour; they often instead exhibit some simpler, lower-dimensional structure. Developing (pseudo)coreset methods that take advantage of that structure is a key step needed to make summarization a worthwhile approach in large-scale modern problems. Furthermore, assuming that the coreset size should generally increase with dimension, additional work is needed to understand how the difficulty of the stochastic weight optimization scales. It is worth investigating whether the recently developed literature on data distillation in deep learning [165] contains any insights applicable to the Bayesian setting.

_Improved automation and accessibility_ Recent advances in research have, for the first time, made coresets a practical approach to efficient Bayesian computation. However, there is still much work to do to make their use possible by nonexperts. First and foremost, there is a need to develop a general, well-engineered code base that interfaces with common probabilistic programming libraries like Stan and Turing [41, 20]. In addition, there is a need for automated methods to (a) select coreset weight optimization tuning parameters, (b) select coreset size, and (c) assess and summarize the quality of the coreset.

_Other divergences_ Currently, all variational coreset construction approaches optimize the reverse Kullback-Leibler divergence. A straightforward direction for future work would be to investigate the effect of using alternative divergences, e.g. the Rényi divergence [80] or \(\chi^{2}\) divergence [30], in Eq. (22). These will all likely pose similar issues with the unknown normalization constant \(Z(w)\), but divergences other than the reverse KL may provide coresets with distinct statistical properties.

## 4 Distributed Bayesian Inference

Distributed methods for Bayesian inference address the challenges posed by massive data using a divide-and-conquer technique. They exploit distributed computing to reduce the time complexity of Monte Carlo algorithms that require multiple sweeps through the data in every iteration. During the last decade, three main groups of distributed methods have been developed for Bayesian inference. The first class of methods is the simplest and has three steps: divide the data into disjoint subsets and store them across multiple machines, run a Monte Carlo algorithm in parallel on all the machines, and combine parameter draws from all the subsets on a central machine. The last step requires one round of communication, so these approaches belong to the class of _one-shot learning_ methods [161, 99, 107, 149, 164, 144, 100, 109, 142, 48, 27, 171, 65, 170, 98, 49, 50, 96, 28]. They are based on a key insight that the subset parameter draws provide a noisy approximation of the true posterior distribution and differ mainly in their combination schemes. The second class of methods relies on distributed extensions of stochastic gradient MCMC [4, 72, 24, 35], which are typically based on stochastic gradient Langevin dynamics (SGLD) [166, 87]. They also split the data into subsets but have several rounds of communication among the machines. In every iteration, they select a subset with a certain probability, draw the parameter using a modified SGLD update, and communicate the parameter draw to the central machine. The high variance of the stochastic gradients and high communication costs have motivated the development of the third set of methods [16, 127].
They are stochastic extensions of global consensus methods for distributed optimization, such as the Alternating Direction Method of Multipliers (ADMM) [123, 13]. They divide the data into subsets, store them on machines, and augment the posterior density with auxiliary variables. These variables are conditionally independent given the parameter, and the parameter's marginal distribution reduces to the target under certain limiting assumptions. The former assumption is crucial for drawing the auxiliary variables in parallel, whereas the latter condition ensures asymptotic accuracy. Every iteration consists of synchronous updates where the machines storing the data draw the auxiliary variables and send them to the central machine that uses them to draw the parameter [154, 136, 155, 127, 156].

Distributed Bayesian methods have three main advantages. First, most of them are algorithm-agnostic and are easily used with any Monte Carlo algorithm. Second, distributed methods come with asymptotic guarantees about their accuracy. Such results show that approximated and target posterior distributions are asymptotically equivalent under mild regularity assumptions. Finally, they are easily extended to handle application-specific constraints, such as clustering of samples in nonparametric models [111] and privacy-preserving federated learning [67]. We cover the basics of distributed Bayesian inference and recent advances in Section 4.1-Section 4.3, and discuss future research directions in Section 4.4.

### One-shot learning

We provide a brief overview of one-shot learning approaches for distributed Bayesian inference. There is a rich variety of such algorithms available in the literature. We start with the most common setup that assumes the observations are conditionally independent given the parameter, leading to a product form for the likelihood. Let \(Y_{1}^{N}=(Y_{1},\ldots,Y_{N})\) denote the observed data. The model is specified using the distribution \(\mathbb{P}_{\theta}\) with density \(p(y\mid\theta)\) and \(p\)-dimensional parameter \(\theta\in\Theta\subseteq\mathbb{R}^{p}\). Assume that \(Y_{1}^{N}\) are randomly partitioned into \(K\) disjoint subsets. Let \(Y_{(j)}=\{Y_{(j)1},\ldots,Y_{(j)M}\}\) be the \(j\)th subset (\(j=1,\ldots,K\)), where we have assumed that all the subset sample sizes equal \(M\) for simplicity. The true and subset \(j\) likelihoods are \(\ell_{N}(\theta)=\prod_{i=1}^{N}p(Y_{i}\mid\theta)\) and \(\ell_{jM}(\theta)=\prod_{i=1}^{M}p(Y_{(j)i}\mid\theta)\). Let \(\Pi\) be a prior distribution on \(\Theta\) with density \(\pi(\theta)\). Then, the posterior density of \(\theta\) given \(Y_{1}^{N}\) is \(\pi_{N}(\theta\mid Y_{1}^{N})=\ell_{N}(\theta)\pi(\theta)/C_{N}\), where \(C_{N}=\int_{\Theta}\ell_{N}(\theta)\pi(\theta)d\theta\) is assumed to be finite.

_Consensus Monte Carlo (CMC) and its generalizations_ These methods exploit the observation that the full data posterior can be factored as a product of subset posteriors with tempered priors [144]: \[\pi_{N}(\theta\mid Y_{1}^{N})=C_{N}^{-1}\prod_{j=1}^{K}\{\pi(\theta)\}^{1/K}\ell_{jM}(\theta)\propto\prod_{j=1}^{K}\pi_{M}(\theta\mid Y_{(j)})\equiv\prod_{j=1}^{K}\pi_{j}(\theta). \tag{32}\] Here \(\pi_{M}(\theta\mid Y_{(j)})\) (or \(\pi_{j}(\theta)\)) is the \(j\)th subset posterior density of \(\theta\) computed using the likelihood \(\ell_{jM}(\theta)\) and the prior \(\{\pi(\theta)\}^{1/K}\).
Let \(\theta_{(j)t}\) be the parameter draws obtained from \(\pi_{j}(\theta)\) using a Monte Carlo algorithm (\(j=1,\ldots,K\); \(t=1,\ldots,T\)) and \(\tilde{\pi}_{j}(\theta)\) be an estimate of \(\pi_{j}(\theta)\) obtained using the \(\theta_{(j)t}\)s. Then, \(\prod_{j=1}^{K}\tilde{\pi}_{j}(\theta)\) is proportional to an estimate of \(\pi_{N}(\theta\mid Y_{1}^{N})\). In the special case that the \(\pi_{j}(\theta)\)s are Gaussian, then so is \(\pi_{N}(\theta\mid Y_{1}^{N})\), and weighted averages of the \(\theta_{(j)t}\)s correspond to draws from \(\pi_{N}(\theta\mid Y_{1}^{N})\) [144]. More accurate combination algorithms estimate \(\pi_{j}(\theta)\) using kernel density estimation [107], the Weierstrass transform [161], random partition trees [164], Gaussian process regression [109], and normalizing flows [96], where the last two approaches also use importance sampling to select promising \(\theta_{(j)t}\)s for better approximation accuracy.

_Median and mean posterior distributions_ These methods combine the subset posterior distributions using their geometric center, such as the median and mean posterior distributions. The main difference between them and CMC-type approaches is the definition of subset posterior densities. Specifically, the \(j\)th subset posterior density is \[\pi_{M}(\theta\mid Y_{(j)})=C_{M}^{-1}\{\ell_{jM}(\theta)\}^{K}\pi(\theta)\equiv\tilde{\pi}_{j}(\theta), \tag{33}\] where \(C_{M}=\int_{\Theta}\{\ell_{jM}(\theta)\}^{K}\pi(\theta)d\theta\) is assumed to be finite for posterior propriety. The pseudo-likelihood \(\{\ell_{jM}(\theta)\}^{K}\) in (33) is the likelihood of a pseudo sample resulting from replicating every sample in the \(j\)th subset \(K\) times [99]. This pseudo-likelihood ensures the posterior variances of the subset and true posterior densities are calibrated up to \(o_{P}(N^{-1})\) terms [148, 79, 100]. Similar to the previous methods, the \(\theta_{(j)t}\)s are drawn in parallel from the \(\tilde{\pi}_{j}(\theta)\)s using any Monte Carlo algorithm. Let \(\tilde{\Pi}_{j}\) be the \(j\)th subset posterior distribution with density \(\tilde{\pi}_{j}(\theta)\). Then, its empirical approximation supported on the \(\theta_{(j)t}\)s is \(\hat{\Pi}_{j}=T^{-1}\sum_{t=1}^{T}\delta_{\theta_{(j)t}}(\cdot)\), where \(\delta_{\theta}(\cdot)\) is the delta measure supported on \(\theta\). The median and mean posterior distributions are approximated using empirical measures \(\hat{\Pi}^{*}\) and \(\hat{\overline{\Pi}}\) that are supported on the \(\theta_{(j)t}\)s. The weights of the \(\theta_{(j)t}\)s are estimated via optimization such that \(\sum_{j=1}^{K}\mathsf{d}(\hat{\Pi}^{*},\hat{\Pi}_{j})\) and \(\sum_{j=1}^{K}\mathsf{d}^{2}(\hat{\overline{\Pi}},\hat{\Pi}_{j})\) are minimized, respectively, where \(\mathsf{d}\) is a metric on probability measures [149, 99]. If \(\theta\) is one dimensional and \(\mathsf{d}\) is the \(2\)-Wasserstein distance, then the \(\alpha\)th quantile of the mean posterior equals the average of the \(\alpha\)th quantiles of the \(K\) subset posteriors [79].

_Mixture of recentered subset posteriors_ The final combination algorithm uses a \(K\)-component mixture of recentered subset posterior densities in (33). Let \(\overline{\theta}_{(j)}\) be the mean of \(\pi_{M}(\theta\mid Y_{(j)})\) and \(\overline{\theta}=\sum_{j=1}^{K}\overline{\theta}_{(j)}/K\).
Then, the distributed posterior distribution with density \[\tilde{\pi}(\theta\mid Y_{1}^{N})=\sum_{j=1}^{K}\frac{1}{K}\tilde{\pi}_{j}(\theta-\overline{\theta}+\overline{\theta}_{(j)}) \tag{34}\] approximates \(\pi_{N}(\theta\mid Y_{1}^{N})\), where \(\tilde{\pi}_{j}\) is defined in (33) [170, 171]. To generate draws from \(\tilde{\pi}(\theta\mid Y_{1}^{N})\) in (34), we obtain the empirical approximation of the distributed posterior \(\tilde{\Pi}\) with density \(\tilde{\pi}(\theta\mid Y_{1}^{N})\) as \[\hat{\Pi}^{\text{mix}}=\sum_{j=1}^{K}\sum_{t=1}^{T}\frac{1}{KT}\delta_{\hat{\theta}+\theta_{(j)t}-\hat{\theta}_{j}}(\cdot), \tag{35}\] where \(\hat{\theta}_{j}=\sum_{t=1}^{T}\theta_{(j)t}/T\) and \(\hat{\theta}=\sum_{j=1}^{K}\hat{\theta}_{j}/K\). The \(K\)-mixture \(\hat{\Pi}^{\text{mix}}\) and the geometric centers \(\hat{\Pi}^{*},\hat{\overline{\Pi}}\) are similar in that the atoms of the empirical measures are transformations of the subset posterior draws. The main difference between them lies in their approach to estimating the weights of the atoms. All the atoms of \(\hat{\Pi}^{\text{mix}}\) have equal weights (i.e., \((KT)^{-1}\)), whereas the atom weights of \(\hat{\Pi}^{*}\) and \(\hat{\overline{\Pi}}\) are non-uniform and estimated via an optimization algorithm.

_Asymptotics_ The large sample properties of the posterior estimated in one-shot learning, denoted as \(\Pi_{\text{D},N}\), are justified via a Bernstein-von Mises (BvM) theorem; however, these results are only known for the last two methods and not for the CMC-type approaches [79, 100]. A BvM for \(\Pi_{\text{D},N}\) shows that it is asymptotically normal under mild assumptions as \(K\) and \(N\) tend to infinity. The center of the limiting distribution is specific to the combination algorithm, but the asymptotic covariance matrix equals \(I_{0}/N\), where \(I_{0}\) is the Fisher information matrix computed using \(Y\sim\mathbb{P}_{\theta_{0}}\). This shows that the asymptotic covariances of the true and distributed posteriors are calibrated up to \(o_{P}(N^{-1})\) terms. Under these assumptions, \[\left\|\Pi_{\text{D},N}(\cdot\,|\,Y_{1}^{N})-\Pi_{N}(\cdot\,|\,Y_{1}^{N})\right\|_{\text{TV}}\leq\|\tilde{\theta}-\hat{\theta}\|_{2} \tag{36}\] ignoring \(o_{P}(N^{-1/2})\) terms, where \(\|\cdot\|_{\text{TV}}\) is the total variation distance, \(\hat{\theta}\) is the maximum likelihood estimate (MLE) of \(\theta\) computed using \(Y_{1}^{N}\), and \(\tilde{\theta}\) is a center of the \(K\) subset MLEs of \(\theta\): \(\hat{\theta}_{1},\ldots,\hat{\theta}_{K}\). The subset MLEs satisfy \(\|\hat{\theta}_{j}-\theta_{0}\|_{2}=o_{P}(M^{-1/2})\), so \(\|\tilde{\theta}-\theta_{0}\|_{2}=o_{P}(M^{-1/2})\) because \(\tilde{\theta}\) is a center of the subset MLEs. Furthermore, \(\|\hat{\theta}-\theta_{0}\|_{2}=o_{P}(N^{-1/2})\), and combining this with the previous result implies that \(\|\tilde{\theta}-\hat{\theta}\|_{2}=o_{P}(M^{-1/2})\), which does not scale in \(K\). This shows that the bias of \(\Pi_{\text{D},N}\) in approximating \(\Pi_{N}\) does not decrease as \(K\) increases, and that \(K\) does not generally impact the approximation accuracy of \(\Pi_{\text{D},N}\), unless \(\tilde{\theta}\) is a root-\(N\) consistent estimator of \(\theta_{0}\).

_Notable recent advances_ One-shot learning, except for CMC-type methods, has been generalized to dependent data. In time series data, smaller blocks of consecutive observations form the subsets and preserve the ordering of samples. A measure of dependence, such as the mixing coefficient, dictates the choice of \(K\).
The subset pseudo-likelihood in (33) is modified to condition on the immediately preceding time block to model the dependence and raised to a power of \(K\). For one-shot learning in hidden Markov models with mixing coefficient \(\rho\), the distributed posterior estimated using (35) with the modified pseudo-likelihood and \(K=o(\rho^{-M})\) satisfies (36) [162]. These results have been generalized to a broader class of models, but guidance on the choice of \(K\) remains underexplored [120]. Posterior computations in Gaussian process (GP) regression fail to scale even for moderately large \(N\) [133, 8]. One-shot learning has addressed this challenge but with no theoretical results [99, 149]. The choice of \(K\) here depends on the smoothness of the regression function. Assuming a higher order of smoothness of regression functions guarantees accurate estimation on the subsets for larger values of \(K\). Specifically, if the regression function is infinitely smooth, the predictor lies in \([0,1]\), and \(K=O(N/\log^{2}N)\), then the decay rates of estimation risks for the distributed and true posterior distributions depend only on \(N\) and are asymptotically equivalent. In more general problems where the regression function belongs to the Hölder class of functions on \([0,1]^{D}\) with smoothness index \(\alpha\), the upper bound for \(K\) depends on \(N,D,\) and \(\alpha\) for guaranteeing the optimal decay rate of the estimation risk [49]. These results have been generalized to varying coefficient models [50].

_Limitations_ The main limitation of one-shot learning methods is their reliance on the normality of the subset posterior distributions. Scaling of the parameter draws on the subsets helps in some cases but fails to generalize beyond the family of elliptical posterior distributions [146, 157]. [28] identify three additional problems for one-shot learning. First, subset posteriors fail to capture the support of a multimodal posterior with a high probability. Second, a subset posterior can be substantially biased and fail to be a reasonable approximation of the true posterior, violating another major assumption. Finally, subset posterior draws fail to provide information about the tails of the true posterior, resulting in poor estimates of tail event probabilities. A key observation of [28] is that communication among machines is necessary for improving the approximation accuracy of subset posteriors.

### Distributed stochastic gradient MCMC

Langevin Monte Carlo uses the gradient of \(\log\pi_{N}(\theta\,|\,Y_{1}^{N})\) for generating \(\theta\) proposals in a Metropolis-Hastings sampling scheme [106]. The gradient computation requires cycling through all the samples, which is prohibitively slow for a large \(N\). SGLD bypasses this problem by subsampling a size \(n\) subset \(S_{n}\) of \(\{1,\ldots,N\}\) and proposing \(\theta\) in the \((t+1)\)th iteration given \(\theta_{t}\) using a noisy approximation of the gradient \(g_{N}(\theta)=\nabla\log\pi_{N}(\theta\,|\,Y_{1}^{N})\) as follows: \[\theta_{t+1}=\theta_{t}+\frac{h_{t}}{2}\,\hat{g}_{n}(\theta_{t})+\epsilon_{t},\quad\epsilon_{t}\sim\mathcal{N}(0,h_{t}I),\qquad\hat{g}_{n}(\theta)=\nabla\log\pi(\theta)+\frac{N}{n}\sum_{i\in S_{n}}\nabla\log p(Y_{i}\,|\,\theta), \tag{37}\] where \(\hat{g}_{n}(\theta)\) is a noisy but unbiased estimate of \(g_{N}(\theta)\), in that \(\mathbb{E}[\hat{g}_{n}(\theta)]=g_{N}(\theta)\) for every \(\theta\), with the expectation taken over the random subsample \(S_{n}\).
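In code, the update in Eq. (37) is a single noisy gradient step plus injected Gaussian noise. The sketch below is a minimal illustration; `grad_log_prior` and `grad_log_lik` are hypothetical user-supplied callables, and the Gaussian location example and step-size schedule are arbitrary choices for demonstration.

```python
import numpy as np

def sgld_step(theta, Y, n, h, grad_log_prior, grad_log_lik, rng):
    """One SGLD update, Eq. (37): theta <- theta + (h/2) g_hat + N(0, h I)."""
    N = Y.shape[0]
    idx = rng.choice(N, size=n, replace=False)                 # minibatch S_n
    g_hat = grad_log_prior(theta) + (N / n) * sum(grad_log_lik(theta, Y[i]) for i in idx)
    return theta + 0.5 * h * g_hat + rng.normal(scale=np.sqrt(h), size=theta.shape)

# Illustrative use: Gaussian location model, theta ~ N(0, I), Y_i ~ N(theta, I).
rng = np.random.default_rng(2)
Y = rng.normal(size=(5000, 3)) + 1.0
theta = np.zeros(3)
for t in range(2000):
    h_t = 1e-4 / (1 + t) ** 0.55                               # decaying step size
    theta = sgld_step(theta, Y, n=100, h=h_t,
                      grad_log_prior=lambda th: -th,           # grad log N(0, I)
                      grad_log_lik=lambda th, y: y - th,       # grad log N(th, I)
                      rng=rng)
```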
The step size \(h_{t}\) decreases to 0 such that \(\sum_{t=1}^{\infty}h_{t}=\infty\) and \(\sum_{t=1}^{\infty}h_{t}^{2}<\infty\). The discretization error of the Langevin dynamics is negligible as \(h_{t}\to 0\), so the rejection probability of \(\theta_{t}\) in the Metropolis-Hastings step approaches 0 [166]. In practice, however, \(h_{t}\propto 1/N\) for better mixing and efficiency [16]. This produces a chain \(\{\theta_{t}\}\) that does not have the target as the stationary distribution, but it mimics the true continuous-time Langevin dynamics closely and hence has "approximately" the right target.

The distributed SGLD extension (DSGLD) performs the SGLD update on randomly selected subsets [4]. Let \(p=(p_{1},\ldots,p_{K})\) be a vector of positive probabilities such that \(p_{1}+\ldots+p_{K}=1\), and \(p_{j}\) is the probability of selecting subset \(j\) for the SGLD update. At the \((t+1)\)th iteration, the simple distributed SGLD extension selects a subset \(j_{t}\sim\text{Categorical}(p)\) and defines \[\theta_{t+1}=\theta_{t}+\frac{h_{t}}{2}\,\hat{g}_{mj_{t}}(\theta_{t})+\epsilon_{t},\quad\epsilon_{t}\sim\mathcal{N}(0,h_{t}I),\qquad\hat{g}_{mj_{t}}(\theta)=\nabla\log\pi(\theta)+\frac{M}{p_{j_{t}}m}\sum_{i\in S_{m}}\nabla\log p(Y_{(j_{t})i}\,|\,\theta), \tag{38}\] where \(S_{m}\) is a size \(m\) subsample of \(\{1,\ldots,M\}\). The chain \(\{\theta_{t}\}\) jumps to the next worker selected for the SGLD update, and this process continues until convergence. This scheme is undesirable due to communication overload; therefore, DSGLD samples \(\theta\) using (38) multiple times on the selected subset before the chain \(\{\theta_{t}\}\) jumps to a new subset. Additionally, the communication bottlenecks are minimized by selecting "optimal" workers with minimum wait times before the chain \(\{\theta_{t}\}\) jumps; see Section 3.2 in [4]. The efficiency gains in DSGLD come at the cost of a loss in asymptotic accuracy. The main reason is that the smaller subset sizes imply that the possible subsample combinations on a subset are much smaller than those obtained using the full data in the standard SGLD update. Better gradient surrogates with smaller variance and higher asymptotic accuracy have been developed [24; 35], but the variance of stochastic gradients increases with \(N,K\), and data heterogeneity, resulting in convergence failures [16].

### Asymptotically exact data augmentation

Asymptotically exact data augmentation (AXDA) generalizes DA using stochastic extensions of global consensus optimization algorithms such as ADMM [136; 154; 155]. AXDA has subset-specific auxiliary variables \(z=(z_{1},\ldots,z_{K})\in\prod_{k=1}^{K}\mathbb{R}^{M}\) and tolerance parameter \(\rho\in\mathbb{R}_{+}\), which are similar to the "missing data" in DA and the tolerance parameter in ADMM.
Using the notation in (32), \(z\) is chosen such that the augmented density satisfies \[\pi_{\rho}(\theta,z_{1},\ldots,z_{K}\,|\,Y_{1}^{N})\propto\pi(\theta)\prod_{k=1}^{K}\ell_{k,\rho}(\theta,z_{k}), \tag{39}\] where \(\ell_{k,\rho}(\theta,z_{k})=p_{k}(z_{k},Y_{(k)})\kappa_{\rho}(z_{k},\theta)\), \(\kappa_{\rho}\) is a kernel such that \(\kappa_{\rho}(\cdot,\theta)\) converges weakly to \(\delta_{\theta}(\cdot)\) as \(\rho\to 0\), and \(p_{k}(z_{k},Y_{(k)})\) is such that \(\lim_{\rho\to 0}\int\ell_{k,\rho}(\theta,z_{k})\,dz_{k}=\ell_{kM}(\theta)=\prod_{i=1}^{M}p(Y_{(k)i}\,|\,\theta)\); that is, \(z\) plays the role of missing data and preserves the observed data model as \(\rho\to 0\), justifying that AXDA is asymptotically exact. The advantage of the density in (39) is that the \(z_{k}\)s are conditionally independent given \(\theta\). In every iteration, the \(z_{k}\)s are drawn in parallel on the machines storing the \(Y_{(k)}\)s. These draws are communicated to the central machine that uses them to draw \(\theta\) and generates a Markov chain for \(\theta\), whose stationary density equals \(\pi_{N}(\theta\,|\,Y_{1}^{N})\) under mild assumptions. AXDA has been used for Bayesian inference in generalized linear models and nonparametric regression [136; 155], but proper choices of \(p_{k}(z_{k},Y_{(k)})\), \(\rho\), and \(\kappa_{\rho}\) limit the broader application of AXDA. [156] and [127] develop AXDA using ADMM-type variable splitting and Langevin Monte Carlo algorithms. Like DSGLD, repeated communication among the machines diminishes the computational gains from distributed computing.

### Open questions and future directions

This section highlights the limitations of distributed inference methodology, important open problems, and areas for future investigation.

#### High dimensional and dependent data models

A variety of options exist for distributed Bayesian inference in independent data models, but they fail to generalize to high-dimensional models. The literature on distributed methods for inference in high dimensional models is sparsely populated [65]. The development of distributed methods that exploit the low dimensional structure in high dimensional problems is desired. Most distributed methods assume that the likelihood has a product form; see (32). This assumption fails for many time series and spatial models. There are one-shot learning methods for hidden Markov models [162], but they are inapplicable beyond the family of elliptical posterior distributions. No dependent data extensions are available for the DSGLD and AXDA algorithms.

#### Bias and variance reduction

The bias between the true and distributed posterior in one-shot learning fails to decay as \(K\) increases. For parametric models, (36) shows that the distributed distribution has a bias of the order \(o_{P}(M^{-1/2})\), which is suboptimal compared to the \(o_{P}(N^{-1/2})\) order bias of the true posterior. This means that increasing \(K\) has no impact on the accuracy of the distributed posterior. One way to bypass this problem is by centering the distributed posterior at a root-\(N\) consistent estimator; see [162]. Addressing this problem is useful for Bayesian federated learning, where one-shot learning is increasingly used due to its simplicity [67]. Similarly, developing gradient surrogates with smaller variances is crucial for Bayesian federated learning using Langevin Monte Carlo.
#### Asynchronous updates

Synchronous updates are crucial for convergence guarantees of DSGLD and Langevin Monte Carlo algorithms based on AXDA; however, synchronous updates become expensive as the number of subsets increases, resulting in diminishing benefits of distributed computations. Asynchronous updates bypass such problems when the subset sizes are similar, but they imply that the \(\{\theta_{t}\}\) chain is not Markov, which rules out conventional tools for proving convergence guarantees. Asynchronous DSGLD and AXDA extensions have numerous practical benefits. [176] have developed asynchronous DA for variable selection and mixed effects models, but its extension to a broader class of models remains unknown.

_Generalized likelihoods_ Bayesian inference using generalized likelihoods has several advantages, including robustness and targeted inference; however, the current literature relies heavily on exploiting the structure of the hierarchical model. Preliminary results are available about the commonalities between AXDA and approximate Bayesian inference [155]. For broader applications, it is interesting to explore distributed extensions of the _cut_ posterior in misspecified models [128] and distributed inference in Bayesian models based on generalized likelihoods.

_Applications_ Distributed Bayesian inference has found applications in federated learning [67]. These methods are ideal for Bayesian analysis of multi-center longitudinal clinical studies because the data cannot be moved to a central location due to privacy concerns. Limited examples of such applications are available; therefore, it is interesting to explore such privacy-preserving extensions of distributed methods.

_Automated diagnostics and accessibility_ Automated application and model diagnostics for distributed methods have received little attention. One-shot learning methods are easily implemented using the parallel R package [150]; however, a similar general purpose software for deploying the distributed algorithms in practice remains to be developed. Addressing these challenges is crucial in facilitating the wide applicability of distributed methods.

## 5 Variational Bayes

Although variational approximations are mentioned in passing within previous sections, in this section we provide a vignette focused specifically on Variational Bayesian (VB) methods, which approximate the posterior distribution by a member of a simpler class of distributions through minimizing the KL divergence. Below, we review some recent developments on theory and computation for variational Bayes and outline future directions.

### Introduction to variational Bayes

We first describe our setup for a _statistical experiment_, defined as a pair of a sample space and a set of distributions on the sample space. For each sample size \(n\in\mathbb{N}\), suppose that we observe a \(\mathcal{X}_{n}\)-valued sample \(\mathbf{X}^{(n)}\), where \(\mathcal{X}_{n}\) is a measurable _sample space_ equipped with a reference \(\sigma\)-finite measure \(\mu_{n}\). The sample is modeled with a distribution \(P_{\theta}^{(n)}\in\mathcal{P}(\mathcal{X}_{n})\) determined by a parameter \(\theta\) in a measurable parameter space \(\Theta_{n}\). Let \(\Pi(\theta)\) be a prior distribution of \(\theta\) on \(\Theta_{n}\), which often comes with a prior density \(\pi(\theta)\).
If the collection of distributions \(\{P_{\theta}^{(n)}:\theta\in\Theta_{n}\}\) is dominated by the reference measure \(\mu_{n}\), then Bayes's rule gives the posterior distribution \[\Pi(\mathrm{d}\theta\mid\mathbf{X}_{n})\propto\underbrace{\frac{\mathrm{d}P_{\theta}^{(n)}}{\mathrm{d}\mu_{n}}(\mathbf{X}_{n})}_{\text{likelihood}}\;\underbrace{\Pi(\mathrm{d}\theta)}_{\text{prior}}.\] Variational Bayes (VB) aims to provide an approximation to the posterior distribution \(\Pi(\cdot|\mathbf{X}_{n})\). More specifically, VB turns Bayesian computation into a tractable optimization problem. To do this, one first posits a family of approximate distributions \(\mathcal{Q}\) called a _variational family_, which is a set of distributions on \(\Theta\). The goal is then to find a member of the variational family that minimizes the KL divergence to the exact posterior \(\Pi(\cdot|\mathbf{X}_{n})\): \[\widehat{Q}=\operatorname*{arg\,min}_{Q\in\mathcal{Q}}\mathrm{D}_{\mathrm{KL}}\left(Q\|\Pi(\cdot|\mathbf{X}_{n})\right). \tag{40}\] See Figure 1 for a simple graphical illustration. The posterior is approximated with the optimal member \(\widehat{Q}\) of the family, which is called a _variational posterior_. Statistical inference is then based on the variational posterior \(\widehat{Q}\).

In solving the optimization problem (40), one first writes \(\Pi(\mathrm{d}\theta|\mathbf{X}_{n})=p_{\theta}(\mathbf{X}_{n})\Pi(\mathrm{d}\theta)/p(\mathbf{X}_{n})\), where \(p(\mathbf{X}_{n})\coloneqq\int p_{\theta}(\mathbf{X}_{n})\Pi(\mathrm{d}\theta)\) is the marginal likelihood of \(\mathbf{X}_{n}\). The KL-divergence above can be written as \[\mathrm{D}_{\mathrm{KL}}\left(Q\|\Pi(\cdot|\mathbf{X}_{n})\right)=\int\log\left(\frac{p(\mathbf{X}_{n})Q(\mathrm{d}\theta)}{p_{\theta}(\mathbf{X}_{n})\Pi(\mathrm{d}\theta)}\right)Q(\mathrm{d}\theta)=\underbrace{-\int\log p_{\theta}(\mathbf{X}_{n})Q(\mathrm{d}\theta)+\mathrm{D}_{\mathrm{KL}}\left(Q\|\Pi\right)}_{=:\Psi(Q,\Pi,\mathbf{X}_{n})\,=\,-\mathrm{ELBO}}+\log p(\mathbf{X}_{n}). \tag{41}\] In the above, we let \[\Psi(Q,\Pi,\mathbf{X}_{n})=-\int\log p_{\theta}(\mathbf{X}_{n})Q(\mathrm{d}\theta)+\mathrm{D}_{\mathrm{KL}}\left(Q\|\Pi\right),\] which we call the _variational objective function_. This is also the negative of the _evidence lower bound (ELBO)_, where the ELBO is \(\int\log p_{\theta}(\mathbf{X}_{n})Q(\mathrm{d}\theta)-\mathrm{D}_{\mathrm{KL}}\left(Q\|\Pi\right)\), which provides a lower bound on the 'evidence', or the marginal likelihood \(\log p(\mathbf{X}_{n})\), as seen from (41).

Figure 1: An illustration of variational Bayes.

Since \(p(\mathbf{X}_{n})\) is a constant with respect to \(Q\), one has \[\widehat{Q}=\operatorname*{arg\,min}_{Q\in\mathcal{Q}}\Psi(Q,\Pi,\mathbf{X}_{n})=\operatorname*{arg\,min}_{Q\in\mathcal{Q}}\mathrm{D}_{\mathrm{KL}}\left(Q\|\Pi(\cdot|\mathbf{X}_{n})\right). \tag{42}\] Hence, minimizing the KL divergence between the variational family and the exact posterior distribution is equivalent to minimizing the variational objective \(\Psi(Q,\Pi,\mathbf{X}_{n})\) or maximizing the ELBO. When the variational family has certain simple structure, in particular the so-called _mean field class_, there are efficient computational algorithms for finding \(\hat{Q}\), based on the well-known _CAVI (coordinate ascent variational inference)_ algorithm [66, 169], which guarantees convergence to a local minimizer [11]. Let \(\theta=(\theta_{1},\ldots,\theta_{d})\in\Theta\) be a \(d\)-dimensional parameter, with \(d\) potentially large.
The mean-field class imposes posterior independence as: \[Q(\theta_{1},\ldots,\theta_{d})=\prod_{j=1}^{d}Q_{j}(\theta_{j}), \tag{43}\] where \(Q_{j}\) is a distribution for \(\theta_{j}\). By taking the derivative of the ELBO with respect to each of the \(Q_{j}(\theta_{j})\), one can arrive at the following coordinate ascent update: \[\widehat{Q}_{j}(\theta_{j})\propto\exp\left(E_{Q_{-j}}\left[\log p(\theta_{j}\mid\theta_{-j},\mathbf{X}_{n})\right]\right)\propto\exp\left(E_{Q_{-j}}\left[\log p(\theta_{j},\theta_{-j},\mathbf{X}_{n})\right]\right), \tag{44}\] where \(\theta_{-j}=(\theta_{1},\ldots,\theta_{j-1},\theta_{j+1},\ldots,\theta_{d})\), and the expectation \(E_{Q_{-j}}\) is taken with respect to all variational distributions but that of the \(j\)th component. CAVI iteratively updates each coordinate by first initializing \(Q_{j}(\theta_{j})\) and then updating the variational distribution of each coordinate conditioned on the others according to (44); a worked sketch of this recursion for a toy conjugate model is given at the end of the next subsection. When a statistical model has latent structures, such as finite mixture models, topic models and stochastic block models, the dimension of the latent variables is typically of the same order as the sample size. The CAVI algorithm is not very efficient for large data sets as it requires sweeping through the whole data set before updating the variational parameters at each iteration. _Stochastic variational inference (SVI)_ [57] is a popular alternative in this setting. SVI employs stochastic gradient descent by computing the gradient of the ELBO based on mini-batches.

_Beyond the mean-field class_ CAVI and SVI critically depend on the mean-field assumption, with this assumption ruling out posterior dependence across parameters and leading to under-estimation of posterior uncertainty. This motivates more complex variational families, which tend to require tailored algorithms. _Black-box VI (BBVI)_ algorithms [132], including gradient-based black-box VI, have emerged as a popular class of such algorithms. [62] propose to utilize stochastic natural gradients within black-box VI to improve efficiency and address the common problem of large variance of gradient estimates.

_Amortized VB._ In traditional variational inference, parameters need to be optimized for each latent variable, which can be computationally intensive. Amortized VI decreases this cost by building a map from data points to the VB family. This map is typically modeled by a deep neural network trained on a data subset. The local VB parameter for the latent variable is computed using the output of the DNN map; this is "amortized" since past computation is used to simplify future computation. Let \(f_{\eta}:\mathcal{X}\to\Phi\) be a feedforward neural network with parameters \(\eta\) from the observation space \(\mathcal{X}\) to the parameter space \(\Phi\) of the variational family. For observation \(x_{i}\), the corresponding latent variable \(\theta_{i}\) has conditional distribution \(Q_{f_{\eta}(x_{i})}(\theta_{i})\). Amortized variational Bayes finds \(\eta\) through: \[\eta^{*}=\operatorname*{arg\,min}_{\eta}\mathrm{D}_{\mathrm{KL}}\left(\prod_{i=1}^{n}Q_{f_{\eta}(x_{i})}(\theta_{i})\|\Pi(\theta_{i}\mid\mathbf{X}_{n})\right). \tag{45}\] Although amortized VI is a general framework, the most popular application is the _variational auto-encoder (VAE)_.
The target generative model for data \(\mathbf{X}\) is \(\mathbf{X}=\mathbf{G}(\mathbf{Z})+\mathbf{\epsilon}\), with \(\mathbf{Z}\) latent data having a known distribution, \(\mathbf{\epsilon}\) an additive noise independent of \(\mathbf{Z}\), and \(\mathbf{G}\) parametrized by a deep neural network. VAEs are a popular alternative to GANs for training deep generative models. In a VAE, there is an encoder network where the distribution \(\Pi(\mathbf{Z}\mid\mathbf{X}_{n},\theta)\) is amortized by a neural network mapping the data points to the variational family.

### Theory of variational Bayes

In order to verify the frequentist optimality properties of Bayesian posteriors, it is common to study contraction rates, model selection consistency, and asymptotic normality (known as Bernstein-von Mises (BvM) theorems). Under the variational Bayes framework, statistical inference is based on the variational posterior instead of the original posterior, so it is natural to study frequentist optimality of VB posteriors. In the asymptotic regime, we assume data \(\mathbf{X}^{(n)}\) are generated from \(\mathsf{P}^{(n)}_{\theta^{*}}\) and \(n\to\infty\). The variational posterior \[\widehat{Q}_{n}\in\operatorname*{arg\,min}_{Q\in\mathcal{Q}}\Psi(Q,\Pi,\mathbf{X}^{(n)}),\] is said to have the contraction rate \(\epsilon_{n}\) if \[\mathsf{E}^{(n)}_{\theta^{*}}\left[\widehat{Q}_{n}\left(d(\theta,\theta^{*})\leq A_{n}\epsilon_{n}\right)\right]\to 1 \tag{46}\] as \(n\to\infty\) for any diverging sequence \(A_{n}\to\infty\). If the contraction rate \(\epsilon_{n}\) matches the _minimax optimal rate_, we say that the variational posterior distribution is optimal. Recent work [174, 5, 173] provided theoretical conditions under which the variational posterior is optimal. These conditions imply that when the model is appropriately complex and the prior is sufficiently diffuse, which are standard conditions for establishing posterior contraction rates for the original posterior [43], then together with an assumption on the variational gap, the variational posterior distribution also has optimal contraction rates. The variational gap condition assumes there is \(Q\in\mathcal{Q}\) such that \[\int\mathrm{D}_{\mathrm{KL}}(\mathrm{P}_{\theta}^{(n)}\|\mathrm{P}_{\theta^{*}}^{(n)})Q(\mathrm{d}\theta)+\mathrm{D}_{\mathrm{KL}}(Q\|\Pi)\lesssim n\epsilon_{n}^{2}. \tag{47}\] The left side of (47) is an upper bound on the variational gap \(\mathrm{D}_{\mathrm{KL}}(\widehat{Q}_{n}\|\Pi(\cdot\mid\mathbf{X}_{n}))\). This upper bound is verified by ensuring that each term on the left is of order \(O(n\epsilon_{n}^{2})\). [5] formulate this variational gap condition as an extension of prior mass conditions. If one restricts the VB family to be in the same class as the prior and the parameters to lie in a neighborhood of the true parameter, this condition reduces to the standard prior mass condition. In addition, [125] and [173] developed variational Bayes theoretic frameworks that can deal with latent variable models. [5] investigated the contraction properties of variational fractional posteriors with the likelihood raised to a fractional power. There are several studies that derived contraction rates of variational posteriors for specific statistical models--for example, mixture models [26], sparse (Gaussian) linear regression [135, 172], sparse logistic linear regression [134], and sparse factor models [113].
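Before moving on to adaptive procedures, it may help to ground the CAVI recursion (44) introduced earlier in a worked example. The sketch below runs mean-field CAVI for the classical conjugate model \(x_{i}\sim\mathcal{N}(\mu,\tau^{-1})\) with priors \(\mu\mid\tau\sim\mathcal{N}(\mu_{0},(\kappa_{0}\tau)^{-1})\) and \(\tau\sim\mathrm{Gamma}(a_{0},b_{0})\); the priors, synthetic data, and fixed iteration count are illustrative choices.

```python
import numpy as np

# Mean-field CAVI, q(mu, tau) = q(mu) q(tau), for the normal model with
# unknown mean and precision; each update is an instance of Eq. (44).
rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=0.5, size=200)       # true mean 2.0, true precision 4.0
N, xbar = x.size, x.mean()
mu0, kappa0, a0, b0 = 0.0, 1.0, 1.0, 1.0           # prior hyperparameters (arbitrary)

E_tau = a0 / b0                                    # initialize E_q[tau]
for _ in range(50):                                # fixed number of sweeps for simplicity
    # q(mu) = N(m, 1/lam); m does not depend on E_q[tau], the precision does.
    m = (kappa0 * mu0 + N * xbar) / (kappa0 + N)
    lam = (kappa0 + N) * E_tau
    # q(tau) = Gamma(a, b), using E_q[(mu - c)^2] = (m - c)^2 + 1/lam.
    a = a0 + 0.5 * (N + 1)
    b = b0 + 0.5 * (kappa0 * ((m - mu0) ** 2 + 1 / lam)
                    + np.sum((x - m) ** 2) + N / lam)
    E_tau = a / b

print("E_q[mu] =", m, " E_q[tau] =", E_tau)
```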
### Adaptive Variational Bayes

A notable recent development is a novel and general variational framework for adaptive statistical inference on a collection of model spaces [116]. The framework yields an _adaptive variational posterior_ that has optimal theoretical properties in terms of posterior contraction and model selection while enjoying tractable computation. In general, when performing statistical inference the "regularity" of the true parameter is unknown, and adaptive inference aims to construct estimation procedures that are optimal with respect to the unknown true regularity. To do this, one typically prepares _multiple models_ with different complexities, e.g. sparse linear regression models with different sparsity, neural networks with different numbers of neurons, or mixture models with different numbers of components, and then selects among them. To achieve adaptivity, frequentists usually conduct (fully data-dependent) model selection before parameter estimation: e.g., via cross-validation or penalization. There is some work on _Bayesian adaptation_ by imposing hierarchical priors on a collection of model spaces [44]. Let \(\mathcal{M}\) denote a set of model indices and \(\{\Theta_{m}\}_{m\in\mathcal{M}}\) multiple disjoint (sub-)models with different complexities. Let \(\Theta_{\mathcal{M}}:=\cup_{m\in\mathcal{M}}\Theta_{m}\) be an encompassing model. A (hierarchical) prior (illustrated by Figure 2) is given as \[\Pi=\sum_{m\in\mathcal{M}}\alpha_{m}\Pi_{m},\] where \(\alpha_{m}\) is the prior probability of model \(\Theta_{m}\), \(\sum_{m\in\mathcal{M}}\alpha_{m}=1\), and \(\Pi_{m}\) is the prior distribution of \(\theta\) within model \(\Theta_{m}\). The posterior distribution of \(\Theta_{\mathcal{M}}\) is \[\Pi(\cdot|\mathbf{X}_{n})=\sum_{m\in\mathcal{M}}\widehat{\alpha}_{m}\Pi(\cdot|\theta\in\Theta_{m},\mathbf{X}_{n}) \tag{48}\] where \(\widehat{\alpha}_{m}=\Pi(\theta\in\Theta_{m}|\mathbf{X}_{n})\), which can be understood as a weighted average of the posteriors \(\Pi(\cdot|\theta\in\Theta_{m},\mathbf{X}_{n})\) on the individual models \((\Theta_{m})_{m\in\mathcal{M}}\). If the prior model probabilities \((\alpha_{m})_{m\in\mathcal{M}}\) are appropriately chosen, the posterior distribution on the encompassing model can be adaptively optimal [76, 44, 55]. However, computing the posterior of \(\Theta_{\mathcal{M}}\) is challenging due to the varying "dimensions" of the models and the need to evaluate marginal likelihoods. [116] address these challenges via _variational Bayes adaptation_. They approximate the posterior (48) using a variational Bayes family over the encompassing model parameter space, built from disjoint variational families \(\{\mathcal{Q}_{m}\}_{m\in\mathcal{M}}\) over individual models with \(\mathcal{Q}_{m}\subset\mathcal{P}(\Theta_{m})\): \[\mathcal{Q}_{\mathcal{M}}:=\left\{\sum_{m\in\mathcal{M}}\gamma_{m}Q_{m}\mid Q_{m}\in\mathcal{Q}_{m}\right\}.\] They show that the variational posterior \[\widehat{Q}_{n}\in\operatorname*{arg\,min}_{Q\in\mathcal{Q}_{\mathcal{M}}}\Psi(Q,\Pi,\mathbf{X}^{(n)})\] is of the form \[\widehat{Q}_{n}=\sum_{m\in\mathcal{M}}\widehat{\gamma}_{n,m}\widehat{Q}_{n,m}\in\mathcal{Q}_{\mathcal{M}} \tag{49}\] for some 'mixing weights' \((\widehat{\gamma}_{n,m})_{m\in\mathcal{M}}\) and 'mixture components' \(\widehat{Q}_{n,m}\in\mathcal{Q}_{m}\) for \(m\in\mathcal{M}\). The adaptive variational Bayes framework is summarized in Algorithm 1.
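Algorithm 1 is not reproduced here. The sketch below shows one way the mixture in Eq. (49) can be assembled from per-model variational fits; the specific choice of mixing weights, \(\gamma_{m}\propto\alpha_{m}\exp(\mathrm{ELBO}_{m})\), is our assumption about a natural instantiation of the framework and should be checked against [116] rather than read as its definition.

```python
import numpy as np

def combine_adaptive_vb(prior_log_alpha, elbos, samplers, n_draws, rng):
    """Assemble the mixture form of Eq. (49) from per-model VB fits.

    prior_log_alpha : log prior model probabilities log(alpha_m).
    elbos           : ELBO attained by each model's fitted variational posterior.
    samplers        : samplers[m](k) returns k draws from the fitted Q_hat_{n,m}.
    NOTE: gamma_m proportional to alpha_m * exp(ELBO_m) is an assumption about
    Algorithm 1 of [116], not something stated in the text above.
    """
    log_gamma = np.asarray(prior_log_alpha, dtype=float) + np.asarray(elbos, dtype=float)
    log_gamma -= log_gamma.max()                       # stabilize before exponentiating
    gamma = np.exp(log_gamma); gamma /= gamma.sum()
    counts = rng.multinomial(n_draws, gamma)           # draws allocated per model
    draws = [samplers[m](counts[m]) for m in range(len(gamma)) if counts[m] > 0]
    return gamma, np.concatenate(draws)

# Illustrative use with two fake "models" whose fitted posteriors are Gaussians:
rng = np.random.default_rng(4)
gamma, theta_draws = combine_adaptive_vb(
    prior_log_alpha=[np.log(0.5), np.log(0.5)],
    elbos=[-1523.4, -1518.9],                          # made-up fitted ELBO values
    samplers=[lambda k: rng.normal(0.0, 1.0, size=k),
              lambda k: rng.normal(0.2, 0.8, size=k)],
    n_draws=1000, rng=rng)
print("estimated mixing weights:", gamma)
```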
Computation of the adaptive variational posterior reduces to computing variational approximations for each individual model. The framework is general and can be applied for adaptive inference in many statistical models where multiple submodels of different complexities are available. The adaptive variational posterior has optimal contraction rates and strong model selection consistency when the true model is in \(\mathcal{M}\). This theory has been applied to show optimal contraction for a rich variety of models, including finite mixtures, sparse factor models, deep neural networks and stochastic block models.

Figure 2: The hierarchical prior distribution.

### Open questions and future directions

_Uncertainty quantification of the VB posterior_ It is well-known that variational posteriors tend to underestimate uncertainty of the posterior, so a central open question is how one can construct computationally efficient VB posteriors producing (a) credible balls with valid frequentist coverage and/or (b) posterior covariance matching that of the true posterior. There is limited work on theory for statistical inference using the variational posterior, including credible intervals and hypothesis testing. For this, we need theorems to reveal a limiting distribution of the variational posterior as the sample size goes to infinity, just as the Bernstein-von Mises (BvM) theorem guarantees that the original posterior distribution converges to a Gaussian distribution under certain regularity conditions. An initial promising result along these lines is [160], but there is substantial need for new research for broad classes of models and corresponding variational families.

_Theoretical guarantees of gradient-based algorithms_ Existing theoretical guarantees for VB only apply to the global solution of the variational optimization problem. In practice, this optimization problem tends to be highly non-convex and algorithms are only guaranteed to converge to local optima. For certain variational families and model classes, these local optima can be dramatically different, so that there is a large sensitivity to the starting point of the algorithm. It is of critical importance to obtain guarantees on the algorithms being used and not just on inaccessible global optima. For example, can one obtain general theoretical guarantees for gradient-based black-box variational inference with or without warm-start conditions? There is a parallel and growing literature on nonconvex optimization in other contexts, including providing reassurance that local optima can be sufficiently close in some cases [95; 39; 83; 78; 112]. However, to the best of our knowledge, there is no such work on theoretical aspects of local optima produced by variational Bayes.

_VB based on generative models_ Richer variational families can be constructed using deep generative models such as normalizing flows [137; 81]. Due to their impressive flexibility, the resulting variational posterior can approximate a very wide class of target posteriors accurately. Despite its practical usefulness and strong empirical performance, there is no theoretical support for such approaches--for example, providing upper bounds on the variational approximation gap or concentration properties. Choosing the neural network architecture and algorithmic tuning parameters involved in training to maximize computational efficiency and accuracy of posterior approximation is an additional important related area that may benefit from better theoretical understanding.
_Online variational inference_ Given a prior distribution on an unknown parameter, the posterior distribution can be understood as an updated belief after observing the data. The updated posterior distribution can be used as a new prior distribution when new data arrive. The process can be repeated many times for analyzing streaming data [97; 45; 68; 61]. At each step, the VB posterior can be used as a new prior instead of the original one for computational convenience [82; 84; 110]. It would be intriguing to investigate the statistical properties of the sequentially updated VB posterior.

## 6 Discussion

Tools for Bayesian computation are evolving at a rapid pace, thanks largely to recent developments in machine learning. We highlighted this phenomenon with four vignettes. The first vignette discussed sampling with the aid of generative models, particularly normalizing flows. The next two vignettes discussed different methods for handling the large \(N\) regime. Coresets take a variational approach to data compression, with recent methods leveraging deep neural networks to build flexible surrogate families; federated Bayesian learning methods instead distribute posterior computation over many computers. Finally, we covered variational inference, which replaces the posterior with a tractable approximation. Many more vignettes could be written on similar topics, such as accelerating sampling with diffusion-based generative models or accelerating approximate Bayesian computation using deep neural networks for data compression. We close with three themes, applicable to all vignettes, that we believe should receive future attention: accelerating inference using previous calculations, improving accessibility with new software, and providing theoretical support for empirically promising algorithms.

The status quo in Bayesian computation is to start from scratch in each posterior inference problem, such as recomputing coresets after changing the prior, or estimating a new variational approximation when applying an old model to new data. This is inefficient, as posterior inference in similar models must be somewhat informative about posterior inference in the current model. If the two models under consideration are directly comparable, such as posteriors under slightly different priors, then it may be easy to leverage previous calculations, e.g., by using warm starts in optimization routines. Problems arise when the two models have different dimensions, such as hierarchical models with an extra layer of parameters. We are hopeful that methods for similar problems in machine learning - particularly transfer learning - will play a role in developing general solutions for Bayesians.

Another common theme was the need for improved automation and accessibility. Implementing methods involving neural networks or other machine learning techniques in a robust and reliable fashion is a nontrivial task, often requiring significant time and expert knowledge. Given the breakneck speed at which machine learning progresses, careful implementations can be outdated before they have a chance for widespread adoption. The focus should be on developing software which is modular enough to withstand the next machine learning revolution, as well as user-friendly enough to be applied en masse. Finally, statisticians should be cautious with wholesale adoption of methods that achieve excellent practical performance at the expense of theoretical guarantees.
Fast "approximations" to posterior distributions that can be arbitrarily far from the exact posterior may be useful for black box prediction but fall far short of what is needed for reliable and reproducible Bayesian inferences. This is particularly key in scientific and policy applications in which one needs to appropriately characterize uncertainty in learning from data, acknowledging complexities that arise in practice such as model uncertainty, data contamination etc. Guarantees are necessary to avoid highly misleading inferences and potentially catastrophic conclusions from the types of large and complex datasets that are being generated routinely in the sciences.
2306.11428
Intrinsic alignment from multiple shear estimates: A first application to data and forecasts for Stage IV
Without mitigation, the intrinsic alignment (IA) of galaxies poses a significant threat to achieving unbiased cosmological parameter constraints from precision weak lensing surveys. Here, we apply for the first time to data a method to extract the scale dependence of the IA contribution to galaxy-galaxy lensing, which takes advantage of the difference in alignment signal as measured by shear estimators with different sensitivities to galactic radii. Using data from Year 1 of the Dark Energy Survey, with shear estimators METACALIBRATION and IM3SHAPE, we investigate and address method systematics including non-trivial selection functions, differences in weighting between estimators, and multiplicative bias. We obtain a null detection of IA, which appears qualitatively consistent with existing work. We then forecast the application of this method to Rubin Observatory Legacy Survey of Space and Time (LSST) data and place requirements on a pair of shear estimators for detecting IA and constraining its 1-halo scale dependence. We find that for LSST Year 1, shear estimators should have at least a $40\%$ difference in IA amplitude, and the Pearson correlation coefficient of their shape noise should be at least $\rho=0.50$, to ensure a $1\sigma$ detection of IA and a constraint on its 1-halo scale dependence with a signal-to-noise ratio greater than $1$. For Year 10, a $1\sigma$ detection and constraint become possible for $20\%$ differences in alignment amplitude and $\rho=0.50$.
Charlie MacMahon-Gellér, C. Danielle Leonard
2023-06-20T10:15:50Z
http://arxiv.org/abs/2306.11428v3
Intrinsic alignment from multiple shear estimates: A first application to data and forecasts for Stage IV ###### Abstract Without mitigation, the intrinsic alignment (IA) of galaxies poses a significant threat to achieving unbiased cosmological parameter constraints from precision weak lensing surveys. Here, we apply for the first time to data a method to extract the scale dependence of the IA contribution to galaxy-galaxy lensing, which takes advantage of the difference in alignment signal as measured by shear estimators with different sensitivities to galactic radii. Using data from Year 1 of the Dark Energy Survey, with shear estimators METCALIBRATION and IM3SHAPE, we find that systematic uncertainties dominate our signal and claiming a detection of IA is not possible. In particular, uncertainty on multiplicative bias calibration poses a significant challenge. Building upon this, we forecast the application of this method to Rubin Observatory Legacy Survey of Space and Time (LSST) data. We develop a scheme to account for residual multiplicative bias within the measurement covariance, and forecast the requirements on a pair of shear estimators for detecting IA and constraining its 1-halo scale dependence. We find that for LSST Year 1, shear estimators should have at least a 40% difference in IA amplitude, and the Pearson correlation coefficient of their shape noise should be at least \(\rho=0.50\), to ensure a 1\(\sigma\) detection of IA and a constraint on its 1-halo scale dependence with a signal-to-noise ratio greater than 1. For Year 10, a 1\(\sigma\) detection and constraint become possible for 20% differences in alignment amplitude and \(\rho=0.50\). keywords: gravitational lensing: weak - galaxies: interactions - galaxies: haloes - galaxies: statistics - large scale structure of Universe - cosmology: theory. ## 1 Introduction Einstein's theory of General Relativity predicts that massive objects alter the geometry of surrounding space-time and consequently affect the path of light-rays passing close to them, in an effect known as gravitational lensing. On large scales, the subtle lensing of light-rays from background source galaxies by foreground lens masses is only detectable by correlating the shapes of many galaxies, as their intrinsic ellipticity is much larger than any observed change (shear) induced by large-scale lensing. This effect, referred to as _weak gravitational lensing_, has proven to be an important scientific tool for probing the matter distribution of the universe, which in turn has allowed us to develop our understanding of dark matter and dark energy (see, for example, Hu (2002); Baldauf et al. (2010); Weinberg et al. (2013); Abbott et al. (2018)). Weak lensing measurements typically consider auto-correlations between the shapes of source galaxies (_cosmic shear_; CS), or cross-correlate the shapes of source galaxies with the positions of foreground lens galaxies (_galaxy-galaxy lensing_; GGL) (for a review on the theory of weak lensing, see Bartelmann & Schneider (2001)). However, both cosmic shear and galaxy-galaxy lensing are susceptible to contamination by correlations resulting from local effects, known as _intrinsic alignments_ (IA). The origin of IA at different scales and for different galaxy types is an active area of research, but is generally understood to include tidal effects, as well as possible contributions from galaxy evolution history and environment (Croft & Metzler, 2000; Heavens et al., 2000; Troxel & Ishak, 2015). 
In GGL, which will be the focus of this work, correlations due to IA exist between lens and source galaxies before the weak lensing effect imprints on our observations. Consequently, if IA is not properly accounted for, it can result in biased lensing estimates and thus biased constraints on cosmological models. Current Stage III surveys - such as the Dark Energy Survey (DES; The Dark Energy Survey Collaboration (2005)), Kilo-Degree Survey (KiDS; de Jong et al. (2012)), and Hyper Suprime-Cam survey (HSC; Aihara et al. (2017)) - and upcoming Stage IV surveys - such as the Rubin Observatory's Legacy Survey of Space and Time (LSST; Ivezic et al. (2019)), and Euclid (Scaramella et al., 2022) - are vastly decreasing the statistical uncertainty on weak lensing measurements. With such a wealth of modern surveys providing greater statistical power, IA is becoming a significant source of uncertainty (see, e.g. Samuroff et al. (2019); Secco et al. (2022)) and is forecast to become even more so in the near future (Krause et al., 2016). Direct detection of intrinsic alignment, using smaller samples of more luminous'source' galaxies for which spectroscopic redshifts are available, has provided key insight into the physics of intrinsic alignment, by directly selecting only those'source' and 'lens' galaxies which are truly physically associated (Mandelbaum et al., 2006; Hirata et al., 2007; Okumura et al., 2009; Singh et al., 2015). However, in order to achieve the statistical power needed to make an accurate weak lensing measurement, millions of galaxy images are required. Thus, many surveys measure galaxy redshifts using _photometry_ with much broader spectral bands than spectroscopy, leading to larger associated uncertainties. Such surveys also do not always have a representative spectroscopic sub-sample available. Other methods for measuring or mitigating the IA contamination to GGL have exploited the redshift dependence of the effect using methods such as binning sources in photometric redshift (photo-z), to separate those which are closer or further in redshift from the lenses (Heymans et al., 2004; Hirata et al., 2004; Joachimi et al., 2011; Blazek et al., 2012). However, such methods could be impacted by potentially large photo-z uncertainties, which in the worst cases may even be incorrectly estimated (see e.g. Bernstein and Huterer (2010)). Advances are being made in the measurement of photo-z (see e.g. Bilicki et al. (2018)), and it may soon be possible to obtain a large enough spectroscopic sample to carry out high-precision weak lensing studies (see e.g. DESI Collaboration et al. (2016)). Nonetheless, characterising the associated uncertainties remains an important consideration for upcoming photometric lensing surveys, such as LSST and Euclid. Novel methods to measure the IA contamination are also being proposed to address the issues associated with photo-z, such as the self-calibration methods (targeting cosmic-shear; Zhang (2010); Troel and Ishak (2012); Yao et al. (2017, 2019)). However, these methods are still somewhat dependent on the photo-z uncertainty, although do include parameters to account for this. In Leonard and Mandelbaum (2018) (hereafter: L2018), a novel method for measuring and / or constraining the scale dependence of intrinsic alignment was proposed. This method attempts to use the dependence of the IA signal's amplitude on the radial scales within a galaxy, rather than its dependence on redshift. 
We expect the outer radial regions of galaxies to be more aligned with local structure than the inner radial regions, which results in twisting of the isophotes in a galaxy's light profile. Using observational data, Singh and Mandelbaum (2016) showed that shear estimators with sensitivity to different radial scales within galaxies, had different levels of IA contamination, due to isophotal twisting theorised to result from IA. Further observational evidence was shown in Georgiou et al. (2019), where altering the radial weighting of a shape estimator resulted in different measured alignment amplitudes. Tenneti et al. (2014) also showed this effect in simulated data. The method of L2018 (henceforth referred to as the multi-estimator method; MEM) therefore looks to compare weak lensing measurements from two different shear estimators, with sensitivity to different radial regions of galaxies. If the lensing contribution to shear can be shown to be the same in both estimators (as should ideally be the case), taking the difference of the two estimates would 'cancel out' the lensing signal (since the lensing effect does not depend on the radial region of the galaxy from which the light originated). This leaves a portion of the IA signal, determined by the difference in IA amplitude between the radial regions of the galaxy which each estimator probes. The cancellation of the lensing signal requires some assumptions, which will be discussed in greater detail later as a key subject of this work. The advantages of such a method are twofold. Firstly, since the lensing signal is cancelled, it does not need to be measured and removed. This may make such a method particularly robust in the case of catastrophic photo-z error estimations, which could become more likely as upcoming surveys image further and fainter sources than ever before. Secondly, correlations in shape noise and cosmic variance between the two estimators could reduce the uncertainty in the measured IA signal. This would allows us to test IA at small scales within the 1-halo regime, which current models struggle to describe due to non-linear effects. This paper is structured as follows: in Section 2, we review the formalism of the MEM from L2018 and introduce a series of assumptions made in its most basic construction. We then go on to derive a new fundamental expression in the absence of one of these assumptions and discuss other potential complications. In Section 3, we present a case study into the application of the MEM using the Dark Energy Survey Year 1 galaxy shape catalogues, and hence show how a failure to satisfy the assumptions of the basic formalism introduced in Section 2, can result in a spurious detection of IA. In Section 4, we consider the method in the context of upcoming stage IV lensing surveys and carry out forecasts to place requirements on shear estimators for use with the MEM, given the new expression obtained by removing one of the former assumptions. Finally, in Section 5, we conclude by placing these results in the context of classes of shear estimators planned for deployment in Stage IV surveys and discuss what our results tell us about the direction development of future bespoke estimators must follow, in order for the MEM to succeed. ## 2 Theory ### Basic method formalism In this section, we briefly review the mathematical formalism behind measuring IA with multiple shear estimators, as introduced in L2018. 
The key GGL observable considered here is the tangential shear, which measures the level of alignment and ellipticity distortion tangentially around a lens. We consider two different estimators for the tangential shear, as given by two different shear estimation methods, \(\gamma_{\rm t}\) and \(\gamma^{\prime}_{\rm t}\),

\[\tilde{\gamma}_{\rm t}(\theta)=B(\theta)(1+m)\left(\frac{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}\,\tilde{\gamma}^{j}}{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}}\right)=B(\theta)(1+m)\left(\frac{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}\,\gamma_{\rm L}^{j}}{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}}+\frac{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}\,\gamma_{\rm IA}^{j}}{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}}\right), \tag{1}\]

\[\tilde{\gamma}^{\prime}_{\rm t}(\theta)=B(\theta)(1+m^{\prime})\left(\frac{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}\,\tilde{\gamma}^{\prime j}}{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}}\right)=B(\theta)(1+m^{\prime})\left(\frac{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}\,\gamma_{\rm L}^{j}}{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}}+\frac{a\sum\limits_{j}^{\rm lens}\tilde{w}_{j}\,\gamma_{\rm IA}^{j}}{\sum\limits_{j}^{\rm lens}\tilde{w}_{j}}\right). \tag{2}\]

Here, the label 'lens' indicates a sum over lens-source pairs. \(\gamma\) denotes shear, subscripts L and IA denote lensing and IA contributions respectively, and tilde denotes an observed quantity. \(\theta\) is the lens-source angular separation on the sky and \(\tilde{w}_{j}\) are weights given to each lens-source pair. \(a\) is a constant representing the offset in the IA amplitudes of the two estimators, due to isophotal twisting. \(m\) and \(m^{\prime}\) represent sample-level multiplicative bias, residual in the estimators post-calibration (note these are not the full multiplicative bias values, only the portion of the bias that remains due to uncertainty on the calibration values). The boost factor, \(B(\theta)\), accounts for 'excess' galaxies (meaning those not expected to be present in a random sample), which are physically associated with the lens due to clustering. It is given by:

\[B(\theta)=\frac{N_{\text{rand}}}{N_{\text{lens}}}\times\frac{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}}{\sum\limits_{j}^{\text{rand}}\tilde{w}_{j}}, \tag{3}\]

where 'rand' indicates a sum over random-source pairs, i.e. sources paired with galaxies from a random catalogue generated to match the lenses. \(N_{\text{rand}}\) is the number of randoms and \(N_{\text{lens}}\) is the number of lens galaxies; they are included here to normalise the sum. We now apply the assumptions that the weights for the two methods are identical, and that the multiplicative biases residual after calibration, \(m\) and \(m^{\prime}\), are demonstrably subdominant. We will later revisit these assumptions in detail. Given these assumptions, taking the difference of our two estimators by subtracting equation 2 from equation 1 gives:

\[\tilde{\gamma}_{\rm t}(\theta)-\tilde{\gamma}_{\rm t}^{\prime}(\theta)=(1-a)\frac{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}\gamma_{\text{IA}}^{j}}{\sum\limits_{j}^{\text{rand}}\tilde{w}_{j}}. \tag{4}\]

Now consider a sample of lens-source pairs in which the lens and source have small enough line-of-sight separation that we would expect them to be intrinsically aligned. The scale within which it is conventionally assumed IA could be present is 100 Mpc/h (see, e.g., L2018) and we adopt this assumption here. The quantity of interest to us is the tangential shear due to IA per contributing lens-source pair.
To account only for contributing pairs, we divide equation 4 by the sum of the weights of contributing pairs. To express this, we make the following definition:

\[\frac{\sum\limits_{j}^{\text{rand}}\tilde{w}_{j}}{\sum\limits_{j}^{\text{excess}}\tilde{w}_{j}+\sum\limits_{j}^{\text{rand,close}}\tilde{w}_{j}}=\frac{1}{B(\theta)-1+F}, \tag{5}\]

with \(F\) defined by:

\[F\equiv\frac{\sum\limits_{j}^{\text{rand,close}}\tilde{w}_{j}}{\sum\limits_{j}^{\text{rand}}\tilde{w}_{j}}, \tag{6}\]

where 'excess' denotes a sum over intrinsically aligned pairs present due to clustering and 'rand,close' denotes a sum over sources within \(\Pi=100\) Mpc/h line-of-sight separation of random points drawn from the lens redshift distribution (L2018). The choice of 100 Mpc/h does warrant further investigation, which we touch on again in Section 4 below. Finally, multiplying equation 4 by equation 5 gives us an expression to extract a portion of the IA signal, per intrinsically aligned lens-source pair, \((1-a)\tilde{\gamma}_{\text{IA}}\):

\[(1-a)\tilde{\gamma}_{\text{IA}}(\theta)=\frac{\tilde{\gamma}_{\rm t}(\theta)-\tilde{\gamma}_{\rm t}^{\prime}(\theta)}{B(\theta)-1+F}. \tag{7}\]

Equation 7 is the fundamental equation of this method. From it we can measure a portion of the IA contribution up to an amplitude determined by \(a\). For example, a value of \(a=0.8\) indicates a 20% difference in the IA contamination of our two estimators. Even in the case where we only recover a small fraction of the IA contamination, provided the signal is above zero, this gives us the ability to extract information about the scale dependence of intrinsic alignment, potentially inside the non-linear 1-halo regime. Equation 7 represents the ideal case of this method. However, as mentioned above, to obtain it we have made several strong assumptions about the relative characteristics of the shear estimation methods. We now go beyond the method as introduced in L2018, to explore the consequences of relaxing these assumptions.

### The effect of residual multiplicative bias

The work of L2018 assumed residual multiplicative bias due to uncertainty in the multiplicative bias calibration to be subdominant, and thus ignored it. This is because L2018 was considering future shear estimation methods with demonstrably subdominant calibration uncertainty, such as the Bayesian Fourier Domain method proposed by Bernstein and Armstrong (2014). In the case where the uncertainty on multiplicative bias calibration cannot be shown to be subdominant, residual multiplicative bias can remain in the estimators, and equation 7 must be re-formulated to include terms accounting for the residual bias in each estimator. Subtracting equation 2 from equation 1 in this instance yields: \[\tilde{\gamma}_{\rm t}-\tilde{\gamma}_{\rm t}^{\prime}=B(\theta)\left[(m-m^{\prime})\frac{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}\gamma_{\text{L}}^{j}}{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}}\right.\] \[\left.+(m-am^{\prime})\frac{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}\gamma_{\text{IA}}^{j}}{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}}+(1-a)\frac{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}\gamma_{\text{IA}}^{j}}{\sum\limits_{j}^{\text{lens}}\tilde{w}_{j}}\right].
\tag{8}\]

Normalising by the weighted number of physically associated pairs gives,

\[\frac{\tilde{\gamma}_{\rm t}-\tilde{\gamma}_{\rm t}^{\prime}}{B(\theta)-1+F}=(m-m^{\prime})\tilde{\gamma}_{\rm L,PA}+(m-am^{\prime})\tilde{\gamma}_{\text{IA}}+(1-a)\tilde{\gamma}_{\text{IA}}, \tag{9}\]

where \(\tilde{\gamma}_{\rm L,PA}\) and \(\tilde{\gamma}_{\text{IA}}\) respectively represent the average lensing and IA contributions to tangential shear, per lens-source pair. Here, the subscript PA denotes that the residual is normalised by the weighted number of physically associated pairs, not that only those pairs have contributed to this lensing signal. Essentially, this implies the lensing contribution to shear does not fully cancel in the case where residual multiplicative bias in the estimators is not subdominant, leaving a lensing residual, \((m-m^{\prime})\tilde{\gamma}_{\rm L,PA}\). Note that the term 'lensing residual' refers generally to any part of the lensing signal that was not cancelled by taking the difference of the two tangential shears, whereas residual multiplicative bias specifically refers to a bias remaining in the tangential shear estimates due to the uncertainty on the multiplicative bias calibration. Due to the percent-level contribution of the IA signal to the full tangential shear, even percent-level uncertainty on bias calibration has the potential to leave a lensing residual which dominates the IA signal in the MEM. Accounting for multiplicative bias uncertainty in the case where it is not subdominant is therefore imperative to the success of this method.

### Weighted source redshift distributions

Another potential source of a lensing residual arises when our two estimators have different weighting schemes. It is clear from equations 1 and 2 that different weights would result in different values of tangential shear, even in the absence of IA and multiplicative bias uncertainty, as well as different boost (equation 3) and \(F\) (equation 6) values. To see how the different weighting schemes can manifest as different tangential shears, it is simplest to express \(\tilde{\gamma}_{\rm t}\) as a Fourier-space integral over the matter power spectrum. Following, for example, Prat et al. (2018), the tangential shear is given by,

\[\tilde{\gamma}_{\rm t}(\theta)=b\frac{3}{2}\Omega_{m}\left(\frac{H_{0}}{c}\right)^{2}\int\frac{d\ell}{2\pi}\ell J_{2}(\theta\ell)\int dz\left[\frac{g(z)n_{\mathrm{L}}(z)}{a(z)\chi(z)}P_{\delta\delta}\left(k=\frac{\ell}{\chi(z)},\chi(z)\right)\right]. \tag{10}\]

Here, we have assumed lens galaxies trace the underlying matter with a linear bias, such that \(\delta_{\mathrm{g}}=b\,\delta_{\mathrm{m}}\). \(J_{2}\) is the second-order Bessel function, \(\ell\) is the angular wavenumber, \(k\) is the 3D wavenumber, \(a\) is the scale factor (not the IA offset parameter defined previously), \(\chi\) is comoving distance, and \(n_{\mathrm{L}}(z)\) is the lens redshift distribution. The quantity of interest that changes depending on the weighted redshift distribution of the sources is \(g(z)\), the lensing efficiency, given by:

\[g(z)=\int_{z}^{\infty}dz^{\prime}\,\tilde{w}(z^{\prime})n_{\mathrm{s}}(z^{\prime})\frac{\chi(z^{\prime})-\chi(z)}{\chi(z^{\prime})}, \tag{11}\]

where we have given the weights as a function of redshift since, typically, fainter and noisier galaxies are more likely to be observed at higher redshifts and have lower associated weights.
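As a quick numerical illustration of this sensitivity, the hedged sketch below evaluates equation 11 for a toy flat \(\Lambda\)CDM background, an illustrative source \(n_{\mathrm{s}}(z)\), and two hypothetical weight functions; all numbers here are assumptions chosen only to show that two weighted distributions yield different lensing efficiencies, not the survey-specific quantities used elsewhere in this paper.

```python
import numpy as np

# Toy flat LambdaCDM distances in units of c/H0; purely illustrative values.
Om = 0.3
z_grid = np.linspace(0.0, 3.5, 701)
Ez = np.sqrt(Om * (1 + z_grid) ** 3 + (1 - Om))
chi = np.concatenate(([0.0], np.cumsum(0.5 * (1 / Ez[1:] + 1 / Ez[:-1]) * np.diff(z_grid))))

# Illustrative source n(z) and two hypothetical weighting schemes w(z).
n_s = z_grid ** 2 * np.exp(-(z_grid / 0.6) ** 1.5)
w_a = np.ones_like(z_grid)                 # uniform weights
w_b = 1.0 / (1.0 + 0.5 * z_grid)           # mild down-weighting of faint, high-z sources

def lensing_efficiency(z_lens_idx, weights):
    """Equation (11): g(z) = int_z^inf dz' w(z') n_s(z') [chi(z') - chi(z)] / chi(z').

    The weighted n(z) is normalised here so the comparison reflects its shape
    rather than its overall amplitude.
    """
    zi = z_lens_idx
    integrand = weights[zi:] * n_s[zi:] * (chi[zi:] - chi[zi]) / np.clip(chi[zi:], 1e-12, None)
    norm = np.trapz(weights * n_s, z_grid)
    return np.trapz(integrand, z_grid[zi:]) / norm

i_lens = np.searchsorted(z_grid, 0.5)       # a lens at z = 0.5
g_a, g_b = lensing_efficiency(i_lens, w_a), lensing_efficiency(i_lens, w_b)
print(g_a, g_b, (g_b - g_a) / g_a)          # different weights -> different efficiencies
```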
From equation 11 it is apparent that a redshift distribution weighted by two different schemes for each estimator (i.e. \(\tilde{w}(z)n_{\mathrm{s}}(z)\)), will result in different lensing efficiencies and thus different tangential shears. This is therefore another potential source of a lensing residual, which could contaminate attempts to measure the IA amplitude offset between the two estimators, if not adequately corrected for. ### Galaxy size and limiting resolution In order for the different radial weightings of the two estimators to have a meaningful physical interpretation, therefore capturing a difference in the IA amplitude, it is required that the survey in question is able to resolve the physical scales of both radial weightings. An illustration of this issue is shown in Figure 1; the galaxy has to be larger than the point spread function (PSF) by a greater degree than would be traditionally required with a single estimator, so that the radial sensitivity of a second estimator can peak at a smaller radius while still probing scales larger than the PSF. This therefore implies the galaxy sample used with the MEM has stricter requirements on the effective size of a galaxy than a typical lensing sample. Following the formalism of Chang et al. (2013), the effective size is given by, \[R=\frac{r_{\mathrm{gal}}^{2}}{r_{\mathrm{PSF}}^{2}}, \tag{12}\] where \(r_{\mathrm{gal}}\) is an estimate of the galaxy radius and \(r_{\mathrm{PSF}}\) is the radius of the point spread function for the galaxy in question. To determine if a galaxy is suitable for shear estimation, the measurement noise, \(\sigma_{m}\), is calculated from, \[\sigma_{m}(\nu,R)=\frac{a}{\nu}\left[1+\left(\frac{b}{R}\right)^{c}\right], \tag{13}\] where \(\nu\) is the signal-to-noise ratio of the galaxy image and \((a,b,c)\) are parameters specific to the shear estimator in consideration. Clearly, a galaxy with a small effective size could still be selected for shear estimation based on high signal-to-noise. Therefore, a cut on the effective size is necessary before determining the measurement noise on the remaining galaxies. This requirement on the effective size is also dependent on the exact radial weighting schemes used. We will address this issue again in section 4.1, in the context of LSST. ## 3 Observational case study: des y1 We now carry out the first application of the MEM to observational data, to probe the effectiveness of the method in the context of a recent weak lensing shape sample. Our primary objectives here are to provide a baseline procedure for applying the method to observational data, and highlight complications that may arise when applying the MEM to real data. ### Data and shear estimators We choose to use the DES Y1 lens and source catalogues, as there are two different shear estimators applied to the source sample, alongside a multitude of published work that has used these catalogues. This avoids the need to match galaxies between different surveys which have used different estimators, and provides resources for validation and comparison of intermediate measurements (Prat et al., 2018). #### 3.1.1 Shape catalogues We make use of the two public DES Y1 galaxy shape catalogues (Zuntz et al., 2018), which assign shears using the METCALIBRATION estimator (Sheldon & Huff, 2017, "MCAL" hereafter) and the IM3SHAPE estimator ('IM3' hereafter). The former contains \(34.8\times 10^{6}\) galaxies, and the latter \(21.9\times 10^{6}\) galaxies. 
As previously mentioned, the MEM requires the lensing signal in both estimators to be the same for the lensing shear to cancel. This can Figure 1: Illustration to show how the shear estimators with different radial weightings are limited by the size of the galaxy and the point spread function (red central circle; PSF). A galaxy with a maximum shear measurement radius \(r^{\prime}\) (purple dotted) would be suitable for a single shear measurement, but not for use with the MEM, as any smaller radial weighting - such as \(r^{\prime\prime}\) (blue solid) - would be smaller than the PSF. Therefore, a galaxy with a larger maximum measurement radius is required, such as \(r\) (green dashed), so that the radial weighting of the second estimator (e.g. \(r^{\prime}\)) still has a meaningful physical interpretation. naively (neglecting for the moment the complications discussed in Sections 2.2 and 2.3) be achieved by simply selecting only those galaxies for which shear estimates exist from both MCAL and IM3, to create a matched catalogue of galaxies. This matched catalogue contains \(17.8\times 10^{6}\) source galaxies, with an average effective number density of \(n_{\rm s}=3.26\) arcmin\({}^{-2}\). #### 3.1.2 Lens catalogue The DES Y1 lens catalogue (Elvin-Poole et al., 2018) contains \(660,000\) luminous red galaxy lenses, with redshifts determined via photometry. The redMaGiC (Rozo et al., 2016) algorithm has been applied to these lenses to select them so as to reduce photo-z error to \(\frac{\sigma_{z}}{1+z}<0.02\). For comparative purposes, the photo-z error in the source sample used by Blazek et al. (2012) was \(\frac{\sigma_{z}}{1(z)}\approx 0.11\). The lens catalogue covers a redshift range of \(0.15\leq z\leq 0.9\)Elvin-Poole et al. (2018) and has a number density of \(n_{\rm L}=0.138\) arcmin\({}^{-2}\). ### Application methodology Application of the MEM begins with the calculation of the correlation functions required to estimate tangential shear and the boost factor. We make use of the well established TreeCorr1(Jarvis, 2015) package to estimate these quantities which, for DES Y1 shear estimators, are given by, Footnote 1: [https://github.com/rmjarvis/TreeCorr](https://github.com/rmjarvis/TreeCorr) \[\hat{\gamma}_{\rm t}^{\rm im3}(\theta)=B(\theta)\left(\frac{\sum\limits_{j}^ {\rm lens}\hat{\psi}_{\rm L,j}\hat{\psi}_{\rm s,j}e_{\rm t}^{j}}{\sum\limits_{ j}^{\rm lens}\hat{\psi}_{\rm L,j}\hat{\psi}_{\rm s,j}(1+m_{j}^{\rm im3})}\right), \tag{14}\] \[\hat{\gamma}_{\rm t}^{\rm imcal}(\theta)=\frac{B(\theta)}{\langle R_{\gamma} \rangle+\langle R_{\rm S}\rangle}\left(\frac{\sum\limits_{j}^{\rm lens}\hat{ \psi}_{\rm L,j}e_{\rm t}^{j}}{\sum\limits_{j}^{\rm lens}\hat{\psi}_{\rm L,j}} \right), \tag{15}\] where \(\hat{\psi}_{\rm L,j}\) denotes the lens galaxy weights, \(\hat{\psi}_{\rm s,j}\) denotes the source galaxy weights, and \(e_{\rm t}^{j}\) the galaxy ellipticity estimates for each estimator. \(\langle R_{\gamma}\rangle\) is the average of the galaxies' responses to artificial shear, and \(\langle R_{\rm s}\rangle\) is an additional response that accounts for selection bias when cuts are made to the catalogue. Notice here that MCAL tangential shear does not incorporate individual galaxy weights, as weighting is implicit in a galaxy's response to artificial shear, hence why the sum is normalised by the average response. We use 10 angular separation bins log-spaced between 2.5' and 250', matching the angular separation range used in Prat et al. 
(2018), but reducing the number of bins from 20 to 10 for greater statistical power in each bin, due to the lower effective source density of the matched catalogue and small magnitude of the IA signal. To obtain \(F\), we do not consider the separations of all random-source pairs individually (as this would be computationally prohibitive). Instead, we split sources and randoms into narrow, weighted redshift bins and count the number of random-source pairs in only those bin combinations within 100 Mpc/h line-of-sight separation, as well as the total number of random-source pairs in all bins (see equation (6)). Finally, in order to obtain the covariance on the measurement, we use a jackknife method with 20 patches defined by a k-means algorithm (see Jarvis (2015) for more detail). The jackknife method can be mathematically expressed via, \[\tilde{C}=\frac{N_{\rm patch}-1}{N_{\rm patch}}\sum_{\rm t}(x_{i}-\bar{x})^ {T}(x_{i}-\bar{x}), \tag{16}\] where \(x_{i}\) represents an estimate obtained with patch \(i\) excluded, and \(\bar{x}\) represents the average of the \(x_{i}\) values. In this case, \(x=(1-a)\bar{\gamma}_{\rm IA}\). We use the entire lens sample to maximise statistical power and ensure a large overlap between the lens and source sample, therefore including as many intrinsically aligned pairs as possible. For future analyses, a narrow lens bin could be preferable to localise the measurement in redshift space for easier comparison to other measurements. However, for the purpose of this case study as a first application of the MEM, maximising the signal-to-noise ratio by including as many galaxy pairs as possible was deemed the appropriate choice. ### A naive measurement with the MEM: Results and (in-)validation Figure 2 shows the measurement made using the MEM with DES Y1 data, assuming the simple form of the method outlined in Section 2.1. Considering this result, we first note positively that a non-zero signal is measured, indicating that, in principle, DES Y1 appears to be sufficiently powerful to offer a detection with this method. We also see that the general scale-dependence of the signal - higher alignment at smaller scales with a power-law type drop-off - is consistent with general theoretical expectations for IA. However, the nature of the possible systematic uncertainties outlined in Sections 2.2 and 2.3 would, if dominant, result in a qualitatively similar signal (being due to residual lensing shear). We now proceed to investigate the significance of these complications in our measurement and why they lead us not to claim a detection of IA in this case. Figure 2: Measurement made using the DES Y1 IM3 and MCAL shear estimates with the MEM. A matched source galaxy sample covering the full DES Y1 redshift range and the full lens sample have been used. Error-bars are obtained from jackknife re-sampling. We expect this signal is systematic dominated and thus does not represent the IA signal. #### 3.3.1 Selection response in the NCAL estimator As discussed previously, if any selection is made to the full MCAL catalogue, a selection response must be calculated to re-calibrate the sample weighting in light of this selection. In the case where no change has been made to the source ellipticity distribution through this selection, \(\langle R_{\rm s}\rangle=0\). In the application to DES Y1, the matched catalogue can be thought of as a selection in the MCAL catalogue, based on galaxy parameters from the IM3 catalogue and vice versa. 
Such a selection is highly non-trivial to precisely determine _a posteriori_. However, it is reasonable to assume this selection would primarily depend on a galaxy's signal-to-noise ratio, size, and r-band flux, as these are key in determining if a galaxy is suitable for use with the IM3 estimator. We therefore attempt to estimate the potential selection we may have induced, by cutting on the aforementioned parameters in the NCAL catalogue and following the process detailed in Sheldon & Huff (2017) to calculate the selection response. The cut is designed to yield a'mock' matched catalogue which preserves, as closely as possible, the total number of galaxies and the mean signal-to-noise, size, and r-band flux of the true matched catalogue. The resulting mock catalogue contains \(17,844,302\) galaxies, \(53\%\) of which are also present in the matched catalogue. While this level of similarity suggests the two samples are not as closely matched as would ideally be the case, the similar number of galaxies cut can give an idea of the magnitude of the induced selection response. To confirm this is the case, we compute the Kullback-Liebler (KL) divergence (Kullback & Leibler, 1951; Joyce, 2011) between the \(e_{1}\) ellipticity probability distributions of the full MCAL catalogue and 3 other catalogues: matched, mock-matched, and a randomly selected catalogue with the same number of galaxies as the matched catalogue. We find KL values of \(1.37\times 10^{-5}\) between the full and random catalogue, \(1.56\times 10^{-3}\) between the full and matched catalogue and \(3.25\times 10^{-3}\) between the full and mock catalogue. These values further imply that the induced bias in the matched catalogue is comparable in size to the mock catalogue bias, as the KL values are on the same order of magnitude, with both being much larger than the KL value for the random sampling. Table 1 contains the estimated shear and selection responses for the mock catalogue. From these estimated values, we compute tangential shear from the mock catalogue in the cases where we include and exclude the selection response. We find that not including the selection response biases the tangential shear by an amount comparable to the size of our measured signal. This is illustrated in Figure 3, where the difference between the tangential shears including and excluding the selection response (normalised by the weighted number of physically associated pairs) is shown alongside the measured signal. We therefore highlight this as an important consideration for future applications of the MEM. Such applications will need to ensure any selections made to achieve the same lensing contribution in both estimators are rigorously understood, in order to correct for the biases they may induce. In an ideal scenario, those seeking to apply the MEM should work in tandem with shear estimation teams. For an estimator such as MCAL, responses could then either be derived purely for the MEM catalogue to avoid the need to calculate a selection bias, or biases could be determined with the benefit of appropriate additional per-galaxy information. #### 3.3.2 Galaxy weights and differences in effective source redshift distribution In addition to the specific case of the selection of galaxies done to create the matched catalogue, more general selection and weighting differences between the MCAL and IM3 estimators must be considered. 
Although using the matched catalogue in the case of IM3 and MCAL tangential shear measurements guarantees we use the same literal galaxies for both, the effective contribution to shear from each of these galaxies is different for each estimator. As detailed in (Hoyle et al., 2018), for the DES Y1 case, the effective redshift distribution (which governs the true expected tangential shear) from IM3 must account for per-galaxy explicit weights as well as effective weighting by \((1+m_{l})\). For MCAL, an effective weighting with respect to the per-galaxy response value is required. One might naively imagine re-weighting the version of the matched catalogue for one estimator by the per-galaxy effective weights of the other estimator, to achieve a unified weighting scheme. However, this is generally ill-advised due to correlations between per-galaxy weights and ellipticities even across different estimators, which could induce a severe bias to our measurement. Using the two effective weighted redshift distributions, we compute theoretical tangential shears (making use of the Core Cosmology Library, henceforth referred to as CCL; Chisari et al. (2019)) and take their difference, to estimate the potential lensing residual. The magnitude of this lensing residual is shown alongside the measurement in Figure 4. For this particular combination of effective weighting schemes, the lensing residual is clearly a significant concern. We therefore caution future applications that even a small difference in the weighted redshift distributions (here we saw only a 0.017 shift in the mean redshift between weighting schemes) can result in a large contamination to the signal. \begin{table} \begin{tabular}{c c c c} \hline & \(\langle{\bf R}_{\gamma}\rangle\) & \(\langle{\bf R}_{\rm s}\rangle\) & \(\langle{\bf R}_{\gamma}\rangle+\langle{\bf R}_{\rm s}\rangle\) \\ \hline R11 & 0.747 & 0.0236 & 0.771 \\ R22 & 0.749 & 0.0234 & 0.772 \\ \hline \end{tabular} \end{table} Table 1: Responses calculated for the selection made to create the mock catalogue. The cuts placed were lower limits of \(S/N\geq 16.8\), \(f_{\rm r-band}\geq 445\), and \(R\geq 0.3250\) arcsec\({}^{2}\). Figure 3: Measured signal from the MEM (yellow circles) and the magnitude of a potential selection bias induced in the creation of the matched catalogue (blue squares), calculated as the difference of tangential shears including and excluding the selection response. It is clear that the measured signal could be largely composed of a calibration error resulting from the missing selection response. We also consider this effect in relation to the the boost and \(F\), finding only a \(0.44\%\) difference in \(F\) between using either individual MCAL responses or IM3 weights and a \(0.03\%\) difference (averaged over angular separation bins) in the corresponding boost values. We therefore deem this choice to be arbitrary in our case, given that the weighting for each estimator does not result in a large percentage change in either \(F\) or the boost. #### 3.3.3 Multiplicative bias residuals The final potential contaminant we investigate is multiplicative bias residual in the tangential shear estimates (detailed in Section 2.2). To determine the \(1\sigma\) upper limit of any contamination due to residual multiplicative bias, we generate a theoretical tangential shear (again using CCL) from the unweighted matched catalogue source redshift distribution, and take the \(1\sigma\) uncertainty values from Zuntz et al. 
(2018) of \(m^{\rm{mcal}}=\pm 0.013\) and \(m^{\rm{m3}}=\pm 0.025\), to calculate a theoretical lensing residual, \((0.025+0.013)\bar{\gamma}_{\rm{L,PA}}\). This can be seen alongside the measured signal in Figure 5, where it is evident that our measurement could be significantly contaminated by this lensing residual. We will investigate an alternative approach where the multiplicative bias uncertainty is combined with the measurement covariance in Section 4.2, in the context of Stage IV surveys. ### Summary of findings from DES Y1 case study We briefly outline the key lessons from this first application of the MEM to observational data, to take forward when considering application of the MEM to Stage IV surveys (the focus of Section 4 and the remainder of this paper). * **The matched catalogue must be constructed in a way that ensures selection bias can be minimised and / or well-characterised**: Using similar shear estimators could be useful; for example, a modified MCAL estimator with a different radial weighting, used in conjunction with the standard MCAL estimator. * **Establishing a shared weighting schemes is crucial to ensure differences in weighting (explicit or effective) do not manifest as differences in the lensing shear**: Future applications should make it a priority to construct a matched weighting scheme for both estimators. Alternatively, another approach could be to forgo weighting galaxies altogether where signal-to-noise allows (see, e.g., Zhang et al. (2023)). * **Multiplicative bias uncertainty must be demonstrably subdominant, or accounted for within the overall measurement uncertainty**: Future applications should select estimators with the lowest levels of calibration uncertainty possible. The uncertainty should also be accounted for in conjunction with the measurement covariance. We will introduce a formalism for this in Section 4.2. ## 4 Forecasting for Stage IV surveys Given the considerations and procedure we have outlined using our findings from the DES Y1 case study, we now present a forecast for the performance of the MEM with synthetic data sets representative of a Stage IV lensing survey. Specifically, we consider LSST Y1 and Y10. Using this forecast, we will seek primarily to address the issue of residual multiplicative bias, by removing the _a priori_ assumption of sub-dominance and instead accounting for it within the measurement uncertainty. This will then allow us to place requirements on shear estimators in two cases: detection of IA with the MEM, and constraint of the IA scale dependence with the MEM. We choose to focus here on multiplicative bias because, as established in Section 3, issues with selection bias and differences of effective weighting schemes are more readily overcome with the benefit of foresight (e.g. by preserving the per-galaxy quantities required for selection bias correction) and therefore we anticipate being able to circumvent them for application of the MEM to stage IV surveys. For a pair of hypothetical shear estimators, we will forecast MEM performance with respect to their amplitude offset parameter, \(a\), and the Pearson correlation coefficient (Cohen et al., 2009) of their shape-noise, \(\rho\). It is important to note here that we do not seek to place strict limits or requirements on observational choices or shape estimators. Such limits would be specific to the analysis choices made here and the survey in question. 
Instead, we aim to provide more general guidelines and targets for the development of bespoke shape estimators to be used with the MEM.

Figure 4: Magnitude of lensing residual resulting from the different weighting schemes used by IM3 and MCAL (green triangles). Multiplicative bias residuals have not been considered in this case. Yellow circles are the measured signal from the MEM as in previous plots.

Figure 5: Potential lensing residual resulting from the \(1\sigma\) 'worst case' post-calibration multiplicative bias residuals, \(m_{\rm im3}=0.025\) and \(m_{\rm mcal}=-0.013\) (purple diamonds). In computing the theoretical lensing residual in this case, the unweighted source redshift distribution has been used for the theoretical tangential shear.

### Preparation of synthetic data vector

#### 4.1.1 Redshift distributions

In order to carry out the forecasting, we assume the prescriptions for lens and source galaxy samples given in the LSST DESC Science Requirements Document v1 (LSST DESC 2018; The LSST Dark Energy Science Collaboration et al. (2018)). Where sample-dependent parameters are referenced, unless otherwise stated, it can be assumed the associated values are taken from LSST DESC 2018. To address the issue of galaxy size discussed in Section 2.4, we impose a strict effective size cut of \(R\geq 3\). Relating this to specific radial weightings and values of \((1-a)\) would require analysis of galaxy images, which is beyond the scope of this work and is deferred to a future analysis. However, we anticipate that such a cut would likely be sufficient and potentially even excessive. Determining exact values for a given pair of estimators will be the subject of future analysis. Given this cut, we re-compute the LSST DESC 2018 redshift distributions 2 using WeakLensingDelending 3 (Sanchez et al., 2021) simulated galaxy catalogs for Y1 and Y10, where Y1 is defined as being 10% of the total 10-year exposure time. Figure 6 shows the raw distributions and the best fit to the LSST DESC 2018 parametric distribution given by,

Footnote 2: To do this, we use a modified version of this Jupyter notebook: [https://github.com/LSSTDESC/Requirements/blob/master/notebooks/RedshiftDistributions.ipynb](https://github.com/LSSTDESC/Requirements/blob/master/notebooks/RedshiftDistributions.ipynb)

Footnote 3: [https://github.com/LSSTDESC/WeakLensingDelending/blob/master/docs/index.rst](https://github.com/LSSTDESC/WeakLensingDelending/blob/master/docs/index.rst)
For the source bin, we use the full redshift range between \(0.05\leq z_{\rm s}\leq 3.5\), for maximum statistical power. We also account for photo-z uncertainty where appropriate, by convolving equation 17 with a Gaussian uncertainty model from LSST DESC 2018, \[p(z_{\rm s},z_{\rm ph})=\frac{1}{\sqrt{2\pi}\sigma_{z}}\exp\left[-\frac{(z_{ \rm ph}-z_{\rm s})^{2}}{2\sigma_{z}^{2}}\right], \tag{18}\] where \(z_{\rm s}\) and \(z_{\rm ph}\) represent spectroscopic and photometric redshift respectively and \(\sigma_{z}\) is defined as \(0.05(1+z_{\rm s})\)for sources and \(0.03(1+z_{\rm s})\) for lenses. It is important to mention that including the full source sample behind the lenses has the potential to increase the magnitude of the lensing signal and thus any lensing residuals. In this work, we will only consider the full sample to try and obtain limits on the acceptable values of \(a\) and \(\rho\) in the instance where statistical uncertainty requires we use the full sample. However, in a real analysis, some benefit could be gained from placing a lower upper limit on the source redshift bin, to restrict the number of sources behind the lenses which are contributing to the lensing shear, but not the IA shear. The exact limit will depend on the survey in question, as a balance would need to be struck between retaining an acceptable signal-to-noise ratio for the overall shear, whilst minimising the lensing residual. #### 4.1.2 Halo Occupation Distributions To calculate theoretical tangential shears for lensing and IA, as well as the boost, we first require power spectra for the quantities of interest. To obtain predictions within the 1-halo regime, where we are Figure 6: LSST Y1 (left) and Y10 (right) source galaxy redshift distributions obtained after the effective size cut. The parametric model from LSST DESC 2018 has been re-fit to obtain a new model. The \(n_{\rm eff}\) values correspond to a loss of roughly 2 and 11 sources per square arcminute for the Y1 and Y10 cases respectively, when compared to the LSST DESC 2018 \(n_{\rm eff}\) values. most interested in using the MEM, we use the halo model formalism (Seljak, 2000; Peacock and Smith, 2000; Cooray and Sheth, 2002), \[P_{uv}(k)=\int dMn(M)\langle u(k|M)v(k|M)\rangle\] \[+\int dMn(M)b(M)\langle u(k|M)\rangle\,\int dMn(M)b(M)\langle v(k |M)\rangle P_{\rm lin}(k). \tag{19}\] Here, \(k\) is the wavenumber, \(M\) is halo mass, \(P_{\rm lin}(k)\) is the linear power spectrum, and \(u(k|M)\) and \(v(k|M)\) are the halo profiles for the auto / cross-correlation of interest. The first and second terms of equation 19 represent the 1-halo and 2-halo contributions respectively. In the context of this work, the profiles needed are; lens galaxy density, given by the halo occupation distribution (HOD) of Nicola et al. (2020); source galaxy density, given by the HOD of Zu and Mandelbaum (2015); matter density, given by the Navarro-Frenk-White profile (Navarro et al., 1996; Navarro et al., 1997); and a satellite shear HOD for intrinsic alignment in the 1-halo regime (Schneider and Bridle, 2010; Fortuna et al., 2021). The lens HOD was chosen as it is based upon Hyper Suprime Cam data and therefore able to model a sample with a number density somewhat representative of deep, LSST observations. In Appendix A, we verify our implementations of the 2-point cumulants in equation 19 are correct. From the halo model and the above HODs, we obtain the 1-halo and 2-halo terms for lensing shear and the boost, and the 1-halo IA term. 
The 2-halo IA term instead comes from an NLA model (Bridle and King, 2007). Having introduced our fundamental modelling choices, we will now go on to describe the modelling procedure in greater detail. #### 4.1.3 Lensing and IA shears For on-sky separation binning, we consider seven log-spaced bins in the projected separation range \(0.1\leq r_{\rm p}\leq 10\). Unlike in DES Y1, here, we choose to consider \(r_{\rm p}\) instead of angular separation \(\theta\), for easier comparison of this work to L2018 and Blazek et al. (2012). This choice means we are accessing the 1-halo regime, where we are most interested in using the MEM to study IA scale dependence. As in Section 3 above, the theoretical lensing shear is obtained using CCL (Chisari et al., 2019), but this time using the halo model to compute both the 1-halo and 2-halo contributions to the galaxy-matter power spectrum, \(P_{\rm gM}(k)\). The power spectrum can then be used to obtain the angular power spectrum via, \[C_{\rm gM}(\ell)^{\rm len}(\ell)=\frac{9}{4}\Omega_{m}^{2}H_{0}^{4}\int\frac{ dk}{k}P_{\rm gM}(k)W_{\rm g}(k)W_{\rm f}(k), \tag{20}\] where \(\ell\) is the angular multipole at which the spectrum is defined, g and M refer to galaxies and matter respectively, \(\Omega_{m}\) is the matter fraction of the universe, \(H_{0}\) is the Hubble constant, \(W_{\rm g}(k)\) is the Fourier transform of the redshift distribution, and \(W_{\rm f}(k)\) is the lensing efficiency kernel. Note that we take the fiducial cosmology to be the same as LSST DESC 2018. Assuming B-modes are zero, we can then find the projected correlation function in real space (which is analogous to the tangential shear in this context) using, \[\gamma_{\rm f}(\theta)=\sum_{\ell}\frac{2\ell+1}{4\pi}+C_{\rm gM}^{\rm len}( \ell)d_{0,2}^{\ell}(\theta), \tag{21}\] where \(\theta\) is the angular separation of the tracers in question and \(d_{0,2}^{\ell}\) are the Wigner-d matrices for tracers with spins 0 (galaxies) and 2 (shear). We solve this equation via a brute force sum to avoid instabilities that arise when considering the high \(\ell\) values required to probe the 1-halo regime. To model the theoretical IA signal, we draw on several areas of the literature. We compute the lens position and 1-halo satellite alignment angular cross spectrum, \(C_{\rm gI}^{\rm 1h}(\ell)\) from the prescription of Fortuna et al. (2021) (hereafter: F2021). To account for the different red and blue galaxy alignment amplitudes, we take a weighted average of the 1-halo amplitudes for red and blue galaxies given in F2021. For LSST Y1, we use an approximate red fraction of \(f_{\rm red}=0.10\), which decreases to \(f_{\rm red}=0.05\) for Y10. We note that these values are not rigorously determined, but rather qualitative estimates. We vary the value of \(f_{\rm red}\) within a reasonable range of \(0.00\leq f_{\rm red}\leq 0.30\) and find a negligible effect on the 1-halo signal, due to the small difference between the red and blue galaxy 1-halo amplitudes found in F2021. A more significant effect could arise from differences in the luminosity functions of F2021 and LSST. Here, a detailed analysis of these luminosity functions is not appropriate, so we instead determine lower limits on the average alignment amplitude, below which the maximum lensing residual dominates the 1-halo IA signal. These are \(a_{\rm 1h}>1.00\times 10^{-4}\) and \(a_{\rm 1h}>5.00\times 10^{-5}\) for Y1 and Y10 respectively. 
We also compute a second angular cross spectrum for the 2-halo regime, \(C_{\rm gI}^{\rm NLA}(\ell)\), using a redshift dependent NLA model (Hirata et al., 2007; Bridle and King, 2007) following equation 24 of Secco et al. (2022), with best fit parameters taken from row three of Table III in (Secco et al., 2022) (lens bias is also accounted for using the prescription given in LSST DESC 2018). It is important to note that our choice of fiducial IA model may not represent what is eventually seen in LSST data, due to the increased depth of LSST compared to the samples used in F2021 and DES Y3. However, it is nonetheless necessary for us to choose some fiducial IA model in the absence of any measurements truly representative of LSST. We refer the reader to Krause et al. (2016) for discussion on the complexities of forecasting the IA contamination to stage IV surveys. We caution that the findings of this study may not apply if the true signal is found to be vastly different to the models we have used here, but ultimately it is necessary for us to make some modelling choices. We defer a detailed comparison of different IA models to future work. Having obtained the 1-halo and 2-halo spectra, we then combine them, truncating each via a window function to avoid double counting in the 1-halo to 2-halo transition, \[C_{\rm gI}^{\rm 1h+NLA}(\ell)=C_{\rm gI}^{\rm 1h}\left(1-\exp\left[ \left(-\frac{\ell}{\ell_{\rm 1h}}\right)^{2}\right]\right)+C_{\rm gI}^{\rm NLA}\left(\exp \left[\left(-\frac{\ell}{\ell_{\rm 2h}}\right)^{2}\right]\right), \tag{22}\] with \(\ell_{\rm 1h}=1.4\times 10^{4}\) and \(\ell_{\rm 2h}=3\times 10^{4}\) chosen to give as smooth a transition as possible, and subscript I denoting an intrinsic shape tracer. We then use this angular power spectrum to compute the IA tangential shear for our choice of projected and tomographic bins. Figure 7 shows the various tangential shears discussed here. All quantities shown have been normalised by the estimated number of physically associated pairs, which is why we see a flattening of the lensing signal in the scales where the boost factor strongly dominates and contribution from \(F\) is negligible, as the boost factor has a similar scale dependence to the lensing signal. #### 4.1.4 Galaxy weights and a new definition for F In keeping with LSST DESC 2018, we adopt a simple weighting scheme for all source galaxies given by, \[\tilde{w}_{j}=\frac{1}{\sigma_{\gamma}^{2}+(\sigma_{e}^{j})^{2}}, \tag{23}\] with shape noise, \(\sigma_{\gamma}=0.12\) and per component measurement error, \(\sigma_{e}^{j}=0.26\), such that all galaxies are weighted equally. We also adopt a new definition for \(F\)(Blazek et al., 2015; Safari et al., 2023), by redefining the maximum line-of-sight separation at which we expect galaxies to be physically associated and therefore contributing to the average IA signal in a particular projected separation bin, \[\Pi(r_{\rm p})=\begin{cases}r_{\rm p}&\text{if }r_{\rm p}>2\\ 2&\text{if }r_{\rm p}\leq 2\end{cases}. \tag{24}\] This definition of \(F\) extends on the previous one to remove the contribution of pairs that are close in projected separation, but far in line-of-sight separation. The lower limit of 2 Mpc/h ensures that all galaxies within the same halo are included, regardless of how small their projected separation may be. We note that if this definition is used with large projected separations, some upper limit on the line-of-sight separation may be required to avoid artificially diluting the signal. 
However, this will not pose a problem in our analysis, as we do not consider \(r_{\rm p}\) beyond 10 Mpc/h. More detail on the calculation of \(F\) and the boost factors is given in Appendix B. ### Residual multiplicative bias in LSST-like estimators Because we are not looking to cross correlate different source tomographic bins, we assume a constant multiplicative bias across all redshifts. In LSST DESC 2018, the allowed levels of multiplicative bias uncertainty given are \(\pm 0.013\) and \(\pm 0.003\) for Y1 and Y10 respectively. #### 4.2.1 Forecasting procedure We use the TJPCov 4 package to estimate the statistical covariance on our forecasts, and validate our approach against the LSST DESC 2018 forecast covariance. It is important to note that inclusion of a residual multiplicative bias alters the covariance expression given in L2018; we address this in Appendix C, but find it amounts to only percent level corrections to the original expression, so therefore compute the statistical covariance in the same way as L2018. Footnote 4: [https://github.com/LSSTDESC/TJPCov](https://github.com/LSSTDESC/TJPCov) We estimate the IA component of the residual multiplicative bias, \((m-am^{\prime})\tilde{\gamma}_{\rm IA}\), but find it is only significant when \(a\) is very large. In this case, the lensing residual would dominate the IA signal, and any future applications (by nature of the method) should seek to minimise \(a\). We therefore assume this term to be negligible. Since we cannot determine the level of multiplicative bias residual in the estimators to greater precision than the associated uncertainty on the calibration, we choose to treat it as a systematic uncertainty of the MEM and combine it with the estimated statistical uncertainty. We start by assuming \((m-am^{\prime})\tilde{\gamma}_{\rm IA}\) is negligible, for reasons discussed above, and multiplicative bias is the only source of a lensing residual. In this case, the measured signal from the MEM is then, \[\Delta\tilde{\gamma}_{\rm I}=\delta m\tilde{\gamma}_{\rm L,PA}+(1-a)\tilde{ \gamma}_{\rm IA}, \tag{25}\] where \(\Delta\tilde{\gamma}_{\rm I}\) represents the difference in tangential shear between the two shape estimators and \(\delta m=(m-m^{\prime})\). If we consider the statistical covariance computed from TJPCov to be the covariance on \(\Delta\tilde{\gamma}_{\rm I}\), then the uncertainty on the quantity of interest, \((1-a)\gamma_{\rm IA}\), is given by: \[\begin{split}&\text{Cov}[(1-a)\tilde{\gamma}_{\rm IA}(r_{\rm p}^{ j}),(1-a)\tilde{\gamma}_{\rm IA}(r_{\rm p}^{j})]=\\ &\text{Cov}[\Delta\tilde{\gamma}_{\rm I}(r_{\rm p}^{j}),\Delta \tilde{\gamma}_{\rm I}(r_{\rm p}^{j})]+\text{Cov}[\delta m\tilde{\gamma}_{\rm L,PA}(r_{\rm p}^{j}),\delta m\tilde{\gamma}_{\rm L,PA}(r_{\rm p}^{j})]\\ &-\text{Cov}[\Delta\tilde{\gamma}_{\rm I}(r_{\rm p}^{j}),\delta m \tilde{\gamma}_{\rm L,PA}(r_{\rm p}^{j})]-\text{Cov}[\delta m\tilde{\gamma}_{ \rm L,PA}(r_{\rm p}^{j}),\Delta\tilde{\gamma}_{\rm I}(r_{\rm p}^{j})].\end{split} \tag{26}\] Where \(i\) and \(j\) denote individual \(r_{\rm p}\) bins. To find the lensing residual covariance, we construct a Gaussian distribution for \(\delta m\) with a mean \(\mu=0\) and standard deviation \(\sigma=\sqrt{m^{2}+(m^{\prime})^{2}}\). 
Using \(N=1000\) random draws from this distribution and multiplying them by our forecast lensing signal normalised over the boost and F, we calculate the lensing residual covariance using the re-sampling formula, \[\text{Cov}[x^{i},x^{j}]=\frac{1}{N-1}\sum_{k=1}^{N}(x_{k}^{i}-\bar{x}^{i})(x_{ k}^{j}-\bar{x}^{i})^{T}, \tag{27}\] where \(x_{k}\) represent individual samples, \(\bar{x}\) is the mean value of all samples, \(i\) and \(j\) again represent \(r_{\rm p}\) bins, and \(T\) denotes the transpose. We estimate the cross-covariance between the lensing residual and the measured signal in a similar fashion, by adding the mean of the lensing residual samples to our fiducial IA signal to obtain \(\Delta\tilde{\gamma}_{\rm I}\), and drawing 1000 samples from a multivariate Gaussian defined by \(\Delta\tilde{\gamma}_{\rm I}(r_{\rm p})\) and the statistical covariance from TJPCov, then using the equation 27 to determine the two cross-covariance terms. In this forecasting scenario, we are capable of isolating the IA component and measuring the signal-to-noise, and so for the following results we will focus on this to better understand where we should ideally target in the \(a\)-\(\rho\) parameter space. In an observational scenario, this is not the case. Despite this, because the mean of the residual multiplicative bias distribution should be zero, in most cases, the sample mean of the lensing residuals is much smaller than the IA signal, and equation 25 is dominated by the IA component. Therefore, in an observational context, following the above procedure is Figure 7: Comparison of tangential shears for the different IA models discussed. The windowed tangential shear (yellow squares) is the model used in our forecasting. It is obtained via a truncated combination of the 1-halo (black crosses) and NLA (red crosses) models. The maximum lensing residual (purple triangles) is shown for comparison. We show the absolute value of the IA signal for easier visual comparison to the lensing residual. still appropriate, as it widens the uncertainty on the signal to account for outliers in the residual bias. Even if the measured signal were to include a significant contribution from a lensing residual, this would be captured by large cross-covariance between the residual and the measured signal. ### Forecasting results The key variables of the method which can potentially be tuned and controlled are the amplitude offset parameter, \(a\), and the shape noise correlation between the estimators, \(\rho\). We therefore choose to forecast the signal-to-noise ratio (SNR) for different values of \(\rho\) and \(a\), to place requirements on these parameters for an IA signal to be detected in LSST data, whilst accounting for realistic levels of residual multiplicative bias. We will consider three different definitions of the SNR to better contextualise our results: the SNR in a single \(r_{\rm p}\) bin, the combined SNR for all \(r_{\rm p}\) bins including the full covariance matrix, and the SNR for a 1-halo scale dependence parameter fit using an Markov-Chain Monte-Carlo (MCMC) method. #### 4.3.1 Requirements for the detection of IA The simplest question of whether or not we expect a detection of IA can be answered by looking at the SNR in a single \(r_{\rm p}\) bin. To claim a detection, we require (assuming all other systematics are controlled) that the SNR be greater than or equal to 1 for a given \(r_{\rm p}\) bin, else the \(1\sigma\) error bars would be consistent with zero. 
Figure 8 shows the per-bin SNR as a function of \(r_{\rm p}\) for Y1 and Y10, visualised for a selection of \(\rho\) and \(a\) pairings which are 'borderline' in terms of detection. From Figure 8 we can infer that, to obtain a detection of IA in all or most \(r_{\rm p}\) bins, a value of \(a\leq 0.6\) is required. Higher shape noise correlation values would allow for a detection with slightly higher values up to \(a\leq 0.7\), but \(a\) itself appears to have a greater impact on the per-bin SNR in all cases, as shown by the wider spread of points in the bottom two panels. We therefore expect from this that, given the estimators used meet this requirement, LSST Y1 levels of multiplicative bias should still allow for a \(1\sigma\) detection of IA with the MEM, with Y10 allowing for potentially a \(2\sigma\) or higher detection for the same estimators. It is important to note the \(a\) values considered here may be conservative. We have chosen these values to showcase Figure 8: SNR in each \(r_{\rm p}\) bin taken as the forecast IA signal divided by the \(1\sigma\) uncertainty. Values greater than 1 represent a detection of IA above statistical noise and uncertainty due to multiplicative bias, for the given \(r_{\rm p}\) bin in isolation. LSST Y1 is shown on the left and Y10 on the right. The top panels show different cases where \(\rho\) is varied but \(a\) kept fixed, while the bottom panels show the opposite. A much higher SNR is seen in the lowest projected separation bin where the 1-halo term becomes highly dominant. Varying \(a\) has a more significant effect on the SNR than varying \(\rho\). However, in the lowest signal to noise bins \(\rho\) can be the difference between a detection and a signal consistent with zero. In Y10 compared to Y1 we see a roughly factor of 2 increase in per-bin signal to noise. clearly the boundary at which the MEM becomes unable to detect the IA signal; of course, pushing \(a\) further below 0.6 would allow for an even stronger detection. For now, we expect as long as the estimators have \(a\leq 0.6\), the MEM has potential to detect IA in LSST Y1. Such a value should be achievable given the differences in IA amplitude found in Singh & Mandelbaum (2016). In all cases, significantly higher signal to noise is seen in the lowest projected separation bin, as a result of the different scale dependencies of the IA and lensing signals (which propagates forward into the lensing residual and thus our covariance) inside the 1-halo regime. For example, Georgiou et al. (2019) find a 1-halo IA scale dependence represented by a power-law in \(r_{P}\) with index of \(b=-2\) in Galaxy and Mass Assembly survey (GAMA) and KiDS data. This value is also used by F2021 in the construction of their IA halo model, and as such we have chosen to use the same value in our modelling here. On the other hand, the lensing signal scale dependence seen in similar samples used by Viola et al. (2015) and Dvornik et al. (2018) appears to follow roughly \(b=-1\). This implies it is not unsurprising that at projected separations of \(r_{\rm p}\approx 0.1\)Mpc/h we begin to see a much stronger alignment signal, resulting in a very high SNR. #### 4.3.2 Full covariance signal-to-noise The question of whether we expect a detection of IA is not the only one, however, particularly as the MEM is, by construction, unable to independently measure the amplitude of an IA signal (in isolation from precise external information on the value of \(a\)). 
We are thus motivated to consider the constraining power of the signal more broadly, with a key objective of the MEM being to place model independent constraints on the IA scale dependence. To move towards understanding this, we first consider the overall SNR, which can be obtained from the full covariance matrix in all \(r_{\rm p}\) bins with the following equation, \[\frac{S}{N}=\sqrt{[(1-a)\bar{\gamma}_{\rm IA}]\mathrm{Cov}_{\rm IA}^{-1}[(1-a) \bar{\gamma}_{\rm IA}]^{T}}. \tag{28}\] From this definition, we can see how the covariance between \(r_{\rm p}\) bins affects the total SNR across all bins. As \(\rho\) impacts the covariance matrix, computational limitations mean we cannot calculate a large quantity of covariance matrices to probe \(\rho\) values. Instead, we calculated the covariance matrices for nine \(a\) and \(\rho\) values between 0.1 and 0.9, resulting in 81 combinations. We then interpolate to obtain a smooth picture of the SNR across the \(a\)-\(\rho\) parameter space. Figure 9 shows the full covariance SNR for Y1 and Y10. Across, the entire parameter space, we see very high signal to noise, even in places where we would not expect to obtain a detection of IA. There are two reasons for this. First, as discussed in Section 4.3.1, even for poor values of \(a\) and \(\rho\), there is still a strong signal in the lowest separation bin. Second, the covariance matrix has highly correlated off diagonal terms, particularly in lower \(r_{\rm p}\) bins. This is shown in Figure 10. This high correlation implies that, while \(\rho\) has a less significant effect on whether we expect a detection of IA in a given \(r_{P}\) bin or not, its effect on the constraining power of the overall measurement with respect to parameters of interest may be significant. We will now go on to explore if this is indeed the case. #### 4.3.3 1-halo scale dependence constraints Evidently, the overall SNR does not, by itself, provide a full picture of the forecast constraining power of the MEM, with respect to model parameters of interest, namely, scale-dependence. To explore this in greater detail, we carry out MCMC fits using the emcee.5(Foreman-Mackey et al., 2013) package. We fit the synthetic measurement to a 4-parameter truncated power-law model, designed to qualitatively approximate our fiducial IA model, while maintaining a realistic level of model agnosticism: Footnote 5: [https://github.com/dfm/emcee](https://github.com/dfm/emcee) \[(1-a)\bar{\gamma}_{\rm IA}=a_{\rm 1h}r_{\rm p}^{b_{\rm 1h}}\left( \exp\left[-\left(\frac{r_{\rm p}}{0.3}\right)^{2}\right]\right)\\ +a_{\rm 2h}r_{\rm p}^{b_{\rm 2h}}\left(1-\exp\left[-\left(\frac{r_{ \rm p}}{0.75}\right)^{2}\right]\right). \tag{29}\] Figure 9: Combined SNR in all \(r_{\rm p}\) bins across the \(a\)-\(\rho\) parameter space for LSST Y1 (left) and LSST Y10 (right). We see high signal to noise in the entire explored region, even in areas where we expect the majority of the signal to have \(1\sigma\) uncertainty consistent with zero. This is a result of the high SNR in the lowest separation bin and highly correlated off diagonal elements of the covariance matrix. Note that in this case, the amplitudes which are fit (\(a_{\rm 1h}\) and \(a_{\rm 2h}\)) will be a fraction of the true amplitude, dictated by the value of \(a\). Compared to equation 22, we have swapped the order of the truncation terms as \(\ell\) and \(r_{\rm p}\) are inversely proportional. 
The truncation scales of 0.3 Mpc/h and 0.75 Mpc/h are chosen to give the best fit to the fiducial signal from maximum likelihood estimation. To constrain the model parameter space, we adopt a set of uniform priors, \(a_{\rm 1h}\in[0,10]\), \(b_{\rm 1h}\in[-10,0]\), \(b_{\rm 2h}\in[-10,0]\), and \(a_{\rm 2h}\in[0,10]\). We run chains for each of the 81 combinations of \(a\) and \(\rho\), initialising 32 walkers in a small spread around the maximum likelihood estimates, and allowing the chains to run until all have achieved convergence. which we define as, \(N>50\tau\), where \(N\) is the total number of iterations and \(\tau\) is in the integrated auto-correlation time. We note that a stricter test of convergence should ideally be used when placing model constraints with real data, but for the purposes of probing the acceptable values of \(a\) and \(\rho\), this criteria is sufficient. Marginalising over the other 3 parameters in the model, we estimate the forecast SNR for the 1-halo scale dependence, \(b_{\rm 1h}\), by taking the 50th percentile (median) value as the best fit and the distance between the 16th and 84th percentiles as the \(1\sigma\) (68%) confidence region. We choose to take the median rather than the maximum likelihood estimate, as we found it to be more robust to variations in walker initialisation and allowed more freedom of the model within the signal uncertainties. The resulting SNR from these fits is shown in Figure 11. Encouragingly, we see that even for certain \(a\)-\(\rho\) combinations where we do not expect detection, a \(1\sigma\) or greater constraint on the scale dependence is still possible with high enough values of \(\rho\). The Figure 11: Signal-to-noise ratio for the constraint on 1-halo scale dependence with LSST Y1 (left) and Y10 (right) like data. Different combinations of \(a\) and \(\rho\) are shown. We define the signal as the median of the posterior distribution of \(b_{\rm 1h}\) values and noise as the region containing 68% of the posterior probability (\(1\sigma\) uncertainty). Figure 10: Correlation matrices for LSST Y10 at \(\rho=0.10\) (left) and \(\rho=0.90\) (right). In the right panel we see very high correlation across the entire matrix, including in the larger \(r_{\rm p}\) bins. This is not the case for the lower \(\rho\) value on the left, however, in the smaller \(r_{\rm p}\) bins we still see significant correlation. importance of high \(\rho\) values is further emphasised here for ensuring the tightest possible constraints, with \(\rho=0.90\) resulting in twice the SNR compared to \(\rho=0.10\) for the same \(a\) value. Similar to what was seen in the per-bin diagonal SNR, going from Y1 to Y10, we again see an approximately factor of 2 increase in the constraint SNR. An interesting question arises when we consider the relation between \(a\) and \(\rho\) themselves. If we could expect that \(\rho\) were to increase as \(a\) decreased, it would be greatly beneficial to the development of bespoke estimators. However, the inverse could make designing an estimator suitable for Y1 in particular challenging. Answering this question would require the analysis of specific shape estimators, which is beyond the scope of this work, but we highlight this as a key consideration for future research. ## 5 Discussion and Conclusions In this work, we have carried out the first application of the MEM for measuring and / or constraining IA developed in L2018. 
Using DES Y1 shear estimators, we showed how the MEM could be applied, and investigated systematic errors that may pose problems to attempts to use this method. We identified three key systematics that future applications must treat or account for in order for the technique to succeed: * Selection biases induced when making catalogue cuts to match the lensing contribution to shear between shape samples. * Differences in effective weighting schemes between the two samples, altering the effective redshift distribution of the samples, and thus the measured lensing signal. * Residual multiplicative biases in the lensing signal, due to calibration uncertainty, resulting in a lensing residual when cancellation is performed. Our investigation into these systematics showed them each to individually be serious contaminants to our measured signal, leading us to not claim a detection of IA in DES Y1. Despite this, having developed the tools and knowledge necessary to apply the MEM, we went on to forecast the significance of the multiplicative bias induced lensing residual in Stage IV data. We additionally considered the requirement on galaxy size, such that a galaxy should be well enough resolved that differences in radial weighting between two shape estimators are physically meaningful. Using a strict cut on galaxy effective size of \(R\geq 3\), we constructed a new sample from the original LSST DESC 2018 source sample. We used halo occupation distribution models to theoretically determine the observed quantities necessary for the MEM, and thus forecast IA and lensing residual signals. For a fiducial IA signal in our forecasts, we used a combination of the IA halo model (Schneider & Bridle (2010),F2021) and a redshift dependent non-linear alignment model with parameters from the DES Y3 best fits (Sccco et al., 2022). We stress again, this choice, while well motivated by observations and literature, may not necessarily be representative of the contamination in LSST, and therefore our forecasting results are guidelines for future applications of the MEM, rather than strict requirements. We plan in future work to determine how varying IA models could impact the signal obtained from the MEM via analysis of simulated galaxy images. With a set of fiducial signals, we developed a scheme whereby uncertainty in the multiplicative bias can be accounted for as a systematic error and estimated its contribution to the covariance. We found even in the presence of multiplicative bias uncertainty \(m=\pm 0.013\), there is possibility of a \(1\sigma\) or higher detection in LSST Y1 for values of \(a\leq 0.6\), with limited dependence on \(\rho\). Higher \(\rho\) values can be important when the signal-to-noise on the signal is close to 1, however, the primary factor in determining if detection is possible is the portion of the IA signal recovered, which is higher for lower \(a\). For LSST Y10, the drop in multiplicative bias uncertainty to \(m=\pm 0.003\) results in a roughly factor of 2 increase in the SNR, and thus enables detection at higher \(a\) values. A general limit for Y10 could be taken as \(a\leq 0.80\). When addressing the ability of the MEM to constrain the 1-halo scale dependence of the IA signal, we found \(\rho\) to become more significant, due to high off diagonal values in the forecast covariance matrices. Even for values of \(a\) which would result in a signal consistent with zero, constraints on the scale dependence could have a SNR greater than 1 with sufficiently high \(\rho\). 
We also found for low SNR values of \(a\) and \(\rho\) that the maximum likelihood estimate from the chains tended to adhere more closely to the data points themselves (which could be over or underestimates in an observational context), whereas the median values of the posterior distribution allowed for more freedom of the model within the error-bars. For this reason, we chose to use the median as our best fit value when calculating the 1-halo scale dependence constraint SNR. As a general guideline, for high values of \(a\), achieving a value of \(\rho=a\) should be sufficient to obtain a reasonable constraint on the 1-halo scale dependence. This requirement relaxes for lower \(a\) values, allowing for lower values of \(\rho\). Given the results here, we recommend a realistic and achievable target for LSST Y1 is \(a=0.60\) and \(\rho=0.50\). Given the model used here, this would allow for both a detection and greater than \(1\sigma\) constraint on the scale dependence. Exceeding this baseline would of course result in even more favourable performance of the MEM. In future, we will look to identify specific shear estimators and optimise them for use with the MEM by introducing custom radial weighting. As mentioned previously, it would be interesting to investigate how values of \(a\) and \(\rho\) are related for different pairs of estimators, to determine the feasibility of achieving both \(w\) and \(\rho\). This is a study that is most easily carried out in tangent to shear estimator optimisation. Furthermore, we will seek to minimise multiplicative bias uncertainty in the estimators as much as possible, in the hopes of lowering it well beyond the LSST Y1 requirements. Promising shear estimators for optimisation include METADE-TECT (Sheldon et al., 2023), which builds upon the framework of METACALIBRATION to also perform galaxy detection, and Fourier Power Shapelets (Li et al., 2018, 2020, 2022), which has analytical correction for measurement bias making it computationally efficient in the context of the MEM, where a second shear estimation pipeline needs to be run on images. Finally, Forklens(Zhang et al., 2023) is potentially also promising, due to its ability to measure shear from extremely noisy images. This may remove the need for galaxy weights entirely, therefore circumventing the requirement for a matched weighting scheme. In conclusion, here we have built upon the work of L2018 to develop a greater understanding of the systematics present in the MEM, and how they can potentially be treated or accounted for. In the context of LSST Y1 and Y10, we have placed general requirements on the key parameters relating the shear estimators used, in order ensure future attempts at measurement are robust to contamination by residual multiplicative bias. Our work here has shown that, while challenges lie ahead, the measurement of IA in LSST Y1 is possible and strong constraints in Y10 are highly likely, especially if development of the MEM continues for LSST Y1 and beyond. To further develop the guidelines given here, work identifying and optimising shear estimators for the MEM is required and planned for future analysis. Tests on simulated galaxy images with these tailored estimators can be used to determine how the fiducial IA model affects the method, and allow specific requirements on galaxy size in the context of specific values of \(a\) to be placed. 
With the work carried out here and proposed for the near future, the MEM has the potential to become another important tool in developing our understanding of the intrinsic alignment of galaxies. ## Acknowledgements The authors of this work would like to thank Christos Georgiou for his help building the fiducial IA model used in this work and many other useful discussions. We also thank Joachim Harnois-Deraps, Carlos Garcia-Garcia, Joe Zuntz, Jonathan Blazek, Benjamin Joachimi, and Javier Sanchez for helpful discussions and suggestions. We are grateful to the Lorentz Center, Leiden, for hosting a workshop where some of the discussions were held and work carried out, as well as the Dark Energy Science Collaboration for hosting a similar workshop. Many thanks also to Chris Harrison for kindly allowing the use of his research computing server, upon which the majority of this analysis was done. Finally, thank you to Newcastle University and the Lady Bertha Jeffreys Endowment for funding the studentship under which much of this research was conducted. The Python libraries SctPv (Virtanen et al., 2020), NumPy (Harris et al., 2020), PvCCL (Chisari et al., 2019), TreeCorr (Jarvis, 2015), and TIPCov ([https://github.com/LSSTDESC/TJPCov](https://github.com/LSSTDESC/TJPCov)) were significant in enabling this work to be done. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. Author contributions: CMBM performed the majority of the analysis (including building and validating code, derivations for the covariance and power spectra, and producing the contents of this paper). CDL provided the conceptual framework of the analysis, guidance on the direction of the work, advice and support, some of the code used in the analysis, and editorial support for the text. ## Data availability The DES Y1 data used in this analysis is publicly available at [https://des.ncsa.illinois.edu/releases/dr1](https://des.ncsa.illinois.edu/releases/dr1). The other data used in the analysis will be shared if reasonably requested in correspondence with the author.
2304.07433
Topological recursion on transalgebraic spectral curves and Atlantes Hurwitz numbers
Given a spectral curve with exponential singularities (which we call a "transalgebraic spectral curve"), we extend the definition of topological recursion to include contributions from the exponential singularities in a way that is compatible with limits of sequences of spectral curves. This allows us to prove the topological recursion/quantum curve correspondence for a large class of transalgebraic spectral curves. As an application, we find that Atlantes Hurwitz numbers, which were previously thought to fall outside the scope of topological recursion, satisfy (our extended version of) topological recursion, and we construct the corresponding quantum curve directly from topological recursion.
Vincent Bouchard, Reinier Kramer, Quinten Weller
2023-04-14T23:54:24Z
http://arxiv.org/abs/2304.07433v1
# Topological recursion on transalgebraic spectral curves and atlantes Hurwitz numbers ###### Abstract. Given a spectral curve with exponential singularities (which we call a "transalgebraic spectral curve"), we extend the definition of topological recursion to include contributions from the exponential singularities in a way that is compatible with limits of sequences of spectral curves. This allows us to prove the topological recursion/quantum curve correspondence for a large class of transalgebraic spectral curves. As an application, we find that Atlantes Hurwitz numbers, which were previously thought to fall outside the scope of topological recursion, satisfy (our extended version of) topological recursion, and we construct the corresponding quantum curve directly from topological recursion. ###### Contents * 1 Introduction * 1.1 Motivation * 1.2 Main results * 1.3 Outline of the paper * 1.4 Notation * 1.5 Acknowledgments * 2 Topological recursion on meromorphic spectral curves * 2.1 Spectral curves * 2.2 Topological recursion * 3 Transalgebraic spectral curves * 3.1 Transalgebraic functions * 3.2 Transalgebraic spectral curves * 3.3 Transalgebraic spectral curves as limits * 4 Topological recursion on transalgebraic spectral curves * 4.1 Rewriting topological recursion * 4.2 Topological recursion on transalgebraic spectral curves * 4.3 Essential singularities only contribute for \(n=1\) * 5 Quantum curves * 5.1 The topological recursion/quantum curve correspondence * 5.2 Quantum curves for transalgebraic spectral curves * 5.3 A particular example * 6 Topological recursion for Atlantes Hurwitz numbers * 6.1 Hurwitz numbers * 6.2 The projection property for Atlantes Hurwitz numbers * 6.3 Topological recursion for Atlantes Hurwitz numbers * 6.4 Relation between the Atlantes and \(r\)-completed cycles Hurwitz quantum curves * 7 Conclusion and open questions * A Extensions of [1] to the transalgebraic case * B Proof of proposition 6.13 ## 1. Introduction ### Motivation Topological recursion [1] is a method to recursively define a collection of multi-differentials on a given object, called a spectral curve. It was originally developed to solve loop equations coming from matrix models [1], but has applications to many other areas of mathematics as well: among others intersection theory on moduli spaces of curves [11, 12, 13], volumes of moduli spaces [11, 10, 15, 16, 17, 18, 19], Hurwitz theory [1, 13, 14, 15, 17, 16, 18, 19], Gromov-Witten theory [1, 1, 12, 13, 15, 16, 17, 18, 19, 20, 21], maps [1], free probability [1, 22], supersymmetric gauge theories [23, 24], and integrable systems of various kinds [1, 2, 25, 26]. The spectral curve of topological recursion is usually taken to be a Riemann surface \(\Sigma\), together with two meromorphic functions \(x\) and \(y\) on it, and a symmetric bidifferential \(B\). The recursion only depends on the local behaviour of the spectral curve near the ramification points of \(x\), which were originally required to be simple and regular points of \(y\). These conditions on the ramification points have been lifted to a far higher generality [1, 1, 18, 19, 20, 21], allowing for higher ramification orders as well as certain poles of \(y\). Although topological recursion itself is local in nature, it behaves better if the spectral curve is global, i.e. if \(x\) and \(y\) are meromorphic functions on a compact Riemann surface. 
In this case, the functions \(x\) and \(y\) satisfy a polynomial relation \(P(x,y)=0\), and, according to the topological recursion/quantum curve correspondence conjecture [1, 2] (see also [20]), one should be able to quantise such an equation. Explicitly, there should exist an operator \(\hat{P}(\hat{x},\hat{y},\hbar)\), where \(\hat{x}=x\cdot\) and \(\hat{y}=\hbar\frac{d}{dx}\), such that \(P=\hat{P}(x,y,0)\), and such that it annihilates the wave function \[\psi(x(z))=\exp\left[\sum_{n=1}^{\infty}\sum_{g=0}^{\infty}\frac{\hbar^{2g+n-2 }}{n!}\int^{z}\cdots\int^{z}\left(\omega_{g,n}-\delta_{g,0}\delta_{n,2}\frac{ dx(z_{1})dx(z_{2})}{(x(z_{1})-x(z_{2}))^{2}}\right)\right]\,, \tag{1}\] that is, \(\hat{P}\psi=0\). This is a subtle issue, in part due to non-unique quantisation and integration paths. Nevertheless, the correspondence was proven to hold for a large class of genus zero algebraic spectral curves with arbitrary ramification in [1], and for all algebraic spectral curves with simple ramification in [1, 19, 2]. Compact spectral curves exhibit more nice features: topological recursion is related to intersection theory of the moduli spaces of curves in a general setup [1, 13], and in case the spectral curve is compact, the intersection theory is well-behaved and largely independent of the choice of bidifferential [13]. But what if the spectral curve is of a form where \(x\) and \(y\) do not satisfy an algebraic relation? This occurs in a large class of examples, mostly related to hypergeometric tau-functions and Hurwitz theory, see e.g. [1, 1, 15, 16, 17, 18, 19, 20, 21], where the spectral curve is usually of the form \(x(z)=ze^{-\psi(y(z))}\), for some series \(\psi(y)\) and \(y(z)\). In important examples, both \(\psi\) and \(y\) are polynomial, so the function \(x\) has an essential singularity. Does the topological recursion/quantum curve correspondence conjecture hold for these cases? In certain cases, for instance in [14], and also the more general setup of [1], it is shown that the differentials \(\omega_{g,n}\) produced by topological recursion are generating series for certain types of Hurwitz numbers. Using this interpretation, it is then proven that a quantum curve exists, albeit a fairly complicated one. However, the quantum curves were constructed from the enumerative interpretation in terms of Hurwitz numbers, rather than from the spectral curve itself, which is somewhat unsatisfactory. Can we construct the quantum curves directly from topological recursion, in the spirit of [1, 19, 2, 2]? #### 1.1.1. An observation In fact, this project started with the following observation. Consider topological recursion on the spectral curve \(\mathcal{S}\) given by the equation \[y-e^{x^{\tau}y^{\tau}}=0, \tag{2}\] for \(\tau\in\mathbb{Z}_{\geqslant 1}\).1 As shown in [15], the differentials \(\omega_{g,n}\) produced by topological recursion are generating series for \(\tau\)-completed cycles (also called \(\tau\)-spin) Hurwitz numbers. Moreover, using the semi-infinite wedge space interpretation of Hurwitz numbers, it was proven in [14] that these Hurwitz numbers satisfy a quantum curve in the sense above. The resulting quantum curve is a quantisation of the spectral curve, but a rather complicated one: Footnote 1: Our spectral curve looks a bit different from [14, 15], but this is because we are taking the one-form \(\omega_{0,1}\) to be \(y\,dx\) instead of \(y\frac{dx}{dx}\). Ultimately, it is the same spectral curve. 
\[\hat{P}=\hat{g}-\hat{x}^{1/2}e^{\frac{1}{\tau+\tau}\sum_{i=0}^{\tau}\hat{x}^{- 1}(\hat{x}\hat{y})^{i}\hat{x}(\hat{x}\hat{y})^{\tau-i}\hat{x}^{-1/2}}, \tag{3}\] with \(\hat{x}=x\cdot\) and \(\hat{y}=h\frac{d}{dx}\). To obtain this quantum curve directly from topological recursion, in the spirit of [1], one approach is to consider a sequence of genus zero compact spectral curves \(\mathcal{S}_{N}\), specified by rational functions \(x_{N}\) and \(y_{N}\), such that \(x_{N}\) and \(y_{N}\) go to the above \(x\) and \(y\) in the limit \(N\to\infty\). (Schematically, \(\lim_{N\to\infty}\mathcal{S}_{N}=S\).) For all positive integers \(N\), topological recursion produces differentials \(\omega_{g,n}^{N}\). From these differentials, one can construct a wave function \(\psi_{N}\) as in (1). If the genus zero spectral curves for finite \(N\) fall within the class studied in [1], then we know right away that there exist quantum curves \(\hat{P}_{N}\) such that \(\hat{P}_{N}\psi_{N}=0\) for all positive integers \(N\), and we can construct \(\hat{P}_{N}\) explicitly. Finally, we can take the limit \(N\to\infty\) to get a quantum curve \(\hat{P}_{\infty}\). Assuming that the \(N\to\infty\) limit of the differentials \(\omega_{g,n}^{N}\) recovers the differentials \(\omega_{g,n}\) of the \(N\to\infty\) spectral curve, which we could rewrite schematically as the condition \[\lim_{N\to\infty}\left(\omega_{g,n}^{N}[\mathcal{S}_{N}]\right)=\omega_{g,n} \left[\lim_{N\to\infty}\mathcal{S}_{N}\right], \tag{4}\] then the limiting quantum curve \(\hat{P}_{\infty}\) should annihilate the wave function \(\psi\) constructed from the differentials \(\omega_{g,n}\) by (1). However, the quantum curve \(\hat{P}_{\infty}\) that we obtain in this way reads \[\hat{P}_{\infty}=\hat{y}-e^{(\hat{x}\hat{y})^{\star}}, \tag{5}\] which is not the same as the quantum curve (3) that was obtained for \(r\)-completed cycles Hurwitz numbers! Both are quantisations of the same spectral curve, but they are certainly different, and annihilate different wave functions. What is going on? Meanwhile, the quantum curve \(\hat{P}_{\infty}\) from (5) already appeared in the work of [1], where it was proved to annihilate the wave function constructed from differentials that are generating series for another type of Hurwitz numbers, known as Atlantes Hurwitz numbers. In fact, it was already noticed in that paper that the quantum curve \(\hat{P}_{\infty}\) from (5) and the quantum curve \(\hat{P}\) from (3) are both quantisations of the same spectral curve. Since it was known that topological recursion on this spectral curve produces generating series for \(r\)-completed cycle Hurwitz numbers, this observation was taken as an indication that Atlantes Hurwitz numbers fall outside the scope of topological recursion. To quote [1]: "We have an example where the dequantization of the quantum curve doesn't give a spectral curve suitable for the corresponding topological recursion." They also state: "We can conclude that the dequantization of \(\hat{y}-e^{x\cdot\hat{y}}\)" cannot be the spectral curve for the atlantes Hurwitz numbers, suitable for the construction of the topological recursion." But... is this really the end of the story? #### 1.1.2. A resolution In this paper we resolve this conundrum and propose an explanation for this observation. The key is that for topological recursion to commute with limits of sequences of spectral curves as in (4), exponential singularities of the limiting spectral curve must be taken into account. 
More precisely, given a spectral curve with exponential singularities, one can construct differentials \(\omega_{g,n}\) by using topological recursion ignoring the exponential singularities, as it has been done so far in the literature. But, as we propose in this paper, one can also construct another set of differentials, call them \(\omega_{g,n}^{\infty}\), using an extension of topological recursion that includes contributions from exponential singularities (informally considered as "ramification points of infinite order"). In general, for a given spectral curve with exponential singularities, the differentials \(\omega_{g,n}^{\infty}\) and \(\omega_{g,n}\) will be distinct. It turns out that, as we prove in this paper, topological recursion commutes with limits of sequences of spectral curves as in (4) only if the differentials on the right-hand-side are the differentials \(\omega_{g,n}^{\infty}\) that include contributions from exponential singularities. This explains the observation above. For the spectral curve (2), the differentials \(\omega_{g,n}\) that ignore the exponential singularities are generating series for \(r\)-completed cycles Hurwitz numbers, as shown in [1]. However, as we show in the current paper, the differentials \(\omega_{g,n}^{\infty}\) that include contributions from the exponential singularity are generating series for Atlantes Hurwitz numbers. This shows that Atlantes Hurwitz numbers do fall within the scope of topological recursion, once the formalism is properly extended to include contributions from exponential singularities. Furthermore, since we show that topological recursion (properly extended to include contribution from exponential singularities) commutes with limits of sequences of spectral curves, we obtain directly that the wave function \(\psi_{\infty}\) constructed from the differentials \(\omega_{g,n}^{\infty}\) is annihilated by the quantum curve \(\hat{P}_{\infty}\) from (5). This provides a construction of the quantum curve for Atlantes Hurwitz numbers directly from topological recursion, and explains why it differs from the quantum curve for \(\tau\)-completed cycles Hurwitz numbers that was obtained in [20]. ### Main results We propose an extension of topological recursion that includes contributions from exponential singularities of the spectral curves, i.e. points \(p\in\Sigma\) where \(x(z)\sim M_{0}(z)e^{M_{1}(z)}\) as \(z\to p\) and \(M_{0}\), \(M_{1}\) are meromorphic functions with \(p\) a pole of \(M_{1}\). We will consider these singularities as ramification points of infinite order. Such functions are called "transalgebraic", and hence we will call these spectral curves "transalgebraic spectral curves". As the topological recursion formula involves sums over local deck transformations, infinite order ramification points require infinite sums, and this leads to multiple issues; chief amongst these is the definition of the residue at what may not be an isolated singularity. Instead of dealing with these issues directly, we construct topological recursion on transalgebraic spectral curves as a limit of topological recursion on sequences of finite degree, meromorphic, spectral curves, as in (4). For this definition to make sense, we need to make sure that the \(N\to\infty\) limit of the differentials exist and satisfies desired properties, which we do. 
Furthermore, we show that while the definition of topological recursion on transalgebraic spectral curves is fairly complicated (having to do with limits of sequences of spectral curves, although we do provide a formula applicable in some specific cases), in the end it is not too bad: for any given transalgebraic spectral curve, the formal definition has to be used only finitely many times, after which the exponential singularity does not contribute extra terms anymore. With this construction in hand, we study the topological recursion/quantum curve correspondence, with the aim of constructing quantum curves directly from topological recursion for transalgebraic spectral curves. For a subclass of transalgebraic curves, which we call regular, we adapt the argument of [1] to construct the quantum curves associated to these transalgebraic spectral curves. As an application, we show that Atlantes Hurwitz numbers, which were introduced in [1] as an example of Hurwitz numbers not satisfying topological recursion, do fit in our transalgebraic framework. We show that the differentials constructed from topological recursion (suitably extended to include contributions from exponential singularities) on the spectral curve (2) are generating series for Atlantes Hurwitz numbers (while the differentials constructed from the usual topological recursion that ignores exponential singularities on the "same" spectral curve2 are generating series for \(\tau\)-completed cycles Hurwitz numbers). Finally, we prove that the corresponding wave function is annihilated by the quantum curve (5) directly from topological recursion. Footnote 2: In fact, as we will explain, it is better to think of these two spectral curves as distinct spectral curves, in the following sense. While the functions \(x\) and \(y\), as well as the symmetric bidirectional \(B\), are formally the same “rules”, the two spectral curves have two different Riemann surfaces. For the spectral curve for \(\tau\)-completed cycles Hurwitz numbers, we take the Riemann surface to be \(\Sigma=C\), with the exponential singularity of \(x\) and \(y\) at infinity removed, while for the spectral curve for Atlantes Hurwitz numbers we take the Riemann surface to be \(\Sigma=P^{1}\), which includes the exponential singularity (see Examples 2.6 and 3.8). ### Outline of the paper In Section 2 we define spectral curves and review the topological recursion framework for meromorphic spectral curves with arbitrary ramification. In Section 3 we define transalgebraic spectral curves and explain how they can be realized as limits of sequences of meromorphic spectral curves. We then proceed in Section 4 with the definition of topological recursion on transalgebraic spectral curves and prove various properties of this extension of topological recursion, including the fact that essential singularities contribute to topological recursion only in a finite number of steps. In Section 5 we prove the topological recursion/quantum curve correspondence for a large class of transalgebraic spectral curves, which we call regular. This class includes the spectral curve for \(\tau\)-completed cycles and Atlantes Hurwitz numbers. We focus on this particular curve in Section 6, where we show that topological recursion on this transalgebraic spectral curve (including the exponential singularity) produces generating functions for Atlantes Hurwitz numbers. We conclude with open questions in Section 7. 
Appendix A provides the extension of the results of [1] needed to study the topological recursion/quantum curve correspondence for transalgebraic spectral curves, while Appendix B contains the proof of proposition 6.13 about Atlantes Hurwitz numbers. ### Notation We set \(S(z)=\frac{\sinh(z/2)}{z}\). For a natural number \(n\), we define \([n]\coloneqq\{1,\ldots,n\}\). Given a set \(S\) indexed by another set \(I\), i.e \(S=\{s_{i}\,|\,i\in I\}\), and given a subset \(J\subseteq I\), we denote \(s_{J}\coloneqq\{s_{i}\,|\,i\in J\}\). In particular, \(s_{1}=S\). For a set \(Z\), we use the notation \(\mu\vdash Z\) to indicate that \(\mu\) is a set partition of \(Z\); the length of this partition is then denoted by \(\operatorname{\mathsf{I}}(\mu)\). For a curve \(\Sigma\) with coordinate \(z\), we denote the induced coordinates on \(\Sigma^{n}\) by \(\{z_{1},\ldots,z_{n}\}\). With the previously given notation, this can be denoted \(z_{[n]}\). Given \(C=\{z\}\cup z_{[i]}\subset\Sigma\) and \(C^{\prime}=C\setminus\{z\}\) sets of cardinality \(n+1\) and \(n\), respectively, take a symmetric \(n\)-differential \(\eta\) and define \[\operatorname*{Res}_{C=z}\eta(z_{[n]})\coloneqq\operatorname*{Res}_{C^{ \prime}=z}\eta(z_{[n]})\coloneqq\operatorname*{Res}_{z_{1}=z}\cdots \operatorname*{Res}_{z_{n}=t}\eta(z_{[n]})\,. \tag{6}\] Since \(\eta\) is symmetric, this notation makes sense as the order in which we take the residues does not matter. In a similar spirit, we also define \[\operatorname*{Res}_{z=C^{\prime}}\coloneqq\sum_{z_{0}\in C^{ \prime}}\operatorname*{Res}_{z^{\prime}=z_{0}}\,. \tag{7}\] For any set of points \(C\subset\Sigma\) we denote by \(z^{C}\) one arbitrarily chosen point in this set. Lastly, as we will have to take many residues at once, we define, for fixed points \(a_{1},\ldots,a_{n}\in\Sigma\) \[\operatorname*{Res}_{\begin{subarray}{c}z_{1}=a_{1}\\ 1=1,\ldots,n\end{subarray}}\coloneqq\operatorname*{Res}_{z_{1}=a_{1}}\cdots \operatorname*{Res}_{z_{n}=a_{n}}\,, \tag{8}\] along with the obvious generalisations to our residues over sets notations. ### Acknowledgments The authors acknowledge support from the National Science and Engineering Research Council of Canada. R.K. acknowledges support from the Pacific Institute for the Mathematical Sciences. The research and findings may not reflect those of these institutions. The University of Alberta respectfully acknowledges that we are situated on Treaty 6 territory, traditional lands of First Nations and Metis people. ## 2. Topological recursion on meromorphic spectral curves ### Spectral curves One of the main goals of this paper is to extend the definition of topological recursion to spectral curves with exponential singularities, which we will call transalgebraic spectral curves. Let us start by recalling the usual formulation of topological recursion, in the Bouchard-Eynard formalism [1], which extends the original Eynard-Orantin formalism [1] to higher order ramification. We start with the definition of a spectral curve. **Definition 2.1**.: A _spectral curve_ is a quadruple \(\mathcal{S}=(\Sigma,x,y,B)\), where: 1. \(\Sigma\) is a Riemann surface; 2. \(x\) and \(y\) are functions on \(\Sigma\) that are holomorphic except potentially at a finite number of points and that separate points, i.e. \((x,y)\) is injective; 3. \(B\) is a symmetric bi-differential on \(\Sigma\times\Sigma\) with a double pole on the diagonal with biresidue \(1\). 
We say that the spectral curve is _meromorphic_ if \(x\) and \(y\) are meromorphic on \(\Sigma\) and the ramification locus \(R\) of \(x\), which is the set of zeros of \(dx\) and poles of \(x\) of order \(\geqslant 2\), is finite. The usual Bouchard-Eynard formalism will correspond to the case of meromorphic spectral curves, whereas our extension will be to a certain class of non-meromorphic spectral curves. Let us now review some of the basic features of meromorphic spectral curves. We write \(r_{a}\) for the ramification order of \(x\) at \(a\in R\). For a point \(z\in\Sigma\), we write \(f(z)=x^{-1}(x(z))\) for the fibre, and \(\hat{f}^{\prime}(z)=f(z)\setminus\{z\}\). Also, if \(a\in R\) and \(z\) is close to \(a\), we write \(f_{a}(z)\) for the local Galois conjugates of \(z\), and again \(\hat{f}^{\prime}_{a}(z)=f(z)\setminus\{z\}\). We note that while \(f_{a}(z)\) is always finite of cardinality \(r_{a}\), \(\hat{f}(z)\) may be countably infinite, as \(x:\Sigma\to\mathbb{P}^{1}\) is not necessarily a finite branched covering if \(\Sigma\) is non-compact. For \(a\in R\), define a local coordinate \(\zeta_{a}\) on a neighbourhood of \(a\) by \(x=x(a)+\zeta_{a}^{r_{a}}\) if \(x(a)\neq\infty\) and \(x=\zeta_{a}^{-r_{a}}\) if \(x(a)=\infty\). Then the one-form \(\omega_{0,1}\coloneqq ydx\) has an expansion: \[\omega_{0,1}\coloneqq ydx=\sum_{k=-1}^{\infty}t_{k}^{a\,r_{a}^{k-1}}dt_{a} \tag{9}\] for some \(1\) and \(t_{k}^{a}\). Let \(s_{a}\coloneqq\min\{k\operatorname{\mathsf{I}}t_{k}^{a}\neq 0\text{ and }r_{a}\nmid k\}\). The following admissibility condition on spectral curves is required to make sense of topological recursion [1]: **Definition 2.2**.: A meromorphic spectral curve is _admissible_ if for every point \(a\in R\), \(s_{a}\) and \(r_{a}\) are coprime, and either \(s_{a}\leqslant-1\), or \(1\leqslant s_{a}\leqslant r_{a}+1\) with \(r_{a}=\pm 1\pmod{s_{a}}\). _Remark 2.3_.: The ramification points \(a\in R\) with \(s_{a}\leqslant-1\) never contribute to the Eynard-Bouchard topological recursion [1]. This means that any ramification point \(a\) that is a pole of \(x\) may be dropped from the topological recursion unless \(dy\) has a zero at \(a\). As a result it is common practice in the topological recursion literature to sloppily refer to the set of zeros of \(dx\) as all the ramification points of \(x\)[1, 2]. We will see that, for our purposes, it is critical to include the poles of \(x\) of order greater than two as ramification points. A particularly interesting class of spectral curves is when the Riemann surface \(\Sigma\) is compact. **Definition 2.4**.: A spectral curve is _compact_ if \(\Sigma\) is a compact connected Riemann surface and \(B(z_{1},z_{2})\) has no poles except on the diagonal \(z_{1}=z_{2}\).3 Footnote 3: This condition on \(B\) is equivalent to requiring that \(B\) be normalised on a chosen Torelli marking of the compact Riemann surface \(\Sigma\). Compact meromorphic spectral curves have nice geometric properties. If \(\Sigma\) is compact, \(x\) and \(y\) are two meromorphic functions on a compact Riemann surface, and hence they identically satisfy an algebraic equation \[P(x,y)=0, \tag{10}\] where \(P\) is a polynomial. (Note that if \(\Sigma\) is non-compact, \(x\) and \(y\) may still satisfy a relation as above, but \(P\) could no longer be a polynomial - see example 2.6.) 
We also note that for compact meromorphic spectral curves, since \(x\) is a meromorphic function on a compact Riemann surface \(\Sigma\), \(x:\Sigma\to P^{1}\) is a finite degree branched covering. This means that \(f(z)\) is finite and of cardinality given by the degree of \(x\) if \(z\notin R\). **Example 2.5**.: Consider the spectral curve \(\mathcal{S}\) with \(\Sigma=P^{1}\), \(x=z^{r}\), \(y=z^{s-r}\), and \(B=\frac{dx_{2}\,dx_{2}}{(z_{1}-z_{2})^{2}}\), with \(r,s\) integers such that \(r\geqslant 2\), \(1\leqslant s\leqslant r+1\), and \(r=\pm 1\pmod{s}\). This is a compact meromorphic spectral curve, and \(x\) and \(y\) satisfy the algebraic equation \(x^{r-s}y^{r}-1=0\) if \(s<r\), or \(y^{r}-x=0\) if \(s=r+1\). One can also check that the spectral curve is admissible. In particular, this is the fundamental \((r,s)\) spectral curve studied in [1]. **Example 2.6**.: Consider the spectral curve \(\mathcal{S}\) with \(\Sigma=C\), \(x=ze^{-z^{r}}\), \(y=e^{z^{r}}\), and \(B=\frac{dx_{1}\,dx_{2}}{(z_{1}-z_{2})^{2}}\), with \(r\in\mathbb{Z}_{\geqslant 1}\). As the function \(x\) is meromorphic on \(C\) (it is in fact holomorphic), this spectral curve is meromorphic, but it is not compact. One can check that it is admissible. The functions \(x\) and \(y\) satisfy the relation \[y-e^{x^{r}y^{r}}=0, \tag{11}\] which is not algebraic. This spectral curve will play an important role in the following. As proven in [1], the differentials \(\omega_{g,n}\) produced by topological recursion on this spectral curve are generating functions for \(r\)-completed cycles Hurwitz numbers. We will come back to this enumerative geometric interpretation in section 6.4 Footnote 4: To avoid confusion, we remark that for this spectral curve \(y\) is often define in the literature via \(\omega_{0,1}=y\,d\log x\) instead of \(\omega_{0,1}=y\,d\,x\), which gives \(y=z\) instead of \(y=e^{z^{r}}\). The two definitions are of course equivalent, as it simply amounts to redefining \(y\mapsto xy\). In this paper, for all spectral curves, the one-form will be defined as \(\omega_{0,1}=y\,d\,x\). _Remark 2.7_.: We emphasize here that the choice of Riemann surface \(\Sigma\) in the definition of a spectral curve is very important. For instance, if we replace the Riemann surface \(P^{1}\) by \(C\) in example 2.5, it should be considered as a different spectral curve, since the pole of \(x\) at infinity is not included in the Riemann surface. In this case, as \(x\) is meromorphic on \(P^{1}\) and holomorphic on \(C\), the usual topological recursion (which applies to meromorphic spectral curves) can be used to calculate correlators \(\omega_{g,n}\) for both spectral curves, and it happens that the correlators coincide, since the pole at infinity does not contribute to the topological recursion. But this will not always be the case. For instance, one may want to consider the spectral curve of example 2.6, but with the Riemann surface \(\Sigma=P^{1}\), which includes the exponential singularity of \(x\) at infinity. We claim that the usual topological recursion does not apply in this case, as it only applies to meromorphic spectral curves; instead, one should use the extended version that we propose, in which the exponential singularity generally contributes. As we will see, with this extended definition topological recursion produces different correlators (for \(r\geqslant 2\)) depending on whether the essential singularity at infinity is included in the Riemann surface or not, i.e. 
whether \(\Sigma\) is taken to be \(P^{1}\) or \(C\) in the spectral curve of example 2.6. ### Topological recursion The standard definition of topological recursion applies to admissible meromorphic spectral curves. It does not however require the spectral curve to be compact; for instance, it can be applied to both spectral curves in examples 2.5 and 2.6. Out of the data of the spectral curve, a collection of symmetric differentials \(\{\omega_{g,n}\}_{g\geq 0,n\geq 1}\) are recursively computed. As mentioned in the introduction, topological recursion is interesting because for many spectral curves, the correlators \(\omega_{g,n}\) that it produces are generating functions for interesting enumerative invariants, such as Hurwitz numbers, Gromov-Witten invariants, etc. For the definition of (Bouchard-Eynard) topological recursion, recall the notation given in section 1.4. **Definition 2.8**.: Given an admissible meromorphic spectral curve \(\mathcal{S}=(\Sigma,x,y,B)\), _topological recursion_ gives a procedure to define multi-differentials \(\{\omega_{g,n}\}_{g\geq 0,n\geq 1}\), recursive on \(2g-2+n\), as follows: the base cases are \(\omega_{0,1}=\mathrm{yd}x\) and \(\omega_{0,2}=B\), and the recursive step is \[\omega_{g,n+1}(z_{0},z_{[n]})\coloneqq\sum_{\alpha\in\mathbb{R}}\operatorname {Res}_{z=\alpha}\sum_{\emptyset\neq Z\subseteq f_{\alpha}^{\prime}(z)}K_{|Z|+1} (z_{0},z,Z)\mathcal{W}_{g,n,|Z|+1}(z,Z\mid z_{[n]})\,, \tag{12}\] where \[K_{|Z|+1}(z_{0},z,Z)\coloneqq\frac{\int_{+}^{z}B(z_{0},\cdot)}{\prod_{z^{ \prime}\in Z}(y(z^{\prime})-y(z))\mathrm{d}x(z)} \tag{13}\] is the _recursion kernel_ and \[\mathcal{W}_{g,n,|Z|}(Z\mid z_{[n]})\coloneqq\sum_{\begin{subarray}{c}\mu \vdash Z\\ \frac{\left\lVert d_{\alpha}^{\prime}(z)\right\rVert}{\sum_{\begin{subarray}{ c}\mu\vdash Z\\ \sum_{k}\mu\vdash g+\{\mu\}\end{subarray}}\end{subarray}}}^{\prime}\prod_{k=1}^{ \ell(\mu)}\omega_{g_{k},|\mu_{k}|+|N_{k}|}(\mu_{k},z_{N_{k}})\,, \tag{14}\] where the prime on the summation means we omit any term containing \(\omega_{0,1}\). The differentials \(\omega_{g,n}\) are often called _correlators_ due to their origin in matrix models. For future reference, we also define \[\mathcal{E}_{g,n,|Z|}(Z\mid z_{[n]})\coloneqq\sum_{\begin{subarray}{c}\mu \vdash Z\\ \frac{\left\lVert d_{\alpha}^{\prime}(z)\right\rVert}{\sum_{\begin{subarray}{ c}\mu\vdash Z\\ \sum_{k}\mu\vdash g+\{\mu\}\end{subarray}}\end{subarray}}}\prod_{k=1}^{\ell(\mu) }\omega_{g_{k},|\mu_{k}|+|N_{k}|}(\mu_{k},z_{N_{k}})\,, \tag{15}\] where terms containing \(\omega_{0,1}\) are now included. Topological recursion was originally obtained as the unique solution to the loop equations of matrix models that satisfies a particular normalisation condition. This can be defined more mathematically, in terms of the so-called abstract loop equations, with the normalisation condition often called the "projection property". 
**Proposition 2.9** ([1, Appendix C]).: _A collection of multidifferentials \(\{\omega_{g,n}\}_{g\geq 0,n\geq 1}\) satisfies topological recursion if and only if it satisfies the higher abstract loop equations:_ \[\sum_{\begin{subarray}{c}Z\subset f_{\alpha}(z)\\ |Z|=1\end{subarray}}\mathcal{E}_{g,n,i}(Z\mid z_{[n]})=\mathcal{O}\left(z^{-r_ {\alpha}z_{\alpha}^{\natural}}\big{(}\frac{\mathrm{d}z}{z}\big{)}^{i}\right) \quad\text{as }z\to\alpha\,, \tag{16}\] _where_ \[\partial_{\alpha}^{i}\coloneqq-1-\left\lfloor\frac{s_{\alpha}(i-1)}{r_{\alpha }}\right\rfloor, \tag{17}\] _and the projection property: if \(2g-2+n\geq 0\),_ \[\omega_{g,n+1}(z_{0},z_{[n]})=\sum_{\alpha\in\mathbb{R}}\operatorname{Res}_{z =\alpha}\Big{(}\int_{\alpha}^{z}\omega_{0,2}(z_{0},\cdot)\Big{)}\omega_{g,n+1 }(z,z_{[n]}) \tag{18}\] Collections \((\omega_{g,n})_{g,n}\) satisfying the projection property may also be called _polarised_, and the choice of \(\omega_{0,2}=B\) is sometimes referred to as a choice of polarisation. Another property of the multi-differentials, which seems not to have appeared in the literature in this generality, is a bound on their pole order. We give that property in the next lemma, which generalises [1, Proposition 9] and [18, Section 7]. **Lemma 2.10**.: _Let the multi-differentials \(\{\omega_{g,n}\}_{g\geqslant 0,n\geqslant 1}\) be obtained by topological recursion on the admissible meromorphic spectral curve \(\mathcal{S}=(\Sigma,x,y,b)\). Then, the pole order of \(\omega_{g,n}\) in each variable at a point \(a\in\Sigma\) is bounded by \((s_{a}-1)(2g-2+n)+2g\)._ Proof.: The proof is exactly the same as that of [1, Proposition 9], which is the \(s_{a}=r_{a}+1\) case, noting that the pole order of the \(k\)-th recursion kernel in general is \((k-1)(s_{a}-1)\), by the definition of \(s_{a}\). Note that the original proof does not handle the possibility of \(\omega_{0,2}\)s in \(\mathcal{W}^{g}_{k,n}\) with both arguments coupled to the kernel in the recursive step. Each such occurrence adds a double pole, so if we call the number of these occurrences \(b\), then the pole order of each term in \(\mathcal{W}^{g}_{k,n}\) is at most \[(s_{a}-1)(2(g+\ell(\mu)-k)-2k+n+k)+2(g+\ell(\mu)+b-k)\,.\] Then we can use that \(\ell(\mu)+b\leqslant k\), by the definition of \(b\), so this can be bounded by \((s_{a}-1)(2g-k+n)+2g\), which is the same as found without the diagonal poles, so this omission does not change the pole order. ## 3. Transalgebraic spectral curves Our goal is to extend the definition of topological recursion to spectral curves where \(x\) is not meromorphic on the Riemann surface \(\Sigma\). More precisely, we wish to study spectral curves where \(x\) has exponential singularities at some isolated points on \(\Sigma\). The key example to keep in mind is example 2.6: we want to consider this spectral curve, but with the Riemann surface \(\Sigma=\mathbb{P}^{1}\), which includes the exponential singularity of \(x\) at infinity. ### Transalgebraic functions Let us define more precisely the class of functions that we are interested in. We first define exponential singularities. **Definition 3.1** ([1, 15]).: Let \(\Sigma\) be a Riemann surface and \(z\in\Sigma\). A function \(f\) is said to have an _exponential singularity_ at \(z\) if it is holomorphic and non-zero on some punctured open neighbourhood \(U\setminus\{z\}\) of \(z\), but cannot be extended to a meromorphic function on \(U\). 
The _exponential order_ of \(f\) at \(z\) is defined to be \[\operatorname{Erd}_{f}(z)=\inf\big{\{}d\in\mathbb{R}_{\geqslant 0}\,\big{|} \,\limsup_{w\to z}|w-z|^{d}\log|f(w)|<\infty\big{\}}\,. \tag{19}\] A transalgebraic function on \(\Sigma\) is a function that is holomorphic on \(\Sigma\) except potentially at a finite number of points, where it can have either poles or exponential singularities. As such, it is a natural generalization of meromorphic functions, where we allow not only poles but also exponential singularities. More precisely: **Definition 3.2** ([15]).: Let \(\Sigma\) be a Riemann surface. Let \(\mathcal{T}_{n}(\Sigma)\) be the space of _transalgebraic functions_ on \(\Sigma\) with at most \(n\in\mathbb{Z}_{\geqslant 0}\) zeros, poles, and exponential singularities, which consists of all non-zero holomorphic functions on \(\Sigma\backslash S\) for some \(S\subset\Sigma\) such that \(|S|\leqslant n\) and such that for any \(z\in S,\operatorname{Erd}_{f}(z)<\infty\). We define the _class of transalgebraic functions_ on \(\Sigma\) as \[\mathcal{T}(\Sigma)=\bigcup_{n\in\mathbb{Z}_{\geqslant 0}}\mathcal{T}_{n}(\Sigma) \tag{20}\] It is a natural question to ask why we insist that there are finitely many zeros and poles in the previous definition, but it is rather straightforward to see that this condition is required for ramification points to be isolated. Of course, by the great Picard theorem, all points in \(\mathbb{P}^{1}\) are obtained infinitely often as one approaches an essential singularity save for possibly two points and, fixing an affine coordinate on \(\mathbb{P}^{1}\), we may put those two points (if they exist) at zero and infinity via a change of coordinates. The following proposition tells us that if we didn't have exactly two such points, occasionally called _Picard points_ in the literature, then the exponential singularity would be a cluster point of ramification points; it seems natural to exclude such things from the perspective of the topological recursion. **Proposition 3.3**.: _Let \(\pi:\operatorname{B}_{\epsilon}(0)\setminus\{0\}\to\mathbb{P}^{1}\) be a branched covering from the punctured disk of radius \(\epsilon>0\) to \(\mathbb{P}^{1}\) with an exponential singularity at zero. Assume the only Picard point of \(\pi\) is infinity, and if infinity is a Picard point \(\epsilon\) is small enough that \(\pi\) never takes the value infinity. Then \(\pi\) has a ramification point in \(\operatorname{B}_{\epsilon}(0)\setminus\{0\}\)._ Proof.: Proceed by contradiction and assume \(\pi:\operatorname{B}_{\epsilon}(0)\setminus\{0\}\to\mathbb{C}\) is an honest covering map where \(C=\pi(\operatorname{B}_{\epsilon}(0)\setminus\{0\})\) is either \(C\) or \(C_{\infty}\cong\mathbb{P}^{1}\). As \(C\) is simply connected the monodromy group of \(\pi\) is trivial. Ergo, there exists a right inverse map (non-unique) \(\pi^{-1}:C\to\operatorname{B}_{\epsilon}(0)\setminus\{0\}\) so that \(\pi\circ\pi^{-1}=\operatorname{id}_{\operatorname{B}_{\epsilon}(0)\setminus\{0 \}}\). If we define the image of \(\pi^{-1}\) to be \(B\) then the map \(\pi^{\prime}:C\to B\) with \(\pi^{\prime}\equiv\pi^{-1}\) is biholomorphic and, in particular, a homeomorphism so \(B\) is simply connected; therefore, \(\pi^{\prime-1}:B\to C\) is the universal cover of \(C\) so there exists a covering map \(\varphi:B\to B_{\varepsilon}(0)\setminus\{0\}\) such that \(\pi\circ\varphi=\pi^{\prime-1}\). As \(\pi^{\prime-1}\) has the inverse \(\pi^{\prime}\), it is bijective so \(\varphi\) must be injective. 
As \(\varphi\) is a covering map, it must be surjective so it is in fact a homeomorphism. Thus, \(B_{\varepsilon}(0)\setminus\{0\}\) is simply connected, an obvious contradiction. Thus, if we only have one (or zero) Picard points then, for any \(\varepsilon>0\) sufficiently small, we get a ramification point in the disk of radius \(\varepsilon\) about the exponential singularity so that the exponential singularity must be an accumulation point of ramification points. It turns out that transalgebraic functions on compact Riemann surfaces have a very simple form. **Theorem 3.4** ([19, Theorem 2.17]).: _The space of transalgebraic functions on a compact Riemann surface \(\Sigma\) is equal to the space of functions of the form_ \[f(z)=M_{0}(z)e^{M_{1}(z)}\,, \tag{21}\] _where \(M_{0}\) and \(M_{1}\) are meromorphic functions on \(\Sigma\) and \(M_{0}\neq 0\)._ The choice of \(M_{0}\) and \(M_{1}\) in (21) is not quite unique: we can add to \(M_{1}\) a (local) constant \(c\), and multiply \(M_{0}\) by \(e^{-c}\) without changing \(f\). Given a transalgebraic function \(f\) on \(\Sigma\), its differential \(df\) is not usually a meromorphic one-form. However, it turns out that \(d\log f\) always is, which is the content of the next lemma. **Proposition 3.5** ([19, Lemma 2.15]).: _Let \(f\in\mathcal{T}(\Sigma)\). The logarithmic derivative \(d\log f\) is a meromorphic differential on \(\Sigma\) with integer residues. Conversely, if \(f\) is function on \(\Sigma\) which is non-zero holomorphic outside a finite set and is such that \(d\log f\) is meromorphic with integer residues at poles, then \(f\in\mathcal{T}(\Sigma)\)._ For more on transalgebraic functions and their underlying geometry (including the so-called log-Riemann surfaces), see [19, 19, 18, 18]. ### Transalgebraic spectral curves Looking back at the definition of spectral curves definition 2.1, \(x\) and \(y\) were only required to be holomorphic at all but finitely many of points. In particular, they may be transalgebraic functions on \(\Sigma\). **Definition 3.6**.: Let \(\mathcal{S}=(\Sigma,x,y,B)\) be a spectral curve. We say that it is _transalgebraic_ if \(x\) and \(y\) are transalgebraic functions on \(\Sigma\) such that \(xy\) is a meromorphic function on \(\Sigma\). Because of the requirement that \(xy\) is meromorphic, the one-form \(\omega_{0,1}=ydx\) is meromorphic on \(\Sigma\), since by proposition 3.5 we know that \(\frac{dx}{x}\) is always meromorphic and by assumption \(xy\) is meromorphic. Interestingly, all correlators \(\omega_{g,n}\) produced by the topological recursion will still be meromorphic. For topological recursion, we would like to consider \(x\) as a branched covering \(x:\Sigma\to\mathbb{P}^{1}\). Even though \(x\) is transalgebraic on \(\Sigma\), it can still be thought of as a branched covering [18]. However, if \(x\) has exponential singularities, the covering will not be of finite degree. Nevertheless, it makes sense, and we can define its ramification locus as follows. **Definition 3.7**.: Let \(x\) be a transalgebraic function on \(\Sigma\). If \(x\) is not meromorphic, we define its _degree_ to equal \(\infty\). We consider all the exponential singularities \(a\) of \(x\) as ramification points, with ramification order \(r_{a}=\infty\). We write \(R_{\infty}\) for the collection of exponential singularities, \(R_{0}\) for the collection of finite (\(r_{a}<\infty\)) ramification points, and note \(R=R_{0}\cup R_{\infty}\). 
Following [18], we will sometimes refer to the elements of \(R_{\infty}\) as infinite ramification points. We now focus on compact transalgebraic spectral curves, for which \(x\) is a transalgebraic function on a compact Riemann surface \(\Sigma\). In this case, by Theorem 3.4, we can write \[x(z)=M_{0}(z)e^{M_{1}(z)} \tag{22}\] for some meromorphic functions \(M_{0}\) and \(M_{1}\) on \(\Sigma\) with \(M_{0}\neq 0\). We also define \(M_{2}(z)\coloneqq x(z)y(z)\), which is another meromorphic function on \(\Sigma\). We will use the \(M_{0},M_{1}\) and \(M_{2}\) notation throughout the paper. **Example 3.8**.: The typical example of a compact transalgebraic spectral curve is example 2.6 but with \(\Sigma=\mathbb{P}^{1}\). In other words, we consider the spectral curve \(\mathcal{S}_{\infty}=(\Sigma,x,y,B)\) with \(\Sigma=\mathbb{P}^{1}\), \(x=ze^{-z^{\prime}}\), \(y=e^{z^{\prime}}\), and \(B=\frac{(dz_{1}dz_{2})^{2}}{(z_{1}-z_{2})^{2}}\), where \(r\in\mathbb{Z}_{\geqslant 1}\). In this case, \(M_{0}(z)=z\), \(M_{1}(z)=-z^{\prime}\), and \(M_{2}(z)=x(z)y(z)=z\). \(R_{0}\) contains the \(r\) finite ramification points at the solutions of \(z^{\prime}=\frac{1}{r}\), while \(R_{\infty}\) has a single point at \(\infty\). In topological recursion we are interested in the behaviour of \(x\) near its ramification points. Around a finite ramification point in \(R_{0}\), the behaviour of \(x\) is, locally at least, identical to meromorphic curves. So let us focus on the exponential singularities \(a\in R_{\infty}\), which correspond to the poles of \(M_{1}(z)\). **Lemma 3.9**.: _Let \(a\in R_{\infty}\). Suppose that \(M_{1}\) has a pole of order \(m_{1}\) at \(a\), and that \(M_{0}\) has either a zero of order \(m_{0}\) (for \(m_{0}\geqslant 0\)) or a pole of order \(|m_{0}|\) (for \(m_{0}<0\)) at \(a\). Then there exists a local coordinate \(\zeta\) near \(a\) such that_ \[x(\zeta)=e^{\zeta-m_{1}}\,,m_{0}=0,\quad x(\zeta)=z^{m_{0}}e^{-\frac{m_{0}}{m_ {1}}z^{-m_{1}}},m_{0}\neq 0. \tag{23}\] Proof.: If \(m_{0}=0\), this is obvious as near \(a\log(x)\) is a well defined meromorphic function with a pole of order \(m_{1}\) at \(a\), so there exists a local coordinate such that \(\zeta^{-m_{1}}=\log(x(\zeta))\). If \(m_{0}\neq 0\) we may take \(z\) as the coordinate such that, near \(a\), \(M_{0}(z)=z^{m_{0}}\). We can then solve the relation \[M_{1}(z)+\frac{m_{0}}{m_{1}}\zeta^{-m_{1}}-m_{0}\log\left(\zeta/z\right)=0, \tag{24}\] recursively for the coefficients \(\zeta=\sum_{n\geqslant 1}a_{n}z^{n}\) where \(a_{1}\neq 0\). This constructs \(\zeta\) in terms of \(z\). However, unlike in the case of finite ramification points we see that there are infinitely many different choices of \(\zeta\) corresponding to the branch choices for the \(m_{1}\)th root and the logarithm. In other words, \(f_{a}(z)\) is countably infinite. Moreover, unlike the finite case, even in the local coordinate \(\zeta\), the local deck transformations have no simple expression in terms of elementary functions (except when \(m_{0}=0\)). One can derive series expansions around a ramification point of the form (in terms of the local coordinate \(\zeta\)) \[\sum_{n\geqslant 0}s_{n}\zeta^{n\,m_{1}+1}, \tag{25}\] where \(s_{0}\) is an \(m_{1}\)th root of unity and \(s_{1}\) is \(s_{0}\log(s_{0}^{m_{0}})/m_{0}\) for some branch choice of the logarithm and \(m_{0}\neq 0\) (the explicit expansion is given below for \(m_{0}=0\)). 
Here, different choices of the \(m_{1}\)th root of unity and different branches of the logarithm will yield different local deck transformations. The radius of convergence of these series will depend on the choice of logarithm; as we will see shortly, there is no open set on which all such expansions converge. When \(m_{0}=0\) we may explicitly solve for the deck transformations. Indexing by \(k\in\mathbb{Z}\) and \(m=0,1,\dots,m_{1}-1\) and then denoting the deck transformations as \(\sigma_{a}^{k,m}\) we find \[\sigma_{a}^{k,m}(z)=\frac{\vartheta^{m}\zeta}{\left(1+2\pi\mathrm{i}k\zeta^{m_ {1}}\right)^{1/m_{1}}}\stackrel{{\zeta\supseteq 0}}{{ \vartheta^{m}\zeta}}\sum_{n=0}^{\infty}\binom{-1/m_{1}}{n}(2\pi\mathrm{i}k)^{n} \zeta^{m_{1},n}, \tag{26}\] where the radius of convergence is \(|\zeta|<|2\pi k|^{-1}\) and \(\vartheta=\exp(2\pi i/m_{1})\) is a primitive \(m_{1}\)th root of unity. To proceed with an examination of these deck transformations when \(m_{0}\neq 0\) we fix some notation. By equation (25), each local deck transformation is uniquely defined by the first and second coefficient in its expansion about \(a\) (more abstractly, this is due to the unique lifting property; see [15]). The first coefficient is an \(m_{1}\)th root of unity which we fix as \(s_{0}=\vartheta^{m}\). The second coefficient is \(s_{0}\log(s_{0}^{m_{0}})/m_{0}\). If we fix a choice of log with a branch cut chosen along an irrational angle in the complex plane (in particular, it must not exclude any power of \(\vartheta\)), then the choices of \(s_{1}\) are in one-to-one correspondence with the integers. We denote the local deck transformation with first coefficient \(\vartheta^{m}\) and second coefficient \(\vartheta^{m}(2\pi\mathrm{i}m/m_{1}-2\pi\mathrm{i}k/m_{0})\) as \(\sigma_{a}^{k,m}\). To proceed, we first find solutions for the partial inverses of \(x\) in terms of the Lambert \(W\) function \[x=\zeta^{m_{0}}e^{-\frac{m_{0}}{m_{1}}\zeta-m_{1}}\Rightarrow x^{-m_{1}/m_{0} }=\zeta^{-m_{1}}e^{\zeta-m_{1}}\Rightarrow\zeta=\left[W_{k}\left(x^{-m_{1}/m_{0 }}\right)\right]^{-1/m_{1}}, \tag{27}\] where \(W_{k}\) is the kth branch of the Lambert \(W\) function defined by the relation \[z=\mathrm{we}^{w}\Leftrightarrow\exists k\in\mathbb{Z}\ni w=W_{k}(z). \tag{28}\] Normally, \(W_{0}\) denotes the principal branch that is real-valued on the non-negative half of the real line and \(W_{-1}\) is the branch that is real-valued on the interval \([-1/e,0]\). Otherwise, there is no standard convention in the literature regarding the choices of branches of the Lambert \(W\) function; one such choice is given in [15], which we will use in the following. Our deck transformations then take the form \[\sigma_{a}^{k,m}(\zeta)=\vartheta^{m^{\prime}}\left[W_{k^{\prime}}\left( \varphi^{1}\zeta^{-m_{1}}e^{\zeta-m_{1}}\right)\right]^{-1/m_{1}}, \tag{29}\] where \(\varphi=\exp(-2\pi\mathrm{i}\mathrm{m}_{1}/\mathrm{m}_{0})\) and \(\mathrm{m}^{\prime},\mathrm{k}^{\prime},\mathrm{l}\) are integers. 
We then note the following expansion of the Lambert \(W\) function from [13], which is valid for all non-zero \(\mathrm{k}\) when \(\log(|z|)\) is sufficiently large \[W_{\mathrm{k}}(z)=\log(z)+2\pi\mathrm{i}k-\log(\log(z)+2\pi \mathrm{i}k)\\ +\sum_{\alpha=0}^{\infty}(-1)^{\alpha}\sum_{b=1}^{\infty}\frac{1 }{b!}\genfrac{[}{]}{0.0pt}{}{a+b}{a+1}(\log(z)+2\pi\mathrm{i}k)^{-\alpha-b} \log^{b}(\log(z)+2\pi\mathrm{i}k), \tag{30}\] where \(\genfrac{[}{]}{0.0pt}{}{n_{1}}{n_{2}}\) denotes an unsigned Stirling number of the first kind and \(\log^{b}\) denotes the logarithm composed with itself \(b\) times; all the logarithms in the above expansion are the principal branch of the logarithm. The above expansion is also valid when \(\mathrm{k}=0\) provided \(|z|\) is sufficiently large; for small \(z\) the principal branch satisfies \(W_{0}(z)\sim z\). Using the above expansion we find, for \(\zeta\) a small positive number \[\sigma_{\alpha}^{\mathrm{k},\mathrm{m}}(\zeta)=\vartheta^{\mathrm{m}^{\prime }}\zeta-\frac{\vartheta^{\mathrm{m}^{\prime}}}{\mathrm{m}_{1}}\left(2\pi \mathrm{i}\mathrm{k}^{\prime}-2\pi\mathrm{i}\frac{\mathrm{m}_{1}}{\mathrm{m}_ {0}}\right)\zeta^{\mathrm{m}_{1}+1}+\mathcal{O}(\zeta^{2\mathrm{m}_{1}+1}). \tag{31}\] This gives us that \(\mathrm{m}^{\prime}=\mathrm{m}\) and that \(\mathrm{k}^{\prime}\) and \(\mathrm{l}\) should satisfy \[\frac{\mathrm{m}_{1}}{\mathrm{m}_{0}}1-\mathrm{k}^{\prime}=\mathrm{m}-\frac{ \mathrm{m}_{1}}{\mathrm{m}_{0}}\mathrm{k}, \tag{32}\] where \(\mathrm{l}\) must be chosen so that \(-\frac{2\pi\mathrm{i}\mathrm{m}_{1}\mathrm{l}}{\mathrm{m}_{0}}\in(-\pi,\pi)\). Recalling that \(\mathrm{m}=0,\ldots,\mathrm{m}_{1}-1\) we see that \(\mathrm{k}=\mathcal{O}(\mathrm{k}^{\prime})\) as \(\mathrm{k}\to\infty\). This leads us to the following lemma, which considers the asymptotic behaviour as the chosen branch of the logarithm becomes 'large' in some sense. **Lemma 3.10**.: _For \(\mathrm{m}_{0}\neq 0\) (the corresponding \(\mathrm{m}_{0}=0\) formula is obvious from (26))_ \[\sigma_{\alpha}^{\mathrm{m},\mathrm{k}}(\zeta) =\vartheta^{\mathrm{m}}(2\pi\mathrm{i}\mathrm{m}_{1}\mathrm{k}/ \mathrm{m}_{0})^{-1/\mathrm{m}_{1}}\left(1+0\left(\frac{\log(|\mathrm{k}|)}{ \mathrm{k}}\right)\right),\quad|\mathrm{k}|\to\infty, \tag{33}\] \[\frac{\mathrm{d}\sigma_{\alpha}^{\mathrm{m},\mathrm{k}}}{\mathrm{ d}\zeta}(\zeta) =\vartheta^{\mathrm{m}}\zeta^{-\mathrm{m}_{1}-1}(2\pi\mathrm{i} \mathrm{m}_{1}\mathrm{k}/\mathrm{m}_{0})^{-1/\mathrm{m}_{1}-1}\left(1+0\left( \frac{\log(|\mathrm{k}|)}{\mathrm{k}}\right)\right),\quad|\mathrm{k}|\to\infty.\] Proof.: This is a direct consequence of the fact that \(\mathrm{k}\sim\frac{\mathrm{m}_{\mathrm{k}}}{\mathrm{m}_{1}}\mathrm{k}^{ \prime}\), \(|\mathrm{k}|\to\infty\), and the expansion (30). ### Transalgebraic spectral curves as limits Compact transalgebraic spectral curves naturally arise as limits of sequences of compact meromorphic spectral curves. The guiding light behind our definition of topological recursion for transalgebraic spectral curves is that it should commute with taking such limits. 
Schematically, if \(\mathcal{S}_{\mathrm{N}}\) is a sequence of compact meromorphic spectral curves such that \(\lim_{N\to\infty}\mathcal{S}_{\mathrm{N}}\) is a compact transalgebraic spectral curve, and the \(\omega_{g,n}^{\mathrm{N}}[\mathcal{S}_{\mathrm{N}}]\) are the correlators constructed from usual topological recursion on \(\mathcal{S}_{\mathrm{N}}\), we want the correlators \(\omega_{g,n}[\lim_{N\to\infty}\mathcal{S}_{\mathrm{N}}]\) associated to the transalgebraic spectral curve \(\lim_{N\to\infty}\mathcal{S}_{\mathrm{N}}\) to satisfy \[\lim_{N\to\infty}\left(\omega_{g,n}^{\mathrm{N}}[\mathcal{S}_{\mathrm{N}}] \right)=\omega_{g,n}\left[\lim_{N\to\infty}\mathcal{S}_{\mathrm{N}}\right]. \tag{34}\] Therefore, we should study such sequences of spectral curves. Considering such limits will also allow us to extend the notion of admissibility from definition 2.2 to exponential singularities. Therefore, consider such a sequence of compact meromorphic spectral curves \(\mathcal{S}_{\mathrm{N}}=(\varSigma,\mathrm{x}_{\mathrm{N}},\mathrm{y}_{ \mathrm{N}},\mathrm{B})\), such that \(\mathrm{x}_{\mathrm{N}}\to\mathrm{x}\) and \(\mathrm{y}_{\mathrm{N}}\to\mathrm{y}\) as \(\mathrm{N}\to\infty\), where \(\mathrm{x}\) and \(\mathrm{y}\) are transalgebraic functions on \(\varSigma\) with \(\mathrm{x}\mathrm{y}\) meromorphic. Explicitly, we will consider the sequence \[\mathrm{x}_{\mathrm{N}}=\mathrm{M}_{0}\left(1+(\tau-1)\frac{\mathrm{M}_{1}}{ \mathrm{N}}\right)^{-\mathrm{N}}\left(1+\tau\frac{\mathrm{M}_{1}}{\mathrm{N}} \right)^{\mathrm{N}},\qquad\mathrm{y}_{\mathrm{N}}=\frac{\mathrm{M}_{2}}{ \mathrm{x}_{\mathrm{N}}}\,, \tag{35}\] which converges compactly to \[\mathrm{x}=\mathrm{M}_{0}e^{\mathrm{M}_{1}},\qquad\mathrm{y}=\frac{\mathrm{M}_{2 }}{\mathrm{x}}, \tag{36}\] away from the poles of \(\mathrm{M}_{1}\). _Remark 3.11_.: We introduce the parameter \(\tau\) for two reasons. First, we will see in Theorem 4.3 that the limiting correlators do not depend on the choice of \(\tau\), which is evidence that our definition of topological recursion on transalgebraic spectral curves is the correct one for the limiting curve and not an artefact of the particular sequence chosen. Secondly, when constructing quantum curves, we will see that we get a priori different quantum curves for each choice of \(\tau\). However, at least in the cases of interest in this paper, we will see that this \(\tau\) dependence can be naturally transformed away. For the spectral curves \(\mathcal{S}_{N}\), we divide the ramification points of \(x_{N}\), denoted collectively as \(R^{N}\), into two sets of ramification points: 1. \(R^{N}_{\infty}=\{M_{1}=-\frac{N}{x}\}\cup\{M_{1}=\frac{N}{1-x}\}\cup\{M_{1}=\infty\}\) consists of the ramification points colliding at essential singularities of \(x\); 2. \(R^{N}_{0}=R^{N}\setminus R^{N}_{\infty}\) consists of those ramification points not colliding at essential singularities of \(x\). Let us consider what admissibility means for ramification points of transalgebraic spectral curves. The notion of admissibility at the ramification points in \(R_{0}\) is clear: it should be the same as for algebraic spectral curves, that of definition 2.2. At the points in \(R_{\infty}\) we need a new definition based on the notion of admissibility for points in \(R^{N}_{\infty}\). We distinguish two cases for an exponential singularity \(a\in R_{\infty}\) depending on whether \(M_{2}=xy\) has a pole at \(a\) or not. 
Let \(a\in R_{\infty}\), which means that it is a pole of \(M_{1}\), and suppose that \(xy\) has a pole at \(a\). Then, for finite \(N\), we have \(s_{a}\leqslant-1\) and therefore \(\mathcal{S}_{N}\) is admissible at \(a\) by definition 2.2 and the correlators will be regular at \(a\). Moreover, for sufficiently large \(N\), \(M_{2}\) will be regular and non-zero at the other points \(a^{\prime}\) in \(R^{N}_{\infty}\) that collide at the essential singularity \(a\), and hence \(s_{a^{\prime}}=1\) in definition 2.2 at all such points so \(\mathcal{S}_{N}\) is admissible at all such points. Thus, if \(xy\) has a pole at \(a\), each spectral curve in the sequence is admissible, and so it makes intuitive sense that the limit of these curves should be admissible. There appears to be significant challenges in defining the topological recursion in the case where \(xy\) does not have a pole at an exponential singularity of \(x\). As it appears in no cases of interest, it is not done here5. Therefore, we will define admissibility at infinite ramification points as follows. Footnote 5: Curiously, via limiting arguments, it seems that the modularity condition for admissibility should be \(m_{0}\pmod{m_{1}}=\pm 1\pmod{s_{a}}\) where \(m_{0}\) and \(m_{1}\) are the order of \(M_{0}\) and \(M_{1}\), respectively, at \(a\), which very naturally generalises definition 2.2. **Definition 3.12**.: Let \(\mathcal{S}\) be a compact transalgebraic spectral curve. We say that it is _admissible_ if both of the following conditions are satisfied: 1. it is admissible in the sense of definition 2.2 at the finite ramification points in \(R_{0}\); 2. \(xy=M_{2}\) has poles at the infinite ramification points in \(R_{\infty}\). ## 4. Topological recursion on transalgebraic spectral curves We are now ready to define topological recursion on transalgebraic spectral curves. Recall that we want topological recursion to commute with limits of sequences of meromorphic spectral curves, as stated schematically in (34). If \(\mathcal{S}_{N}\) is a sequence of compact meromorphic spectral curves such that \(\lim_{N\to\infty}\mathcal{S}_{N}\) is a compact transalgebraic spectral curve, and the \(\omega^{N}_{g,n}[\mathcal{S}_{N}]\) are the correlators constructed from usual topological recursion on \(\mathcal{S}_{N}\), we want the correlators \(\omega_{g,n}[\lim_{N\to\infty}\mathcal{S}_{N}]\) associated to the transalgebraic spectral curve \(\lim_{N\to\infty}\mathcal{S}_{N}\) to satisfy \[\lim_{N\to\infty}\left(\omega^{N}_{g,n}[\mathcal{S}_{N}]\right)=\omega_{g,n} \left[\lim_{N\to\infty}\mathcal{S}_{N}\right]. \tag{37}\] However, it is not straightforward to define topological recursion in this limit. The usual formulation of topological recursion considers residues at ramification points, and to define the integrand one needs to sum over local deck transformations at those ramification points. In other words, one needs to take the pullback of the pushforward of a one-form under the map \(x\). To include infinite ramification points in the topological recursion, one would need to take the pullback of the pushforward of a one-form under the local map \(x=\zeta^{m_{0}}e^{-\frac{m_{0}}{m}\zeta^{-m_{1}}}\). Using lemma 3.10, one can see that for a 1-form \(\eta\) that is holomorphic at all essential singularities of \(x,x^{*}x,\eta\) is well-defined in the sense that the sum over deck transformations is convergent. However, for the topological recursion, we want to look at forms with poles at the essential singularities. 
In this case the sum in \(x^{*}x,\eta\) is not absolutely convergent, but there is a natural way to define the principal value.6 However, even after summing, the resulting differential may not have an isolated singularity at the essential singularities of \(x\). Thus, defining the residue at these points becomes highly changing. One possible approach is to pushforward the entire TR integrand to the \(x\) plane, where it should be meromorphic as a function of \(x\), but then it is not clear what residues in the \(x\) plane one should be taking as essential singularities do not have well-defined branchpoints, although \(x=0,\infty\) (the Picard points) are the most compelling candidates. Even presuming one does all this in a reasonable manner, proving the \(N\to\infty\) limit commutes in the sense of (34) remains a daunting challenge. Instead, our approach consists in first rewriting the topological recursion in a different way, which trades out the sum over the deck transformations of \(x\) for a sum over ramification points, coinciding points, and deck transformations of \(y\). We present this rewriting for compact meromorphic spectral curves in the next section, and then generalise it to transalgebraic spectral curves.7 Footnote 7: This rewriting of topological recursion for compact meromorphic spectral curves is inspired by private notes of Nitin K. Chidambaram. ### Rewriting topological recursion Let us review some of the notation required for this rewriting that was introduced in section 1.4. Let \(C=\{t,t_{1},\ldots,t_{i}\}\subset\varSigma\) and \(C^{\prime}=C\setminus\{t\}\) be sets of cardinality \(i+1\) and \(i\), respectively. For a symmetric \(i\)-differential \(\eta\) we set, by definition \[\operatorname*{Res}_{C=t}\eta(t_{1},\ldots,t_{i})=\operatorname*{Res}_{C^{ \prime}=t}\eta(t_{1},\ldots,t_{i})=\operatorname*{Res}_{t_{1}=t}\cdots \operatorname*{Res}_{t_{i}=t}\eta(t_{1},\ldots,t_{i})\,. \tag{38}\] Then, for a set \(C\subset\varSigma\), we denoted by \(t^{C}\) one arbitrarily chosen element in this set. Finally, for the purposes of taking many residues in a compact notation, we defined \[\operatorname*{Res}_{\begin{subarray}{c}t_{1}=a_{1}\\ 1=1,\ldots,n\end{subarray}}=\operatorname*{Res}_{t_{1}=a_{1}}\cdots \operatorname*{Res}_{t_{n}=a_{n}}\,. \tag{39}\] With this notation (see section 1.4 for more details), we may proceed to the theorem of this section. **Theorem 4.1**.: _Let \(\mathcal{S}\) be a compact meromorphic admissible spectral curve8 and write \(Y(t)=y^{-1}(y(t))\). Then the correlators of topological recursion satisfy the alternative recursive formula_ Footnote 8: To be precise, we also need to assume here that \(\mathcal{S}\) can be “fully globalised”, in the language of [1]. What this means is that we can replace in topological recursion the sums over local deck transformations \(f^{\prime}_{a}(t)\) at the ramification points with a sum over the whole fibre \(f^{\prime}(t)\), and perform manipulations along the lines of [1, Theorem 5]. All spectral curves considered in the present paper are fully globalisable, according to the conditions determined in [1]. 
\[\omega_{g,n+1}(z_{0},z_{[n]})= \operatorname*{Res}_{t=R}\sum_{m=2}^{\deg(x)}\int_{*}^{t}B(z_{0},\cdot)\sum_{C_{1},\ldots,C_{j}+t_{[m-1]}}\frac{(-1)^{1-\delta_{j,m-1}}}{j!} \operatorname*{Res}_{\begin{subarray}{c}t_{1}=R,z_{[n]},Y(t)\\ 1=1,\ldots,j\end{subarray}}\operatorname*{Res}_{\begin{subarray}{c}t_{1}=R,z_{[n]},Y(t)\\ 1=1,\ldots,j\end{subarray}} \tag{40}\] \[\left(\prod_{l=1}^{j}\frac{1}{x(t)-x(t^{C_{1}})}\prod_{t_{0} \in C_{1}\setminus\{t^{C_{1}}\}}\frac{1}{x(t_{0})-x(t^{C_{1}})}\right)\frac{ \mathcal{W}_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{\prod_{l=1}^{m-1}(y(t)-y(t_{l})) }\,,\] _where we slightly abuse notation by writing that the \(C_{1}\) partition the dummy integration variables outside the integral._ Proof.: Starting with topological recursion from definition 2.8, we first replace the sums over local deck transformations \(f^{\prime}_{a}(t)\) by a sum over the whole fibre \(f^{\prime}(t)\), as in [1, Theorem 5] (see footnote 8 and [1]). Then, we perform the rewriting: \[\sum_{\emptyset\neq Z\subseteq f^{\prime}(t)}K_{|Z|+1}(z_{0},t,Z) \mathcal{W}_{g,n,|Z|+1}(t,Z|\,z_{[n]})=\\ \left(\int_{*}^{t}B(z_{0},\cdot)\right)\sum_{m\geqslant 2}\sum_{ \emptyset\neq\{\zeta_{1},\ldots,\zeta_{m-1}\}\subseteq f^{\prime}(t)} \operatorname*{Res}_{\begin{subarray}{c}t_{1}=\zeta_{1}\\ 1=1,\ldots,m-1\end{subarray}}\frac{\mathcal{W}_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n] })}{\prod_{l=1}^{m-1}(x(t_{l})-x(t))(y(t)-y(t_{l}))}\,. \tag{41}\] In our new writing, the summand is well-defined when two or more of the elements of \(Z=\{\zeta_{1},\ldots,\zeta_{m-1}\}\) coincide, it will just result in higher order poles at such a \(t_{1}=\zeta_{1}\). Therefore, instead of summing over subsets of \(f^{\prime}(t)\), we will sum over tuples of size at most \(\deg x-1\), but then we need to subtract tuples with repeating terms. This gives us two main terms: the original sum plus the added terms where two or more \(t_{1}\) coincide and the subtracted terms where two or more \(t_{1}\) coincide. We first examine the first term, for fixed size of the tuple \(m-1\), \[\sum_{\zeta_{1},\ldots,\zeta_{m-1}\in f^{\prime}(t)} \frac{1}{(m-1)!}\operatorname*{Res}_{\begin{subarray}{c}t_{1}=\zeta_{ 1}\\ 1=1,\ldots,m-1\end{subarray}}\frac{\mathcal{W}_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{ \prod_{l=1}^{m-1}(x(t_{l})-x(t))(y(t)-y(t_{l}))} \tag{42}\] \[= \frac{1}{(m-1)!}\operatorname*{Res}_{\begin{subarray}{c}t_{1}=R, z_{[n]},Y(t)\\ 1=1,\ldots,m-1\end{subarray}}\frac{\mathcal{W}_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{ \prod_{l=1}^{m-1}(x(t)-x(t_{l}))(y(t)-y(t_{l}))}.\] All we have done here is used the fact that \(\Sigma\) is a compact Riemann surface so the sum of all the residues of any differential must be zero. That we only pick up residues at the listed points is because the \(\omega_{g,n}\) only have poles at coinciding and ramification points. We now wish to apply the same logic to the terms with the coinciding points. To this end, we want to know what happens when \(j\leq m-1\) of the same \(t_{1}\) are specialised to the same sheet. 
So we consider \[\begin{split}&\sum_{\zeta\in f^{\prime}(t)}\underset{t=1,\ldots,j }{\text{Res}}\frac{W_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{\prod_{l=1}^{m-1}(x(t_{ 1})-x(t))(y(t)-y(t_{l}))}\\ &=\sum_{\zeta\in f^{\prime}(t)}\underset{t=1,\ldots,j-1}{\text{ Res}}\frac{W_{g,n,m}(t,t_{1},\ldots,t_{j-1},\zeta,t_{j+1},\ldots,t_{m-1}\,|\,z_{[n]})}{ \text{dx}(t)(y(t)-y(\zeta))\prod_{l=j}^{m-1}(x(t_{l})-x(t))(y(t)-y(t_{l}))}\\ &=\sum_{\zeta\in f^{\prime}(t)}\underset{t=1,\ldots,j-1}{\text{ Res}}\frac{W_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{(x(t_{j})-x(t))(y(t)-y(t_{j}))\prod_{l=j}^{m-1}(x(t_{ l})-x(t))(y(t)-y(t_{l}))}\\ &=\underset{t_{j}=R,z_{[n]},Y(t)}{\text{Res}}\underset{t=1, \ldots,j-1}{\text{Res}}\frac{W_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{(x(t)-x(t_{ j}))(y(t)-y(t_{j}))\prod_{l=j}^{m-1}(x(t_{l})-x(t))(y(t)-y(t_{l}))}.\end{split} \tag{43}\] Hence, the subtracted terms with the coinciding deck transformations may be written as \[\begin{split}&-\sum_{j=1}^{m-2}\frac{1}{j!}\sum_{\zeta_{1}, \ldots,\zeta_{j}\in f^{\prime}(t)}\sum_{C_{1},\ldots,C_{j}\vdash t_{[m-1]}} \underset{\{1=1,\ldots,j\}}{\text{Res}}\frac{W_{g,n,m}(t,t_{[m-1]}|\,z_{[n]} )}{\prod_{l=1}^{m-1}(x(t_{l})-x(t))(y(t)-y(t_{l}))}\\ =&-\sum_{j=1}^{m-2}\frac{1}{j!}\sum_{C_{1},\ldots,C_{ j}\vdash t_{[m-1]}}\underset{\{1=1,\ldots,j\}}{\text{Res}}\underset{\{1=1, \ldots,j\}}{\text{Res}}\frac{\text{Res}}{\text{C}_{j=1}^{m-4}\,c_{1}}\\ &\qquad\qquad\left(\prod_{l=1}^{j}\frac{1}{x(t)-x(t^{C_{1}})} \prod_{t_{0}\in C_{1}\setminus\{t^{C_{1}\}\}}\frac{1}{x(t_{0})-x(t^{C_{1}})} \right)\frac{W_{g,n,m}(t,t_{[m-1]}|\,z_{[n]})}{\prod_{l=1}^{m-1}(y(t)-y(t_{l})) }\,,\end{split} \tag{44}\] and the desired result then follows. ### Topological recursion on transalgebraic spectral curves With the topological recursion for compact meromorphic spectral curves rewritten as in theorem 4.1, we are in a position of topological recursion for compact transalgebraic spectral curves. We will present the definition of the topological recursion for transalgebraic spectral curves straight away, and then spend the rest of the section arguing and demonstrating why it works. **Definition 4.2**.: Let \(\delta=(\Sigma,x,y,B)\) be a compact transalgebraic admissible spectral curve, with \(x=M_{0}\exp(M_{1})\), \(y=M_{2}/x\) and \(M_{0},M_{1},M_{2}\) meromorphic functions on \(\Sigma\). Fix \(\tau\in\mathbb{C}\) and define the sequence of spectral curves \(\delta_{N}=(\Sigma,x_{N},y_{N},B)\), where \[x_{N}=M_{0}\left(1+(\tau-1)\frac{M_{1}}{N}\right)^{-N}\left(1+\tau\frac{M_{1}} {N}\right)^{N}\,,\quad y_{N}=M_{2}/x_{N}\,. \tag{45}\] Then, if \(\omega_{g,n}^{N}\) are the correlators constructed by topological recursion for the spectral curve \(\delta_{N}\), we _define_ the correlators of the spectral curve \(\delta\) as the \(N\to\infty\) limit of the \(\omega_{g,n}^{N}\). This defines nothing if the limit depends on \(\tau\) or does not yield well-defined meromorphic correlators. The main result of this section is the following theorem, which shows that these issues do not occur, and, therefore, that the above definition makes sense. **Theorem 4.3**.: _Let \(\delta=(\Sigma,x,y,B)\) be a compact transalgebraic admissible spectral curve. Then the \(\omega_{g,n}\) constructed from definition 4.2 are well-defined meromorphic differentials on \(\Sigma^{n}\) and do not depend on the choice of \(\tau\)._ Proof.: Our strategy will be to first prove the that \(\omega_{g,n}\) are well-defined for \(\tau=0\), and then show that the limit is independent of \(\tau\). 
The proof is divided into eight steps. _First step: start the induction and flip contours._ We proceed inductively in the \(\tau=0\) case on \(-x_{g,n}=2g+n-2\). For \(-x_{g,n}=-1,0\) (corresponding to \(\omega_{0,1}\) and \(\omega_{0,2}\)) the result holds trivially, so we may proceed to the induction step. For finite \(N\) we may use theorem 4.1 to write \[\omega_{g,n+1}^{N}(z_{0},z_{[n]}) =\operatorname*{Res}_{t=R^{N}}\sum_{m=2}^{\deg(x_{N})}\left(\int_{*} ^{t}B(z_{0},\cdot)\right)\sum_{C_{1},\ldots,C_{j}=t_{[n-1]}}\frac{(-1)^{1-s_{j,m-1}}}{j!}\operatorname*{Res}_{t^{C_{1}=R^{N},z_{[n]},y(t)}\begin{subarray}{ c}\operatorname*{Res}_{C_{1}=t^{C_{1}}}\\ \operatorname*{t=1,\ldots,j}\end{subarray}}\] \[\quad\left(\prod_{t=1}^{j}\frac{x_{N}(t)}{x_{N}(t)-x_{N}(t^{C_{1} })}\prod_{t_{0}\in C_{1}\setminus\{t^{C_{1}}\}}\frac{x_{N}(t^{C_{1}})}{x_{N}(t _{0})-x_{N}(t^{C_{1}})}\right)\frac{\mathcal{W}_{q,n,m}^{N}(t,t_{[m-1]}\,|\,z_ {[n]})}{\prod_{t=1}^{m-1}(M_{2}(t)-M_{2}(t_{1}))}\,, \tag{46}\] where \(\mathcal{Y}(t)=(xy)^{-1}\big{(}(xy)(t)\big{)}\). This is slightly different from theorem 4.1, so a couple of remarks are in order so it is clear how we get here: * in the denominator of the integrand in the original topological recursion we rewrote \(y_{N}(t)-y_{N}(\sigma(t))=(M_{2}(t)-M_{2}(\sigma(t)))/x_{N}(t)\) so we ended up with \(M_{2}=xy=x_{N}y_{N}\) in the denominator and \(x_{N}(t)\) in the numerator, where \(\sigma(t)\in\mathfrak{f}^{\prime}(t)\) is a deck transformation; * as \(x_{N}(t)=x_{N}(\sigma(t))\) for every deck transformation, we can choose which deck transformation we take the argument of \(x_{N}\) to be at; in particular, we take \(j\) of them to just be \(t\) and the other \(m-1-j\) to be precisely those deck transformations that gives us \(t^{C_{1}}\); * when we flipped the contour, we then had to pick out residues at \(\mathcal{Y}(t)\) rather than \(Y(t)\). _Second step: \(\operatorname{R}_{\infty}^{N}\) does not contribute._ We now observe that the residues at \(t^{C_{1}}=\operatorname{R}_{\infty}^{N}\) vanish for sufficiently large \(N\). Namely, for the points in \(\operatorname{R}_{\infty}^{N}\) that satisfy \(M_{1}=N\), \(x_{N}(t^{C_{1}})\) has a pole of order \(N\). As \(x_{N}(t^{C_{1}})\) appears in the denominator one time with no corresponding \(x_{N}(t^{C_{1}})\) in the numerator, the overall integrand in the variable \(t^{C_{1}}\) gains a zero of order \(N\). We claim the rest of the integrand has a pole of at worst uniformly bounded order in \(N\). At these ramification points \(\omega_{0,1}^{N}\) has simple poles, and, for sufficiently large \(N\), \(M_{2}\) will be regular and non-zero, which means that \(s_{a}=1\) at these points. From Lemma 2.10 we then know that the \(\omega_{g,n}^{N}\) have poles of order no more than \(2g\) and so the \(\mathcal{W}_{g,n,m}^{N}\) have poles of bounded order in \(N\). Similarly, \(M_{2}\) is meromorphic and constant in \(N\). For \(x_{N}(t^{C_{1}})/(x_{N}(t_{0})-x_{N}(t^{C_{1}}))\), the \(x_{N}\) appears in both the denominator and the numerator. Finally, we need to examine taking the residues at \(C_{1}=t^{C_{1}}\). This will be a residue of a pole of order no more than three (two from a potential \(w_{0,2}\) contribution plus one for the difference of the \(x_{N}\) in the denominator). Thus, this residue may be replaced by multiplication by \((t_{0}-t^{C_{1}})^{3}\) and twice differentiating by \(t_{0}\), for each \(t_{0}\in C_{1}\setminus\{t^{C_{1}}\}\), before taking the limit as \(t^{C_{1}}\to t_{0}\). 
By the quotient rule for differentiation, we will have the same total power of derivatives of \(x_{N}\) in the numerator and denominator, just in different combinations and orders of differentiation. Thus, at the residues at points where \(M_{1}=N\) we may drop the residue in \(t^{C_{1}}\). Now we examine the residues in \(t^{C_{1}}\) where the point at which the residue is taken satisfies \(M_{1}=\infty\). Here, when we take the residue at \(C_{1}=t^{C_{1}}\), as discussed previously, this corresponds to derivatives. Here though, the pole counting is a little more subtle so we do it explicitly. In particular, observe \[\frac{x_{N}(t^{C_{1}})w_{0,2}(t_{0},t^{C_{1}})(t_{0}-t^{C_{1}})^{3}}{(x_{N}(t _{0})-x_{N}(t^{C_{1}}))dt_{0}dt^{C_{1}}} =\frac{x_{N}(t^{C_{1}})}{x_{N}^{\prime}(t^{C_{1}})}-(t_{0}-t^{C_ {1}})\frac{x_{N}(t^{C_{1}})x_{N}^{\prime\prime}(t^{C_{1}})}{x_{N}^{\prime}(t^{ C_{1}})^{2}}\] \[+(t_{0}-t^{C_{1}})^{2}\left(\frac{x_{N}(t^{C_{1}})x_{N}^{\prime \prime}(t^{C_{1}})^{2}}{4x_{N}^{\prime}(t^{C_{1}})^{3}}-\frac{x_{N}(t^{C_{1}}) x_{N}^{\prime\prime\prime}(t^{C_{1}})}{x_{N}^{\prime}(t^{C_{1}})^{2}}\right)+ \mathcal{O}\left((t_{0}-t^{C_{1}})^{3}\right)\,. \tag{47}\] In the constant term and the \(t_{0}-t^{C_{1}}\) term, there is no pole at \(t^{C_{1}}\) equalling a pole of \(M_{1}\). However, the \((t_{0}-t^{C_{1}})^{2}\) term has a simple pole here. On the other hand, the \(\omega_{g,n}^{N}\) are regular at these points (this is because we are in the \(s_{a}\leq-1\) case at these points; see remark 2.3) and \(M_{2}(t_{0})\), which has at least a simple pole by admissibility, appears in the denominator. Thus, in terms with an \(\omega_{0,2}(t_{0},t^{C_{1}})\) we do not have contributions from these points. For terms without this factor, the pole at \(t_{0}=t^{C_{1}}\) is simple. Thus, observing \[\frac{x_{N}(t^{C_{1}})(t_{0}-t^{C_{1}})}{x_{N}(t_{0})-x_{N}(t^{C_{1}})}=\frac{x_ {N}(t^{C_{1}})}{x_{N}^{\prime}(t^{C_{1}})}+\mathcal{O}\left(t_{0}-t^{C_{1}} \right)\,, \tag{48}\] we see the same argument still holds. In summary, we may replace the residues in each \(t^{C_{1}}\) at all the points in \(\operatorname{R}_{N}\) with just those at \(\operatorname{R}_{N}^{0}\). _Third step: integrand well-defined._ This shows that the integrand in \(t\) is well defined in the limit: indeed, we may commute the limit in \(N\to\infty\) with the residues (integrals) in the \(t_{0}\in C_{1}\) and \(t^{C_{1}}\) using dominated convergence and use the induction assumption that \(\omega_{g,n}^{N}\to\omega_{g,n}\). Note that, although it may appear that the sum over \(m\) becomes infinite in the limit, for any fixed \(g\) and \(n\) only finitely many terms are non-zero so commuting the limit with this sum is entirely trivial. _Fourth step: integral well-defined at \(R_{0}\)._ However, we want the integral, not just the integrand, to be well-defined in the limit. To this end, we note that the contributions from the residues at \(t=R_{0}^{N}\) go to the contributions at \(t=R_{0}\) in the limit by pulling the limit in \(N\) inside each integral using dominated convergence, as before, and applying the induction assumption. However, this simple argument will not work for the residues at \(t=R_{\infty}\) as these points can collide in the limit. _Fifth step: work in coordinate \(w\) for integrals at \(R_{\infty}\)._ To deduce that the integral must be well-defined in the limit, we will pushforward to work in the \(M_{1}\) plane where all elements of \(R_{\infty}^{N}\) fall at \(M_{1}=N,\infty\). 
To move to the origin, let \(w=1/M_{1}\). For a deck transformation \(\sigma\) of \(M_{2}\), i.e., \(\sigma(t)\in\mathcal{Y}(t)\), we define a corresponding transformation \(\nu\) through \(\nu(w)=\nu(1/M_{1}(t))=1/M_{1}(\sigma(t))\). Although \(\nu\) may depend on \(t\), and so is multi-valued, we sum over \(\nu\) at every step; this is well-defined. In completing this sum, as from now on we will suppress this detail, note that the sum over \(\nu\) must include both the sum over \(\sigma\) (from all elements of \(\mathcal{Y}(t)\)) and partial inverse of \(M_{1}\) (from the fact we pushforward in \(M_{1}\)). Let \(\nu_{1},\ldots,\nu_{r}\) be all such non-trivial \(\nu\) and note that, for general \(N\), \(\nu_{p}(w=1/N)\neq 1/N\) for all \(p=1,\ldots,r\). We claim that the integrand in \(t\) of topological recursion, pushed forward under \(M_{1}\) and then written in the coordinate \(w\), takes the following form (where \(\exp_{N}(z)=(1-z/N)^{-N}\)): \[\frac{\mathcal{N}_{N}^{d}}{N!\left(w\,|\,z_{0},z_{[n]}\right)\exp_{N}(w^{-1} )^{d}+\mathcal{N}_{N}^{A-1}(w\,|\,z_{0},z_{[n]})\exp_{N}(w^{-1})^{d-1}+\cdots +\mathcal{N}_{N}^{N}(w\,|\,z_{0},z_{[n]})}{\mathcal{D}_{N}^{d}(w\,|\,z_{[n]} )\exp_{N}(w^{-1})^{d}+\mathcal{D}_{N}^{d-1}(w\,|\,z_{0},z_{[n]})\exp_{N}(w^{- 1})^{d-1}+\cdots+\mathcal{D}_{0}^{N}(w\,|\,z_{0},z_{[n]})}\,, \tag{49}\] where each of the \(\mathcal{N}_{N}^{k}\) and \(\mathcal{D}_{N}^{k}\) are meromorphic functions9 with the order of their zeros and poles at \(w=1/N\) bounded uniformly in \(N\) and it is assumed \(\mathcal{D}_{N}^{d}(w\,|\,z_{0},z_{[n]})\) is not identically zero. Footnote 9: Due to the presence of \(f_{n}^{t}\,B(z_{0},\cdot)\) in the integrand, this is not strictly true for non-zero genus. However, all we want to do is integrate, so we may slightly misuse terminology. To get this expansion, first note we can write every derivative of \(x_{N}\) as \(\exp_{N}(M_{1})=\exp_{N}(1/w)\) times a sequence of meromorphic functions with the order of their poles and zeros bounded uniformly in \(N\). Then, when we take the residues at \(t_{0}=t^{C_{1}}\), we get expansion like (47) and (48), which we may put over a common denominator. Then, we will claim that when we take the residues at \(t^{C_{1}}=R_{N}^{0},z_{[n]},\mathcal{Y}(t)\) the total number of factors of derivatives of \(x_{N}(t)\) in the denominator is greater than or equal to those in the numerator in each term; putting everything over a common denominator and pushing forward results in an expression of the form (49). The fact that we get this same power behaviour will be demonstrated in the proceeding (sixth) step and for now can be taken as a claim. Like any fraction, such an expression is non-unique as we may multiply the numerator and the denominator by the same factor without changing the total value; however, the ratio \(\mathcal{N}_{N}^{d}(w\,|\,z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|\,z_{0},z_{[n]})\) is unique and all we will eventually care about. _Sixth step: properties of \(\mathcal{N}/\mathcal{D}\)._ We now need to verify a couple of important properties of \(\mathcal{N}_{N}^{d}(w\,|\,z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|\,z_{0},z_{[n ]})\). First, we claim it has no pole at \(w=0\). Second, we claim that it does not have essential singularity in the limit and its only potential pole near \(w=0\) is the one at \(w=1/N\). Note that the only place an essential singularity could come from are the \(\exp_{N}(1/\nu_{p}(w))\). 
Finally, along the way, it will become clear that, as we have stated, the degree of the numerator and denominator in \(\exp_{N}(1/w)\) must be the same and that the ratio \(\mathcal{N}_{N}^{d}(w\,|\,z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|\,z_{0},z_{[n ]})\) is well-defined. The proof for these claims is a bit more involved. We will examine the individual terms that we put over a common denominator before pushing forward to the \(w\)-plane; by examining the ratio of the coefficient of the highest power of \(x_{N}\) in the numerator to the one in the denominator, we can deduce the behaviour of the coefficients in the fraction put over a common denominator (before pushing forward, \(x_{N}\) is \(\exp_{N}(M_{1})\) times a meromorphic function \(M_{0}\), so looking at leading powers of \(x_{N}\) is the same as looking at leading powers of \(\exp_{N}(M_{1})\)). As there are no poles at \(t\in R_{\infty}\), there are none at \(w=0\), and the lack of \(\exp_{N}(M_{1}(\sigma(t)))\) factors in the leading order will give us the no essential singularities result. To this end we first perform the residues at \(C_{1}=t^{C_{1}}\) in the integrand in \(t\) and will be left with an expression of the form \[\underset{t^{C_{1}}=R^{N}_{N},z_{[n]},y(t)}{\text{Res}}\frac{x_{N}(t)}{x_{N}(t)- x_{N}(t^{C_{1}})}\frac{f_{N}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|\,z_{[n]})}{(M_{2}(t)- M_{2}(t^{C_{1}}))^{|C_{1}|}}\,, \tag{50}\] where \(f_{N}\) is a differential in all its arguments except \(t\). Furthermore, \(f_{N}\) is meromorphic in \(t\) and \(t^{C_{1}}\) and remains so in the limit (note that in (47) and (48) the derivatives in \(x_{N}\) appear in the same power in the numerator and denominator so we may cancel out the factor of \(\exp_{N}(M_{1})\)), and there is no pole in \(t\) or \(t^{C_{1}}\) at the poles of \(M_{1}\) (we established this before in the second step to show that the residues at \(t^{C_{1}}=R_{\infty}\) do not contribute). First we examine the residues \[\underset{t^{C_{1}}=R^{N}_{N},z_{[n]}}{\text{Res}}\frac{x_{N}(t)}{x_{N}(t)-x_ {N}(t^{C_{1}})}\frac{f_{N}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|\,z_{[n]})}{ (M_{2}(t)-M_{2}(t^{C_{1}}))^{|C_{1}|}}\,. \tag{51}\] Here, the residues in the \(t^{C_{1}}\) are taken at points that do not depend on \(t\). Thus, due to the pole of \(M_{2}\) at the poles of \(M_{1}\) guaranteed by admissibility, these residues will all contribute some sort of zero to the ratio of leading order coefficients in powers of \(x_{N}\). Finally, it is trivial that these residues will never contribute \(\exp_{N}(M_{1}(\sigma(t)))\) as nothing depends on \(\sigma(t)\). However, a pathology can occur here. In the limit we can have two or more different elements of \(R^{N}_{0}\) collide; if this happens, the independent contribution of each to the final integrand will not be well-defined and only the sum over the residues at these colliding points is well-defined in the limit. Let us examine this case and assure ourselves that this presents no issues for the well-definedness of the ratio \(N^{d}_{N}(w\,|\,z_{0},z_{[n]})/D^{d}_{N}(w\,|\,z_{0},z_{[n]})\) in the limit. First note that these colliding points cannot be poles of \(x_{N}\) as the location of the poles of \(x_{N}\) in \(R^{N}_{0}\) do not depend on \(N\) as they are just the poles of \(M_{0}\) and the poles of \(M_{0}\) will never collide with the zeros of \((1-M_{1}/N)^{N+1}dx_{N}=(1-M_{1}/N)dM_{0}+M_{0}dM_{1}\). 
Thus, the general scenario we must examine is when we have \(a^{k}_{N},\ldots,a^{k}_{N}\in R^{N}_{0}\) which are all zeros of \(dx_{N}\) and all collide in the limit. We then examine the following expression in which we computed the residue in terms of derivatives \[\sum_{i=1}^{k}\lim_{t^{C_{1}}\to a^{k}_{N}}\frac{1}{M!}\frac{d^{M}}{d(t^{C_{1 }})^{M}}\frac{x_{N}(t)}{x_{N}(t)-x_{N}(t^{C_{1}})}\frac{f_{N}(t,t^{C_{1}},t_{[m -1]}\setminus C_{1}\,|\,z_{[n]})}{(M_{2}(t)-M_{2}(t^{C_{1}}))^{|C_{1}|}}\,, \tag{52}\] where \(M\in\mathbb{Z}_{\geqslant 0}\) is chosen large enough so all the limits in \(t^{C_{1}}\) are finite. By using the product rule we may write this expression as \[\frac{1}{M!}\sum_{h=1}^{M}\sum_{i=1}^{k}\frac{x_{N}(t)}{(x_{N}(t)-x_{N}(a^{k}_ {N}))^{h}}F^{h}_{N}(t,a^{i}_{N},t_{[m-1]}\setminus C_{1}\,|\,z_{[n]})\,, \tag{53}\] where the \(F^{h}_{N}\) are meromorphic functions in \(t\) such that the order of all its zeros and poles (in \(t\)) are uniformly bounded in \(N\). Denoting this uniform bound by \(N_{0}\), we see for sufficiently large \(N\) the prefactors of the \(F^{h}_{N}\) in the above expression must have zeros and poles of order larger than \(N_{0}\) (except possibly for \(h=1\) if \(a^{k}_{N}\) is a zero of \(x_{N}\) for some \(i\) and all \(N\) sufficiently large). As the sum over \(h\) must be well defined in the limit, we can conclude that each individual term in the sum over \(n\) is well-defined in the limit. Therefore, the \(h=1\) expression \[\frac{1}{M!}\sum_{i=1}^{k}\frac{x_{N}(t)}{x_{N}(t)-x_{N}(a^{i}_{N} )}F^{1}_{N}(t,a^{i}_{N},t_{[m-1]}\setminus C_{1}\,|\,z_{[n]})\\ =\frac{x_{N}(t)}{M!}\prod_{i=1}^{k}\left[x_{N}(t)-x_{N}(a^{i}_{N} )\right]^{-1}\sum_{i=1}^{k}F^{1}_{N}(t,a^{i}_{N},t_{[m-1]}\setminus C_{1}\,|\,z_ {[n]})\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{k}\left[x_{N}(t)-x_{N}(a^{i}_{N})\right]\,, \tag{54}\] is well-defined in the limit. Observing that this is degree \(k\) in \(x_{N}(t)\) in both the numerator and the denominator we see that this is precisely the desired leading order in \(x_{N}\) expression that will contribute to the ratio \(N^{d}_{N}(w\,|\,z_{0},z_{[n]})/D^{d}_{N}(w\,|\,z_{0},z_{[n]})\), whereas the terms other than \(j=1\) will contribute only to terms lower order in \(\exp_{N}(1/w)^{d}\). More involved are the contributions from the residues at \(\mathcal{Y}(t)\). Here, we divide these into three sub-cases: 1. we may take the residues at \(\sigma(t)\in\mathcal{Y}^{\prime}(t)\) that do not preserve \(M_{1}\), these correspond to the \(\nu_{1},\dots,\nu_{r}\) and are where the limit of \(\mathcal{N}_{N}^{d}(w\,|z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|z_{0},z_{[n]})\) may have potential essential singularities; 2. we may take the residues at \(\sigma(t)\in\mathcal{Y}^{\prime}(t)\) that preserve \(M_{1}\), i.e., \(M_{1}\circ\sigma=M_{1}\); 3. we may take the residue at \(t\) itself. Starting with sub-case (i), take an element \(\sigma(t)\in\mathcal{Y}^{\prime}(t)\), fix an \(l\), and inspect the following residue \[\operatorname*{Res}_{t^{C_{1}}=\sigma(t)}\frac{x_{N}(t)}{x_{N}(t)-x_{N}(t^{C_ {1}})}\frac{f_{N}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|z_{[n]})}{(M_{2}(t)-M _{2}(t^{C_{1}}))^{|C_{1}|}}\,, \tag{55}\] where \(f_{N}\) is as before. Here we have a pole of order \(|C_{1}|\) at \(\sigma(t)\) in the variable \(t^{C_{1}}\). 
We can therefore calculate the residue with the formula \[\lim_{t^{C_{1}}\to\sigma(t)}\frac{(-1)^{|C_{1}|}}{|C_{1}|!}\frac{d^{|C_{1}|-1}}{d(t^{C_{1}})^{|C_{1}|-1}}\frac{x_{N}(t)f_{N}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|z_{[n]})}{x_{N}(t)-x_{N}(t^{C_{1}})}\left(\frac{t^{C_{1}}-\sigma(t)}{M_{2}(t^{C_{1}})-M_{2}(t)}\right)^{|C_{1}|}\,. \tag{56}\] If we take the derivatives of \(1/(x_{N}(t)-x_{N}(t^{C_{1}}))\), we will end up with subleading terms in powers of \(x_{N}\); these therefore do not concern our analysis. All we care about when we take derivatives of \(f\), is that derivatives cannot create poles. Finally, we have the expansion \[\left(\frac{t^{C_{1}}-\sigma(t)}{M_{2}(t^{C_{1}})-M_{2}(\sigma(t))}\right)^{|C_{1}|}=\frac{1}{M_{2}^{\prime}(\sigma(t))^{|C_{1}|}}\sum_{k=0}^{\infty}S_{k}(\sigma(t))(t^{C_{1}}-\sigma(t))^{k}\,, \tag{57}\] where \(S_{k}\) has a pole of order at most \(k\) at elements of \(R_{\infty}\) (poles of \(M_{1}\)). The pre-factor has a zero of at least order \(|C_{1}|\) (as \(M_{2}\) has a pole at all elements of \(R_{\infty}\)), and the only \(S_{k}\) that can contribute are those with \(k\leq|C_{1}|\). Thus, for the leading terms in \(x_{N}\), we will never get poles. Furthermore, from the above discussion, we see the \(\exp_{N}(1/\nu_{p}(w))\) will never enter the leading order power in \(\exp_{N}(1/w)\). Now, we move on to sub-case (ii): the deck transformations that preserve \(M_{1}\); let \(\sigma(t)\) be such a deck transformation and examine the expression \[\lim_{t^{C_{1}}\to\sigma(t)}\frac{(-1)^{|C_{1}|}}{|C_{1}|!}\frac{d^{|C_{1}|-1}}{d(t^{C_{1}})^{|C_{1}|-1}}\frac{x_{N}(t)f_{N}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|z_{[n]})}{x_{N}(t)-x_{N}(t^{C_{1}})}\left(\frac{t^{C_{1}}-\sigma(t)}{M_{2}(t^{C_{1}})-M_{2}(t)}\right)^{|C_{1}|}\,, \tag{58}\] which is the same as the prior case as nothing in the steps changes up to this point. The only thing that changes in analysing this expression is that when we take derivatives of the \(x_{N}(t)/(x_{N}(t)-x_{N}(t^{C_{1}}))\) factor we do not end up with only subleading terms as \(x_{N}(\sigma(t))\) now has a factor of \(\exp(M_{1}(t))\). If we take \(k\) derivatives of this factor, we get a pole of order at most \(k\) in the ratio of the coefficients of the leading powers of \(x_{N}\). Thus, we still cannot get a pole as we have the factor of \(M_{2}^{\prime}(\sigma(t))^{|C_{1}|}\), as before. Finally, we examine sub-case (iii), where we take the residue at \(t^{C_{1}}=t\): \[\operatorname*{Res}_{t^{C_{1}}=t}\frac{x_{N}(t)}{x_{N}(t)-x_{N}(t^{C_{1}})}\frac{f_{N}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|z_{[n]})}{(M_{2}(t)-M_{2}(t^{C_{1}}))^{|C_{1}|}}. \tag{59}\] Here, we may have a pole of order at most \(|C_{1}|+3\). In particular, we get a pole of order one from the \(x_{N}(t)/(x_{N}(t)-x_{N}(t^{C_{1}}))\) factor, a pole of order \(|C_{1}|\) from the difference of the \(M_{2}\) in the denominator, and a potential double pole in \(f_{N}\) at \(t^{C_{1}}=t\) due to possible presence of an \(\omega_{0,2}(t,t^{C_{1}})\). Here, however, the \(M_{2}^{\prime}(t)^{|C_{1}|}\) has at least a zero of order \(2|C_{1}|\). Using identical arguments with the expansions of the individual factors, this case will not create an undesired pole in \(\mathcal{N}_{N}^{d}(w\,|z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|z_{0},z_{[n]})\). _Seventh step: integral well-defined._ With these properties established, we can prove the \(\omega_{g,n}^{N}\) are well-defined in the limit. 
First note \[\operatorname*{Res}_{w=1/N}\frac{\mathcal{N}_{N}^{d}(w\,|z_{0},z_{[n]})\exp_{N}(1/w)^{d}+\cdots}{\mathcal{D}_{N}^{d}(w\,|z_{0},z_{[n]})\exp_{N}(1/w)^{d}+\cdots} =\operatorname*{Res}_{w=1/N}\frac{\mathcal{N}_{N}^{d}(w\,|z_{0},z_{[n]})}{\mathcal{D}_{N}^{d}(w\,|z_{0},z_{[n]})}\left(1+\mathcal{O}(w-1/N)^{N}\right) \tag{60}\] \[=\operatorname*{Res}_{w=1/N}\frac{\mathcal{N}_{N}^{d}(w\,|z_{0},z_{[n]})}{\mathcal{D}_{N}^{d}(w\,|z_{0},z_{[n]})}\,.\] As we have established \(\mathcal{N}_{N}^{d}(w\,|z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|z_{0},z_{[n]})\) has no pole at \(w=0\), we may change the residue at \(w=1/N\) to a contour integral about a small circle around \(w=0\). Then, using dominated convergence to bring the limit as \(N\to\infty\) inside the contour, we conclude \[\lim_{N\to\infty}\operatorname*{Res}_{w=1/N}\frac{\mathcal{N}_{N}^{d}(w\,|\,z_{0},z_{[n]})\exp_{N}(1/w)^{d}+\cdots}{\mathcal{D}_{N}^{d}(w\,|\,z_{0},z_{[n]})\exp_{N}(1/w)^{d}+\cdots}=\operatorname*{Res}_{w=0}\frac{\mathcal{N}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}{\mathcal{D}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}. \tag{61}\] _Eighth step: independence of \(\tau\)._ Finally, we claim that all choices of \(\tau\) yield the same result in the \(N\to\infty\). To establish this, we first prove \(\partial_{\tau}^{m}\omega_{g,n+1}^{N}(z_{0},z_{[n]})|_{\tau=0}\) exists and goes to zero for every value of \(m\in\mathbb{Z}_{\geqslant 1}\) as \(N\to\infty\) proceeding inductively on \(-\chi_{g,n}=2g+n-2\). First note that the result is straightforward for \(\omega_{0,1}^{N}\) and trivial for \(\omega_{0,2}\). So we may proceed directly to the induction step and assume the result holds for all prior correlators. We first argue that we may commute derivatives in \(\tau\) with all residues in (46). To do this, we transform all residues into contour integrals; even if the point at which the residue is being taken depends on \(\tau\), the contour may be taken to be locally constant in \(\tau\). Then we may commute the derivatives in \(\tau\) with the \(\tau\)-independent contour integrals and, as the derivatives in \(\tau\) cannot create new poles, we may switch all contour integrals back to the same residues. In an identical manner to the second step of the proof, we wish to argue the residues at \(t^{C_{1}}=R_{\infty}^{N}\) do not contribute. For the points in \(R_{\infty}^{N}\) that satisfy \(M_{1}=N/(1-\tau)\), \(x_{N}(t^{C_{1}})\) has a pole of order \(N\) and the argument proceeds identically to the argument in the second step. Similarly, for the points where \(M_{1}=\infty\) the expansions (47) and (48) are the same as before and the identical argument works for general choices of \(\tau\). The only new thing to check is that the points in \(R_{\infty}^{N}\) that satisfy \(M_{1}=-N/\tau\) do not contribute. Here \(x_{N}\) has a zero of order \(N\) rather than a pole of order \(N\). Denoting the collection of these points as \(V_{N}\), by (50) we look at the expression (with the \(\tau\) dependence suitably inserted) \[\operatorname*{Res}_{t^{C_{1}}=V_{N}}\frac{x_{N}(t)}{x_{N}(t)-x_{N}(t^{C_{1}})}\frac{f_{N}^{\tau}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|\,z_{[n]})}{(M_{2}(t)-M_{2}(t^{C_{1}}))^{|C_{1}|}}\,, \tag{62}\] where, as before, \(f_{N}^{\tau}\) will only have poles of order bounded uniformly in \(N\). 
It will be important to observe that, as the correlators are regular at the poles of \(M_{1}\) and \(M_{2}\) has poles at the poles of \(M_{1}\), the function \(f_{N}^{\tau}\) has no pole at the poles of \(M_{1}\) for all \(\tau\). Then we note, for any \(a_{N}\in V_{N}\), \[\frac{x_{N}(t)}{x_{N}(t)-x_{N}(t^{C_{1}})}=1+\mathcal{O}\big((t^{C_{1}}-a_{N})^{N}\big), \tag{63}\] so the expression in (62) is in fact equal to \[\operatorname*{Res}_{t^{C_{1}}=V_{N}}\frac{f_{N}^{\tau}(t,t^{C_{1}},t_{[m-1]}\setminus C_{1}\,|\,z_{[n]})}{(M_{2}(t)-M_{2}(t^{C_{1}}))^{|C_{1}|}}=:F_{N}^{\tau}(t,t_{[m-1]}\setminus C_{1}\,|\,z_{[n]}). \tag{64}\] We now claim that \([\partial^{m}F_{N}^{\tau}]_{\tau=0}\equiv 0\) for all \(m\in\mathbb{Z}_{\geqslant 0}\), which would mean, at least locally near \(\tau=0\), that these points do not contribute. To prove this we wish to commute the \(\tau\) derivatives with the residue in the above expression. There is a slight subtlety, as all points in \(V_{N}\) collide at poles of \(M_{1}\) in the limit \(\tau\to 0\). However, as the \(f_{N}^{\tau}\) are regular at the poles of \(M_{1}\), near \(\tau=0\) we may draw a contour around each of the poles of \(M_{1}\) that includes all points in \(V_{N}\) (but no other poles of the integrand) and commute the \(\tau\) derivatives with this contour. Next, we note that \(f_{N}^{\tau}=f_{N}\) has no pole at the poles of \(M_{1}\) so \([\partial^{m}f_{N}^{\tau}]_{\tau=0}\) is in fact regular at the poles of \(M_{1}\) for all \(m\in\mathbb{Z}_{\geqslant 0}\). Then, \([\partial^{m}F_{N}^{\tau}]_{\tau=0}\) just involves taking residues of \([\partial^{m}f_{N}^{\tau}]_{\tau=0}\) at the poles of \(M_{1}\) and therefore vanishes. As argued previously, we can commute the \(\tau\) derivatives with the residues and obtain \[\partial_{\tau}^{k}\omega_{g,n+1}^{N}(z_{0},z_{[n]})=\operatorname*{Res}_{t=R^{N}}\sum_{m=2}^{\deg(x_{N})}\partial_{\tau}^{k}\Big[\,\cdots\,\Big]\,, \tag{65}\] where the bracketed term is the integrand of (46), now carrying the \(\tau\)-dependence described above. Thus, the entire expression inside all the residues is converging to zero as \(N\to\infty\). As all the residues in the variables \(C_{1}\) may just be converted into integrals10 we get that the entire integrand in \(t\) is converging to zero. Analogously to proving the existence of the limit when \(\tau=0\), we must now argue the integral itself goes to zero. Footnote 10: If two or more ramification points in \(R^{N}_{0}\) collide in the limit, then we will have to write one contour integral around the point in \(R_{0}\) they collide at. For the residues at \(t=R^{N}_{0}\) the fact that the integrand is going to zero is clear as we may take the \(N\to\infty\) limit inside the contour integral; so we concentrate on the residues at \(t=R^{N}_{\infty}\). Here, for generic \(\tau\), we will have residues at solutions of \(M_{1}(z)=N/(1-\tau),-N/\tau\). When we set \(\tau=0\), we end up with no poles at \(M_{1}(z)=\infty\), even after taking derivatives, as we cannot create poles by taking derivatives. 
For our purposes, we may therefore neglect the residues at \(M_{1}(z)=-N/\tau\), as they will drop out in the end. Thus, at these points, we need to take \(\tau\) derivatives of the analogous expression to (49) where the \(\mathcal{N}\) and \(\mathcal{D}\) coefficients acquire the suitable \(\tau\) dependence and \(\exp_{N}(w^{-1})=(1+(\tau-1)/(Nw))^{-N}(1+\tau/(Nw))^{N}\) is appropriately modified11. After taking \(\tau\) derivatives, the ratio of the new leading order coefficients will have no pole at \(w=\tau=0\) by the quotient rule and the fact that the \(\tau\) derivative can only decrease the order of poles at \(w=0\). We may then conclude that the same argument with the \(N\to\infty\) limit holds Footnote 11: \(\tau\) derivatives commute with the pushforward in \(M_{1}\) as \(M_{1}\) does not depend on \(\tau\). \[\begin{split}\operatorname*{Res}_{w=(1-\tau)/N}&\partial_{\tau}^{m}\frac{\mathcal{N}_{N}^{d,\tau}(w\,|\,z_{0},z_{[n]})\exp_{N}(1/w)^{d}+\cdots}{\mathcal{D}_{N}^{d,\tau}(w\,|\,z_{0},z_{[n]})\exp_{N}(1/w)^{d}+\cdots}\\ &=\operatorname*{Res}_{w=(1-\tau)/N}\partial_{\tau}^{m}\frac{\mathcal{N}_{N}^{d,\tau}(w\,|\,z_{0},z_{[n]})}{\mathcal{D}_{N}^{d,\tau}(w\,|\,z_{0},z_{[n]})}\left(1+\mathcal{O}(w-(1-\tau)/N)^{N}\right)\\ &=\operatorname*{Res}_{w=(1-\tau)/N}\partial_{\tau}^{m}\frac{\mathcal{N}_{N}^{d,\tau}(w\,|\,z_{0},z_{[n]})}{\mathcal{D}_{N}^{d,\tau}(w\,|\,z_{0},z_{[n]})}\,.\end{split} \tag{66}\] but this time, after taking the derivatives in \(\tau\), the ratio of the leading order coefficients is converging to zero. Finally, we argue that this result, namely that \(\partial_{\tau}^{m}\omega_{g,n+1}^{N}(z_{0},z_{[n]})|_{\tau=0}\) exists and goes to zero in the limit, perhaps unsurprisingly, actually establishes the theorem. Note that we have the following expansion for sufficiently small \(\tau\) and generic choices of \(z_{1},\ldots,z_{n}\in\varSigma\) \[\omega_{g,n+1}^{N}(z_{0},z_{[n]})=\sum_{m=0}^{\infty}\frac{\tau^{m}}{m!}\partial_{\tau}^{m}\omega_{g,n+1}^{N}(z_{0},z_{[n]})|_{\tau=0}\,. \tag{67}\] Denote the radius of convergence of this sum as \(\rho_{N}(z_{0},z_{[n]})\). We claim that \(\rho_{N}\to\infty\) provided \(z_{i}\notin R\quad\forall i=0,\ldots,n\). To prove this claim we examine the singularity structure of \(\omega_{g,n+1}^{N}(z_{0},z_{[n]})=\omega_{g,n+1}^{N}(z_{0},z_{[n]};\tau)\). This is straightforward as these \(\omega_{g,n+1}^{N}\) only have poles at ramification points. The ramification points of \(x_{N}\) can be put into five categories: solutions of \(M_{1}(z)=N/(1-\tau)\); solutions of \(M_{1}(z)=-N/\tau\); poles of \(M_{1}\); poles of \(M_{0}\); zeros of \(dx_{N}\) that converge to elements of \(R_{0}\). For the first two cases we clearly see that, for fixed \(z\), these singularities go to infinity in the \(\tau\) plane. The next two cases can never create singularities in the \(\tau\) plane as, by assumption, \(z_{i}\notin R\). The fifth and final case is dealt with by computing \[\left(1+(\tau-1)\frac{M_{1}}{N}\right)^{N+1}\left(1+\tau\frac{M_{1}}{N}\right)^{-N+1}dx_{N}=\left(1+(\tau-1)\frac{M_{1}}{N}\right)\left(1+\tau\frac{M_{1}}{N}\right)dM_{0}+M_{0}dM_{1}\,, \tag{68}\] and noting that, for a fixed point on \(\varSigma\) that is not a zero of this differential in the limit, the only zero this has in the \(\tau\) plane shoots off to infinity in the limit \(N\to\infty\). As \(\rho_{N}\) is the distance between zero and the nearest singularity in the \(\tau\) plane we indeed have that \(\rho_{N}\to\infty\). 
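The last step — that, for fixed \(z\), the zeros in \(\tau\) of the right-hand side of (68) escape to infinity as \(N\to\infty\) — can be illustrated numerically. The sketch below fixes an illustrative \(M_{0}\), \(M_{1}\) and sample point of our own choosing (purely hypothetical data) and solves the resulting quadratic in \(\tau\):

```python
import numpy as np

# Illustrative (hypothetical) data: M_0 = z, M_1 = -z^3, evaluated at a fixed sample point z0.
z0 = 1.7
M0, dM0 = z0, 1.0
M1, dM1 = -z0**3, -3.0 * z0**2

for N in [10, 100, 1000, 10000]:
    a = M1 / N
    # (1 + (tau-1)a)(1 + tau a) dM0 + M0 dM1 = 0, expanded as a quadratic in tau:
    # a^2 dM0 tau^2 + (2a - a^2) dM0 tau + (1 - a) dM0 + M0 dM1 = 0
    coeffs = [a**2 * dM0, (2 * a - a**2) * dM0, (1 - a) * dM0 + M0 * dM1]
    roots = np.roots(coeffs)
    print(N, np.min(np.abs(roots)))  # the nearest zero in the tau-plane recedes as N grows
```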
Thus, given any fixed \(\tau\) we are inside the radius of convergence, _i.e._, \(|\tau|<\rho_{N}\), for all \(N\) sufficiently large so we may commute the limit in \(N\) with the infinite sum. This proves the theorem. Now that we know that correlators for transalgebraic spectral curves are well-defined, we can easily prove a number of corollaries. **Corollary 4.4**.: _Let \(\mathcal{S}\) be a compact transalgebraic admissible spectral curve. For \(2g+n-2\geqslant 1\), the correlators \(\omega_{g,n}\) constructed by topological recursion on \(\mathcal{S}\) satisfy the following properties:_ * _Symmetry: the_ \(\omega_{g,n}\) _are symmetric in all of their_ \(n\) _variables._ * _Pole structure: the_ \(\omega_{g,n}\) _have poles only at the ramification points of_ \(x\)_._ * _Residueless: the_ \(\omega_{g,n}\) _have vanishing residue at all points._ * _Homogeneity: rescaling_ \(\omega_{0,1}\) _by a constant_ \(c\in\mathbb{C}^{*}\) _to_ \(c\,\omega_{0,1}\) _results in a rescaling_ \(\omega_{g,n}\to c^{2-2g-n}\omega_{g,n}\)_._ Proof.: These properties are well-known for ordinary topological recursion, and were proved in [1, 2, 3]. They carry over as they hold for each curve in our sequence of spectral curves. We also give a direct formula for topological recursion on transalgebraic spectral curves, in a wide variety of cases, without using a sequence of correlators and taking limits. **Lemma 4.5**.: _Let \(\mathcal{S}\) be a compact transalgebraic admissible spectral curve. If \(M_{1}\) is a well-defined function of \(M_{2}\) we may use the following formula to recursively compute the correlators of topological recursion._ \[\begin{split}\omega_{g,n+1}(z_{0},z_{[n]})=&\operatorname*{Res}_{t=R}\sum_{m=2}^{\deg(x)}\left(\int_{*}^{t}B(z_{0},\cdot)\right)\sum_{C_{1}\sqcup\cdots\sqcup C_{j}\vdash t_{[m-1]}}\frac{(-1)^{1-\delta_{j,m-1}}}{j!}\operatorname*{Res}_{\begin{subarray}{c}t^{C_{l}}=R_{0},Z,\mathcal{Y}(t)\\ l=1,\ldots,j\end{subarray}}\;\operatorname*{Res}_{\begin{subarray}{c}t^{C_{l}}=R_{\infty}\\ l=1,\ldots,j\end{subarray}}\\ &\left(\prod_{l=1}^{j}\frac{x(t)}{x(t)-x(t^{C_{l}})}\prod_{t_{0}\in C_{l}\setminus\{t^{C_{l}}\}}\frac{x(t)}{x(t_{0})-x(t^{C_{l}})}\right)\frac{W_{g,n,m}(t,t_{[m-1]}\,|\,z_{[n]})}{\prod_{l=1}^{m-1}((xy)(t)-(xy)(t_{l}))}\,,\end{split} \tag{69}\] _where \(\mathcal{Y}(t)=M_{2}^{-1}\big{(}M_{2}(t)\big{)}\) and the residues at the infinite ramification points \(R_{\infty}\) are defined as_ \[\operatorname*{Res}_{t=R_{\infty}}\leftrightarrow\frac{1}{(2g-1)!}\lim_{w\to 0^{+}}\frac{\mathrm{d}^{2g-1}}{\mathrm{d}w^{2g-1}}M_{1\,*}\,, \tag{70}\] _where the expression on the right is to be interpreted as follows: we take the pushforward under the map \(M_{1}\) and define \(w=1/M_{1}\) so the infinite ramification points are all located at \(w=0\); the formula is then the standard one for a pole of order \(2g\) at \(w=0\) except we take the limit as \(w\to 0\) along the positive real axis._ Proof.: Adopting the notation of the proof of theorem 4.3, as \(\mathcal{N}_{N}^{d}(w\,|\,z_{0},z_{[n]})/\mathcal{D}_{N}^{d}(w\,|\,z_{0},z_{[n]})\) has a pole of order at most \(2g\) at \(w=1/N\), we will have that \(\mathcal{N}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})/\mathcal{D}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})\) will have a pole of order no more than \(2g\) at \(w=0\). Ergo, we can compute the topological recursion in the limit (note here, by assumption, there are no \(\nu\)). 
By definition, this is \[\frac{1}{(2g-1)!}\lim_{w\to 0^{+}}\frac{\mathrm{d}^{2g-1}}{ \mathrm{d}w^{2g-1}}\frac{\mathcal{N}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})\exp( \mathrm{d}/w)+\cdots}{\mathcal{D}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})\exp( \mathrm{d}/w)+\cdots}\] \[= \frac{1}{(2g-1)!}\lim_{w\to 0^{+}}\frac{\mathrm{d}^{2g-1}}{ \mathrm{d}w^{2g-1}}\frac{\mathcal{N}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}{ \mathcal{D}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}\left(1+0(\exp(-w^{-1}))\right)\] \[= \frac{1}{(2g-1)!}\lim_{w\to 0^{+}}\frac{\mathrm{d}^{2g-1}}{ \mathrm{d}w^{2g-1}}\frac{\mathcal{N}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}{ \mathcal{D}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}=\operatorname*{Res}_{w=0}\frac{ \mathcal{N}_{\infty}^{d}(w\,|\,z_{0},z_{[n]})}{\mathcal{D}_{\infty}^{d}(w\,| \,z_{0},z_{[n]})}.\qed\] _Remark 4.6_.: We believe that this formula works even when \(M_{1}\) is not a well-defined function of \(M_{2}\), but it is tricky to establish due to the possibility of the \(\exp(1/\nu(w))\) contributing to the leading order coefficients in the limit. If one could prove that the \(\exp(1/\nu(w))\) do not contribute to the leading order in the limit, the lemma would hold even when \(M_{1}\) is not a well-defined function of \(M_{2}\). ### Essential singularities only contribute for \(n=1\) As highlighted in remark 2.7, given transalgebraic functions \(x\) and \(y\) with exponential singularities on a compact Riemann surface \(\Sigma\), with \(xy\) meromorphic, one can define two distinct spectral curves: 1. A compact transalgebraic spectral curve, where the Riemann surface is taken to be \(\Sigma\) itself; 2. A non-compact meromorphic spectral curve, where the Riemann surface is taken to be \(\Sigma\setminus R_{\infty}\) (which ignores the essential singularities of \(x\)). In this section, we show that topological recursion on the compact transalgebraic spectral curve differs from topological recursion on the non-compact meromorphic spectral curve at only finitely many steps. More explicitly, what this means is that for all but finitely many \((g,n)\), the \(\omega_{g,n}\) defined by topological recursion on transalgebraic spectral curves (definition 4.2) may also be calculated using the recursive step of standard topological recursion (definition 2.8) ignoring essential singularities. We will prove this by analysing the N-dependence of the \(\omega_{g,n}^{N}\); we show that \(\lim_{N\to\infty}\omega_{g,n}^{N}\) has no principal part at \(R_{\infty}\) for all but finitely many \((g,n)\). We begin by examining the loop equations of proposition 2.9. We wish to prove that the \(\mathrm{i}\)th loop equation (corresponding to \(\mathcal{E}_{g,n,i}\)) can be written with all the same sheets rather than all different ones. First we define the non-regularised \((0,2)\) correlator \[\bar{\omega}_{0,2}(z_{1},z_{2})\coloneqq\omega_{0,2}(z_{1},z_{2})-\frac{\mathrm{ d}x(z_{1})\mathrm{d}x(z_{2})}{(x(z_{1})-x(z_{2}))^{2}}=-\sum_{\zeta\in f^{ \prime}(z_{1})}\omega_{0,2}(\zeta,z_{2})+\mathrm{regular}\,. \tag{71}\] Then we have the following result. 
**Proposition 4.7**.: _On a meromorphic spectral curve,_ \[\sum_{\begin{subarray}{c}Z\subset f_{\mathrm{i}}(z)\\ |Z|=\mathrm{i}\end{subarray}}\mathcal{E}_{g,n,i}(Z\,|\,z_{[n]})=(-1)^{\mathrm{i -1}}(\mathrm{i}-1)!\sum_{z^{\prime}\in f_{\mathrm{i}}(z)}\mathcal{E}_{g,n,i}( z^{\prime},\ldots,z^{\prime}\,|\,z_{[n]})+\mathcal{O}(z^{r_{a}-1-2\,g}\mathrm{d}z^{ \mathrm{i}}) \tag{72}\] _for a local coordinate \(z\) near a ramification point \(a\) with \(s_{a}=1\), where the bar indicates we replace any occurrence of \(\omega_{0,2}\) with \(\bar{\omega}_{0,2}\)._ _For an admissible transalgebraic spectral curve \(\mathcal{S}\), for fixed \(g\), either side of the expression is holomorphic of arbitrarily high vanishing order on \(S_{\mathrm{N}}\) near points \(a\in R_{\infty}^{\mathsf{N}}\) as \(N\to\infty\)._ Proof.: We will repeatedly apply the linear loop equations on the meromorphic spectral curves converging to our transalgebraic one. The linear loop equation, the \(\mathrm{i}=1\) case of proposition 2.9, always holds up to \(\mathcal{O}(x^{0}(z)\mathrm{d}x(z))=\mathcal{O}(z^{r_{a}-1}\mathrm{d}z)\). By assumption, \(s_{a}=1\), so by lemma 2.10, the pole order of \(\omega_{g,n}\) at \(a\) is bounded by \(2g\). Writing \(\zeta_{k}\) for the local Galois conjugates of a \(\zeta_{1}\), \[\sum_{k_{1}}\sum_{k_{2}\neq k_{1}}\cdots\sum_{k_{i}\neq k_{1}, \ldots,k_{i-1}}\mathcal{E}_{g,n,i}(\zeta_{k_{1}},\ldots,\zeta_{k_{i}}(z)\,|\, z_{[n]})\\ \sim-\sum_{k_{1}}\sum_{k_{2}\neq k_{1}}\cdots\sum_{k_{i-1}\neq k_ {1},\ldots,k_{i-2}}\sum_{j=1}^{i-1}\mathcal{E}_{g,n,i}(\zeta_{k_{1}},\ldots, \zeta_{k_{i-1}},\overline{\zeta_{k_{j}}}\,|\,z_{[n]})\,, \tag{73}\] where the bar over the entry indicates we should replace every instance of \(\omega_{0,2}\) evaluated at this entry with \(\bar{\omega}_{0,2}\) and \(\sim\) means up to \(\mathcal{O}(z^{r_{a}-1-2\,g}\mathrm{d}z^{\mathrm{i}})\). We then note the summation is entirely symmetric in \(k_{1},\ldots,k_{i-1}\) (we are summing over all permutations of \(\mathrm{i}-1\) indices where no two indices are the same) so all terms in the sum over \(\mathrm{j}\) are equal to the term with \(\mathrm{j}=1\) and therefore the above is equal to \[-(\mathrm{i}-1)\sum_{k_{1}}\sum_{k_{2}\neq k_{1}}\cdots\sum_{k_{i- 1}\neq k_{1},\ldots,k_{i-2}}\mathcal{E}_{g,n,i}(\zeta_{k_{1}},\ldots,\zeta_{k _{i-1}},\overline{\zeta_{k_{1}}}\,|\,z_{[n]})\\ \sim(\mathrm{i}-1)\sum_{k_{1}}\sum_{k_{2}\neq k_{1}}\cdots\sum_{ k_{i-2}\neq k_{1},\ldots,k_{i-3}}\sum_{j=1}^{\mathrm{i}-2}\mathcal{E}_{g,n,i}( \zeta_{k_{1}},\ldots,\zeta_{k_{i-2}},\overline{\zeta_{k_{j}}},\overline{ \zeta_{k_{1}}}\,|\,z_{[n]}). \tag{74}\] By the same argument we may remove the sum over \(\mathrm{j}\) by picking up a factor of \(\mathrm{i}-2\). Repeating this argument a further \(\mathrm{i}-3\) times yields the desired first result. For the second, we use that for any \(a\in R_{\infty}^{\mathsf{N}}\) by admissibility, \(s_{a}=1\), so we may apply the first result, and moreover the order of the loop equation from proposition 2.9 is \(\mathcal{O}(z^{r_{a}-\mathrm{i}}\mathrm{d}z^{\mathrm{i}})\), as \(s_{a}=1\) and \(\mathrm{i}\leqslant r_{a}\). Then \(r_{a}=\mathcal{O}(N)\), so indeed the vanishing order of either side grows arbitrarily large as \(N\to\infty\). Now we use our rewritten loop equations to derive the desired \(N\)-dependence of the \(\omega_{g,n}^{\mathsf{N}}\). 
Fixing an essential singularity \(a\) of \(x\), we define a local coordinate near this essential singularity through \(\zeta^{-m_{1}}=M_{1}\); using this notation, the essential singularity corresponds to \(\zeta(a)=0\). Setting \(\tau=0\) in \(x_{N}\), let \(\vartheta\) be a primitive \(m_{1}\mathrm{th}\) root of unity and define the coordinate \(t\) such that \(t^{-N}=x_{N}\) so that \(t(\zeta)=M_{0}(\zeta)^{-1/N}(1-\frac{1}{\zeta^{m_{1}}N})\) where the branch of the \(N\mathrm{th}\) root is chosen such that \(M_{0}(\zeta=\vartheta^{m}N^{-1/m_{1}})\) does not lie on the cut for any values of \(m\) and \(N\) and the limit value of \(t\) (\(\lim_{N\to\infty}t\) is a constant function) is not a pole of any of the correlators. Our claim is then **Lemma 4.8**.: _With the above conditions, the principal part of \(\omega_{g,n+1}^{\mathsf{N}}(t,z_{[n]})\) at \(\zeta(t)=\vartheta^{m}N^{-1/m_{1}}\) is given by_ \[\operatorname*{Res}_{\zeta=\vartheta^{m}N^{-1/m_{1}}}\Big{(}\int_{\vartheta^{m}N^{-1/m_{1}}}^{\zeta}\omega_{0,2}(t,\cdot)\Big{)}\omega_{g,n+1}^{\mathsf{N}}(\zeta,z_{[n]})=N^{1-n+\frac{m_{2}}{m_{1}}\chi_{g,n+1}}\sum_{l=1-2g}^{-1}w_{g,n}^{l,m}(z_{[n]})\,\mathrm{d}\xi_{l}^{m}(t)\,, \tag{75}\] _where_ \[\mathrm{d}\xi_{\mathrm{l}}^{m}(\mathrm{t}^{\prime})=\operatorname*{Res}_{\zeta=\vartheta^{m}\mathrm{N}^{-1/m_{1}}}\Big{(}\int_{\vartheta^{m}\mathrm{N}^{-1/m_{1}}}^{\zeta}\omega_{0,2}(\mathrm{t}^{\prime},\cdot)\Big{)}\mathrm{t}(\zeta)^{\mathrm{l}}\mathrm{dt}(\zeta),\qquad\mathrm{l}<0\,, \tag{76}\] \(\chi_{g,n}=2-2g-n\) _is the Euler characteristic, and \(w_{g,n}^{l,m}=\mathcal{O}(\mathrm{N}^{0})\)._ We require a lemma for the proof. **Lemma 4.9**.: _With the notation as above,_ \[\bar{\omega}_{0,2}^{\mathrm{N}}(\mathrm{t},\mathrm{t})=\frac{\mathrm{dt}\,\mathrm{du}}{(\mathrm{t}-\mathrm{u})^{2}}-\frac{\mathrm{dt}^{\mathrm{N}}\mathrm{du}^{\mathrm{N}}}{(\mathrm{t}^{\mathrm{N}}-\mathrm{u}^{\mathrm{N}})^{2}}\bigg{|}_{\mathrm{u}=\mathrm{t}}+\mathcal{O}(\mathrm{t}^{0}\mathrm{dt}^{2})=\frac{(\mathrm{N}-1)(\mathrm{N}+1)(\mathrm{5N}-6)}{24\mathrm{N}}\frac{\mathrm{dt}^{2}}{\mathrm{t}^{2}}+\mathcal{O}(\mathrm{t}^{0}\mathrm{dt}^{2})\,. \tag{77}\] Proof.: The first equality holds by general invariance of the principal part of \(\frac{\mathrm{d}z\mathrm{d}w}{(z-w)^{2}}\) under change of local coordinates. The second equality is a direct calculation, using geometric series. 
\[\frac{\mathrm{dt}\,\mathrm{du}}{(\mathrm{t}-\mathrm{u})^{2}}- \frac{\mathrm{dt}^{\mathrm{N}}\mathrm{du}^{\mathrm{N}}}{(\mathrm{t}^{\mathrm{ N}}-\mathrm{u}^{\mathrm{N}})^{2}} =\Big{(}\Big{(}\sum_{m=0}^{N-1}\mathrm{t}^{\mathrm{N}-m-1}\mathrm{ u}^{m}\Big{)}^{2}-\mathrm{N}^{2}\mathrm{t}^{\mathrm{N}-1}\mathrm{u}^{\mathrm{N}-1} \Big{)}\frac{\mathrm{dt}\,\mathrm{du}}{(\mathrm{t}^{\mathrm{N}}-\mathrm{u}^{ \mathrm{N}})^{2}}\] \[=\sum_{k=0}^{\mathrm{N}-2}(k+1)\Big{(}\mathrm{t}^{2N-2-k}\mathrm{ u}^{k}+\mathrm{t}^{k}\mathrm{u}^{2N-2-k}-2t^{\mathrm{N}-1}\mathrm{u}^{\mathrm{N}-1} \Big{)}\frac{\mathrm{dt}\,\mathrm{du}}{(\mathrm{t}^{\mathrm{N}}-\mathrm{u}^{ \mathrm{N}})^{2}}\] \[=\sum_{k=0}^{\mathrm{N}-2}(k+1)\sum_{l=0}^{\mathrm{N}-2-k}\frac{ (\mathrm{t}^{2N-3-k-l}\mathrm{u}^{k+1}-\mathrm{t}^{k+l}\mathrm{u}^{2N-3-k-l}) \mathrm{dt}\,\mathrm{du}}{(\mathrm{t}-\mathrm{u})\big{(}\sum_{m=0}^{N-1} \mathrm{t}^{\mathrm{N}-1-m}\mathrm{u}^{m}\big{)}^{2}}\] \[=\sum_{j=0}^{\mathrm{N}-2}\sum_{k=0}^{j}(k+1)\frac{(\mathrm{t}^{2 N-3-j}\mathrm{u}^{j}-\mathrm{t}^{j}\mathrm{u}^{2N-3-j})\mathrm{dt}\,\mathrm{du}}{( \mathrm{t}-\mathrm{u})\big{(}\sum_{m=0}^{N-1}\mathrm{t}^{\mathrm{N}-1-m} \mathrm{u}^{m}\big{)}^{2}}\] \[=\sum_{j=0}^{\mathrm{N}-2}\frac{(j+1)(j+2)}{2}\mathrm{t}^{j} \mathrm{u}^{j}\sum_{i=0}^{2N-4-2j}\frac{\mathrm{t}^{2N-4-2j-i}\mathrm{u}^{i} \mathrm{dt}\,\mathrm{du}}{\big{(}\sum_{m=0}^{N-1}\mathrm{t}^{\mathrm{N}-1-m} \mathrm{u}^{m}\big{)}^{2}}\,.\] At this point, we may set \(\mathrm{u}=\mathrm{t}\), and obtain \[\tilde{\omega}_{0,2}^{\mathrm{N}}(\mathrm{t},\mathrm{t}) =\sum_{j=0}^{\mathrm{N}-2}\frac{(j+1)(j+2)}{2}(2\mathrm{N}-3-2j) \frac{\mathrm{t}^{2N-4}\mathrm{dt}^{2}}{\big{(}\mathrm{N}\mathrm{t}^{\mathrm{ N}-1}\big{)}^{2}}+\mathcal{O}(\mathrm{t}^{\phi}\mathrm{dt}^{2})\] \[=\frac{(\mathrm{N}-1)(\mathrm{N}+1)(5\mathrm{N}-6)}{24\mathrm{N}} \frac{\mathrm{dt}^{2}}{\mathrm{t}^{2}}+\mathcal{O}(\mathrm{t}^{\phi}\mathrm{dt} ^{2})\,.\qed\] Proof of lemma 4.8.: The principal part is given by the left-hand side because of the projection property of proposition 2.9. To prove the equality, we use induction on the negative of the Euler characteristic \(-\chi_{g,n}=2g-2+n\). We start with \(\omega_{0,1}\). However, we modify \(\omega_{0,1}\) locally near each ramification point \(z=\vartheta^{m}\mathrm{N}^{1/m_{1}}\) by subtracting \(-\mathrm{N}^{-1}\mathfrak{y}(\vartheta^{m}\mathrm{N}^{1/m_{1}})\,\mathrm{d}\log (\mathrm{x}_{\mathrm{N}})\); as this is a pure function of \(\mathrm{x}\), this modification does not affect topological recursion in any way, and it ensures \(\omega_{0,1}\) satisfies the local linear loop equations at the ramification points colliding at \(\mathfrak{a}\). Examining the expansion coefficients in \(\mathrm{t}\) of \(M_{2}\) around \(\mathrm{t}(z=\vartheta^{m}\mathrm{N}^{1/m_{1}})=0\), \[M_{2}(\mathrm{t})=\mathrm{N}^{\frac{m_{2}}{m_{1}}}\vartheta^{m\,m_{2}}\sum_{ \mathrm{t}=0}^{\infty}y_{1}\xi_{\mathrm{l}}(\mathrm{t})\,,\] where the \(y_{1}\), to leading order in \(\mathrm{N}\), do not depend on \(\mathrm{N}\) or \(m\). As \(\omega_{0,1}(\mathrm{t})=-\mathrm{N}(M_{2}(\mathrm{t})-M_{2}(\mathrm{t}=0)) \mathrm{dt}/\mathrm{t}\), the claimed result holds in this case. The claim for \(\omega_{0,2}\) also holds: \[\omega_{0,2}(\mathrm{t},z_{1})=\sum_{\mathrm{l}=1}^{\infty}w_{0,2}^{\mathrm{l},m}(z_{1})\xi_{\mathrm{l}}^{m}(\mathrm{t}),\qquad z\to\vartheta^{m}\mathrm{N}^{1/ m_{1}}\,.\] However, we will be particularly concerned with not \(\omega_{0,2}\) but \(\bar{\omega}_{0,2}\). 
There are two cases we need to examine when there is such a term in \(\mathcal{E}_{g,n,i}(\zeta_{k_{1}},\ldots,\zeta_{k_{i}}\,|\,z_{[n]})\). The first is when a term has a factor of \[\bar{\omega}_{0,2}(\zeta_{k_{a}},z_{b})=\omega_{0,2}(\zeta_{k_{a}},z_{b})-\frac{\mathrm{d}x_{N}(z)\mathrm{d}x_{N}(z_{b})}{(x_{N}(z)-x_{N}(z_{b}))^{2}}.\] Here, at the ramification points of interest, \(t=0\), we note that the second term has a zero of order \(N-1\) and so will not need to be taken into account in examining the loop equations. The first term is just \(\omega_{0,2}\) and so follows the claim. The second case is when there is a factor of \(\bar{\omega}_{0,2}(\zeta_{k_{a}},\zeta_{k_{a}})\). In this case, by lemma 4.9, \(\bar{\omega}_{0,2}(\zeta_{k_{a}},\zeta_{k_{a}})=\mathcal{O}(N^{2})\frac{\mathrm{d}t^{2}}{t^{2}}\). Now we are equipped to perform the induction step. Let \(\alpha\) be a primitive \(N\)th root of unity and examine the loop equation from proposition 4.7 \[\sum_{r=1}^{N}\mathcal{E}_{g,n,i}(\alpha^{r}t,\ldots,\alpha^{r}t\,|\,z_{[n]})=\sum_{r=1}^{N}\sum_{\mu\vdash[i]}\sum_{\begin{subarray}{c}\sqcup_{k}N_{k}=[n]\\ \sum_{k=1}^{\ell(\mu)}g_{k}=g+\ell(\mu)-i\end{subarray}}\prod_{k=1}^{\ell(\mu)}\omega_{g_{k},|\mu_{k}|+|N_{k}|}(\alpha^{r}t,\ldots,\alpha^{r}t,z_{N_{k}}).\] Let us first examine the terms that contain a factor of \(\omega_{g,n+1}\). These are \[i\sum_{r=1}^{N}\omega_{g,n+1}(\alpha^{r}t,z_{[n]})\omega_{0,1}(\alpha^{r}t)^{i-1}.\] Each factor of \(\omega_{0,1}\) gives an \(N\) dependence of \(N^{1+\frac{m_{2}}{m_{1}}}\) and the sum over \(r\) gives an additional (possible) factor of \(N\). Thus the highest order \(N\) dependence the coefficient of \(\omega_{g,n+1}(\alpha^{r}t,z_{[n]})\) can have is \(N^{1+(i-1)(1+\frac{m_{2}}{m_{1}})}\). Now let us examine the highest order \(N\) dependence another term may have using the induction assumption. Examining the expansion of \(\bar{\omega}_{0,2}(\alpha^{r}t,\alpha^{r}t)\) and the induction assumption, we see that the highest order occurs when the partition \(\mu\) consists only of singletons and pairs, where each pair has genus zero and carries none of the \(z_{[n]}\), so that it yields a \(\bar{\omega}_{0,2}(\alpha^{r}t,\alpha^{r}t)\) factor. We will calculate the largest possible \(N\) dependence of such a term. Let \(s\) be the number of singletons and \(d\) the number of pairs. The pairs give us \(d\) factors \(\bar{\omega}_{0,2}(\alpha^{r}t,\alpha^{r}t)\), which give an \(N\) dependence of \(N^{2d}\). The singletons give a total \(N\) dependence, by the induction assumption, of \(N^{s-n+\frac{m_{2}}{m_{1}}(i-1+\chi_{g,n+1})}\). The sum then gives us an additional factor of \(N\), for a total dependence of (using that \(2d+s=i\)) \[N^{2d+s-n+\frac{m_{2}}{m_{1}}(i-1+\chi_{g,n+1})+1}=N^{i+1-n+\frac{m_{2}}{m_{1}}(i-1+\chi_{g,n+1})}.\] Thus, \(\omega_{g,n+1}\) indeed has the claimed \(N\) dependence and we are done. As a direct consequence of lemma 4.8, we find the following corollary. **Corollary 4.10**.: _Let \(a\) be a pole of \(M_{1}\). 
Then all correlators \(\omega_{g,n}^{N}\) with \(2gm_{2}\geqslant(2-n)(m_{1}+m_{2})\) have vanishing principal part at \(a\) in the limit as \(N\to\infty\); in particular, this includes all correlators with \(n\geqslant 2\)._ Proof.: Given that \(t=M_{0}(z)^{-1/N}(1-z^{m_{1}}/N)\), we have the following expansion, where the \(a_{l}=\mathcal{O}(N^{0})\) are order-one coefficients that, to leading order in \(N\), do not depend on \(m\): \[t=\sum_{l=1}^{\infty}\frac{a_{l}}{\vartheta^{m}N^{l/m_{1}}}(z-\vartheta^{m}N^{1/m_{1}})^{l}.\] Ergo, lemma 4.8 implies that we have the expansion \[\omega_{g,n+1}^{N}(z,z_{[n]})=N^{1-n+\frac{m_{2}}{m_{1}}\chi_{g,n+1}}\sum_{l=2}^{2g}\frac{A_{l,m}(z_{[n]})}{(z/(\vartheta^{m}N^{1/m_{1}})-1)^{l}}\mathrm{d}\frac{z}{\vartheta^{m}N^{1/m_{1}}}+\mathcal{O}\left((z-\vartheta^{m}N^{1/m_{1}})^{0}\right)\,, \tag{78}\] where the \(A_{l,m}=\mathcal{O}(N^{0})\). So if \(1-n+\frac{m_{2}}{m_{1}}\chi_{g,n+1}-\frac{1}{m_{1}}<0\), the limit \(N\to\infty\) vanishes. Changing \(n+1\to n\), this is equivalent to the condition in the corollary. **Corollary 4.11**.: _The correlators \(\omega_{g,n}\) with \(2g-2+n>0\) are regular at essential singularities where \(m_{2}\geqslant m_{1}\); in particular, this includes all essential singularities where \(M_{1}\) has only a simple pole \((m_{1}=1)\)._ Proof.: For \(n>1\) we have established the correlators may never have poles. For \(n=1\) we know the correlators do not have poles if \((2g-1)m_{2}\geqslant m_{1}\) by the previous corollary; this is trivially true if \(m_{2}\geqslant m_{1}\) as \(g\geqslant 1\). **Proposition 4.12**.: _Let \(\mathcal{S}\) be a compact transalgebraic admissible spectral curve. For \(2gm_{2}\geqslant(2-n)(m_{1}+m_{2})\), the correlators \(\omega_{g,n}\) defined via topological recursion on \(\mathcal{S}\) (definition 4.2) are regular at all essential singularities \(a\in R_{\infty}\). In particular, this includes all correlators with \(n\geqslant 2\)._ _The correlators \(\omega_{g,n}\) satisfying the condition above may be calculated via the topological recursion of definition 2.8 with residues only at the finite ramification points, but where the \(\omega_{g_{k},n_{k}}\) on the right-hand side of equation (12) are obtained by the topological recursion of definition 4.2._ Proof.: The first statement follows immediately from corollary 4.10 and the definition of transalgebraic topological recursion as a limit. For the second statement, note that the individual contributions of ramification points in equation (12) are continuous as the spectral curve varies without the type of ramification changing.12 This is the case for the \(a\in R_{0}^{N}\), which converge to \(R_{0}\). By the projection property, equation (18), \(\omega_{g,n}^{N}\) are the sums of their principal parts, and by corollary 4.10, the limit of the contributions at elements of \(R_{\infty}^{N}\) vanishes. As \(\omega_{g,n}\) is defined as the limit, this proves the second statement. Footnote 12: To be precise, this is proven for ramification points of arbitrary order in [2]. _Remark 4.13_.: Proposition 4.12 does _not_ mean that for \(2gm_{2}\geqslant(2-n)(m_{1}+m_{2})\), the correlators \(\omega_{g,n}\) calculated via topological recursion on the transalgebraic spectral curve (definition 4.2) are equal to the ones calculated from topological recursion on the non-compact meromorphic spectral curve that ignores essential singularities. 
Rather, the contributions from the essential singularities at higher Euler characteristic propagate to all \((g,n)\) through the recursion at finite ramification points. However, as the inequality only fails for \(n=1\) and \(2g-1\leqslant\frac{m_{1}}{m_{2}}\), it does mean that for any given transalgebraic spectral curve, the limit definition of topological recursion only has to be used a finite number of times, and can then be disregarded for the remaining correlator calculations. Now we provide a bound on the order of the poles of the correlators at the infinite ramification points. **Proposition 4.14**.: _Let \(\mathcal{S}\) be a compact transalgebraic admissible spectral curve. Let \(a\in R_{\infty}\) be an infinite ramification point. Suppose that \(M_{1}\) has a pole of order \(m_{1}\) at \(a\), and let \(m_{2}\) be the order of the pole of \(xy\) at \(a\). Then \(\omega_{g,1}\) has a pole of order no greater than \(m_{2}(1-2g)+m_{1}+1\) at \(a\)._ Proof.: Begin with the expansion we found before in equation (78), which for \(n=0\) reads \[\omega_{g,1}^{N}(z)=N^{1+\frac{m_{2}}{m_{1}}(1-2g)}\sum_{l=2}^{2g}\frac{A_{l,m}}{(z/(\vartheta^{m}N^{1/m_{1}})-1)^{l}}\mathrm{d}\frac{z}{\vartheta^{m}N^{1/m_{1}}}+\mathcal{O}\left((z-\vartheta^{m}N^{1/m_{1}})^{0}\right)\,, \tag{79}\] where \(A_{l,m}=\mathcal{O}(N^{0})\). We then change coordinate \(z=\zeta^{-1}\), such that \(\zeta(a)=0\) and \(\zeta^{-m_{1}}=M_{1}\), so we obtain \[\omega_{g,1}^{N}(\zeta)\sim N^{1+\frac{m_{2}}{m_{1}}(1-2g)-\frac{1}{m_{1}}}\sum_{l=2}^{2g}\frac{A_{l,m}\vartheta^{-m}}{(\vartheta^{-m}N^{-1/m_{1}}\zeta^{-1}-1)^{l}}\mathrm{d}\zeta^{-1}\,, \tag{80}\] where we ignore the non-polar part, as it does not contribute to the poles for an admissible spectral curve. For a fixed small \(\zeta\) take \(N\) large enough that \(N^{-1/m_{1}}\zeta^{-1}\) is small, and use this to expand denominators as geometric series. We see that the coefficient of \(\zeta^{-k}d\zeta^{-1}=-\zeta^{-k-2}d\zeta\) is then \(\mathcal{O}(N^{1+\frac{m_{2}}{m_{1}}(1-2g)-\frac{k+1}{m_{1}}})\), so for this to persist in the limit, we require that \[0 \leqslant 1+\frac{m_{2}}{m_{1}}(1-2g)-\frac{k+1}{m_{1}}\] \[0 \leqslant m_{1}+m_{2}(1-2g)-k-1\] \[k \leqslant m_{1}+m_{2}(1-2g)-1\,,\] which after a shift of \(2\) for \(k\) exactly gives the order in the proposition. We then conjecture that the principal parts of the correlators at essential singularities actually take a nice form. **Conjecture 4.15**.: _Let \(\mathcal{S}\) be a compact transalgebraic admissible spectral curve. The contribution to the correlator \(\omega_{g,n}\) from the infinite ramification points \(R_{\infty}\) is given explicitly by the following formula_ \[\sum_{a\in R_{\infty}}\operatorname*{Res}_{t=a}\left(\int_{a}^{t}\omega_{0,2}(z_{0},\cdot)\right)\omega_{g,n}(t,z_{[n-1]})\] \[=\delta_{n,1}\frac{(2^{1-2g}-1)B_{2g}}{(2g)!}\sum_{a\in R_{\infty}}\operatorname*{Res}_{t=a}\left(\int_{a}^{t}\omega_{0,2}(z_{0},\cdot)\right)\mathrm{d}\left(\frac{\mathrm{d}}{\mathrm{d}M_{2}(t)}\right)^{2g-1}\log(x(t)) \tag{81}\] \[=\delta_{n,1}\frac{(2^{1-2g}-1)B_{2g}}{(2g)!}\sum_{a\in R_{\infty}}\operatorname*{Res}_{t=a}\left(\int_{a}^{t}\omega_{0,2}(z_{0},\cdot)\right)\mathrm{d}M_{2}(t)\left(\frac{\mathrm{d}}{\mathrm{d}M_{2}(t)}\right)^{2g}M_{1}(t)\,,\] _where \(B_{2g}\) denotes the \(2g\)th Bernoulli number. This formula implies that the only correlators with poles at essential singularities are those with \(n=1\). 
Furthermore, for admissible spectral curves, poles of \(M_{1}\) are poles of \(M_{2}\), so only finitely many \(\omega_{g,1}\) have poles._ The evidence we have for this conjecture is as follows: * the conjecture agrees with the bounds on the order of the poles in proposition 4.14; * if we replace \(x\) by \(x_{N}\) (with \(\tau=0\)) and \(R_{\infty}\) by the solutions of \(M_{1}=N\) in the above formula we reproduce the correct pole structure for finite \(N\) and \(n=1\), i.e., a pole of order \(2g\) (see lemma 2.10 with \(s_{a}=1\)); * the \(\omega_{g,n}\) will have vanishing principal parts at essential singularities for \(n\geqslant 2\) by corollary 4.10; * we prove the conjecture for the \(r\)-Atlantes Hurwitz curves in corollary 6.17; * the conjecture holds for \(g=1\) by the following proposition. **Proposition 4.16**.: _For \(g=1\), conjecture 4.15 holds._ Proof.: For \(n\geqslant 2\) the result holds by corollary 4.10, so we concentrate on the \(n=1\) case. Take a large positive integer \(N\) with a corresponding primitive \(N\)th root of unity \(\alpha\) and fix a ramification point of \(x_{N}\) (we use \(\tau=0\)), which we denote \(a\), such that \(M_{1}(a)=N\). Fixing a local coordinate \(t^{-N}=x_{N}\) we write the quadratic local loop equation for \(\omega_{1,1}^{N}\) at \(a\). Footnote 13: When we consider the local loop equations about \(a\) we need \(\omega_{0,1}^{N}\) to be regular at \(a\). This is accomplished by taking the local \(\omega_{0,1}^{N}\) to be \([M_{2}(\alpha^{i}t)-M_{2}(a)]\,\mathrm{d}\log(x_{N}(t))\), which differs from the original by a pure function of \(x_{N}\) and therefore does not change the other correlators. See the proof of lemma 4.8 for further explanation. \[\frac{1}{2!}\frac{dx_{N}(t)}{x_{N}(t)}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}2\left[M_{2}(\alpha^{i}t)-M_{2}(a)\right]\omega_{1,1}(\alpha^{j}t)+\frac{1}{2!}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\omega_{0,2}(\alpha^{i}t,\alpha^{j}t)=O\left(\frac{dx_{N}(t)^{2}}{x_{N}(t)}\right). \tag{82}\] By lemma 2.10 (with \(s_{a}=1\)) \(\omega_{1,1}^{N}\) has only a double pole at \(a\); denote the coefficient of this double pole as \(A\). Therefore, \[\left[M_{2}(\alpha^{i}t)-M_{2}(a)\right]\omega_{1,1}(\alpha^{i}t)=-\alpha^{i-j}M_{2}^{\prime}(a)\frac{A}{N}\frac{dx_{N}(t)}{x_{N}(t)}+O(\mathrm{d}t), \tag{83}\] where we assume that \(N\) is chosen so large that \(M_{2}^{\prime}(a)\neq 0\). Furthermore, from [1, Lemma A.5] we have \[\frac{1}{2!}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\omega_{0,2}(\alpha^{i}t,\alpha^{j}t)=-\frac{N^{2}-1}{24N}\left(\frac{dx_{N}(t)}{x_{N}(t)}\right)^{2}. \tag{84}\] Putting these two results together we obtain \[A=\frac{1}{M_{2}^{\prime}(a)}\frac{N^{2}-1}{24N}, \tag{85}\] and we can then repackage this result in a more suggestive form (the base point \(*\) is arbitrary) \[\operatorname*{Res}_{t=a}\left(\int_{*}^{t}\omega_{0,2}(z_{0},\cdot)\right)\omega_{1,1}^{N}(t)=-\frac{N^{2}-1}{24N^{2}}\operatorname*{Res}_{t=a}\left(\int_{*}^{t}\omega_{0,2}(z_{0},\cdot)\right)\mathrm{d}\frac{\mathrm{d}}{\mathrm{d}M_{2}(t)}\log(x_{N}(t)). \tag{86}\] This holds for every \(a\) with \(M_{1}(a)=N\). Ergo, we can sum both sides over all such \(a\). 
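As an aside, the root-of-unity identity (84) quoted from [1, Lemma A.5] is easy to test numerically. The following minimal sketch (our own illustration, not the cited proof) checks the scalar content of (84), with \(\omega_{0,2}\) represented locally by \(\mathrm{d}t_{1}\mathrm{d}t_{2}/(t_{1}-t_{2})^{2}\) and using \((\mathrm{d}x_{N}/x_{N})^{2}=N^{2}\,\mathrm{d}t^{2}/t^{2}\):

```python
import numpy as np

# Check: with alpha = exp(2*pi*i/N),
#   (1/2) * sum_{i != j} alpha^(i+j) / (alpha^i - alpha^j)^2  ==  -N (N^2 - 1) / 24,
# which is the coefficient identity behind (84) in the coordinate t.
for N in [2, 3, 5, 8, 13]:
    alpha = np.exp(2j * np.pi / N)
    s = 0.0 + 0.0j
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            if i != j:
                s += alpha ** (i + j) / (alpha ** i - alpha ** j) ** 2
    s *= 0.5
    print(N, s.real, -N * (N ** 2 - 1) / 24)  # the two columns should agree
```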
Moreover, near the elements of \(R_{\infty}\) (poles of \(M_{1}\)) the integrands will only have poles at such \(a\); we can therefore replace this sum over residues by integration over a contour \(\Gamma\) that is the disjoint union of small circles around each element of \(R_{\infty}\) \[\int_{\Gamma(\mathrm{t})}\left(\int_{*}^{\mathrm{t}}\omega_{0,2}(z_{0},\cdot)\right)\omega_{1,1}^{\mathrm{N}}(\mathrm{t})=-\frac{\mathrm{N}^{2}-1}{24\mathrm{N}^{2}}\int_{\Gamma(\mathrm{t})}\left(\int_{*}^{\mathrm{t}}\omega_{0,2}(z_{0},\cdot)\right)\mathrm{d}\frac{\mathrm{d}}{\mathrm{d}M_{2}(\mathrm{t})}\log(\mathrm{x}_{\mathrm{N}}(\mathrm{t})). \tag{87}\] Noting that the second Bernoulli number is \(B_{2}=1/6\) so \((2^{1-2}-1)B_{2}/2!=-1/24\), the claimed result holds upon taking the \(\mathrm{N}\to\infty\) limit of the above expression. Let us show this explicitly in an example. **Example 4.17**.: Let \(z\) be an affine coordinate on \(\mathbb{P}^{1}\), \(q,r\in\mathbb{Z}_{\geqslant 1}\), and consider the spectral curve \[\mathcal{S}=\left(\mathbb{P}^{1},\mathrm{x}(z)=z\mathrm{e}^{-z^{qr}},\mathrm{y}(z)=z^{q-1}\mathrm{e}^{z^{qr}}\right), \tag{88}\] which we will call the \(q\)-orbifold \(r\)-atlantes Hurwitz curve. Here we will use lemma 4.5 to calculate the contribution from the essential singularity at infinity to the correlator \(\omega_{1,1}\). Given our spectral curve we have \(\mathrm{M}_{1}(z)=-z^{qr}\), \(\mathrm{M}_{2}(z)=z^{q}\), and \(\mathrm{W}_{1,0,2}(\mathrm{t},\mathrm{t}_{1}\,|\,\emptyset)=\omega_{0,2}(\mathrm{t},\mathrm{t}_{1})\). Letting \(\vartheta\) be a primitive \(q\)th root of unity we see that this contribution will be \[\operatorname*{Res}_{\mathrm{t}=\infty}\left(\int_{\infty}^{\mathrm{t}}\omega_{0,2}(z_{0},\cdot)\right)\omega_{1,1}(\mathrm{t})=\operatorname*{Res}_{\mathrm{t}=\infty}\frac{\mathrm{d}z_{0}}{z_{0}-\mathrm{t}}\operatorname*{Res}_{\mathrm{t}_{1}=R_{0},\mathcal{Y}(\mathrm{t})}\frac{\mathrm{t}\mathrm{e}^{-\mathrm{t}^{qr}}}{\mathrm{t}\mathrm{e}^{-\mathrm{t}^{qr}}-\mathrm{t}_{1}\mathrm{e}^{-\mathrm{t}_{1}^{qr}}}\frac{\omega_{0,2}(\mathrm{t},\mathrm{t}_{1})}{\mathrm{t}^{q}-\mathrm{t}_{1}^{q}}. \tag{89}\] The residues at \(\mathrm{t}_{1}=R_{0}\) will drop out, as the integrand has no poles here. For the residues at \(\mathrm{t}_{1}=\mathcal{Y}(\mathrm{t})\) we must be careful to distinguish between the trivial and non-trivial sheets of \(\mathrm{M}_{2}\), as the pole structure of the integrand is different in these two cases. First, we look at the non-trivial sheets, where there is only a simple pole \[\sum_{m=2}^{q-1}\operatorname*{Res}_{\mathrm{t}_{1}=\vartheta^{m}\mathrm{t}}\frac{\mathrm{t}\mathrm{e}^{-\mathrm{t}^{qr}}}{\mathrm{t}\mathrm{e}^{-\mathrm{t}^{qr}}-\mathrm{t}_{1}\mathrm{e}^{-\mathrm{t}_{1}^{qr}}}\frac{\omega_{0,2}(\mathrm{t},\mathrm{t}_{1})}{\mathrm{t}^{q}-\mathrm{t}_{1}^{q}}=\sum_{m=2}^{q-1}\frac{1}{1-\vartheta^{m}}\frac{\vartheta^{m}}{-q}\frac{\mathrm{dt}}{\mathrm{t}^{2}(1-\vartheta^{m})^{2}}, \tag{90}\] which we see has no pole at \(\mathrm{t}=\infty\) and so will not contribute to the final result. Next, we examine the residue at \(\mathrm{t}_{1}=\mathrm{t}\). 
The calculation was done on SageMath [10] and we just present the result here: \[\operatorname*{Res}_{\mathrm{t}_{1}=\mathrm{t}}\frac{\mathrm{t}\mathrm{e}^{-\mathrm{t}^{qr}}}{\mathrm{t}_{1}\mathrm{e}^{-\mathrm{t}_{1}^{qr}}-\mathrm{t}\mathrm{e}^{-\mathrm{t}^{qr}}}\frac{\omega_{0,2}(\mathrm{t},\mathrm{t}_{1})}{\mathrm{t}_{1}^{q}-\mathrm{t}^{q}}=-\frac{qr(r-1)\mathrm{t}^{qr-q-1}\,\mathrm{dt}}{24}+\mathcal{O}(\mathrm{t}^{-2})\mathrm{dt}. \tag{91}\] Then, multiplying by \(\int_{\infty}^{\mathrm{t}}\omega_{0,2}(z_{0},\cdot)\) and taking the residue at infinity we obtain \[\operatorname*{Res}_{\mathrm{t}=\infty}\left(\int_{\infty}^{\mathrm{t}}\omega_{0,2}(z_{0},\cdot)\right)\omega_{1,1}(\mathrm{t})=-\frac{r\,\mathrm{d}z_{0}^{q(r-1)}}{24}, \tag{92}\] which is in agreement with conjecture 4.15 (that is, proposition 4.16). Note that we did not have to use the re-definition of the residue at \(\mathrm{t}=R_{\infty}\) in lemma 4.5 as the integrand is meromorphic. This is generic to calculations of \(\omega_{1,1}\), but will not hold for more complicated correlators. ## 5. Quantum curves One of the main motivations for introducing topological recursion on transalgebraic spectral curves is to make sense of a conundrum related to sequences of meromorphic spectral curves, which arises in the context of quantum curves. To understand this, we introduce the notion of quantum curves, and the topological recursion/quantum curve correspondence. ### The topological recursion/quantum curve correspondence Topological recursion originally appeared in the context of matrix models, where the correlators \(\omega_{g,n}\) are generating functions for expectation values of the traces of the matrices under consideration [1, 13]. But the trace is only one of the two most natural basis-independent objects one can form from a matrix; the other is, of course, the determinant. Traces and determinants are intimately connected, and, fundamentally, this relation is what gives rise to the topological recursion/quantum curve correspondence. In a matrix model, the expectation values of the determinants satisfy certain differential equations; roughly speaking, the solution of these differential equations is the _wave function_ \(\psi\) and the operator that kills it is the _quantum curve_. Because of the well-known relation \(\det\exp=\exp\operatorname{Tr}\), it is intuitively clear that the wave function should involve the exponential of the \(\omega_{g,n}\). The connection between the differential equation satisfied by the wave function \(\psi\) and the topological recursion satisfied by the correlators \(\omega_{g,n}\) is made explicit in the topological recursion/quantum curve correspondence. Let us now be a little more precise. Let \(\delta=(\Sigma,x,y,B)\) be a meromorphic spectral curve. The functions \(x\) and \(y\) satisfy a relation \(P(x,y)=0\). If the spectral curve is compact, then \(P\) will be polynomial, but in general it may not be. Define the wave function \(\psi\) associated to the spectral curve \(\delta\) as \[\psi(x(z))=\exp\left[\sum_{n=1}^{\infty}\sum_{g=0}^{\infty}\frac{\hbar^{2g+n-2}}{n!}\int^{z}\dots\int^{z}\left(\omega_{g,n}-\delta_{g,0}\delta_{n,2}\frac{dx(z_{1})dx(z_{2})}{(x(z_{1})-x(z_{2}))^{2}}\right)\right]\,, \tag{93}\] which is an exponential of the correlators \(\omega_{g,n}\) constructed from topological recursion. 
Here \(\hbar\) is a formal expansion parameter, there are \(n\) integrations in each term, and it is conventional to write \(\psi\) as a function of \(x(z)\), rather than \(z\), even though it is not globally well-defined as such.14 The exact nature of the integration should be defined carefully (see, for example, [1]). Footnote 14: This convention is the natural one as the way one obtains the expectation values of the traces from the \(\omega_{g,n}\) is through formal expansion in \(x\) where the expectation values of the traces are read off from the expansion coefficients. The statement of the topological recursion/quantum curve correspondence is that there should exist an operator \(\hat{P}(\hat{x},\hat{y},\hbar)\) such that \[\hat{P}\psi=0\,, \tag{94}\] where \(\hat{x}=x\cdot\) and \(\hat{y}=\hbar\frac{d}{dx}\). Furthermore, this \(\hat{P}\) should be a quantisation of \(P\), in the sense that \(\hat{P}(x,y,0)=P(x,y)\). If such a \(\hat{P}\) exists, we call it a "quantum curve". Of course, there is no unique quantisation of \(P\), due to non-commutativity of \(\hat{x}\) and \(\hat{y}\). Moreover, we may allow corrections of order \(\hbar\) in the operator \(\hat{P}\). In our context, we define quantisation as follows. **Definition 5.1**.: Let \(\delta\) be a meromorphic spectral curve, with \(x\) and \(y\) satisfying the relation \(P(x,y)=0\). We say that \(\hat{P}(\hat{x},\hat{y};\hbar)\) is a quantisation of \(P(x,y)\) if we have the following expansion for some \(m\in\mathbb{N}\cup\{\infty\}\): \[\hat{P}(\hat{x},\hat{y};\hbar)=P(\hat{x},\hat{y})+\sum_{i=1}^{m}\hbar^{i}\hat {P}_{i}(\hat{x},\hat{y}),\] where \(P(\hat{x},\hat{y})\) is taken to be normally ordered (in each term all the \(\hat{x}\) are put to the left of the \(\hat{y}\)) and the \(\hat{P}_{i}\) are normal ordered polynomials of degree at most \(\deg P-1\). We say that the quantisation is _simple_ if \(m<\infty\). We can now state the topological recursion/quantum curve correspondence.15 Footnote 15: This conjecture is sometimes referred to as the Gukov-Sulkowski conjecture in the literature [13]; however, the result has been well-known in the context of matrix models [12] long before the topological recursion was introduced, and was already being considered more generally in [1] in the context of the then recently discovered topological recursion before [13]. **Conjecture 5.2**.: _Let \(\delta\) be a meromorphic spectral curve, with \(x\) and \(y\) satisfying the relation \(P(x,y)=0\). Let \(\psi(x(z))\) be the wave function (93) associated to \(\delta\), with the \(\omega_{g,n}\) constructed from topological recursion. Then there exists a quantisation \(\hat{P}(\hat{x},\hat{y};\hbar)\) of \(P(x,y)\) such that_ \[\hat{P}(\hat{x},\hat{y};\hbar)\psi(x(z))=0. \tag{95}\] _We call \(\hat{P}(\hat{x},\hat{y};\hbar)\) a quantum curve._ As stated here, the conjecture is imprecise. To start with, it requires a proper definition of integration in the wave function (93) (see [1]). Furthermore, the wave function (93) is the "perturbative wave function", and as stated the conjecture is only expected to hold when the spectral curve is genus zero. For higher genus spectral curves, non-perturbative corrections should be added to (93). Nevertheless, the statement can be made precise, and the conjecture has been proved for a wide class of compact meromorphic genus zero spectral curves with arbitrary ramification in [1], as well as for every compact meromorphic spectral curves with only simple ramification in [1, 10, 1]. 
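To make the ordering ambiguity concrete, recall that \(\hat{x}\) acts by multiplication and \(\hat{y}=\hbar\,\mathrm{d}/\mathrm{d}x\). A two-line SymPy check (ours, purely illustrative) of the commutation relation \([\hat{y},\hat{x}]=\hbar\) responsible for the non-uniqueness of the quantisation:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

# hat{x} acts by multiplication, hat{y} = hbar d/dx; their commutator applied to a test function f.
commutator = hbar * sp.diff(x * f, x) - x * hbar * sp.diff(f, x)
print(sp.simplify(commutator))  # expected: hbar*f(x), i.e. [hat{y}, hat{x}] = hbar
```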
### Quantum curves for transalgebraic spectral curves The conjecture has also been proved for a number of non-compact meromorphic genus zero spectral curves, such as the spectral curve from example 2.6[14]. However, in contrast to the compact cases mentioned above, in these non-compact cases the existence of the quantum curve is proved from the enumerative geometric interpretation of the correlators (which is \(r\)-completed cycles Hurwitz theory in the case of example 2.6), and not directly from topological recursion. Part of the motivation for the current paper was to prove the existence of quantum curves for such cases directly from topological recursion. Our idea is simple: as the transalgebraic spectral curve is obtained as the \(N\to\infty\) limit of a sequence of compact meromorphic spectral curves, if quantum curves are known to exist for the spectral curves at finite \(N\), then we simply need to take the \(N\to\infty\) limit of these quantum curves to get the quantum curve for topological recursion on the transalgebraic spectral curve. Therefore, to achieve this program we need to consider transalgebraic curves for which the finite \(N\) spectral curves are known to satisfy the topological recursion/quantum curve correspondence. In this respect, we will use the results of [1]. Using this methodology, we will prove the topological recursion/quantum curve correspondence for a large class of compact meromorphic spectral curves, which are called "regular". Moreover, the proof is constructive, as it provides an explicit way of calculating the quantum curve. To understand the regularity condition on spectral curves, we need to introduce the Newton polygon of a plane curve. **Definition 5.3**.: The _Newton polygon_\(\Delta\) of \(P\) is the convex hull of the exponents in \(P\), i.e., the convex hull in \(\mathbb{R}^{2}\) of \(A\coloneqq\{(i,j)\in\mathbb{N}^{2}\,|\,\alpha_{i,j}\neq 0\}\). Regularity for compact meromorphic spectral curves is defined as follows.16 Footnote 16: Regular is called “admissible” in [1]. **Definition 5.4** (Definition 2.7, [1]).: Let \(\delta\) be a compact meromorphic spectral curve. Then \(x\) and \(y\) satisfy a polynomial equation \(P(x,y)=0\). We say that \(\delta\) is _regular_ if \(P(x,y)=0\) is smooth as an affine curve and its Newton polygon has no integral interior point.17 Footnote 17: As every Newton polygon with an interior contains a non-integral interior point it is common to state this condition without the word “integral”. In particular, all regular spectral curves have genus zero by Baker's formula [1], which states that the number of interior points of the Newton polygon is greater or equal than the genus of the curve. In fact, compact meromorphic spectral curves that are regular can be classified [1]. A compact meromorphic spectral curve is regular if and only if it falls into one of the following cases: * \(P(x,y)\) is linear in \(x\), i.e., \(P(x,y)=xE_{1}(y)-E_{2}(y)\), where \(E_{1},E_{2}\) are polynomials. * \(P(x,y)\) has Newton polygon \(\Delta\) given by the convex hull of \(\{(0,0),(0,2),(2,0)\}\). * \(P(x,y)\) is obtained from one of the two previous cases via a transformation \((x,y)\to(x^{a}y^{b},x^{c}y^{d})\) with \(ad-bc=1\) and a rescaling by powers of \(x\) and \(y\) to get an irreducible polynomial equation. For all regular spectral curves, the quantum curve associated to the corresponding wave function is constructed in [1]. We would like to extend the notion of regularity to transalgebraic spectral curves. 
In the spirit of defining the transalgebraic in terms of limits of the algebraic, it would seem most natural to define transalgebraic curves \(\mathcal{S}\) as regular precisely when the considered sequence of meromorphic curves that converge to \(\mathcal{S}\) are regular. The following lemma precisely characterises when this is the case. **Lemma 5.5**.: _Let \(\mathcal{S}=(\varSigma,x,y,B)\) be a compact transalgebraic regular spectral curve, with the notation of definition 5.6. Then the curves \(\mathcal{S}_{N}=(\varSigma,x_{N},y_{N},B)\) with_ \[x_{N}=M_{0}\left(1+(\tau-1)\frac{M_{1}}{N}\right)^{-N}\left(1+\tau\frac{M_{1}}{N}\right)^{N}\,,\qquad y_{N}=M_{2}/x_{N} \tag{96}\] _are regular for all \(N\) if and only if \(\varSigma\cong\mathbb{P}^{1}\) and \(xy\in\operatorname{Aut}(\mathbb{P}^{1})\)._ Proof.: If \(x_{N}y_{N}=M_{2}\) is a Möbius transformation, then we may define an affine coordinate \(z\) as \(z=x_{N}y_{N}\). As \(x_{N}\) is meromorphic on \(\mathbb{P}^{1}\), this means that we can write \[x_{N}=f_{N}(z) \tag{97}\] for some rational function \(f_{N}\). Clearing denominators and using \(z=x_{N}y_{N}\) it follows that \[E_{N}^{(1)}(z)x_{N}-E_{N}^{(2)}(z)=0 \tag{98}\] for some polynomials \(E_{N}^{(1)}\) and \(E_{N}^{(2)}\). But this is a regular curve, as it is obtained from the curve \[E_{N}^{(1)}(y_{N})x_{N}-E_{N}^{(2)}(y_{N})=0 \tag{99}\] via the transformation \((x_{N},y_{N})\mapsto(x_{N},x_{N}y_{N})\). Alternatively, assume \(\mathcal{S}_{N}\) is regular for every \(N\). By the classification of regular compact meromorphic spectral curves \(\mathcal{S}_{N}\) will be a transformation of either a curve \(P_{N}(x_{N},y_{N})=0\) that is linear in \(x_{N}\), or a curve \(P_{N}(x_{N},y_{N})=0\) with Newton polygon \(\Delta_{N}\) given by the convex hull of \(\{(0,0),(0,2),(2,0)\}\). First consider the latter case. Up to suitable rescaling by overall powers of \(x_{N}\) and \(y_{N}\) the most general curve of this form is \[P_{N}(x_{N},y_{N})=k_{0}+k_{1}x_{N}^{a}y_{N}^{b}+k_{2}x_{N}^{c}y_{N}^{d}+k_{3}x_{N}^{a+c}y_{N}^{b+d}+k_{4}x_{N}^{2a}y_{N}^{b}+k_{5}x_{N}^{2c}y_{N}^{d}, \tag{100}\] for some constants \(k_{0},\dots,k_{5}\in\mathbb{C}\) and \(ad-bc=1\). However, for sufficiently large \(N\), \(\mathcal{S}_{N}\) will never take this form as \(P_{N}\) will have more than six non-zero terms. Ergo, we focus on the former case: when \(P_{N}(x_{N},y_{N})=0\) is a transformation of a curve linear in \(x_{N}\). Thus, for some polynomials \(E_{N}^{(1)}\) and \(E_{N}^{(2)}\) and \(ad-bc=1\), \[P_{N}(x_{N},y_{N})=x_{N}^{a}y_{N}^{b}E_{N}^{(1)}(x_{N}^{c}y_{N}^{d})+E_{N}^{(2)}(x_{N}^{c}y_{N}^{d})=0. \tag{101}\] Letting \(E_{N}=E_{N}^{(2)}/E_{N}^{(1)}\) and choosing an affine coordinate \(w\) on \(\mathbb{P}^{1}\) we have \[(x_{N}^{a}y_{N}^{b})(w)=E_{N}((x_{N}^{c}y_{N}^{d})(w)). \tag{102}\] We now wish to count the number of sheets of these two equal functions. Using \(y_{N}=M_{2}/x_{N}\) and denoting the degree (as a branched covering) of the rational function \(E_{N}\) as \(D_{N}\), we have \[|a-b|\deg(x_{N})+|b|\deg(M_{2})\geqslant\deg(x_{N}^{a}y_{N}^{b})=\deg(E_{N}(x_{N}^{c}y_{N}^{d}))\geqslant D_{N}\,\big||c-d|\deg(x_{N})-d\deg(M_{2})\big|\,. \tag{103}\] By assumption, \(\mathcal{S}_{N}\) is transalgebraic in the limit so \(E_{N}\) must not be meromorphic in the limit. Thus, \(D_{N}\to\infty\) so the only way this inequality can hold for arbitrarily large \(N\) is if \(|c-d|=0\) so \(c=d\). However, we now have that \((a-b)c=1\) so \(c=\pm 1\) and \(a=b\pm 1\). 
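As a concrete illustration of the linear-in-\(x_{N}\) form appearing in the proof, take \(\tau=0\) and the curve of example 3.8, so that \(M_{0}(z)=z\) and \(M_{1}(z)=-z^{r}\) (an illustrative special case of ours, not an additional hypothesis of the lemma). Then
\[x_{N}=z\left(1+\frac{z^{r}}{N}\right)^{-N},\qquad z=x_{N}y_{N}\,,\]
so that \(E_{N}^{(1)}(z)x_{N}-E_{N}^{(2)}(z)=0\) with \(E_{N}^{(1)}(z)=(1+z^{r}/N)^{N}\) and \(E_{N}^{(2)}(z)=z\) (or \((N+z^{r})^{N}\) and \(N^{N}z\) after clearing denominators); this is an explicit instance of (97)–(98), and regularity of \(\mathcal{S}_{N}\) then follows exactly as argued above.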
This gives us

\[x=\left([xy]^{-b}\,E_{N}([xy]^{\pm 1})\right)^{\pm 1}, \tag{104}\]

so \(x\) is a well-defined function of \(xy\), and as \(y=xy/x\), \(y\) is also a well-defined function of \(xy\). Therefore \(xy\) is in fact a valid coordinate everywhere on the curve, so the function \(xy:\Sigma\cong\mathbb{P}^{1}\to\mathbb{P}^{1}\) can be taken to be injective. Ergo, as \(xy\) is also meromorphic and therefore rational, \(xy\in\operatorname{Aut}(\mathbb{P}^{1})\).

The preceding lemma, then, justifies the succeeding definition.

**Definition 5.6**.: Let \(\mathcal{S}=(\Sigma,x,y,B)\) be a compact transalgebraic spectral curve. We say that \(\mathcal{S}\) is _regular_ if \(\Sigma\cong\mathbb{P}^{1}\) and \(xy\in\operatorname{Aut}(\mathbb{P}^{1})\).

_Remark 5.7_.: The reader may wonder whether our regularity condition is merely an artefact of the particular sequence we consider. However, this is almost certainly not the case. Any sequence of curves converging to a transalgebraic curve will eventually have more than six terms, and the equality \(\deg(x_{N}^{a}y_{N}^{b})=\deg(E_{N})\deg(x_{N}^{c}y_{N}^{d})\) should always enforce that \(\deg(x_{N}^{c}y_{N}^{d})\) remains finite in the limit, so that \(c=d\).

In contrast to the meromorphic case, where admissibility and regularity were very much independent conditions, in the transalgebraic case there is a nice classification of all regular curves that are also admissible.

**Proposition 5.8**.: _Let \(\mathcal{S}=(\Sigma,x,y,B)\) be a regular transalgebraic spectral curve. Then \(\mathcal{S}\) is admissible if and only if \(M_{1}\) is a polynomial in \(xy\)._

Proof.: We first prove necessity. Let \(z=xy\) be an affine coordinate. \(M_{1}\) is meromorphic and we are in genus zero, so it is rational. If it were not polynomial, then it would have a pole at some point \(z=p\neq\infty\). As \(x\) would then have an essential singularity at this point, while \(xy=M_{2}\) does not have a pole there, the curve cannot be admissible by definition 3.12.

Now we prove sufficiency. In light of the argument for necessity, it is clear that the curve will satisfy definition 3.12 at all essential singularities of \(x\), so we must only concern ourselves with the finite ramification points, where admissibility is defined in definition 2.2. Again letting \(z=xy\) be an affine coordinate, we see \(\omega_{0,1}(z)=z\,\mathrm{d}\log(x(z))\). Let \(a\in R_{0}\setminus\{z=0\}\) be a ramification point and examine two cases: when \(x(a)\in\{0,\infty\}\), or when \(x(a)\in\mathbb{C}^{\times}\). In the first case we have that \(s_{a}=1\), and in the second case we have that \(s_{a}=r_{a}\). In either case admissibility holds. In the circumstance that \(z=0\) is a ramification point, we again examine the two cases \(x(0)\in\{0,\infty\}\) and \(x(0)\in\mathbb{C}^{\times}\). In the first case nothing changes and \(s_{a}=1\), but in the second case we now have that \(s_{a}=r_{a}+1\) rather than \(r_{a}\). In all cases admissibility holds.

Given a compact transalgebraic admissible regular spectral curve \(\mathcal{S}\), the strategy to construct its quantum curve is then clear. It proceeds in two steps:

1. For all \(N\), we construct the wave function \(\psi_{N}\) from the correlators \(\omega_{g,n}^{N}\) obtained via the usual topological recursion on \(\mathcal{S}_{N}\). From [1], we can construct an associated quantum curve \(\hat{P}_{N}\) such that
\[\hat{P}_{N}\psi_{N}=0. \tag{105}\]
2. The \(N\to\infty\) limit of the correlators \(\omega_{g,n}^{N}\) gives the correlators \(\omega_{g,n}\) associated to the transalgebraic spectral curve \(\mathcal{S}\), and hence the \(N\to\infty\) limit of the wave function \(\psi_{N}\) gives the wave function \(\psi\) associated to \(\mathcal{S}\). Its quantum curve is thus obtained by taking the \(N\to\infty\) limit of \(\hat{P}_{N}\):
\[\lim_{N\to\infty}\left(\hat{P}_{N}\psi_{N}\right)=\left(\lim_{N\to\infty}\hat{P}_{N}\right)\left(\lim_{N\to\infty}\psi_{N}\right)=\hat{P}\psi=0. \tag{106}\]

This strategy is studied in detail in appendix A - see theorem A.5 and theorem A.8. In particular, we apply this procedure explicitly to calculate the quantum curve for the spectral curve of example 3.8 in the next section.

### A particular example

Let us recall the spectral curve from example 3.8:

\[\mathcal{S}_{\infty}=\left(\Sigma=\mathbb{P}^{1},\quad x(z)=ze^{-z^{r}},\quad y(z)=e^{z^{r}},\quad B=\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right), \tag{107}\]

where \(r\in\mathbb{Z}_{\geqslant 1}\) is a fixed integer. The functions \(x\) and \(y\) satisfy the relation

\[P(x,y)=y-e^{x^{r}y^{r}}=0. \tag{108}\]

We note that this transalgebraic curve is regular, since \(M_{2}(z)=x(z)y(z)=z\). Thus, all spectral curves \(\mathcal{S}_{N}\) in the sequence are regular, and the results of [1] apply for finite \(N\). As a result, we obtain the quantum curve associated to \(\mathcal{S}_{\infty}\):

**Proposition 5.9**.: _Let \(\mathcal{S}_{\infty}\) be the compact transalgebraic spectral curve from example 3.8. We use the results and notations from appendix A. Let \(\psi_{\infty}(x;0)\) be the wave function associated to \(\mathcal{S}_{\infty}\) and constructed from the correlators \(\omega_{g,n}^{\infty}\) with the integration base point \(z=0\). Then \(\psi_{\infty}(x;0)\) satisfies the quantum curve differential equation_

\[\left(\hat{y}-e^{(\hat{x}\hat{y})^{r}}\right)\psi_{\infty}(x;0)=0. \tag{109}\]

Proof.: As \(\mathcal{S}_{\infty}\) is regular, by lemma 5.5 we know that \(\mathcal{S}_{N}\) is regular for all \(N\), and we can use the results of appendix A. For all \(\tau\in\mathbb{C}\), we write the equation \(P(x,y)=0\) satisfied by the functions \(x\) and \(y\) as follows, following remark A.6:

\[P(x,y)=ye^{(\tau-1)(xy)^{r}}-e^{\tau(xy)^{r}}=\sum_{m=0}^{\infty}\frac{(\tau-1)^{m}}{m!}x^{rm}y^{rm+1}-\sum_{m=0}^{\infty}\frac{\tau^{m}}{m!}x^{rm}y^{rm}. \tag{110}\]

Using the notation of appendix A we find that \(\lfloor\alpha_{m}\rfloor=m-1+\delta_{m,0}\), \(q_{m}(x)=0\) if \(m\neq 0,1\pmod{r}\), \(q_{rm}(x)=-\frac{\tau^{m}x^{rm}}{m!}\), and \(q_{rm+1}(x)=\frac{(\tau-1)^{m}x^{rm}}{m!}\). Choosing the base point \(b=\{z=0\}\), we find the following coefficients

\[\begin{split} H_{1}&=\hbar\left(\frac{d}{dx}-\frac{1}{x}\right),\quad H_{i}=\hbar\left(x\frac{d}{dx}-1\right),\\ F_{1}&=\hbar\frac{d}{dx},\quad F_{i}=\hbar x\frac{d}{dx},\quad G_{i}=0.\end{split} \tag{111}\]

As \(z=0\) is a zero of \(x\) which is not in the ramification locus, we can apply theorem A.8.
We get the following quantum curve, where \(\hat{x}=x\) and \(\hat{y}=\hbar\frac{d}{dx}\):

\[\begin{split}\hat{P}(\hat{x},\hat{y};\hbar)&=-1+\frac{1}{x}\sum_{m=0}^{\infty}\frac{(\tau-1)^{m}\hbar^{rm+1}}{m!}\left(x\frac{d}{dx}-1\right)^{rm}x\frac{d}{dx}-\frac{1}{x}\sum_{m=1}^{\infty}\frac{\tau^{m}\hbar^{rm}}{m!}\left(x\frac{d}{dx}-1\right)^{rm-1}x^{2}\frac{d}{dx}\\ &=\sum_{m=0}^{\infty}\frac{(\tau-1)^{m}\hbar^{rm+1}}{m!}\left(x\frac{d}{dx}\right)^{rm}\frac{d}{dx}-\sum_{m=0}^{\infty}\frac{\tau^{m}\hbar^{rm}}{m!}\left(x\frac{d}{dx}\right)^{rm}\\ &=e^{(\tau-1)\left(\hbar x\frac{d}{dx}\right)^{r}}\,\hbar\frac{d}{dx}-e^{\tau\left(\hbar x\frac{d}{dx}\right)^{r}},\end{split} \tag{112}\]

where we used \(x^{-1}\left(x\frac{d}{dx}-1\right)x=x\frac{d}{dx}\). Finally, for any value of \(\tau\) we can multiply on the left by an invertible operator to show that \(\hat{P}(\hat{x},\hat{y};\hbar)\psi_{\infty}(x;0)=0\) if and only if

\[\left(\hat{y}-e^{(\hat{x}\hat{y})^{r}}\right)\psi_{\infty}(x;0)=0, \tag{113}\]

which completes the proof.

_Remark 5.10_.: Note that we started with a quantum curve that depended on \(\tau\) and ended up with a result that has no \(\tau\) dependence. This is because all the quantum curves for different \(\tau\) are related by multiplication on the left by an invertible operator, and multiplying on the left by an invertible operator does not change the solutions of the corresponding differential equation. It is unclear to the authors whether this holds in general, or is unique to the case considered.

## 6. Topological recursion for Atlantes Hurwitz numbers

The astute reader may recognise the quantum curve (109): it appeared in the work of [1], where it is proved that it annihilates the wave function for Atlantes Hurwitz numbers. In fact, to quote [1]: "We have an example where the dequantization of the quantum curve doesn't give a spectral curve suitable for the corresponding topological recursion." They also state: "We can conclude that the dequantization of \(\hat{y}-e^{(\hat{x}\hat{y})^{r}}\) cannot be the spectral curve for the atlantes Hurwitz numbers, suitable for the construction of the topological recursion."

What do they mean by that? By "dequantization" of \(\hat{y}-e^{(\hat{x}\hat{y})^{r}}\), they mean the relation \(P(x,y)=y-e^{x^{r}y^{r}}=0\). They then assume that this relation is associated with the spectral curve of example 2.6, namely the non-compact meromorphic spectral curve

\[\mathcal{S}=\left(\Sigma=\mathbb{C},\quad x(z)=ze^{-z^{r}},\quad y(z)=e^{z^{r}},\quad B=\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right). \tag{114}\]

As it was already conjectured, with substantial evidence, in [15] (and later proved in [1]) that topological recursion on this spectral curve produces correlators \(\omega_{g,n}\) that are generating functions for \(r\)-completed cycles Hurwitz numbers, which are _not_ Atlantes Hurwitz numbers, they conclude that Atlantes Hurwitz numbers provide an example of enumerative invariants satisfying a quantum curve relation that does not arise from topological recursion.

However, this is not the end of the story. The key realisation of the present paper is that there are in fact two different spectral curves with an enumerative interpretation that share the same relation \(P(x,y)=y-e^{x^{r}y^{r}}=0\):

1. The non-compact meromorphic spectral curve \(\mathcal{S}=\left(\mathbb{C},x(z)=ze^{-z^{r}},y(z)=e^{z^{r}},B=\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right)\);
2. The compact transalgebraic spectral curve \(\mathcal{S}_{\infty}=\left(\mathbb{P}^{1},x(z)=ze^{-z^{r}},y(z)=e^{z^{r}},B=\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right)\).

While both spectral curves share the same functions \(x\), \(y\), and bidifferential \(B\), the Riemann surface over which \(x\) and \(y\) are defined is different. In \(\mathcal{S}\), the exponential singularity of \(x\) at infinity is not included, while it is included in \(\mathcal{S}_{\infty}\). According to our proposal, topological recursion on \(\mathcal{S}\) may produce different correlators than topological recursion on \(\mathcal{S}_{\infty}\): indeed, this is exactly what happens for \(r\geqslant 2\) (the correlators happen to be the same for \(r=1\) by corollary 4.11).

Where does that leave us? On the one hand, we know from [1] that topological recursion on \(\mathcal{S}\) produces generating functions for \(r\)-completed cycles Hurwitz numbers. Moreover, a quantum curve for \(r\)-completed cycles Hurwitz numbers has been obtained in [15], from the geometry of Hurwitz numbers (not directly from topological recursion). In our notation, their result is:

\[\hat{P}(\hat{x},\hat{y};\hbar)=\hat{y}-\hat{x}^{1/2}\,e^{\frac{1}{r+1}\sum_{i=0}^{r}\hat{x}^{-1}(\hat{x}\hat{y})^{i}\hat{x}(\hat{x}\hat{y})^{r-i}}\,\hat{x}^{-1/2}. \tag{115}\]

While this is a quantisation of the relation \(P(x,y)=y-e^{x^{r}y^{r}}\), it is clearly not the same as the one that we obtained above in (109), as it corresponds to a different choice of ordering of the non-commuting operators \(\hat{x}\) and \(\hat{y}\).

On the other hand, in the present paper, we defined correlators \(\omega_{g,n}^{\infty}\) for the spectral curve \(\mathcal{S}_{\infty}\) that includes the exponential singularity of \(x\). We showed in proposition 5.9 that the wave function constructed from these correlators is annihilated by the quantum curve

\[\hat{P}_{\infty}(\hat{x},\hat{y};\hbar)=\hat{y}-e^{(\hat{x}\hat{y})^{r}}, \tag{116}\]

which happens to be the same as the quantum curve for Atlantes Hurwitz numbers. It is then natural to guess that _the correlators \(\omega_{g,n}^{\infty}\) constructed from topological recursion on the transalgebraic curve \(\mathcal{S}_{\infty}\) are generating functions for Atlantes Hurwitz numbers_. This is what we prove in this section, therefore showing that Atlantes Hurwitz numbers _do_ fit within the framework of topological recursion, but only if one considers topological recursion on transalgebraic spectral curves. We summarize these relations in table 1.

To prove that the correlators \(\omega_{g,n}^{\infty}\) constructed by topological recursion on \(\mathcal{S}_{\infty}\) compute Atlantes Hurwitz numbers, we first need to define what Atlantes Hurwitz numbers are. Let us now review some of the key results relating Hurwitz numbers and topological recursion.

### Hurwitz numbers

Hurwitz numbers are counts of covers of a given Riemann surface with a given ramification behaviour, up to equivalence and weighted by automorphisms. We will always take the target curve to be \(\mathbb{P}^{1}\). Via the monodromy representation, they can be interpreted as counting decompositions of the identity in the symmetric group algebra; this is the point of view that we will take in this paper.

**Definition 6.1**.: Let \(d\in\mathbb{N}\) and \(C_{1},\ldots,C_{k}\in Z\mathbb{C}[\mathfrak{S}_{d}]\).
The associated _disconnected Hurwitz number_ is

\[H^{\bullet}(C_{1},\ldots,C_{k})=\frac{1}{d!}[1]\prod_{j=1}^{k}C_{j}\,. \tag{117}\]

Here \([1]\) is the dual to the unit of \(\mathbb{C}[\mathfrak{S}_{d}]\) in the natural basis, i.e. it extracts the coefficient of \(1\).

In most modern studies of Hurwitz numbers, one or two of the central elements are chosen as free parameters -- usually indexed by partitions, as \(Z\mathbb{C}[\mathfrak{S}_{d}]\) has a basis given by sums of conjugacy classes, i.e. cycle types, which are naturally indexed by partitions of \(d\). All of the other central elements are then chosen to be equal, and this 'generic' element determines the type of Hurwitz problem.

These kinds of Hurwitz problems are related to the Kadomtsev-Petviashvili (KP) and 2D Toda lattice hierarchies: they can be assembled into generating functions which are _hypergeometric tau-functions_ or _Orlov-Scherbin partition functions_ of these hierarchies [11, 12, 13]. For more on this relation, see [10, 14]. Moreover, in many cases (i.e. for many generic elements) they satisfy topological recursion, which was first conjectured for simple Hurwitz numbers (generic partition \((2,1,\ldots,1)\)) in [11] and proved in [13], and has since been proved in many individual cases, see e.g. [1, 18, 19].

The most general and direct relation between these two points of view is given by the following theorem.

**Theorem 6.2** ([1, 13]).: _Consider two formal power series_

\[\hat{\psi}(\hbar^{2},y)\coloneqq\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}c_{k,m}y^{k}\hbar^{2m}\,,\qquad\hat{y}(\hbar^{2},z)\coloneqq\sum_{k=1}^{\infty}\hat{y}_{k}(\hbar^{2})z^{k}\coloneqq\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}s_{k,m}z^{k}\hbar^{2m}\,, \tag{118}\]

_and their associated hypergeometric KP tau-function_

\[Z(\underline{p})=e^{F(\underline{p})}=\sum_{\nu\in\mathcal{P}}\exp\Big{(}\sum_{\square\in\nu}\hat{\psi}(\hbar^{2},-\hbar c_{\square})\Big{)}s_{\nu}(\underline{p})\,s_{\nu}\big{(}\{\tfrac{\hat{y}_{k}(\hbar^{2})}{\hbar}\}\big{)}\,. \tag{119}\]

_Define_

\[\begin{split}\psi(y)&\coloneqq\hat{\psi}(0,y)\,,\qquad y(z)\coloneqq\hat{y}(0,z)\,,\qquad x(z)\coloneqq\log z-\psi(y(z))\,,\\ X(z)&\coloneqq e^{x(z)}\,,\qquad\quad D\coloneqq\frac{\partial}{\partial x}\,,\qquad\quad Q\coloneqq z\frac{dx}{dz}\end{split} \tag{120}\]

\begin{table}
\begin{tabular}{|l|l|l|}
\hline Spectral curve & yields generating functions for & Quantum curve \\
\hline \(\Big{(}\mathbb{C},\,ze^{-z^{r}},\,e^{z^{r}},\,\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\Big{)}\) & \(r\)-completed cycles Hurwitz numbers & \(\hat{y}-\hat{x}^{1/2}e^{\frac{1}{r+1}\sum_{i=0}^{r}\hat{x}^{-1}(\hat{x}\hat{y})^{i}\hat{x}(\hat{x}\hat{y})^{r-i}}\hat{x}^{-1/2}\) \\
\(\Big{(}\mathbb{P}^{1},\,ze^{-z^{r}},\,e^{z^{r}},\,\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\Big{)}\) & \(r\)-Atlantes Hurwitz numbers & \(\hat{y}-e^{(\hat{x}\hat{y})^{r}}\) \\
\hline
\end{tabular}
\end{table}

Table 1. Topological recursion for \(r\)-completed cycles and Atlantes Hurwitz numbers

_and write_

\[H_{n}\coloneqq\sum_{k_{1},\dots,k_{n}=1}^{\infty}\frac{\partial^{n}F}{\partial p_{k_{1}}\cdots\partial p_{k_{n}}}\bigg{|}_{p=0}X_{1}^{k_{1}}\cdots X_{n}^{k_{n}}\,. \tag{121}\]
_Then these can be decomposed as_

\[H_{n}=\sum_{g=0}^{\infty}\hbar^{2g-2+n}H_{g,n}\,, \tag{122}\]

_with \(H_{g,n}\) independent of \(\hbar\), and_

\[DH_{0,1}(X(z))=y(z)\,,\qquad H_{0,2}(X(z_{1}),X(z_{2}))=\log\left(\frac{z_{1}^{-1}-z_{2}^{-1}}{X_{1}^{-1}-X_{2}^{-1}}\right)\,. \tag{123}\]

_If moreover \(\frac{d\psi(y)}{dy}\big{|}_{y=y(z)}\) and \(\frac{dy(z)}{dz}\) have analytic continuations to meromorphic functions in \(z\), and all coefficients of positive powers of \(\hbar^{2}\) in \(\hat{\psi}(\hbar^{2},y(z))\) and \(\hat{y}(\hbar^{2},z)\) are rational functions of \(z\) whose singular points are disjoint from the zeroes of \(dx\), then the \(n\)-point differentials_

\[\omega_{g,n}\coloneqq d_{1}\cdots d_{n}H_{g,n}+\delta_{g,0}\delta_{n,2}\frac{dX_{1}\,dX_{2}}{(X_{1}-X_{2})^{2}} \tag{124}\]

_can be extended analytically to \((\mathbb{P}^{1})^{n}\) as global rational forms, and the collection of \(n\)-point differentials satisfies meromorphicity and the linear and quadratic loop equations, i.e. blobbed topological recursion [1], for the spectral curve \((\Sigma,X(z),\frac{y(z)}{X(z)},\frac{dz_{1}\,dz_{2}}{(z_{1}-z_{2})^{2}})\), where \(\Sigma\) is \(\mathbb{P}^{1}\) minus the exponential singularities of \(X(z)\)._

_Finally, if \(\hat{\psi}\) and \(\hat{y}\) belong to one of the two families_

\[\begin{split}\text{Family I}\qquad\hat{\psi}(\hbar^{2},y)&=\mathcal{S}(\hbar\partial_{y})P_{1}(y)+\log\left(\frac{P_{2}(y)}{P_{3}(y)}\right);\qquad\hat{y}(\hbar^{2},z)=\frac{R_{1}(z)}{R_{2}(z)}\,,\\ \text{Family II}\qquad\hat{\psi}(\hbar^{2},y)&=\alpha y\,;\qquad\qquad\hat{y}(\hbar^{2},z)=\frac{R_{1}(z)}{R_{2}(z)}+\mathcal{S}(\hbar z\partial_{z})^{-1}\log\left(\frac{R_{3}(z)}{R_{4}(z)}\right),\end{split}\]

_where \(\alpha\in\mathbb{C}^{\times}\) and the \(P_{i}\) and \(R_{j}\) are arbitrary polynomials such that \(\psi(y)\) and \(y(z)\) are non-zero, but vanishing at zero, and no singular points of \(y\) are mapped to branch points by \(x\), then the \(n\)-point differentials also satisfy the projection property, and hence topological recursion, for the spectral curve above._

Theorem 6.2 does not explicitly mention Hurwitz numbers, but these are the coefficients of the power series \(H_{g,n}\) of equation (122). The function \(\hat{\psi}\) encodes the generic ramification profile, the function \(\hat{y}\) a specified (fixed) ramification profile, and the exponents \(k_{i}\) make up the final, freely chosen, ramification profile.

#### 6.1.1. \(r\)-completed cycles Hurwitz numbers

A special case of theorem 6.2 is given by the \(r\)-completed cycles Hurwitz numbers. It corresponds to the choice

\[\hat{\psi}(\hbar^{2},y)=\mathcal{S}(\hbar\partial_{y})y^{r},\qquad\hat{y}(\hbar^{2},z)=z \tag{125}\]

in Family I. From the theorem, we see that the \(\omega_{g,n}\) defined in (124) satisfy topological recursion for the meromorphic spectral curve \(\mathcal{S}=\left(\mathbb{C},ze^{-z^{r}},e^{z^{r}},\frac{dz_{1}\,dz_{2}}{(z_{1}-z_{2})^{2}}\right)\), as stated earlier in this section. This is precisely the spectral curve of example 2.6. Moreover, as stated earlier, a quantum curve for \(r\)-completed cycles Hurwitz numbers has been obtained in [13] from the geometry of Hurwitz numbers, see equation (115). It is a quantisation of the relation \(P(x,y)=y-e^{x^{r}y^{r}}\).

#### 6.1.2. Atlantes Hurwitz numbers

Another type of Hurwitz numbers that will play a key role in the following is Atlantes Hurwitz numbers.
In the context of theorem 6.2, Atlantes Hurwitz numbers correspond to the case:

\[\hat{\psi}(\hbar^{2},y)=y^{r}\,,\qquad\hat{y}(\hbar^{2},z)=z\,. \tag{126}\]

If \(r>1\), this does not fit in one of the two families, so the projection property for Atlantes Hurwitz numbers does not follow from theorem 6.2, but the meromorphicity property and the linear and quadratic loop equations do.

But what are Atlantes Hurwitz numbers? The notion of Atlantes Hurwitz numbers was introduced in [1], to encode the value of power-sum symmetric functions evaluated at the Jucys-Murphy elements.

**Definition 6.3**.: Let \(d\geqslant 1\) and let \(\mathfrak{S}_{d}\) be the symmetric group on \(d\) elements. The _Jucys-Murphy elements_ are defined as

\[\mathcal{J}_{k}=\sum_{j=1}^{k-1}(j\,k)\in\mathbb{C}[\mathfrak{S}_{d}]\,. \tag{127}\]

They generate a maximally commutative subalgebra of \(\mathbb{C}[\mathfrak{S}_{d}]\), called the _Gelfand-Tsetlin algebra_.

**Proposition 6.4** (Jucys correspondence [14]).: _Let \(\sigma_{b}\) be the \(b\)-th elementary symmetric function. Then_

\[\sigma_{b}(\mathcal{J}_{2},\dots,\mathcal{J}_{d})=\sum_{\begin{subarray}{c}\alpha\vdash d\\ \ell(\alpha)=d-b\end{subarray}}C_{\alpha}\,. \tag{128}\]

_Hence, any symmetric function evaluated at the Jucys-Murphy elements gives a central element in \(\mathbb{C}[\mathfrak{S}_{d}]\)._

**Proposition 6.5** ([6]).: _The collection of elements given in equation (128) generates \(Z\mathbb{C}[\mathfrak{S}_{d}]\)._

**Definition 6.6** ([1]).: An \(r\)_-block of Atlantes_ is \(B_{r}^{\times}\coloneqq p_{r}(\mathcal{J}_{2},\dots,\mathcal{J}_{d})\in Z\mathbb{C}[\mathfrak{S}_{d}]\).

The name Atlantes comes from the following lemma:

**Lemma 6.7** ([1, Lemma 4.3]).: _The geometric interpretation of the block of Atlantes is the following: we have \(r\) simple ramifications, whose monodromies are given by the transpositions \((x_{i}\,y)\), \(x_{i}<y\), \(i=1,\dots,r\). Here \(y\) is an arbitrary number from \(2\) to \(d\), which is not fixed in advance, but is the same for all transpositions._

Graphically, we often draw a cover as a couple of parallel horizontal lines (sheets) mapped to one horizontal line. Simple ramifications are drawn as crosses connecting two sheets. In a block of Atlantes, we interpret the sheet \(y\) as the sky, the sheets \(x_{i}\) as part of the earth, and the transposition crosses \((x_{i}\,y)\) as Atlas holding the sky.

**Definition 6.8** ([1]).: Let \(r\geqslant 1\). We define the disconnected \(r\)_-Atlantes single Hurwitz numbers_ as

\[h_{g,\mu}^{\bullet,\times r}\coloneqq\frac{1}{d!}[1]C_{\mu}(B_{r}^{\times})^{b}\,, \tag{129}\]

where \(\mu\vdash d\), \(g\in\mathbb{Z}\), and

\[b=\frac{2g-2+d+\ell(\mu)}{r} \tag{130}\]

is determined by the Riemann-Hurwitz formula.

We can define a wave function for Atlantes Hurwitz numbers, and show that it satisfies a differential equation.

**Proposition 6.9** ([1, Proposition 7.4]).: _Let_

\[Z^{\times r}(\underline{p},\hbar)=\exp\Big{(}\sum_{g,\mu}\frac{\hbar^{2g-2+\ell(\mu)+|\mu|}}{(2g-2+\ell(\mu)+|\mu|)!}h_{g,\mu}^{\times r}p_{\mu}\Big{)} \tag{131}\]

_be the generating function, and \(\Psi^{\times r}(x,\hbar)=Z^{\times r}(\{p_{k}=(\hbar^{-1}x)^{k}\},\hbar)\) be the wave function._
_Then_

\[\Psi^{\times r}(x,\hbar)=\sum_{n=0}^{\infty}\frac{x^{n}}{n!\hbar^{n}}\exp\Big{(}\hbar^{r}\sum_{j=1}^{n-1}j^{r}\Big{)} \tag{132}\]

_satisfies the differential equation_

\[\big{(}\hat{y}-e^{(\hat{x}\hat{y})^{r}}\big{)}\Psi^{\times r}(x,\hbar)=0\,, \tag{133}\]

_where \(\hat{x}=x\cdot\) and \(\hat{y}=\hbar\frac{d}{dx}\)._

We of course recognise the quantum curve that we obtained in proposition 5.9, that is, (109).

### The projection property for Atlantes Hurwitz numbers

As Atlantes Hurwitz numbers do not fall within Families I and II of theorem 6.2, while a different \(\hbar\)-deformation of its \((\psi,y)\) does, the correlators \(\omega_{g,n}\) do not satisfy the usual topological recursion on the non-compact meromorphic spectral curve \(\mathcal{S}=\left(\mathbb{C},ze^{-z^{r}},e^{z^{r}},\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right)\). Indeed, topological recursion on \(\mathcal{S}\) produces generating functions for \(r\)-completed cycles Hurwitz numbers, not Atlantes Hurwitz numbers, as stated above. The reason is that the projection property does not follow from theorem 6.2 for Atlantes Hurwitz numbers. However, the meromorphicity property and the linear and quadratic loop equations do.

In this section, we analyse to what extent the projection property holds for Atlantes Hurwitz numbers. This will be needed to establish that topological recursion on the transalgebraic spectral curve \(\mathcal{S}_{\infty}=\left(\mathbb{P}^{1},ze^{-z^{r}},e^{z^{r}},\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right)\) produces generating functions for Atlantes Hurwitz numbers. We do this by analysing the proof of the projection property for Family I, as given in [1, Sections 3 & 4]. Notation is borrowed from that paper.

**Definition 6.10** ([1, Definition 3.7]).: The space \(\Theta_{n}\) (or \(\Theta\) if \(n\) is clear from context) is defined as the linear span of functions of the form \(\prod_{i=1}^{n}f_{i}(z_{i})\), where each \(f_{i}(z_{i})\)

* is a rational function on the Riemann sphere;
* has poles only at the ramification points \(p_{1},\ldots,p_{N}\) of \(x\);
* has principal part at these ramification points which is odd with respect to the corresponding deck transformation.
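As an aside, the explicit formula (132) makes proposition 6.9 straightforward to verify by machine. The following sympy sketch is ours, not part of the original argument, and the truncation depth \(N\) is an arbitrary choice; it checks the quantum curve equation (133) coefficient by coefficient in \(x\), using that \(\hat{x}\hat{y}=\hbar x\frac{d}{dx}\) acts diagonally on monomials.

```python
import sympy as sp

hbar = sp.symbols('hbar', positive=True)

def psi_coefficient(k, r):
    """Coefficient of x^k in the wave function (132): exp(hbar^r * sum_{j=1}^{k-1} j^r) / (k! hbar^k)."""
    return sp.exp(hbar**r * sum(j**r for j in range(1, k))) / (sp.factorial(k) * hbar**k)

def check_quantum_curve(r, N=8):
    """Verify (hat y - e^{(hat x hat y)^r}) Psi = 0 on coefficients of x^m for m < N.

    hat y = hbar d/dx lowers the degree of x^k by one, while hat x hat y = hbar x d/dx
    acts diagonally: e^{(hat x hat y)^r} x^k = e^{(hbar k)^r} x^k.
    """
    for m in range(N):
        lhs = (m + 1) * hbar * psi_coefficient(m + 1, r)     # coeff of x^m in  hat y . Psi
        rhs = sp.exp((hbar * m)**r) * psi_coefficient(m, r)  # coeff of x^m in  e^{(hat x hat y)^r} . Psi
        if sp.simplify(lhs - rhs) != 0:
            return False
    return True

for r in (1, 2, 3):
    print(r, check_quantum_curve(r))   # expect True for each r
```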
**Proposition 6.11** ([1, Proposition 3.9]).: _The differentials \(\omega_{\mathsf{g},\mathsf{n}}\) satisfy the projection property and the linear loop equations if and only if \(\mathsf{H}_{\mathsf{g},\mathsf{n}}\in\Theta_{\mathsf{n}}\) for \(2\mathsf{g}-2+\mathsf{n}>0\)._ **Proposition 6.12** ([1, Proposition 3.10]).: _For \(\mathsf{n}\geqslant 3\),_ \[\mathsf{H}_{\mathsf{g},\mathsf{n}}=[\mathsf{h}^{2\mathsf{g}-2+\mathsf{n}}] \sum_{\mathsf{\gamma}\in\mathsf{\Gamma}_{\mathsf{n}}}\prod_{\mathsf{v}_{i}\in \mathcal{S}_{\mathsf{\gamma}}}\bar{\mathsf{U}}_{\mathsf{i}}\prod_{\{\mathsf{ v}_{i},\mathsf{v}_{i}\}\in\mathsf{E}_{\mathsf{\gamma}}\backslash \mathsf{X}_{\mathsf{\gamma}}}w_{\mathsf{i},\mathsf{k}}\prod_{\{\mathsf{v}_{i}, \mathsf{v}_{i}\}\in\mathcal{S}_{\mathsf{\gamma}}}\left(\bar{\mathsf{U}}_{ \mathsf{i}}w_{\mathsf{i},\mathsf{k}}+\mathsf{hu}_{\mathsf{k}}\mathcal{S}( \mathsf{u}_{\mathsf{k}}\mathsf{h}\mathsf{Q}_{\mathsf{k}}\mathsf{D}_{\mathsf{ k}})\frac{z_{\mathsf{i}}}{z_{\mathsf{k}}-z_{\mathsf{i}}}\right)+\operatorname{const}, \tag{134}\] _where \(\mathsf{\Gamma}_{\mathsf{n}}\) is the set of simple graphs on \(\mathsf{n}\) vertices \(\mathsf{v}_{1},\ldots,\mathsf{v}_{n}\), \(\mathsf{E}_{\mathsf{\gamma}}\) is the set of edges of a graph \(\mathsf{\gamma}\), \(\mathcal{I}_{\mathsf{\gamma}}\) is the subset of vertices of valency \(\geqslant 2\), and \(\mathcal{K}_{\mathsf{\gamma}}\) is the subset of edges with one end \(\mathsf{v}_{i}\) of valency \(1\) and another end \(\mathsf{v}_{\mathsf{k}}\), and where_ \[\bar{\mathsf{U}}_{\mathsf{i}}\mathsf{f} \coloneqq\sum_{\mathsf{s}=0}^{\infty}\sum_{\mathsf{j}=1}^{ \infty}\mathrm{D}_{\mathsf{i}}^{\mathsf{j}-1}\left(\frac{\mathsf{L}_{\mathsf{ s},\mathsf{i}}^{\mathsf{j}}}{\mathsf{Q}_{\mathsf{i}}}[\mathsf{u}_{\mathsf{s}}^{ \mathsf{j}}]\frac{\mathsf{e}^{\mathsf{u}_{\mathsf{i}}\mathcal{S}(\mathsf{u}_{ \mathsf{i}}\mathcal{S}(\mathsf{u}_{\mathsf{i}}\mathsf{h}\mathsf{Q}_{\mathsf{ i}}\mathsf{D}_{\mathsf{i}})\mathsf{g}(z_{\mathsf{i}})-\mathsf{y}(z_{\mathsf{i}})}}{ \mathsf{u}_{\mathsf{i}}\mathsf{h}\mathcal{S}(\mathsf{u}_{\mathsf{i}}\mathsf{h} \mathsf{h})}\mathsf{f}\right), \tag{135}\] \[w_{\mathsf{k},\mathsf{l}} \coloneqq\mathsf{e}^{\mathsf{h}^{2}\mathsf{u}_{\mathsf{k}}\mathsf{ u}_{\mathsf{l}}\mathcal{S}(\mathsf{u}_{\mathsf{k}}\mathsf{h}\mathsf{Q}_{\mathsf{k}} \mathsf{D}_{\mathsf{k}})\mathcal{S}(\mathsf{u}_{\mathsf{l}}\mathsf{h}\mathsf{ Q}_{\mathsf{l}})\frac{z_{\mathsf{i}}}{(z_{\mathsf{k}}-z_{\mathsf{l}})^{2}}}-1\,,\] \[\mathsf{L}_{\mathsf{s},\mathsf{i}}^{\mathsf{j}} \coloneqq\left([\mathsf{v}^{\mathsf{j}}](\partial_{\mathsf{y}}+ \mathsf{v}\mathsf{v}^{\mathsf{j}}(\mathsf{y})]^{\mathsf{s}}\mathsf{e}^{\mathsf{ v}\left(\frac{\mathsf{s}(\mathsf{v}\mathsf{h}\mathsf{a}_{\mathsf{j}})}{ \mathsf{s}(\mathsf{h}\mathsf{a}_{\mathsf{j}})}\hat{\mathsf{v}}(\mathsf{y})- \mathsf{\psi}(\mathsf{y})\right)}\right)\Big{|}_{\mathsf{y}=\mathsf{y}(z_{ \mathsf{i}})}\,.\] _For \(\mathsf{n}=2\) and \(\mathsf{g}>0\) we have:_ \[\mathsf{H}_{\mathsf{g},2} =[\mathsf{h}^{2\mathsf{g}}]\left(\bar{\mathsf{U}}_{\mathsf{1}} \bar{\mathsf{U}}_{\mathsf{2}}w_{\mathsf{1},2}+\bar{\mathsf{U}}_{\mathsf{1}} \Big{(}\mathsf{hu}_{\mathsf{1}}\mathcal{S}(\mathsf{u}_{\mathsf{1}}\mathsf{h} \mathsf{Q}_{\mathsf{1}}\mathsf{D}_{\mathsf{1}})\frac{z_{\mathsf{1}}}{z_{\mathsf{1} }-z_{\mathsf{2}}}\Big{)}+\bar{\mathsf{U}}_{2}\Big{(}\mathsf{hu}_{\mathsf{2}} \mathcal{S}(\mathsf{u}_{\mathsf{2}}\mathsf{h}\mathsf{Q}_{\mathsf{2}}\mathsf{D}_{ \mathsf{2}})\frac{z_{\mathsf{1}}}{z_{\mathsf{2}}-z_{\mathsf{1}}}\Big{)}\right)+ 
\operatorname{const}. \tag{136}\]

_For \(n=1\) and \(g>0\) we have:_

\[\begin{split} H_{g,1}&=[\hbar^{2g}]\left(\hbar\bar{U}_{1}1+\sum_{j=1}^{\infty}D_{1}^{j-1}L_{0,1}^{j+1}D_{1}y(z_{1})+\int_{0}^{z_{1}}\frac{\hat{y}(z)-y(z)}{z}\,\mathrm{d}z\right.\\ &\qquad\left.+\int_{0}^{z_{1}}\frac{Q(z)}{z}\Big{(}\frac{1}{\mathcal{S}(\hbar\partial_{y})}\hat{\psi}(y)-\psi(y)\Big{)}\Big{|}_{y=y(z)}Dy(z)\,\mathrm{d}z\right)+\operatorname{const}.\end{split} \tag{137}\]

_In each case the extra constant can be determined from the condition that \(H_{g,n}\) vanishes at zero. These constants are not important for the argument below and can be ignored._

We will use this in a similar way to [1, Section 4], so let us quote part of the first page of that section:

We begin with a few general observations related to the structure of the formulas (102)-(107) [the ones in proposition 6.12] for \(H_{g,n}\), \(2g-2+n\geqslant 0\), relevant for both families of parameters. These formulas give manifestly rational functions, whose principal parts at the points \(p_{1},\ldots,p_{N}\) are odd with respect to the deck transformations. So, we have to show that these functions have no other poles in each variable \(z_{1},\ldots,z_{n}\).

Consider a particular \(H_{g,n}(z_{1},\ldots,z_{n})\). From the shape of the formula it is clear that its possible poles in the variable \(z_{1}\), in addition to \(p_{1},\ldots,p_{N}\), are either at the diagonals \(z_{1}-z_{i}=0\), \(i\neq 1\) (but it is known from [1] that these functions have no poles at the diagonals \(z_{i}-z_{j}=0\)), or at \(\infty\), or at the special points related to the specific form of the operator \(\bar{U}_{1}\) for Family I and Family II. A bit more special is the case of \(H_{g,1}\), where we have to analyse some extra terms as well.

Note that it is in fact sufficient to analyse the pole structure just for \(H_{g,1}\), \(g\geq 1\), since this case subsumes the corresponding analysis of the pole structure for \(H_{g,n}\), \(n\geq 2\). Indeed, the factors of the form \(w_{k,l}\) and \(\hbar u_{k}\mathcal{S}(u_{k}\hbar Q_{k}D_{k})\frac{z_{l}}{z_{k}-z_{l}}\) do not contribute any poles to the resulting expressions, as all diagonal poles get cancelled and these factors are regular at infinity as well. Therefore, the possible extra poles can only occur at the special points of \(\bar{U}_{i}\), which enters the formula for \(H_{g,1}\) in exactly the same way as the formulas for \(H_{g,n}\) for other values of \(n\). The argument for the \(n=1\) case includes analysis of the singularities of \(\bar{U}_{1}\), and once we show that it has no poles outside \(p_{1},\ldots,p_{N}\), it immediately implies the same statement for any \(n\geq 2\) as well.

For Atlantes Hurwitz numbers, the same strategy applies, with the one difference that the second line of equation (137) contributes a pole.
Hence we have:

**Proposition 6.13**.: _For \(\hat{\psi}(\hbar^{2},y)=P_{1}(y)+\log\frac{P_{2}(y)}{P_{3}(y)}\) and \(\hat{y}(\hbar^{2},z)=\frac{R_{1}(z)}{R_{2}(z)}\), where the \(P_{i}\) and \(R_{j}\) are polynomials with simple zeroes,18 the functions \(H_{g,n}\) for \(2g-2+n>0\) and \(n\geq 2\) lie in \(\Theta_{n}\), while the \(H_{g,1}\) are rational functions on the Riemann sphere, with poles at \(p_{1},\ldots,p_{N},\infty\), and principal parts at the \(p_{i}\) odd with respect to the deck transformation._

Footnote 18: This holds for more general polynomials \(P_{i}\) and \(R_{j}\) using the results of [1].

The proof of this proposition is a fairly straightforward extension of the methods of [1]. As it contains no new ideas, we give it in appendix B. By [1, Remark 4.11], we have:

**Corollary 6.14**.: _In the situation of proposition 6.13, the pole at infinity of \(H_{g,1}\) is given by_

\[[\hbar^{2g}]\int_{0}^{z_{1}}\frac{Q(z)}{z}\Big{(}\frac{1}{\mathcal{S}(\hbar\partial_{y})}-1\Big{)}\psi(y)\Big{|}_{y=y(z)}Dy(z)\,dz=\frac{(2^{1-2g}-1)B_{2g}}{(2g)!}\,\partial_{y}^{2g-1}\psi(y)\,dy\Big{|}_{y=y(z_{1})}\,. \tag{138}\]

Proof.: This is the pole coming from \(\tau_{g}^{3}\), cf. appendix B. 

In particular, for \(r\)-Atlantes Hurwitz numbers, the pole of \(H_{1,1}\) is

\[\frac{(2^{1-2}-1)B_{2}}{2!}\,\partial_{y}y^{r}\Big{|}_{y=z_{1}}=-\frac{r}{24}z_{1}^{r-1}\,, \tag{139}\]

which agrees with our calculation in example 4.17 and the result of proposition 4.16. In fact, in general, we recognise the result of corollary 6.14 as precisely the principal part predicted by conjecture 4.15 for all Family I spectral curves without the \(\hbar\) deformations! This gives the following result.

**Corollary 6.15**.: _Assuming conjecture 4.15 is true, the correlators produced by Family I spectral curves with no \(\hbar\) deformations are calculated by definition 4.2._

### Topological recursion for Atlantes Hurwitz numbers

To summarize the main conclusions of the previous section, namely proposition 6.13, let us introduce differentials for Atlantes Hurwitz numbers as in theorem 6.2:

\[\omega_{g,n}^{\infty}=d_{1}\cdots d_{n}H_{g,n}+\delta_{g,0}\delta_{n,2}\frac{dX_{1}dX_{2}}{(X_{1}-X_{2})^{2}}. \tag{140}\]

Then theorem 6.2 and proposition 6.13 state that:

* The \(\omega_{g,n}^{\infty}\) are meromorphic differentials on \(\mathbb{P}^{1}\).
* The \(\omega_{g,n}^{\infty}\) satisfy the linear and quadratic loop equations for \(\mathcal{S}_{\infty}=(\mathbb{P}^{1},ze^{-z^{r}},e^{z^{r}},\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}})\).
* The \(\omega_{g,n}^{\infty}\) for \(2g-2+n>0\) and \(n\geq 2\) satisfy the projection property, and hence have poles only at the finite ramification points of the function \(x=ze^{-z^{r}}\).
* The \(\omega_{g,1}^{\infty}\) may have poles at both the finite ramification points and the essential singularity of \(x\) at infinity.

We also know from proposition 6.9 that the wave function \(\psi_{\infty}\) constructed from the \(\omega_{g,n}^{\infty}\) satisfies the differential equation

\[(\hat{y}-e^{(\hat{x}\hat{y})^{r}})\psi_{\infty}=0. \tag{141}\]

With this information we can prove that the \(\omega_{g,n}^{\infty}\) are the correlators constructed from topological recursion (definition 4.2) on the transalgebraic spectral curve \(\mathcal{S}_{\infty}=(\mathbb{P}^{1},ze^{-z^{r}},e^{z^{r}},\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}})\).

**Theorem 6.16**.: _Let \(\omega_{g,n}^{\infty}\) be the differentials for Atlantes Hurwitz numbers as in (140)._
_Then they satisfy topological recursion (definition 4.2) on the transalgebraic spectral curve \(\mathcal{S}_{\infty}=(\mathbb{P}^{1},ze^{-z^{r}},e^{z^{r}},\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}})\)._

Proof.: On the one hand, let \(\omega_{g,n}^{\infty}\) be the differentials defined by (140). For \(n\geqslant 2\), since they satisfy the projection property, we know that they satisfy the usual topological recursion formula with residues only at the finite ramification points of \(x\). However, the \(\omega_{g,1}^{\infty}\) do not, since they do not satisfy the projection property.

On the other hand, let \(\tilde{\omega}_{g,n}^{\infty}\) be the correlators defined by topological recursion on the transalgebraic spectral curve \(\mathcal{S}_{\infty}\). By proposition 4.12, we know that the correlators \(\tilde{\omega}_{g,n}^{\infty}\) for \(n\geqslant 2\) satisfy the usual topological recursion formula with residues only at the finite ramification points of \(x\), just like the \(\omega_{g,n}^{\infty}\) for \(n\geqslant 2\). However, the \(\tilde{\omega}_{g,1}^{\infty}\) do not, as there are contributions from the essential singularity of \(x\) at infinity.

To establish that \(\omega_{g,n}^{\infty}=\tilde{\omega}_{g,n}^{\infty}\) for all \(g\) and \(n\), we proceed by induction, using the fact that the wave function \(\psi_{\infty}\) constructed from the \(\omega_{g,n}^{\infty}\) and the wave function \(\tilde{\psi}_{\infty}\) constructed from the \(\tilde{\omega}_{g,n}^{\infty}\) satisfy the same quantum curve equation (141).

The base case is obvious, since \(\omega_{0,1}^{\infty}=\tilde{\omega}_{0,1}^{\infty}\) and \(\omega_{0,2}^{\infty}=\tilde{\omega}_{0,2}^{\infty}\). So we proceed with the induction step. Assume that \(\omega_{g^{\prime},n^{\prime}}^{\infty}=\tilde{\omega}_{g^{\prime},n^{\prime}}^{\infty}\) for all \(g^{\prime},n^{\prime}\) such that \(2g^{\prime}-2+n^{\prime}<k\). Then we show that \(\omega_{g,n}^{\infty}=\tilde{\omega}_{g,n}^{\infty}\) for all \(g,n\) such that \(2g-2+n=k\). For the correlators with \(n\geqslant 2\), this is clear, since they are constructed from the same topological recursion formula from the same lower order correlators. As for the correlators \(\omega_{g,1}^{\infty}\) and \(\tilde{\omega}_{g,1}^{\infty}\), we conclude that they must be equal from the fact that \(\psi_{\infty}\) and \(\tilde{\psi}_{\infty}\) satisfy the same quantum curve equation. Indeed, the quantum curve equation must be satisfied order by order in \(\hbar\), and up to a given order \(O(\hbar^{k})\), only correlators \(\omega_{g^{\prime},n^{\prime}}^{\infty}\) with \(2g^{\prime}-2+n^{\prime}\leqslant k\) contribute to the equation. Thus, up to \(O(\hbar^{k})\), we know that all correlators contributing to the equation are the same, except potentially for \(\omega_{g,1}^{\infty}\) and \(\tilde{\omega}_{g,1}^{\infty}\). But as the quantum curve equation is the same for both wave functions, we conclude that \(\omega_{g,1}^{\infty}=\tilde{\omega}_{g,1}^{\infty}\).

**Corollary 6.17**.: _Conjecture 4.15 holds for the \(r\)-Atlantes curves._

Proof.: This follows directly from corollary 6.14 and the previous theorem.

### Relation between the Atlantes and \(r\)-completed cycles Hurwitz quantum curves

Recall table 1, which summarizes the relations between spectral curves, quantum curves, and topological recursion for \(r\)-completed cycles and Atlantes Hurwitz numbers. The spectral curves are very similar, differing only by the choice of Riemann surface.
As the functions \(x\) and \(y\) are the same, they satisfy the same relation \(P(x,y)=y-e^{x^{\tau}y^{\tau}}=0\). Indeed, the two quantum curves are both quantisations of this relation. In this section we study in more details the relation between these two quantum curves. Let \[\hat{P}_{\infty}(\hat{x},\hat{y};h)=\hat{g}-e^{(\hat{g}g)^{\tau}} \tag{142}\] be the quantum curve for Altantes Hurwitz numbers with wave function \(\psi_{\infty}\), and \[\hat{P}(\hat{x},\hat{y};h)=\hat{g}-\hat{x}^{1/2}e^{\frac{1}{r+1} \sum_{i=-0}^{r}\hat{x}^{-1}(\hat{x}\hat{g})^{i}k(\hat{x}\hat{g})^{\tau-i}} \hat{x}^{-1/2} \tag{143}\] be the quantum curve for \(r\)-completed cycles Hurwitz numbers, with wave-function \(\psi\). Clearly, in general, this is not the same quantum curve. However, we can observe an interesting relation between the two results that was first noticed in a more limited form in [10]. Let \(\hat{Y}=\hat{x}\hat{g}\), and assume there exists an operator \[\hat{H}=\exp\left(\sum_{n=1}^{r}h^{r-n}h_{n}\hat{Y}^{n}\right), \tag{144}\] such that \(\hat{\mathsf{H}}\psi_{\infty}=\psi\). Then we find, using that \(\hat{\kappa}^{-1}\hat{\gamma}\hat{\kappa}=\hat{\gamma}+h\), so \(\hat{\kappa}^{-1}\hat{\mathsf{H}}\hat{\kappa}\) commutes with \(\hat{\gamma}\), \[0 =\hat{\mathsf{H}}0=\hat{\mathsf{H}}(\hat{\gamma}-\hat{\kappa}\hat{ \kappa}\hat{\kappa}^{\gamma^{\prime}})\psi_{\infty} \tag{145}\] \[=\big{(}\hat{\gamma}-\hat{\kappa}\hat{\kappa}\hat{\kappa}^{\gamma ^{\prime}}\hat{\mathsf{H}}^{-1}\big{)}\psi\] \[=\big{(}\hat{\gamma}-\hat{\kappa}\hat{\kappa}^{\gamma^{\prime}} \hat{\kappa}^{-1}\hat{\mathsf{H}}\hat{\kappa}\hat{\mathsf{H}}^{-1}\big{)}\psi.\] Ergo, our operator \(\hat{\mathsf{H}}\) is the solution to \[\hat{\kappa}\hat{\kappa}^{\gamma^{\prime}}\hat{\kappa}^{-1}\hat{ \mathsf{H}}\hat{\kappa}\hat{\mathsf{H}}^{-1} =\hat{\kappa}^{3/2}\mathrm{e}^{\frac{1}{\tau+1}\sum_{i=0}^{\tau }\hat{\kappa}^{-1}\hat{\kappa}\hat{\gamma}^{i}-\hat{\kappa}}\hat{\kappa}^{-1/2}\] \[\hat{\kappa}^{-1/2}\mathrm{e}^{\hat{\gamma}^{\prime}}\mathrm{e}^ {\sum_{n=1}^{\tau}\hat{\mathsf{h}}^{\tau-n}\mathsf{h}_{n}\big{(}\hat{\gamma} +\hat{\kappa}\big{)}^{n}}\mathrm{e}^{-\sum_{n=1}^{\tau}\hat{\mathsf{h}}^{-n} \mathsf{h}_{n}\hat{\gamma}^{n}}\hat{\kappa}^{1/2} =\mathrm{e}^{\frac{1}{\tau+1}\sum_{i=0}^{\tau}(\hat{\gamma}+h)^{ i}\hat{\gamma}^{\tau-i}}\] \[\mathrm{e}^{(\hat{\gamma}+\frac{h}{2})^{\tau}}\mathrm{e}^{\sum_ {n=1}^{\tau}\mathsf{h}^{-n}\mathsf{h}_{n}\big{(}(\hat{\gamma}+\frac{3h}{2})^{n }-(\hat{\gamma}+\frac{h}{2})^{n}\big{)}} =\mathrm{e}^{\frac{1}{\tau+1}\sum_{i=0}^{\tau}(\hat{\gamma}+h)^{ i}\hat{\gamma}^{\tau-i}}\] \[\big{(}\hat{\gamma}+\frac{h}{2}\big{)}^{\tau}+\sum_{n=1}^{\tau} \mathsf{h}^{r-n}\mathsf{h}_{n}\Big{(}\big{(}\hat{\gamma}+\frac{3h}{2}\big{)}^{ n}-\big{(}\hat{\gamma}+\frac{h}{2}\big{)}^{n}\Big{)} =\frac{1}{\tau+1}\sum_{i=0}^{\tau}(\hat{\gamma}+h)^{i}\hat{\gamma}^{\tau-i}\] \[\sum_{n=1}^{\tau}\mathsf{h}_{n}\sum_{j=0}^{n}\binom{n}{j}\Big{(} \big{(}\frac{3}{2}\big{)}^{n-j}-\big{(}\frac{1}{2}\big{)}^{n-j}\Big{)}\mathsf{ h}^{r-j}\hat{\gamma}^{j} =\frac{1}{\tau+1}\sum_{i=\tau-j}^{\tau}\binom{i}{\tau-j}-\binom{ r}{j}\big{(}\frac{1}{2}\big{)}^{r-j}\Big{)}\mathsf{h}^{r-j}\hat{\gamma}^{j}\] \[\sum_{n=j}^{\tau}\mathsf{h}_{n}\binom{n}{j}\Big{(}\big{(}\frac{3} {2}\big{)}^{n-j}-\big{(}\frac{1}{2}\big{)}^{n-j}\Big{)} =\frac{1}{\tau+1}\binom{\tau+1}{j}-\binom{r}{j}\big{(}\frac{1}{2} \big{)}^{r-j}\,,\quad 0\leqslant j\leqslant\tau\,.\] This can be solved recursively from \(j=r\), and the first few numbers are \[h_{r} =0\,, \tag{147}\] 
\[h_{r-1} =\frac{1}{24}r=\frac{1}{24}\binom{r}{1}\,,\] (148) \[h_{r-2} =-\frac{1}{16}r(\tau-1)=-\frac{1}{8}\binom{r}{2}\,. \tag{149}\] Such an \(\hat{\mathsf{H}}\) therefore exists and is unique (up to an arbitrary multiplicative constant). In particular, \(\hat{\mathsf{H}}\neq 1\) except in the case when \(r=1\), where it can be seen that (142) and (143) agree. The degree of the operator is no surprise; a degree \(r-1\) operator is precisely the degree needed to reduce all contributions from the essential singularity to constants by proposition 4.14 and the fact that all contributions from the essential singularity vanish for \(r=1\) by corollary 4.11. ## 7. Conclusion and open questions In this paper, we defined topological recursion for transalgebraic spectral curves via limits of sequences of meromorphic spectral curves. We studied properties of the topological recursion on transalgebraic spectral curves and proved the topological recursion/quantum curve correspondence for a subclass of transalgebraic spectral curves. As a particular example, we used our formalism to show that generating series for Atlantes Hurwitz numbers satisfy topological recursion on a transalgebraic spectral curve. This has only been a first investigation of topological recursion on transalgebraic spectral curves; further research could shed more light on this new extension. In particular, we give a number of open questions below: * We have an explicit conjecture for the contribution of the essential singularities to topological recursion on transalgebraic spectral curves, conjecture 4.15, with substantial evidence. However, we have not been able to prove this yet. * We defined topological recursion on transalgebraic spectral curves as a specific limit of topological recursion on meromorphic spectral curves. However, it would be more satisfactory to have a direct definition without resorting to limits. A partial answer in this direction is given by lemma 4.5, but even this does not mimic the definition of the original Eynard-Bouchard formalism. It would be very satisfying to prove the correlators satisfy an analogous formula with a sum over each local deck transformation group and a contour integral around each essential singularity. This approach faces significant challenges, as the sum over each deck transformation group must be defined in terms of a principal value, and the resulting integrand of the contour integral may have a non-isolated singularity at each infinite ramification point. * For a given transalgebraic spectral curve, the quantum curve that we construct a priori seems to depend on the particular sequence of meromorphic spectral curves considered. In the particular case of Atlantes Hurwitz numbers, we showed that all of the quantum curves constructed this way are equivalent, but we do not know if this property holds in general. * For topological recursion on meromorphic spectral curves, the expansion of the correlators \(\omega_{g,n}\) at the ramification points has a nice interpretation in terms of intersection theory of the moduli spaces of curves [15, 16]. Is there a similar interpretation for the expansion of the correlators associated to a transalgebraic spectral curve at the exponential singularities? 
* Following up on the previous point, in cases where the correlators \(\omega_{g,n}\) constructed from topological recursion on a meromorphic spectral curve have an interpretation in Hurwitz theory, the relation between the expansion of the correlators at punctures of the curve and at ramification points give rise to 'ELSV-type formulae' for Hurwitz numbers. Do similar formulae hold for Atlantes Hurwitz numbers, using topological recursion on transalgebraic spectral curves? * The quantum curve for Atlantes Hurwitz numbers looks simpler than the one for \(r\)-completed cycles Hurwitz numbers, which has nearly the same spectral curve, but excludes the essential singularity. On the other hand, \(r\)-completed cycles have natural relations to cohomological field theory, via Chiodo classes [14, 15, 16], and to Gromov-Witten theory of curves [17, 18], where the 'completion' of the cycles seems related to the boundary of \(\overline{M}_{g,n}\). It would be interesting to see if Atlantes Hurwitz numbers admit a similar interpretation. * Using the Airy structure approach pioneered by Kontsevich and Soibelman [19, 2], topological recursion on meromorphic spectral curves can be reformulated in terms of representation theory of \(\mathcal{W}\)-algebras [1]. It would be interesting to investigate whether topological recursion on transalgebraic spectral curves has a similar reformulation in terms of \(\mathcal{W}\)-algebras. ## Appendix A Extensions of [1] to the transalgebraic case In this appendix we extend the results of [1] on quantum curves in a way that make them suitable for the construction of quantum curves for transalgebraic spectral curves. For the sake of maintaining a reasonable level of brevity, this appendix will not be self contained and will frequently reference [1]. To avoid confusion, we will refer explicitly to the analogues of what we are doing in [1] when possible. However, in [1] everything was indexed starting from the degree of the curve which for us may be infinite;19 therefore, we will have to re-index virtually all objects considered in [1], where infinite degree curves were not a consideration. Footnote 19: For us, recall, any curve that does not have a well-defined finite degree is of infinite degree. Let \(P(x,y)=0\) be the (trans)algebraic equation corresponding to a compact spectral curve \(\mathcal{S}\), with Newton polygon \(\Delta\) (recall definition 5.3), and set the notation \[P(x,y)=\sum_{i=0}^{d}q_{i}(x)y^{i}=\sum_{(i,j)\in N^{2}}\alpha_{i,j}y^{i}x^{j}\,, \tag{150}\] where \(d\) is the degree of the curve (which may be infinite if the curve is transalgebraic) and the \(q_{i}\) correspond to reindexed versions of the \(p_{m}\) in [1, Remark 2.2]. The following definitions will be necessary to explain the construction of quantum curves. **Definition A.1**.: For \(m=0,\dots,d\) define \[Q_{m}(x,y)=\sum_{i=1}^{d-m-1}q_{m++1}(x)y^{i}\,. \tag{151}\] **Definition A.2**.: Given \(m=0,\dots,d\) denote \[\alpha_{m}=\inf\{a\,|\,(a,m)\in\Delta\}\,,\qquad\beta_{m}=\sup\{a\,|\,(a,m)\in \Delta\}\,. \tag{152}\] The \(\alpha_{m}\) and \(\beta_{m}\) correspond directly to [14, Definition 2.3], whereas the \(Q_{m}\) correspond to reindexed versions of the \(P_{m}\) in [14, Definition 2.5]. Given that we now have a couple of notations that will be critical to our construction of quantum curves, let us consider an example. 
**Example A.3**.: Consider the spectral curve \[\delta=\left(\mathbb{P}^{1},\quad x(z)=z+1/z,\quad y(z)=z^{2},\quad B=\frac{dz_ {1}dz_{2}}{(z_{1}-z_{2})^{2}}\right)\,. \tag{153}\] The functions \(x\) and \(y\) satisfy the degree two polynomial equation: \[P(x,y)=y^{2}+(2-x^{2})y+1=0\,.\] Here the non-zero \(q_{m}\) and \(Q_{m}\) are \[q_{0}(x)=1,\quad q_{1}(x)=2-x^{2},\quad q_{2}(x)=1,\quad Q_{0}(x,y)=y=z^{2}\,. \tag{154}\] The Newton polygon \(\Delta\) is the hull of \(\{(0,0),(0,1),(0,2),(2,1)\}\), which means that this spectral curve is not regular as \((1,1)\) is an interior point. From \(\Delta\) we also write down the \(\alpha_{m}\) and \(\beta_{m}\) for illustrative purposes \[\alpha_{0}=0\,,\quad\alpha_{1}=0\,,\quad\alpha_{2}=0\,,\quad\beta_{0}=0\,, \quad\beta_{1}=2\,,\quad\beta_{2}=0\,. \tag{155}\] In this simple case, the infimum and supremum in the definition of the \(\alpha_{m}\) and \(\beta_{m}\), respectively, are actually achieved by points in \(\lambda\); however, in general, this may not be the case and the \(\alpha_{m}\) and \(\beta_{m}\) could take on non-integer values. We now define analogues of the \(C_{k}\) and \(D_{k}\) that appear in [14, Equations (5.31) & (5.34)]. Let \(x=x(z)\) and \(x_{i}=x(z_{i})\) for \(z,z_{i}\in C\) and \(i\in\mathbb{Z}_{\geqslant 1}\). **Definition A.4**.: Let \(b\in C\) be a pole of \(dx\) where all the \(\omega_{g,n}\) are holomorphic and \(x\) is meromorphic. We define \[E_{i}=-\lim_{z_{1}\to b}\frac{Q_{i-1}(x,y)}{x^{\lfloor\alpha_{\mathrm{d}- \mathrm{k}}\rfloor+1}}\,,\qquad F_{i}=\hbar\frac{x^{\lfloor\alpha_{i}\rfloor}} {x^{\lfloor\alpha_{\mathrm{d}-\mathrm{k}}\rfloor}}\frac{d}{dx}\,, \tag{156}\] where \(\lfloor\cdot\rfloor\) is the floor function. With these definitions out of the way we can construct quantum curves for compact transalgebraic admissible regular spectral curves. **Theorem A.5**.: _Let \(\delta=(\Sigma,x,y,\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}})\) be a compact transalgebraic admissible regular spectral curve. Let \(b\) be a pole of \(dx\) at which the \(\omega_{g,n}\) are holomorphic. Then the wave-function_ \[\psi(z;b)=\exp\left[\sum_{n=1}^{\infty}\sum_{g=0}^{\infty}\frac{h^{2g+n-2}}{n! }\int_{b}^{z}\cdots\int_{b}^{z}\left(\omega_{g,n}-\delta_{n,2}\delta_{g,0}\frac {dx(z_{1})dx(z_{2})}{(x(z_{1})-x(z_{2}))^{2}}\right)\right] \tag{157}\] _satisfies the differential equation_ \[\left(\frac{q_{0}(x)}{x^{\lfloor\alpha_{0}\rfloor}}+\sum_{i=1}^{d}F_{1}F_{2} \cdots F_{i-1}\frac{q_{i}(x)}{x^{\lfloor\alpha_{i}\rfloor}}F_{i}+\hbar\sum_{ i=1}^{d-1}E_{i}F_{1}F_{2}\cdots F_{i-1}\frac{x^{\lfloor\alpha_{i}\rfloor}}{x^{ \lfloor\alpha_{i-1}\rfloor}}\right)\psi(z;b)=0\,. \tag{158}\] _(Note that \(d\) may be infinite if the curve is transalgebraic.)_ Proof.: Since \(\delta\) is regular, it has genus zero, and thus we can take \(\Sigma=\mathbb{P}^{1}\). Let \(x=M_{0}\exp(M_{1})\), \(y=M_{2}/x\) with \(M_{0},M_{1},M_{2}\) rational functions on \(\mathbb{P}^{1}\). Consider the sequence of compact, meromorphic spectral curves: \[\delta^{N}=\left(\mathbb{P}^{1},\quad x^{N}=M_{0}\left(1+(\tau-1)\frac{M_{1}}{ N}\right)^{-N}\left(1+\tau\frac{M_{1}}{N}\right)^{N}\,,\quad y^{N}=\frac{M_{2}}{x^{N}}, \quad B=\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}}\right)\,.\] By assumption, those spectral curves are all regular, so we can apply [14, Lemma 5.14]. As the \(\omega_{g,n}^{N}\) converge to the correlators \(\omega_{g,n}\), if the \(\omega_{g,n}\) are regular at \(b\), then the \(\omega_{g,n}^{N}\) must also be regular for large enough \(N\). 
We then define \(E_{i}^{N}\), \(F_{i}^{N}\), and \(\psi^{N}(z;b)\) in the natural fashion. We quickly see that \(\psi^{N}(z;b)\to\psi(z;b)\) as the exponential is continuous, the sum is formal, and the \(\omega_{g,n}^{N}\) are well-defined in the limit so we can bring the limit inside the integrals using dominated convergence. The fact that \(F_{i}^{N}\to F_{i}\) is clear, as the Newton polygon will converge. Finally, we must deal with the \(E_{i}^{N}\). In [14] it was argued, using arguments based on an inequality of divisors, that the \(C_{i}\) (which correspond to re-indexed \(E_{i}\)) must be finite as \(z_{1}\to b\). The argument will carry over in the limit as \(N\to\infty\) as 1. \(x\) is meromorphic near \(b\) by assumptions on \(b\); 2. as \(x^{N}\) will be uniformly convergent away from \(x=\infty\), it will be uniformly convergent, in particular, near \(b\); 3. the required inequalities are non-strict. Finally, as already noted, near \(b\), \(x^{N}\) is uniformly convergent so by the Moore-Osgood theorem, \[\lim_{N\to\infty}\lim_{z_{1}\to b}\frac{Q_{1-\mathrm{i}}^{N}(x_{1},y_{1})}{(x^ {N}-x_{1}^{N})(x_{1}^{N})^{|\alpha_{d-k}^{N}|}}=\lim_{z_{1}\to b}\lim_{N\to \infty}\frac{Q_{1-\mathrm{i}}^{N}(x_{1},y_{1})}{(x^{N}-x_{1}^{N})(x_{1}^{N})^{ |\alpha_{d-k}^{N}|}}\,, \tag{159}\] at which point we may just take the limit, concluding that \(E_{i}^{N}\to E_{i}\) and the \(E_{i}\)'s are not identically infinity. This theorem therefore gives us a canonical way of creating a quantum curve for compact transalgebraic admissible spectral curves that are regular. In particular, we do not actually have to construct a quantum curve for each finite \(N\) and take the limit; the existence of such a sequence of curves guarantees we can construct the quantum curve directly from the limiting curve. It is important to note that if \(d=\infty\), the constructed quantum curve need not be simple, in contrast to the \(d<\infty\) case. _Remark A.6_.: While we may construct the quantum curve directly from the limiting curve as \(N\to\infty\), different sequences of curves \(\mathcal{S}_{N}\) may yield different presentations for the limiting curve \(\mathcal{S}\). What do we mean by that? The limiting curve is given by a quadruple \((\mathbb{P}^{1},x,y,B)\). But to extract the quantum curve, one needs to write down the relation \(P(x,y)=0\) satisfied by the functions \(x\) and \(y\). This relation can be written in different ways, and they may produce a priori different quantum curves via theorem A.5. To make this statement clear, consider the spectral curve \(\mathcal{S}\) of example 3.8, with \(M_{0}=z\), \(M_{1}=-z^{r}\) and \(M_{2}=z\), and the usual \(\tau\)-dependent sequence of curves \(\mathcal{S}_{N}\). Theorem A.5 gives a quantum curve that can be read off directly from the equation \(P(x,y)=0\) satisfied by the functions of \(\mathcal{S}\), but the way this equation is presented depends on \(\tau\). More precisely, the relation \(P(x,y)=0\) used to extract the quantum curve should be written as \[P(x,y)=ye^{\tau(xy)^{\tau}}-e^{(\tau-1)(xy)^{\tau}}\,. \tag{160}\] As a result, the quantum curve may a priori depend on \(\tau\), that is, on the choice of sequence used to construct the limiting curve. This is not too surprising as for each choice of \(\tau\) the entire function \(P\) is different, and yet all such \(P\) correspond to the same spectral curve. 
Thus the fact that the limiting quantum curve may depend on the choice of \(\tau\) should not be seen as an artefact of defining things in terms of limits, but an actual degeneracy in the choice of \(P\) that exists in the limit.

As in [1], the choice of integration divisor can be generalised from \(D=[z]-[b]\) in an analogous way to the generalisation presented in [1, Remark 5.15]; the key steps of the proof carry over virtually without modification. However, choosing one's base point to be a pole of \(dx\) is inconvenient when \(dx\) has no pole, a case that may arise when \(x\) has an essential singularity. In [1], the authors considered the case of the base point \(b\) being a zero of \(q_{d}(x)\), i.e. \(q_{d}(x(b))=0\), but only for \(d=2\). Here, we generalise this choice to the case \(\infty>d>2\) and then use it to construct quantum curves with this base point. We begin this process with a lemma before proving a theorem analogous to theorem A.5.

**Lemma A.7**.: _For \(b\) a zero of \(q_{d}(x)\) that is not in the ramification locus of \(x\),_ \[\psi_{i}(x(b);z;b)=\psi(z;b)\lim_{z_{1}\to b}\frac{1}{x(z_{1})^{\lfloor\alpha_{d-i}\rfloor}}Q_{d-i-1}(x(z_{1}),y(z_{1}))\,, \tag{161}\] _where the \(\psi_{i}\) are defined in [1, Definition 5.9]._

Proof.: From [1, Definition 5.9] \[\psi_{i}(x(b);z;b)=\psi(z;b)\lim_{z_{1}\to b}\left(\frac{1}{x(z_{1})^{\lfloor\alpha_{d-i}\rfloor}}\big{(}q_{d}(x(z_{1}))\xi_{i}(x(z_{1});D)-q_{d-i}(x(z_{1}))\big{)}\right)\,. \tag{162}\] Using the notation of [1] and substituting in the definition of the \(\xi_{k}\), [1, Definition 5.6], \[q_{d}(x(z_{1}))\xi_{i}(z_{1})=(-1)^{i}q_{d}(x(z_{1}))\sum_{n=0}^{\infty}\sum_{g=0}^{\infty}\frac{\hbar^{2g+n}}{n!}\frac{G_{g,n+1}^{(i)}(z_{1})}{dx(z_{1})^{i}}\,, \tag{163}\] where the \(G_{g,n+1}^{(i)}\) are defined in [13, Definition 5.3]. First we examine the power \(\hbar^{0}\). Here we have, where the \(U_{0,1}^{(i)}\) are defined in [13, Definition 4.1], \[(-1)^{i}q_{d}(x(z_{1}))\frac{G_{0,1}^{(i)}(z_{1})}{dx(z_{1})^{i}}=(-1)^{i}q_{d}(x(z_{1}))\frac{U_{0,1}^{(i)}(z_{1})}{dx(z_{1})^{i}}=Q_{d-i-1}(x(z_{1}),y(z_{1}))+q_{d-i}(x(z_{1}))\,. \tag{164}\] Note that we then have the inequality of divisors ([13, Lemma 2.6]) \[\operatorname{div}(Q_{d-i-1}(x,y))\geqslant\alpha_{d-i}\operatorname{div}_{0}(x)-\beta_{d-i}\operatorname{div}_{\infty}(x)\,. \tag{165}\] Therefore the limit \[\lim_{z_{1}\to b}\frac{1}{x(z_{1})^{\lfloor\alpha_{d-i}\rfloor}}Q_{d-i-1}(x(z_{1}),y(z_{1}))\,, \tag{166}\] is finite. This is in agreement with the results of [13, Section 5.3.2] for \(d=2\), as in that case \(Q_{0}(x,y)=q_{d}(x)y\). Now we examine the higher order powers of \(\hbar\). As \(b\) is not in the ramification locus of \(x\), each \(G_{g,n+1}^{(i)}(z_{1})\) is regular at \(b\) for \(2g+n\geqslant 1\). Furthermore, \(b\) cannot be a zero of \(dx\), so for each \(i\), \[\frac{G_{g,n+1}^{(i)}(z_{1})}{dx(z_{1})^{i}} \tag{167}\] is regular at \(b\). Ergo, if \(b\) is not a zero of \(x\), the terms of higher order in \(\hbar\) never contribute. Assume then that \(b\) is a simple zero of \(x\); we claim that still \[\lim_{z_{1}\to b}\frac{q_{d}(x(z_{1}))}{x(z_{1})^{\lfloor\alpha_{d-i}\rfloor}}=0\,, \tag{168}\] for \(i=1,\ldots,d-1\). As our curve is irreducible, there is some \(k=0,\ldots,d-1\) with \(q_{k}(0)\neq 0\), as we could otherwise cancel out an overall factor of \(x\) in \(P(x,y)\). Let \(k_{1}\) and \(k_{2}\) be the minimum and maximum such \(k\), respectively. By convexity, \(\alpha_{k}=0\) if and only if \(k_{1}\leqslant k\leqslant k_{2}\). 
Then, as \(\alpha_{m}\) is the smallest first coordinate on the convex hull at the power \(y^{m}\), the \(\alpha_{m}\) are strictly increasing for \(m\geqslant k_{2}\) and strictly decreasing for \(m\leqslant k_{1}\). Finally, \(\alpha_{0}\) and \(\alpha_{d}\) will be non-negative integers. Furthermore, \(\alpha_{0}\leqslant\alpha_{d}\), as, if this inequality did not hold, \((1,k_{1})\) would be an interior point of the Newton polygon. Thus, we have \(\alpha_{d}=\lfloor\alpha_{d}\rfloor>\lfloor\alpha_{m}\rfloor\) for all \(d>m>0\). This establishes (168), as the order of the zero of \(q_{d}(x)\) at \(x=0\) is \(\alpha_{d}\). So we get that the \(\hbar\) corrections vanish and we have the explicit expressions \[\psi_{i}(x(b);z;b)=\psi(z;b)\lim_{z_{1}\to b}\left(\frac{1}{x(z_{1})^{\lfloor\alpha_{d-i}\rfloor}}Q_{d-i-1}(x(z_{1}),y(z_{1}))\right)\,, \tag{169}\] as claimed. 

This gives a theorem analogous to theorem A.5, except with this new choice of base point. First, we define the new coefficients \(G_{i}\) and \(H_{i}\), \[G_{i}=\lim_{z_{1}\to b}\frac{1}{x(z_{1})^{\lfloor\alpha_{i}\rfloor}}Q_{i-1}(x(z_{1}),y(z_{1})),\quad H_{i}=\hbar\frac{x^{\lfloor\alpha_{i}\rfloor}}{x^{\lfloor\alpha_{i-1}\rfloor}}\left(\frac{d}{dx}-\frac{1}{x-x(b)}\right)\,. \tag{170}\] Then, [13, Theorem 5.11] reduces to \[\hbar\frac{d}{dx}\psi_{i-1}(x;z;b)=\frac{x^{\lfloor\alpha_{d-i}\rfloor}}{x^{\lfloor\alpha_{d-i+1}\rfloor}}\psi_{i}(x;z;b)-\frac{q_{d-i+1}(x)x^{\lfloor\alpha_{d-1}\rfloor}}{q_{d}(x)x^{\lfloor\alpha_{d-i+1}\rfloor}}\psi_{1}(x;z;b)+\hbar\frac{1}{x-x(b)}\left(\psi_{i-1}(x;z;b)-G_{d-i+1}\psi(z;b)\right)\,. \tag{171}\] We can now derive a quantum curve in the manner of [13, Lemma 5.14].

**Theorem A.8**.: _Let \(\mathcal{S}=(\Sigma,x,y,\frac{dz_{1}dz_{2}}{(z_{1}-z_{2})^{2}})\) be a compact transalgebraic admissible regular spectral curve. Let \(b\) be a zero of \(q_{d}(x)\) for \(d<\infty\) or, if \(d=\infty\), a zero of \(x\), with \(b\) not in the ramification locus of \(x\). Then the wave-function_ \[\psi(z;b)=\exp\left[\sum_{n=1}^{\infty}\sum_{g=0}^{\infty}\frac{\hbar^{2g+n-2}}{n!}\int_{b}^{z}\cdots\int_{b}^{z}\left(\omega_{g,n}-\delta_{n,2}\delta_{g,0}\frac{dx(z_{1})dx(z_{2})}{(x(z_{1})-x(z_{2}))^{2}}\right)\right] \tag{172}\] _satisfies the differential equation_ \[\left(\frac{q_{0}(x)}{x^{\lfloor\alpha_{0}\rfloor}}+\sum_{i=1}^{d}H_{1}\cdots H_{i-1}\frac{q_{i}(x)}{x^{\lfloor\alpha_{i}\rfloor}}F_{i}+\hbar\sum_{i=1}^{d-1}G_{i}H_{1}\cdots H_{i-1}\frac{x^{\lfloor\alpha_{i}\rfloor}}{x^{\lfloor\alpha_{i-1}\rfloor}(x-x(b))}\right)\psi(z;b)=0\,. \tag{173}\]

Proof.: First assume \(d<\infty\). 
Rewriting (171), \[\psi_{i}(x;z;b)=H_{d-i+1}\psi_{i-1}(x;z;b)+\frac{q_{d-i+1}(x)}{x^{\lfloor\alpha_{d-i+1}\rfloor}}F_{d-i+1}\psi(z;b)+\hbar\frac{x^{\lfloor\alpha_{d-i+1}\rfloor}}{x^{\lfloor\alpha_{d-i}\rfloor}(x-x(b))}G_{d-i+1}\psi(z;b)\,, \tag{174}\] where we used the fact that (by [1, Lemma 5.10]) \[\psi_{1}(x;z;b)=\frac{q_{d}(x)}{x^{\lfloor\alpha_{d-1}\rfloor}}\hbar\frac{d}{dx}\psi(z;b)\,.\] We can substitute the \(i=d-1\) result into the \(i=d\) result to obtain \[\psi_{d}(x;z;b)=H_{1}\psi_{d-1}(x;z;b)+\frac{q_{1}(x)}{x^{\lfloor\alpha_{1}\rfloor}}F_{1}\psi(z;b)+\hbar\frac{x^{\lfloor\alpha_{1}\rfloor}}{x^{\lfloor\alpha_{0}\rfloor}(x-x(b))}G_{1}\psi(z;b)\\ =H_{1}H_{2}\psi_{d-2}(x;z;b)+H_{1}\frac{q_{2}(x)}{x^{\lfloor\alpha_{2}\rfloor}}F_{2}\psi(z;b)+\hbar H_{1}\frac{x^{\lfloor\alpha_{2}\rfloor}}{x^{\lfloor\alpha_{1}\rfloor}(x-x(b))}G_{2}\psi(z;b)\\ \qquad+\frac{q_{1}(x)}{x^{\lfloor\alpha_{1}\rfloor}}F_{1}\psi(z;b)+\hbar\frac{x^{\lfloor\alpha_{1}\rfloor}}{x^{\lfloor\alpha_{0}\rfloor}(x-x(b))}G_{1}\psi(z;b)\,. \tag{175}\] Applying this iteratively, before finally using the fact that (again by [1, Lemma 5.10]) \[\psi_{d}(x;z;b)=-\frac{q_{0}(x)}{x^{\lfloor\alpha_{0}\rfloor}}\psi(z;b)\,,\] yields the desired result. Taking the limit to get the \(d=\infty\) result is completely analogous to the \(d=\infty\) case of theorem A.5. 

## Appendix B Proof of proposition 6.13

In this appendix, we give the proof of proposition 6.13. All statements and proofs can be directly adapted from [1, Section 4.2]; in particular, [1, Lemmas 4.3–4.10] will be altered for our purposes. We will give all statements, together with a full proof of the analogue of [1, Lemma 4.3], and will indicate what has to be changed for the other ones. Denote \(d_{i}\coloneqq\deg P_{i}\), \(e_{i}\coloneqq\deg R_{i}\). As we only need to consider \(n=1\), let us specialise some formulae, omitting subscripts \(n=i=1\) where convenient. 
\[\bar{\mathcal{D}}f \coloneqq\sum_{s=0}^{\infty}\sum_{j=1}^{\infty}D^{j-1}\left(\frac{L_{s}^{j}}{Q}\,[u^{s}]\,\frac{e^{u(S(u\hbar QD)-1)\frac{R_{1}(z)}{R_{2}(z)}}}{u\hbar S(u\hbar)}\,f\right), \tag{176}\] \[L_{s}^{j} \coloneqq\left([v^{j}]\,(\partial_{y}+v\psi^{\prime}(y))^{s}\,e^{v\left(\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\right)\left(P_{1}(y)+\log\frac{P_{2}(y)}{P_{3}(y)}\right)}\right)\Big{|}_{y=\frac{R_{1}(z)}{R_{2}(z)}}\,, \tag{177}\] \[H_{g,1} =\tau_{g}^{1}+\tau_{g}^{2}+\tau_{g}^{3}+\mathrm{const}\,, \tag{178}\] \[\tau_{g}^{1} =[\hbar^{2g}]\,\hbar\,\bar{\mathcal{D}}1\,, \tag{179}\] \[\tau_{g}^{2} =[\hbar^{2g}]\sum_{j=1}^{\infty}D^{j-1}L_{0}^{j+1}D\frac{R_{1}(z)}{R_{2}(z)}\,, \tag{180}\] \[\tau_{g}^{3} =\left([u^{2g}]\frac{1}{S(u)}\right)\left(\partial_{y}^{2g-1}\left(P_{1}(y)+\log\left(\frac{P_{2}(y)}{P_{3}(y)}\right)\right)\right)\Big{|}_{y=\frac{R_{1}(z)}{R_{2}(z)}}\,. \tag{181}\] We also introduce the following notation: for some polynomials \(\tilde{P}_{j}(z)\), \[\psi(y(z))=\frac{\tilde{P}_{1}(z)}{\left(R_{2}(z)\right)^{d_{1}}}+\log\left(\frac{\tilde{P}_{2}(z)}{\tilde{P}_{3}(z)}\right), \tag{182}\] \[\tilde{Q}(z) \coloneqq R_{2}^{d_{1}+1}\tilde{P}_{2}\tilde{P}_{3}+z\Big{(}d_{1}R_{2}^{\prime}\tilde{P}_{1}\tilde{P}_{2}\tilde{P}_{3}-R_{2}\tilde{P}_{1}^{\prime}\tilde{P}_{2}\tilde{P}_{3}+R_{2}^{d_{1}+1}\tilde{P}_{2}\tilde{P}_{3}^{\prime}-R_{2}^{d_{1}+1}\tilde{P}_{2}^{\prime}\tilde{P}_{3}\Big{)} \tag{183}\] \[=c\prod_{j=1}^{N}(z-p_{j}) \tag{184}\] such that \[Q(z)=\frac{\tilde{Q}(z)}{R_{2}^{d_{1}+1}\tilde{P}_{2}\tilde{P}_{3}}\,. \tag{185}\] 

Now consider the pole contributions of the \(\tau_{g}^{j}\) separately, starting with \(\tau_{g}^{1}\). For a given power of \(\hbar\), the sum over \(s\) in equation (176) has an upper bound, as each \(u\) must come with at least \(\hbar^{2/3}\). For a similar reason, the sum over \(j\) is finite. As we also have that \[\left(\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\right)\left(P_{1}(y)+\log\frac{P_{2}(y)}{P_{3}(y)}\right) \tag{186}\] is a series in \(\hbar\) with coefficients rational functions in \(y\), \(\tau_{g}^{1}\) is a rational function in \(z_{1}\). Its set of possible poles consists of the \(p_{j}\), the zeros of \(\tilde{P}_{2}\), \(\tilde{P}_{3}\), and \(R_{2}\), and \(z=\infty\). We will show that the \(\tau_{g}^{j}\) have no poles at the zeros of \(\tilde{P}_{2}\), \(\tilde{P}_{3}\), and \(R_{2}\), and that \(\tau_{g}^{1}\) and \(\tau_{g}^{2}\) also have no poles at \(z=\infty\). 

**Lemma B.1**.: \(\tau_{g}^{1}\) _has no poles at the zeroes of \(R_{2}(z)\)._ 

Proof.: This is the analogue of [1, Lemma 4.3]. The only difference is that \[e^{v(S(v\hbar\partial_{y})-1)P_{1}(y)} \tag{187}\] should be replaced by \[e^{v\big{(}\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\big{)}P_{1}(y)}\,. \tag{188}\] Let us show how this works. Let \(B\) be a zero of \(R_{2}(z)\). For \(z\to B\), we have \(y(z)=R_{1}(z)/R_{2}(z)\to\infty\), and if \(B\) is a simple zero of \(R_{2}\), then it is a simple pole of \(y(z)\). 
Let us count the order of the pole of \[\sum_{s=0}^{\infty}\sum_{j=1}^{\infty}D^{j-1}\left(\frac{1}{Q}\Big{(}[v^{j}](\partial_{y}+v\psi^{\prime}(y))^{s}e^{v\big{(}\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\big{)}\big{(}P_{1}(y)+\log\frac{P_{2}(y)}{P_{3}(y)}\big{)}}\Big{)}\Big{|}_{y=\frac{R_{1}(z)}{R_{2}(z)}}\,[u^{s}]\frac{e^{u(S(u\hbar QD)-1)\frac{R_{1}(z)}{R_{2}(z)}}}{u\hbar S(u\hbar)}\right), \tag{189}\] at \(z_{1}=B\). To this end, two immediate observations are in order: 

* Firstly, note that \(e^{v\big{(}\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\big{)}\log\frac{P_{2}(y)}{P_{3}(y)}}\) does not contribute to the pole at infinity in \(y\), and, therefore, to the pole in \(z\) at \(z=B\), and can be safely ignored in this computation. 
* Secondly, note that \(Q^{-1}\) has a zero of order \(d_{1}+1\) at \(z=B\) and each application of \(D=Q^{-1}z\partial_{z}\) decreases the degree of the pole in \(z\) at \(B\) by \(d_{1}\). The total effect of the factor \(Q^{-1}\) and \(D^{j-1}\) is the decrease of the order of the pole by \(jd_{1}+1\). 

Therefore, the order of the pole of equation (189) is equal to the order of the pole at \(z=B\) of \[(z-B)\sum_{s=0}^{\infty}\left((\partial_{y}+v\psi^{\prime}(y))^{s}e^{v\big{(}\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\big{)}P_{1}(y)}\Big{|}_{y=\frac{R_{1}(z)}{R_{2}(z)}}[u^{s}]\frac{e^{u(S(u\hbar QD)-1)\frac{R_{1}(z)}{R_{2}(z)}}}{u\hbar S(u\hbar)}\right)\Big{|}_{v=(z-B)^{d_{1}}}^{\prime}\,, \tag{190}\] where by \(\big{|}^{\prime}\), we mean that we only select the terms with \(\deg_{v}\geqslant 1\) before the substitution \(v=(z-B)^{d_{1}}\). Note also that 

* Since \(y(z)\) has a simple pole at \(z=B\), each \(\partial_{y}\) decreases the order of pole in the resulting expression by 1. 
* Multiplication by \(\psi^{\prime}(y)\) increases the order of pole by \(d_{1}-1\). 

Taking into account these two observations and that each \(v\) factor decreases the order of pole by \(d_{1}\), we see that each application of the operator \(\partial_{y}+v\psi^{\prime}(y)\) decreases the order of pole in the resulting expression by 1. Therefore, the order of the pole of equation (190) is equal to the order of the pole at \(z=B\) of \[(z-B)\bigg{(}e^{v\big{(}\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\big{)}P_{1}(y)}\Big{|}_{y=\frac{R_{1}(z)}{R_{2}(z)}}\frac{e^{u(S(u\hbar QD)-1)\frac{R_{1}(z)}{R_{2}(z)}}}{u\hbar S(u\hbar)}\bigg{)}\Big{|}_{v=(z-B)^{d_{1}}}^{\prime\prime}\,, \tag{191}\] where by \(\big{|}^{\prime\prime}\) we mean that we only select the terms with \(\deg v\geqslant 1\) and regular in \(u\) before the substitutions \(v=(z-B)^{d_{1}}\), \(u=z-B\). 

Here, the first exponent is different from [12, Lemma 4.3], and this slightly changes the argument. In this first exponent, specifically in \(\frac{S(v\hbar\partial_{y})}{S(\hbar\partial_{y})}-1\), each \(\hbar\partial_{y}\) does not increase the order of the pole at \(z=B\), and actually decreases it ([12] only argues that \(v\hbar\partial_{y}\) does not increase it; we do not have or need the \(v\)); since \(vP_{1}(y)\) has no pole at \(z=B\), this means that the whole first exponential is regular. In the second exponent, in \(S(u\hbar QD)-1\), each application of \(u\hbar QD\) preserves the order of the pole at \(z=B\); since \(uR_{1}(z)/R_{2}(z)\) has no pole at \(z=B\), this means that the whole second exponential is regular. 
Finally, \((z-B)/(u\hbar S(u\hbar))\) is also regular at \(z=B\) in this expression. Thus, equation (191) is regular at \(z=B\), and therefore equation (189) is regular at \(z=B\) as well. 

**Lemma B.2**.: \(\tau^{1}_{g}\) _is regular at the zeros of \(\tilde{P}_{2}\) that are not zeros of \(R_{2}\)._ 

Proof.: Analogous to [12, Lemma 4.4]. The same kind of modification of the exponent should be applied to the left-hand side of [12, (143)], but the right-hand side and the rest of the proof hold verbatim. 

**Lemma B.3**.: \(\tau^{1}_{g}\) _is regular at the zeros of \(\tilde{P}_{3}\) that are not zeros of \(R_{2}\)._ 

Proof.: Analogous to [12, Lemma 4.5] in the same way that lemma B.2 is analogous to [12, Lemma 4.4]. 

**Lemma B.4**.: \(\tau^{1}_{g}\) _is regular at \(z=\infty\)._ 

Proof.: Analogous to [12, Lemma 4.6]. If \(e_{1}\leqslant e_{2}\), by the same degree counting, all parts are regular. If \(e_{1}>e_{2}\), we should again do the same substitution as at the start of the proof of lemma B.1. As here we also have that \(\hbar\partial_{y}\) decreases the pole order, and not just \(v\hbar\partial_{y}\), the penultimate paragraph of the proof of [12, Lemma 4.6] goes through. The rest holds without any changes. 

**Lemma B.5**.: \(\tau^{2}_{g}+\tau^{3}_{g}\) _is regular at the zeros of \(R_{2}(z)\)._ 

Proof.: Analogous to [12, Lemma 4.7]: \(\tau^{3}_{g}\) is regular, and for \(\tau^{2}_{g}\) the same change as in the start of the proof of [12, Lemma 4.6] must be adopted. Again, this is sufficient for pole counting, as \(\hbar\partial_{y}\) is applied at least once and hence cancels the simple pole of \(R_{1}(z)/R_{2}(z)\). 

**Lemma B.6**.: \(\tau^{2}_{g}+\tau^{3}_{g}\) _is regular at the zeros of \(\tilde{P}_{2}\) or \(\tilde{P}_{3}\) that are not zeros of \(R_{2}\)._ 

Proof.: Analogous to [12, Lemmas 4.8 & 4.9]. Again, the same modification as before works. 

**Lemma B.7**.: \(\tau^{2}_{g}\) _is regular at \(z_{1}=\infty\)._ 

Proof.: Analogous to [12, Lemma 4.10], but now only for \(\tau^{2}_{g}\). As for lemma B.4, make the case distinction according to whether or not \(e_{1}>e_{2}\), and use the same scheme as before. 

Proof of proposition 6.13.: By lemmas B.1 to B.4, the only possible poles of \(\tau^{1}_{g}\) are at the \(p_{j}\). As the poles of \(\frac{1}{Q}\) at the \(p_{j}\) are odd with respect to the deck transformations, and iterative application of \(D\) preserves this, this suffices to prove that \(\tau^{1}_{g}\in\Theta\). Similarly, by lemmas B.5 to B.7, the only possible poles of \(\tau^{2}_{g}\) are at the \(p_{j}\), and the only possible poles of \(\tau^{3}_{g}\) are at the \(p_{j}\) and \(z=\infty\). Poles at the \(p_{j}\) can only be introduced by an application of \(D\); the first such appearance will give a simple pole, while repeated applications of \(D\) will preserve the oddness of the principal parts with respect to deck transformations. This proves the claim for \(H_{g,1}\). For \(n>1\), the only terms are from \(\tilde{U}_{i}\) applications, and by the extended quote above, they are similar to the analogous term in \(H_{g,1}\), which is \(\tau^{1}\). As this lies in \(\Theta\), so do the \(H_{g,n}\) for \(n>1\). So the only possible poles of the \(H_{g,n}\) are at the \(p_{j}\), or, for \(n=1\), at the point \(z_{1}=\infty\), coming from \(\tau^{3}_{g}\), i.e. the second line of equation (137).
2305.10266
Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM's Translation Capability
Large, multilingual language models exhibit surprisingly good zero- or few-shot machine translation capabilities, despite having never seen the intentionally-included translation examples provided to typical neural translation systems. We investigate the role of incidental bilingualism -- the unintentional consumption of bilingual signals, including translation examples -- in explaining the translation capabilities of large language models, taking the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method approach to measure and understand incidental bilingualism at scale. We show that PaLM is exposed to over 30 million translation pairs across at least 44 languages. Furthermore, the amount of incidental bilingual content is highly correlated with the amount of monolingual in-language content for non-English languages. We relate incidental bilingual content to zero-shot prompts and show that it can be used to mine new prompts to improve PaLM's out-of-English zero-shot translation quality. Finally, in a series of small-scale ablations, we show that its presence has a substantial impact on translation capabilities, although this impact diminishes with model scale.
Eleftheria Briakou, Colin Cherry, George Foster
2023-05-17T14:58:06Z
http://arxiv.org/abs/2305.10266v1
# Searching for Needles in a Haystack: ###### Abstract Large, multilingual language models exhibit surprisingly good zero- or few-shot machine translation capabilities, despite having never seen the intentionally-included translation examples provided to typical neural translation systems. We investigate the role of _incidental bilingualism_--the unintentional consumption of bilingual signals, including translation examples--in explaining the translation capabilities of large language models, taking the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method approach to measure and understand incidental bilingualism at scale. We show that PaLM is exposed to over \(30\) million translation pairs across at least \(44\) languages. Furthermore, the amount of incidental bilingual content is highly correlated with the amount of monolingual in-language content for non-English languages. We relate incidental bilingual content to zero-shot prompts and show that it can be used to mine new prompts to improve PaLM's out-of-English zero-shot translation quality. Finally, in a series of small-scale ablations, we show that its presence has a substantial impact on translation capabilities, although this impact diminishes with model scale. ## 1 Introduction Recent work has shown that large language models (llms) exhibit impressive capabilities in performing various natural language generation tasks, even in the zero-shot paradigm. In particular, such models have shown interesting machine translation (mt) capabilities Brown et al. (2020); Chowdhery et al. (2022); Vilar et al. (2022)--especially when translating into English, despite never having been _explicitly_ and _intentionally_ exposed to translation data in the way their supervised counterparts are. This raises the question: where do these translation capabilities come from? We hypothesize that the translation capabilities of llms connect to _incidental bilingualism_: the unintentional consumption of bilingual text within a single training instance. To test this hypothesis, we take PaLM Chowdhery et al. (2022)--a \(540\)-billion parameter Transformer language model--as a case study. We first conduct a large-scale analysis of its training data in order to characterize the nature and quantity of bilingual text, then perform experiments to assess the impact of this text on translation performance. To measure incidental bilingualism at scale, we develop a processing pipeline that alternates between quantitative and qualitative analysis (SS3): first detect bilingual versus monolingual text using a language tagger, then qualitatively analyze the nature of bilingual text, and finally measure the amount of translation data within bilingual instances. Our analysis spans \(44\) languages, for which we study bilingualism paired with English. Our findings are: * In all, \(1.4\%\) of palm's training instances are detected as bilingual, while \(0.34\)% contain at least one translated sentence pair. We were able to mine such pairs across all languages studied; therefore, none of these languages is truly zero-shot in the context of translation. * The number of monolingual instances in a language is predictive of the number of instances containing bilingual or translation content for that language (paired with English). After establishing that both bilingual and translation content are incidentally consumed during PaLM's training, we study how they connect to its mt capabilities (SS4). 
We run a series of training and prompting experiments and found that: * Prompting the full PaLM model with alternative, data-driven prompts improves out-of-English zero-shot translation by \(14\) chrF points on average across languages, indicating that its zero-shot translation capabilities were underestimated due to sub-optimal prompts. * Ablating detected translation pairs with smaller versions of PaLM has a dramatic effect on the translation capabilities of 1B-parameter models for high-resource languages, reducing average into-English zero-shot results by \(7.4\) bleu and \(5\)-shot results by \(5.9\) bleu. The effect falls off but remains notable (\(+2\)-\(3\) bleu across several conditions) as we scale to \(8\)B-parameter models. ## 2 Related Work Translation Capabilities of LLMsLarge-scale generative language models, such as gpt-3 Brown et al. (2020), PaLM Chowdhery et al. (2022), and xglm Lin et al. (2021) have been shown to exhibit translation capabilities, despite not being explicitly trained to translate. These capabilities are surprisingly strong, particularly when translating into English with few-shot examples. One explanation for this behavior is that it results from incidental multitask learning Radford et al. (2018); Sanh et al. (2021). This hypothesis has not been explored for mt, where recent work has mostly focused on improving llm translation capabilities by optimizing few-shot prompting strategies Vilar et al. (2022); Agrawal et al. (2022). Rather than trying to improve translation quality for llms, our goal is to understand where their translation abilities stem from by tracing them back to the properties of the pretraining data. Large-Scale Data Analysisllms rely on massive amounts of unlabeled corpora for training. These corpora are primarily acquired by combining heterogeneous online resources (e.g., Wikipedia, Web forums, Common Crawl, etc.)--whose properties are usually unknown. Recent work on large-scale analysis has shed some light: Dodge et al. (2021) analyze C4 Raffel et al. (2019)--a dataset created from a snapshot of Common Crawl--and show that it contains machine generated texts as well as evaluation samples from commonly used nlp benchmarks; Kreutzer et al. (2022) manually audit the quality of multilingual datasets and find systematic quality issues amongst popular pretraining datasets. Most related to our work, Blevins and Zettlemoyer (2022) show that popular corpora routinely used for training English-only llms contain a non-negligible amount of non-English text, which helps explain their cross-lingual capabilities. Their manual analysis of corpus subsamples covers several bilingual categories, including a translation category. But where analysis of bilingualism is a side result of their work, it is our primary contribution. We extend their work by proposing automatic tools to quantify bilingualism at scale and directly relate it to llm translation performance. Eliciting Knowledge from LLMsPrompting language models to elicit knowledge acquired during pre-training has received a lot of research interest. Petroni et al. (2019) show that llms can recall factual knowledge by answering queries structured as cloze statements. Jiang et al. (2020) further show that query-based prompts outperform manually created cloze statements, suggesting that the latter provide a lower bound estimate on the actual abilities of llms. Follow-up work confirms those findings by suggesting better prompts with automatic generation methods Shin et al. 
(2020) or prompt engineering Reynolds and McDonell (2021). We similarly explore how to extract translation knowledge from llms using data-driven prompts. ## 3 Measuring & Understanding Incidental Bilingualism We introduce a mixed-method approach Creswell and Clark (2017); Shorten and Smith (2017) to measure and understand _incidental bilingualism_--the unintentional consumption of bilingual signals--at scale. Since we expect bilingual signals to be rare, we explore the huge data space by alternating between quantitative and qualitative steps, with results from each step complementing and informing one another (Figure 1). The quantitative steps play the role of inducing a smaller-scale focus space to study, while the qualitative steps provide insights into the nature of bilingual signals. PreliminariesPaLM's pretraining dataset consists of \(780\) billion tokens from a mixture of multilingual sources (social media conversations (\(50\%\)), filtered webpages (\(27\%\)), and Wikipedia (\(4\%\))), presumably English sources (books (\(13\%\)) and news articles (\(1\%\))), and source code (\(5\%\)). PaLM was trained on \(2,\!048\)-subword-token examples formed by concatenating and truncating documents. As PaLM is a multi-source lm, a document may be a web page, a book, or a conversation, depending on the source. Our primary units for data analysis are _instances_ we created by splitting training examples along document boundaries. As such, each instance is either a complete document or a contiguous fragment of one, up to \(2{,}048\) tokens in length. A more detailed discussion of instances is given in Appendix A. We study bilingualism between English and \(44\) other languages. We choose language pairs that: a) are supported by our language identification models, and b) have Flores-\(101\)[11] evaluation data. We divide languages into high, medium, and low-resource groups according to their monolingual instance counts, as shown below: ``` ### Detecting Bilingual Instances Our first goal is to automatically detect all training instances that contain bilingual text without presupposing a specific granularity for bilingualism. To that end, we use cmx[15]--a language identification model for codemixed texts--to produce a sequence of token-level language tags for each training instance. An instance is labeled as bilingual if it contains at least two contiguous segments in different languages, each consisting of at least \(N\) consecutive identical language tags. Instances with more than two languages are interpreted as bilingual, as discussed in Appendix B. One of the two languages must always be English, both to simplify our analysis and to work within the limits of the cmx tool. FindingsFigure 2 presents the per-language monolingual and bilingual instance counts. We include raw counts per language in Table 7. We observe that across the languages studied, PaLM consumes bilingual instances that, in total, account for \(1.4\%\) of its training instances. ### Characterizing Bilingual Instances Next, we turn to understanding the nature of bilingual instances detected by the above procedure. To make manual analysis easier, we used the KnowYourData tool1 to highlight spans of the less frequent language in each bilingual instance. 
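The sketch below illustrates the instance-level detection rule described in §3.1: an instance is labeled bilingual if its token-level language tags contain at least two contiguous runs of \(N\) identical tags in different languages, one of which is English. This is a minimal illustration only, not the actual pipeline; the tag format, the value of \(N\), and the helper names are assumptions.

```python
from itertools import groupby

def label_instance(token_lang_tags, min_run=20):
    """Label an instance given per-token language tags, e.g. ["en", "en", "fr", ...].

    Returns "bilingual" if the tags contain contiguous runs of at least
    `min_run` identical tags in two different languages, one of which is
    English; otherwise returns "monolingual". The threshold value and the
    "und" (undetermined) tag are illustrative assumptions.
    """
    long_runs = set()
    for lang, run in groupby(token_lang_tags):
        # Count the length of each contiguous run of identical tags.
        if lang != "und" and sum(1 for _ in run) >= min_run:
            long_runs.add(lang)
    if "en" in long_runs and len(long_runs) >= 2:
        return "bilingual"
    return "monolingual"

# Example: a mostly-English instance with one embedded French segment.
tags = ["en"] * 120 + ["fr"] * 40 + ["en"] * 60
print(label_instance(tags))  # -> "bilingual"
```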
Footnote 1: [https://knowyourdata.withgoogle.com](https://knowyourdata.withgoogle.com)

Figure 1: A mixed-method approach to measure and understand incidental bilingualism at scale. We alternate between quantitative and qualitative steps to detect (§3.1) and analyze (§3.2) bilingual instances, then detect (§3.3) and analyze (§3.4) translation instances.

Figure 2: Number of monolingual, bilingual, and translation instances detected within PaLM’s training data. PaLM consumes bilingual signals, including translation examples, across (at least) \(44\) languages.

**Findings.** Our qualitative analysis of a sample of \(100\) English-French bilingual instances reveals that bilingualism manifests in various cross-lingual phenomena (examples of bilingual instances are presented in Table 8 of Appendix E). Our detection approach is reasonably accurate: only \(5\%\) of instances correspond to errors mostly attributed to language identification issues (i.e., the detected instances are indeed bilingual, but at least one of the two languages is not English or French). Each correctly detected bilingual instance is annotated as belonging to one of five categories, with the typology shown in Figure 3. Most bilingual instances (\(55\%\)) fall under the broader class of "Not Translations" and cover cases where the two languages encode information that does not correspond to translation content. This class is further decomposed into three sub-classes. First, we found a few instances (\(10\%\)) of code-switching where one or two speakers alternate between two languages in the context of a single conversation. As expected, most code-switching instances were spotted in social media conversations, as it is primarily used within multilingual communities in informal communication. Second, we observed that many bilingual instances (\(21\%\)) are attributed to references, where named entities or bibliography entries are cited in their native language, such as instances drawn from Wikipedia. Third, we also found a considerable number of bilingual instances (\(24\%\)) that include completely unrelated content in the two languages that just happened to co-exist within the same web page. The remaining bilingual instances are evenly distributed (\(20\%\)) across two categories that fall loosely under the rubric of "Translations". Here, we distinguish between cases where some amount of the text expresses a typical translation relation and cases where content across languages is semantically related, but not exactly by translation. The latter involves a rich spectrum of cross-lingual semantic relations, including cross-lingual entailment, summarization, and paraphrasing, mainly noticed within books in the genre of literary criticism and interpretation. We also spotted a few cases of forum discussions around explanations of translation or stylistic manipulation of translations.

### Detecting Translation Pairs

Our manual analysis exposed an opportunity to automatically extract and count translated sentence pairs (_translation pairs_ for short). We cast the problem of within-instance translation detection as a local mining task following recent advances in parallel text acquisition. Concretely, for each bilingual instance from §3.1, we run a sentence breaker and extract two pools of candidate sentences \(x\) and \(y\) in the two languages. The language of each sentence is inferred by majority voting over token-level language tags. 
Whichever language has fewer sentences is labeled the embedded language and the other becomes the primary. Each candidate sentence is then encoded to a vector representation using the Labse(Feng et al., 2022) cross-lingual sentence encoder. Translation pairs are extracted by finding the most similar primary sentence for each embedded sentence and then checking whether the cosine distance of their representations falls below a threshold. We choose a threshold of \(0.6\) on the cosine distance to mine plausible translation pairs, following Feng et al. (2022). We also apply a series of length-and-language-based heuristic data quality filters, adapted from Alibaba's WMT Data Filtering submissions (Lu et al., 2018, 2020), described in Appendix C. Note that this extraction process is oblivious to document structure: the instance may be formatted as parallel sentences, paragraphs, documents, or as a free-form discussion that happens to mention both a sentence and its translation. Our extraction is also incapable of detecting translation relations below the sentence level. If we can extract at least one translation pair from an instance, then we label it as a _translation instance_. FindingsWe find that \(0.34\%\) of PaLM's training instances contain at least one translation pair. Note that this number provides a lower bound on the amount of incidental bilingualism and translation that PaLM consumes, as we are restricted to a specific set of language pairs, and we only study bilingualism with English. Figure 4 presents the number of translation pairs we mined within PaLM's training instances between English and each language. At a minimum, PaLM consumes thousands of parallel texts for all language pairs studied, while for high-resource languages it sees more than a million translation pairs. Furthermore, we investigate the correlation between the number of monolingual instances in each language and their bilingual and translation counterparts. Our results in Figure 5 indicate that, surprisingly, the monolingual counts in each language correlate strongly with the bilingual (r=\(0.944\)) and Figure 3: Typology of bilingual instances, along with their distribution within an en-fr annotated sample. Bilingual instances cover a range of cross-lingual phenomena, including cases of translated content. translation (r=\(0.938\)) counts. This strong correlation implies that, when working at scale, we can predict the bilingual and translation sizes for a given language (within an error rate) by simply counting monolingual instances. ### Discovering Natural Prompts After identifying a smaller-scale set consisting of training instances that contain translation pairs, we further manually inspect them to understand how the translation task is naturally modeled by PaLM. We find that sentence-level translations are presented within a training instance in three ways. The majority of them appear across paragraphs and do not follow a canonical pattern. Among the remainder, we noticed two canonical patterns: translation pairs that belong to stacked translated paragraphs (e.g., \(\{x_{1},x_{2},y_{1},y_{2}\}\)) and interleaved translations where a sentence and each translation are adjacent to each other (e.g., \(\{x_{1},y_{1},x_{2},y_{2}\}\)). Among the latter, we saw an opportunity to extract natural prompts automatically. We do so by analyzing the prefixes of the translation pairs mined in SS3.3. 
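For concreteness, the sketch below illustrates the within-instance mining step of §3.3: each embedded-language sentence is paired with its most similar primary-language sentence and kept when their cosine distance falls below the \(0.6\) threshold. The `encode` argument is a stand-in for a cross-lingual encoder such as LaBSE, the names are illustrative, and the length- and language-based filters of Appendix C are omitted; this is a sketch, not the actual pipeline.

```python
import numpy as np

def mine_translation_pairs(primary_sents, embedded_sents, encode, threshold=0.6):
    """Mine plausible translation pairs from one bilingual instance.

    `encode` is assumed to map a list of sentences to L2-normalized vectors
    (a stand-in for a cross-lingual sentence encoder). For each embedded-language
    sentence, the nearest primary-language sentence is kept if their cosine
    distance is below `threshold`.
    """
    if not primary_sents or not embedded_sents:
        return []
    p_vecs = encode(primary_sents)   # shape: (num_primary, dim)
    e_vecs = encode(embedded_sents)  # shape: (num_embedded, dim)
    # Cosine distance = 1 - cosine similarity (vectors assumed normalized).
    dists = 1.0 - e_vecs @ p_vecs.T
    pairs = []
    for i, row in enumerate(dists):
        j = int(np.argmin(row))
        if row[j] < threshold:
            pairs.append((embedded_sents[i], primary_sents[j], float(row[j])))
    return pairs
```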
Drawing on our manual observations, we mine the most frequent prefixes per language pair that follow a simple colon prompt format: any sequence of non-whitespace characters followed by a colon. Finally, we manually filter the automatically mined prefix lists to look for consistent natural prompt patterns across languages. This yields three recurring prompt types: i) **code**: the source and target are indicated by language codes, ii) **native**: the language names are written in the respective language (e.g., "Français:"), iii) **translation**: source language in English, and the word "translation" in the target language (e.g., "Traduction:"). Interestingly, prompt types are not evenly distributed across our language groups: language codes appear primarily with high-resource languages, while low-resource languages favor prompts written in their native language. We include a complete list of prompt counts per language in Figure 6 of Appendix E.

## 4 Analyzing the Impact of Bilingualism

We analyze the impact of bilingualism on the translation capabilities of PaLM with a series of mt experiments on the flores-101 (Goyal et al., 2022) evaluation set, which provides translations of a common set of English Wikipedia sentences into all of our \(44\) languages. We report results on the \(1{,}012\) sentence devtest set. We use the \(997\) sentence dev set primarily as a source of randomly drawn exemplars when reporting \(5\)-shot results. We report bleu (Papineni et al., 2002) for into-English translation and chrF (Popović, 2015) for out-of-English translation, both computed by sacrebleu (Post, 2018) with default settings. For llm-based translation, we follow the template from Vilar et al. (2022) unless stated otherwise:

[source]: [\(X\)]
[target]:

where [source] and [target] are the source and target language names (in English) and [\(X\)] is the source text. When present, few-shot exemplars are provided above the template in the same format, as detailed in Appendix D.

### 4.1 Prompting PaLM with Natural Prompts

We prompt the original \(540\)B parameter PaLM model with templates that use naturally-occurring prefixes of incidental translations, as discussed in §3.4. In our template, we replace [source] and [target] with each alternative, data-driven prompt. We experiment with zero-shot and 5-shot prompting.

**Findings.** Table 2 presents average translation quality results for different prompts across high, medium, and low resource settings. We present the complete, per-language results in Table 9 of Appendix E. When translating into English (xx\(\rightarrow\)en), the default prompt yields the best results, while alternative prompts result in a small degradation in quality; overall, translating into English seems to be robust across different prompts supported by our data. On the other hand, PaLM's translation quality is surprisingly sensitive to the choice of prompt when translating out of English (en\(\rightarrow\)xx): simply changing the default prompt to its native variant improves quality by \(\sim 14\) chrF points, with most of the improvement reported in medium and low-resource languages. The "translation" prompt also yields consistent improvements over the default. Finally, prompting with language codes only improves translation out of English for the high-resource group--this is expected as this prompt was only present for a few high-resource languages. Further analysis of out-of-English results reveals that native prompts trigger text in the desired language, while the default prompt results in high rates of generating the wrong target language (see gray percentages in Table 2). 
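For reference, the prompt construction used in these experiments can be sketched as follows. The [source]: [X] [target]: format comes from the text above; the exemplar sentences and the alternative label pair shown in the usage example are placeholders, not items from the flores dev set or the mined prefix lists.

```python
def make_prompt(src_text, src_label="French", tgt_label="English", exemplars=()):
    """Build a translation prompt in the [source]: [X] [target]: format.

    `src_label`/`tgt_label` may be the default English language names or an
    alternative, data-driven prefix (without the trailing colon, which is
    added here). `exemplars` is a sequence of (source, target) pairs that is
    prepended for few-shot prompting.
    """
    lines = []
    for ex_src, ex_tgt in exemplars:
        lines.append(f"{src_label}: {ex_src}")
        lines.append(f"{tgt_label}: {ex_tgt}")
    lines.append(f"{src_label}: {src_text}")
    lines.append(f"{tgt_label}:")
    return "\n".join(lines)

# Zero-shot with the default prompt:
print(make_prompt("Le chat dort."))
# One-shot with an alternative, data-driven label pair (illustrative only):
print(make_prompt("Le chat dort.", "Anglais", "Français",
                  exemplars=[("Hello.", "Bonjour.")]))
```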
The output's target language is determined by a sequence-level language-identification tool (Botha et al., 2017). Finally, although choosing natural prompts that arise from the data can help us better understand PaLM's zero-shot capabilities, large differences between prompts do not carry over to the few-shot setting (right-most columns of Table 2). ### Extrinsic Evaluation of Translation Pairs It is one thing to report counts of translation pairs mined from bilingual instances, but is the resulting bitext of high quality? We adopt the parallel text quality evaluation framework of the wmt Shared Task on Parallel Corpus Filtering and Alignment (Koehn et al., 2020) and train supervised neural machine translation models from scratch on the mined translations. This allows us to jointly assess the quality of PaLM's translation content and our extraction heuristics. We focus this analysis on fr\(\rightarrow\)en, PaLM's highest-resource language pair. DataFor PaLM translation pairs, we explore a number of thresholds on the labse distance. To put our results in perspective, we additionally train a model on all pairs from the WMT\(14\) fr\(\rightarrow\)en task (Bojar et al., 2014) and on random samples thereof to establish fair data comparison points at notable labse thresholds. Sentence counts for all conditions are shown in Table 3. ArchitectureWe adopt the \(6\)-layer encoder-decoder Transformer Base (Vaswani et al., 2017) architecture, with minimal hyper-parameter tuning. Shared sentence piece (Kudo and Richardson, 2018) vocabularies with \(32\)K tokens are constructed from bitext for each scenario. Dropout is set to \(0.3\) for all systems except for the full wmt system, which uses \(0.1\). Systems are trained up to \(450\)K steps with a batch size of \(1{,}024\). Checkpoints are selected by flores dev bleu. FindingsTable 3 presents the results of our analysis. In general, the mined translation pairs from our analysis pipeline provide useful signal for training supervised mt systems with reasonable translation quality (i.e., \(37\) to \(38\) bleu across various thresholds, compared to \(41\) that we achieve using \(40\)M translations from available wmt parallel corpora). Moreover, these results confirm that \(0.6\) seems to be the right threshold for detecting translation pairs that are useful, or at least not harmful in the presence of other positive signals (i.e., at \(0.6\) we are within 1 bleu point of a system trained on the same amounts of wmt parallel text). ### Ablating Incidental Bilingual We now explore the impact of bilingualism on the translation capabilities of PaLM. To do so, we conduct smaller-scale experiments by training \(1\)B and \(8\)B parameter models on different training samples to measure the effect of removing various types of multilingual data. ArchitectureOur \(1\)B and \(8\)B models are scaled-down versions of PaLM with small changes. Like PaLM, each is a decoder-only model trained with a causal language modeling objective, using a dense transformer architecture and a sentence piece tokenizer (Kudo and Richardson, 2018) that retains spacing information. Unlike PaLM, we do not share key and value tensors across attention heads (Shazeer, 2019), which should affect only decoding speed. We include a hyper-parameter summary in Table 6 in Appendix E. Also, we use a smaller vocabulary size of \(128\)K tokens compared to PaLM's \(256\)K tokens, a concession to fit the models onto available hardware. 
Both \(1\)B and \(8\)B train on examples of \(2{,}048\) tokens with a batch size of \(512\) for \(100\)K steps. Note that using the same number of examples for both scales means that the \(8\)B models are likely under-trained; however, holding data quantity constant is useful for directly measuring the effect of model scale. DataTo simulate PaLM's data conditions with smaller models, we begin by partitioning PaLM's training instances into four non-overlapping groups: eng: English instances, **nen**: non-English (excluding bilingual) instances, **bil**: bilingual (excluding translation) instances, and **tra**: translation instances. We then merge instances within their groups into \(2{,}048\) token examples. Counting examples from each group allows us to determine the full data's implicit mixture of these groups: eng: \(84.4\%\); **nen**: \(14.1\%\); **bil**: \(1.0\%\); **tra**: \(0.5\%\). These should not match the instance \begin{table} \begin{tabular}{l r r r} \hline \hline **t** & **\#translations** & **PaLM (mined)** & **wmt** \\ \hline N/A & 40,836,876 & ✗\(\times\) & 42.0 \\ 0.90 & 9,084,429 & 33.7 & \\ 0.80 & 7,056,441 & 35.7 & \\ 0.70 & 4,874,173 & 36.4 & \\ \hline 0.60 & 3,341,187 & 37.3 & 38.1 \\ 0.50 & 2,474,703 & 37.2 & \\ 0.40 & 1,948,820 & 37.1 & \\ 0.30 & 1,477,535 & 38.4 & 36.5 \\ \hline 0.20 & 906,937 & 37.8 & \\ \(0.15\) & 549,705 & 36.3 & \\ \hline \hline \end{tabular} \end{table} Table 3: bleu scores for fr\(\rightarrow\)en nmt models trained on various translation pairs, evaluated on flores devtest. \(t\) corresponds to the labse threshold. PaLM-mined translation pairs provide useful signal for training supervised nmt models. level proportions reported earlier, as these count examples, which are merged instances. Also, they will not match the multilinguality proportions reported by Chowdhery et al. (2022), as we have removed non-natural-language (code) data and any non-English text not in our \(44\)-language set. We can now sample examples from our partitions to create a smaller training set with the same proportions of incidental bilingualism. No attempt is made to retain PaLM's original proportions for other aspects like data source or language. Counts for this sample are shown as full in Table 5. We ablate each group in the following order: **TRA**, **BIL** and then **nEN**. At each step, we replace ablated examples with examples from the next group in the chain. The counts for all ablation conditions are shown in Table 5. The **-nEN** setting corresponds to the English-only setting studied by Blevins and Zettlemoyer (2022), but as they show, this will contain some non-English content due to language-identification errors. Analogous provisos exist for each ablation, as all our automatic tools make errors. We aim to measure the effect of removing most of a type of content, not all of it. FindingsTable 4 presents the results of our ablation--the complete, per language, results are in Table 10 of Appendix E. Focusing on our 1B model, we note that examples containing translation pairs (**TRA**) have an outsized impact on translation quality for being only \(0.5\%\) of the training data. In the high-resource xx\(\rightarrow\)en, zero-shot scenario, replacing **TRA** examples with **bIL** results in a drop of \(7.4\) bleu. With **TRA** removed, the additional impact of removing the remaining bilingual instances (**bIL**) is much smaller: \(1.2\) bleu. 
One might expect the utility of translation data to fall off as we add \(5\)-shot examples at inference time, but **TRA** is still quite important, with its removal resulting in a reduction of \(5.9\) bleu. The importance of **TRA** holds throughout our 1B experiments, to the extent that the system cannot translate at all, i.e. for \(5\)-shot versions of xx\(\rightarrow\)en medium and en\(\rightarrow\)xx high. Turning to our \(8\)B model, we see that translation content continues to have a substantial impact on translation quality, though the absolute score differences have diminished, hovering between \(2\)-\(3\) bleu or \(3\)-\(4\) chrF, depending on the scenario. This result, where a \(4\)x increase in parameters leads to a roughly \(2\)x reduction in the absolute impact of **TRA** suggests that it would be interesting to build scaling laws to study the impact of incidental translation data, which we leave to future work. Also, for \(5\)-shot scenarios, there is no longer such a big difference between the impact of **bIL** and **TRA** data. Given exemplars, the larger model seems to be able to make better use of weaker bilingual signals. Surprisingly, the \(8\)B model that does not have access to multilingual content (**-nEN**), exhibits some translation capabilities for xx\(\rightarrow\)en high (i.e., \(17.3\) and \(25.9\) bleu for zero- and few-shot, respectively). A closer look at the per-language breakdown (see Table 10) reveals that those capabilities are restricted to languages written in Latin script. This adds evidence for larger models being better equipped to leverage either sparse signals (i.e., language-identification failures during ablation) and weak signals (i.e., language similarities from shared scripts). As expected, non-English content is critical for translation out of English. 
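To make the ablation setup concrete, the sketch below samples a training mixture according to the group proportions reported above and applies the ablation chain, replacing each ablated group with the next group in the chain. The proportions and the chain order come from the text; the sampling code itself and its function names are illustrative assumptions rather than the procedure actually used.

```python
import random

# Implicit mixture of merged 2,048-token examples (proportions from the text).
FULL_MIX = {"eng": 0.844, "nen": 0.141, "bil": 0.010, "tra": 0.005}
# Ablation order: TRA, then BIL, then NEN; ablated mass moves to the next group.
CHAIN = ["tra", "bil", "nen", "eng"]

def ablated_mixture(ablate=()):
    """Return group proportions after re-assigning ablated mass down the chain."""
    mix = dict(FULL_MIX)
    for group in CHAIN[:-1]:
        if group in ablate:
            nxt = CHAIN[CHAIN.index(group) + 1]
            mix[nxt] += mix.pop(group)
    return mix

def sample_training_set(pools, n_examples, ablate=(), seed=0):
    """Sample `n_examples` examples from per-group pools (dict of lists of examples)."""
    rng = random.Random(seed)
    mix = ablated_mixture(ablate)
    groups, weights = zip(*mix.items())
    picks = rng.choices(groups, weights=weights, k=n_examples)
    return [rng.choice(pools[g]) for g in picks]

print(ablated_mixture(ablate=("tra",)))         # TRA mass is absorbed by BIL
print(ablated_mixture(ablate=("tra", "bil")))   # ... and then by NEN
```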
## 5 Conclusion

We explore the role of incidental bilingualism--the unintentional consumption of bilingual signals--in PaLM's translation capabilities. We introduce a mixed-method approach that alternates between quantitative and qualitative analyses to measure and understand incidental bilingualism at scale by processing \(780\) billion tokens. Our work shows that PaLM consumes a significant amount of bilingual text: \(1.4\%\) of training instances in natural language are bilingual. At the same time, it is naturally exposed to translation signals, having seen more than \(30\) million translation pairs in \(44\) languages paired with English. Furthermore, we extrinsically evaluate the quality of these translations, showing that they can be used to train supervised models that roughly match the quality of equal amounts of WMT data. Finally, we show that incidental bilingualism connects to the machine translation capabilities of PaLM. First, we show that data-driven prompts extracted from incidental translations can improve the zero-shot abilities of PaLM when translating out of English by \(14\) chrF on average. Second, we provide empirical evidence that bilingual and translation signals can partially explain the translation capabilities of smaller-scale LLMs.

## Limitations

Our findings should be interpreted considering a series of problem definitions and design choices. 
First, our quantitative results on measuring incidental bilingualism at scale are subject to language identification, sentence splitting, and mining errors. Our qualitative analysis for the English-French language pair revealed that those errors are reasonably small (see §3.2). However, we expect the accuracy of our tools to vary across languages and, crucially, exhibit unanticipated failure modes on web text and low-resource languages (Caswell et al., 2020). Second, our findings are restricted to quantifying bilingualism and translations within a limited set of language pairs and only paired with English. Thus, by problem definition, we are limited to computing a lower-bound estimate on the incidental bilingualism of PaLM. The above limitations should also be taken into consideration when interpreting our ablation results. Although we attempted to remove most bilingual signals in our series of MT experiments, it is still possible that bilingualism slips through due to either model errors or bilingual signals beyond our focus set of languages. Finally, any results and findings of our work are restricted to PaLM, the single LLM studied in this work. However, our finer-grained analysis (see Table 11 of Appendix E) reveals that _incidental bilingualism_, including translation signals, is observed across various data sources (e.g., webpages, books, etc.) that are commonly included in the training data of other popular LLMs. ## Acknowledgements We thank Jiaming Luo, Julia Kreutzer, Orhan Firat, Xavier Garcia, Markus Freitag, Sweta Agrawal, Marine Carpuat, Elijah Rippeth, and the anonymous reviewers for their helpful and constructive comments.
2306.06154
HypLL: The Hyperbolic Learning Library
Deep learning in hyperbolic space is quickly gaining traction in the fields of machine learning, multimedia, and computer vision. Deep networks commonly operate in Euclidean space, implicitly assuming that data lies on regular grids. Recent advances have shown that hyperbolic geometry provides a viable alternative foundation for deep learning, especially when data is hierarchical in nature and when working with few embedding dimensions. Currently however, no accessible open-source library exists to build hyperbolic network modules akin to well-known deep learning libraries. We present HypLL, the Hyperbolic Learning Library to bring the progress on hyperbolic deep learning together. HypLL is built on top of PyTorch, with an emphasis in its design for ease-of-use, in order to attract a broad audience towards this new and open-ended research direction. The code is available at: https://github.com/maxvanspengler/hyperbolic_learning_library.
Max van Spengler, Philipp Wirth, Pascal Mettes
2023-06-09T14:49:20Z
http://arxiv.org/abs/2306.06154v3
# HypLL: The Hyperbolic Learning Library ###### Abstract. Deep learning in hyperbolic space is quickly gaining traction in the fields of machine learning, multimedia, and computer vision. Deep networks commonly operate in Euclidean space, implicitly assuming that data lies on regular grids. Recent advances have shown that hyperbolic geometry provides a viable alternative foundation for deep learning, especially when data is hierarchical in nature and when working with few embedding dimensions. Currently however, no accessible open-source library exists to build hyperbolic network modules akin to well-known deep learning libraries. We present HypLL, the Hyperbolic Learning Library to bring the progress on hyperbolic deep learning together. HypLL is built on top of PyTorch, with an emphasis in its design for ease-of-use, in order to attract a broad audience towards this new and open-ended research direction. The code is available at: [https://github.com/maxvanspengler/hyperbolic_learning_library](https://github.com/maxvanspengler/hyperbolic_learning_library). hyperbolic geometry, deep learning, software library + Footnote †: journal: Computer Vision and Pattern Recognition the number of tensors in the computational graph increases rapidly, this problem becomes challenging. As a result, mistakes happen frequently and tend to be difficult to spot and correct. We have built our design around keeping track of manifolds, to make the network design transparent and easy to debug. In contrast, while Hyperlib also provides hyperbolic learning functionalities (Hersersh et al., 2017), it is only available for Tensorflow, does not keep track of manifolds, and does not contain important layers such as convolutions and batch normalization. In this Section, we will highlight the design of the core modules in HypLL. The overall structure of the library is shown in Figure 1. The library is centered around four modules: (i) the tensors module, (ii) the manifolds module, (iii) the nn module, and (iv) the optim module. The modules are discussed sequentially below. ### The tensors module The tensors module forms the foundation and contains three important components: 1) the manifold tensor, 2) the manifold parameter, and 3) the tangent tensor. The first and third of these take over a part of the role of the original tensor class from the PyTorch library. The manifold parameter takes over the role of the PyTorch parameter class. As such, these classes form the basic objects for storing data throughout any computations with the other modules. **The manifold tensor** is a class with three important properties: 1) a PyTorch tensor, 2) a manifold \(\mathcal{M}\) from the manifold module which will be discussed in Subsection 2.2 and 3) a manifold dimension \(d\in\mathbb{Z}\). The manifold indicates where the data -- stored in the tensor object -- lives. The manifold dimension indicates the dimension that stores the points on the manifold. For example, if we have a 2-dimensional manifold tensor with a Poincare ball manifold and manifold dimension 1, then each row in our tensor contains a point on the Poincare ball. By storing this additional data on the manifold tensor, we can later ensure that any operation applied to the manifold tensor is indeed allowed. For example, if an operation assumes data to be Euclidean, while the data is actually hyperbolic, we can easily point out the mistake. Not all data lives on a manifold. For example, a tensor with label indices does not have an underlying manifold. 
In such cases we revert to the tensor class from PyTorch. Hence in HypLL, this class bears a slightly different interpretation; a tensor containing values which do not form vectors or points on a manifold. **The manifold parameter** is simply a manifold tensor which subclasses the parameter class from PyTorch. This allows creating layers with points on a manifold as its parameters, which will prove important when discussing the nn module. **The tangent tensor** is similar to the manifold tensor in that it stores metadata for a collection of data stored in its tensor attribute. However, here the data consists of vectors living in the tangent space of the manifold \(\mathcal{M}\) that is within the manifold attribute of the tangent tensor. A tangent space, written as \(\mathcal{T}_{x}\mathcal{M}\), is defined by a manifold \(\mathcal{M}\) and a point \(x\in\mathcal{M}\). When working in hyperbolic space, it is convenient to have tangent vectors from various tangent spaces \(\{\mathcal{T}_{x_{i}}\mathcal{M}\}_{i}\), stored in the same tangent tensor. Therefore, the tangent tensor also contains a manifold tensor which has the same manifold and for which the tensor attribute is broadcasting with the tensor of tangent vectors. Allowing broadcastable tensors instead of tensors of the same shape makes these tangent tensors more flexible while reducing memory requirements. If this manifold tensor is set to None, every tangent vector is located at the origin of the manifold. Lastly, the tangent tensor contains a manifold dimension, which is again an integer indicating what dimension contains the vectors. To summarize, the tangent tensor contains a tensor attribute containing tangent vectors, a manifold attribute indicating the manifold to which the vectors are tangent, a manifold tensor attribute containing the points on the manifold where the tangent spaces are located and is broadcasting with the tangent vectors; and a manifold dimension indicating the dimension of the vectors. ### The manifolds module The manifolds module contains the different manifolds that the library currently supports. These classes contain all of the usual operations that are defined on these manifolds and which are required for the operations defined in the nn module. Each different manifold subclasses the base manifold class, which is a metaclass containing the methods that each manifold should have. In the current implementation, we have focused on the Euclidean manifold and the Poincare ball, the most commonly used model of hyperbolic space. The library is designed to be able to incorporate any other manifold as well in future updates, such as the hyperboloid and Klein models. The inclusion of the Euclidean manifold within this module is required for optimally providing flexible manifold-agnostic layers in the nn module. Each manifold submodule contains the mathematical functions that define its operations, such as exponential maps, logarithmic maps, the Frechet mean, vector addition and more. These operations apply checks to see if their inputs live on the correct manifold and, when there are multiple inputs, if the manifold dimensions of the inputs align properly. This is the largest contributor to our second design principle, as these operations significantly reduce the difficulty of debugging by explicitly disallowing mathematically nonsensical operations. Without such checks, operations can easily lead to silent failures and tricky bugs. For a complete overview of Figure 1. 
The structure of HypLL, centered around the tensors, manifolds, nn, and optim modules. the different operations defined on these manifolds, with implementations based on (Han et al., 2016; Li et al., 2017; Li et al., 2018), we refer to the source code. Another important component of this module is the curvature of the Poincare ball manifold. The curvature is a module containing a single parameter, which is used to compute the absolute value of the curvature and can be made learnable. The reason to use the absolute value of the curvature instead of the true negative value is to avoid having to add a minus sign throughout, which increases ease-of-use. This does not lead to down-stream issues as we only support non-positive curvature manifolds. Such a curvature object is supplied as input to the Poincare ball class during initialization to define the curvature of the manifold. ### The nn module The nn module is where all of the neural network methodology is implemented. It is structured similarly to the nn module from PyTorch and contains the currently available learning tools for Euclidean space and the Poincare ball model. Similar to the classes in the PyTorch nn module, each of the classes in our nn module subclasses the Module class from PyTorch, which ensures that they will be properly registered for optimization. This module will be expanded whenever new methodology becomes available in literature. An overview of the available layers is shown in Figure 1. The implementations are based on (Han et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018). Each of the layers in the nn module is designed to be manifold-agnostic. In practice, each layer is supplied with a manifold object and it uses the operations defined on this manifold object to define its forward pass. So, when the supplied manifold is Euclidean space, the layers are equivalent to their PyTorch counterparts. Due to the usage of these manifold operations, all of the compatibility checks on the inputs are automatically built-in, which increases ease-of-use. Following our first design principle, this manifold argument that is supplied to each layer is the only difference between the signature of the original PyTorch layers and our layers. As a result, the only difference between building a neural network with HypLL compared to with PyTorch is having to define a manifold and supplying this manifold to the layers of the network. ### The optim module The optim module implements the Riemannian SGD and Riemannian Adam optimizers as introduced in (Kingmare et al., 2014), based on the implementations from (Li et al., 2017). These implementations work both with manifold parameters and PyTorch's parameters. When optimizing manifold parameters, the optimizers use the manifold of the manifold parameter for each of the operations that is performed during optimization. As a result, the checks are again built-in automatically through the manifold objects. When training with manifold parameters on the Euclidean manifold or with PyTorch parameters, the optimizers are equivalent to their PyTorch counterparts. Following our first design principle, initialization of these optimizers is identical to the optimization of the PyTorch optimizers. Moreover, these optimizers inherit from the base PyTorch optimizer, which makes them compatible with learning rate schedulers and other such tools. ## 3. 
Example Usage To showcase how easy it becomes to define and train a neural network with HypLL, we will describe the similarities and differences with the usage of PyTorch. The major differences that come with using our library are 1) defining a manifold on which our data will live and on which our model will act; and 2) moving our input data onto this manifold as a pre-processing step. For this example we will use the CIFAR-10 tutorial from the PyTorch documentation1, which we have also adapted as a tutorial for our library. We will show several steps involved in training this small convolutional network and compare it to the PyTorch implementation. Footnote 1: [https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) (07-06-2023) Figure 2. Comparison of the implementation of a small convnet in HypLL (left) versus in PyTorch (right). **Creating a network.** We start by defining the manifold and then using this manifold to define a small convolutional network. We will use a Poincare ball with curvature -1 for this example. The implementations of this network in HypLL and in PyTorch are shown side-by-side in Figure 2. The only true difference is that we define a manifold and supply it to the model layers in the HypLL code. Adding hyperbolic geometry to a network is as simple as that with this library. **Feeding data to our model.** Second, we show the part of the training loop in which our implementation differs from PyTorch; the rest is omitted for brevity. Mapping Euclidean vectors to hyperbolic space is usually performed by treating the vectors as tangent vectors at the origin and then mapping them to hyperbolic space using the exponential map. We will use this approach here as well. The example is shown in Figure 3. So, when using HypLL, a little bit of logic has to be added to move the inputs to the manifold on which the network acts. Namely, we first wrap the inputs in a tangent tensor with no manifold tensor argument and then map it using the exponential map of the Poincare ball. This operation is left to the user so that they have full control over how to move their inputs to hyperbolic space. Aside from that, nothing else changes, making hyperbolic deep learning a tool that can be used by a broad audience. ## 4. Conclusions and Outlook This paper presents the Hyperbolic Learning Library, enabling researchers and practitioners to perform deep learning without hassle in hyperbolic space, a new and open-ended research direction. HypLL is designed to make the step from PyTorch minimal and to keep debugging easy by tracking manifolds. The library is a continual effort: the latest advances in the field are continually integrated, and it forms a central point to work on challenges in the field, such as increasing stability in optimization and performing learning at large scale. The main structure follows the ideology of PyTorch (Herschel and others, 2018) with corresponding modules. In the future, we strive to build a geometric framework on top of the library for graph-based data, in the spirit of PyG, PyTorch Geometric (Herschel and others, 2018).
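Since Figures 2 and 3 are not reproduced here, the workflow they describe can be summarized in a short sketch. This is an illustrative reconstruction rather than code from the paper or its repository: the import paths and names (Curvature, PoincareBall, TangentTensor, the hnn layers, RiemannianAdam) are assumptions based on the module descriptions above, and exact signatures should be checked against the released library.

```python
import torch

# Assumed import paths and class names, inferred from the paper's description of the
# tensors, manifolds, nn, and optim modules; verify against the released library.
from hypll.manifolds.poincare_ball import Curvature, PoincareBall
from hypll.tensors import TangentTensor
from hypll import nn as hnn
from hypll.optim import RiemannianAdam

# Poincare ball with curvature -1 (the Curvature object stores the absolute value).
manifold = PoincareBall(c=Curvature(1.0))

# As in Figure 2: the only difference from plain PyTorch is the manifold argument.
model = torch.nn.Sequential(
    hnn.HLinear(32, 64, manifold=manifold),
    hnn.HReLU(manifold=manifold),
    hnn.HLinear(64, 10, manifold=manifold),
)
optimizer = RiemannianAdam(model.parameters(), lr=1e-3)

# As in Figure 3: wrap Euclidean inputs as tangent vectors at the origin (no manifold
# points supplied) and move them onto the ball with the exponential map.
x = torch.randn(8, 32)
tangents = TangentTensor(data=x, man_dim=-1, manifold=manifold)
hyperbolic_inputs = manifold.expmap(tangents)

outputs = model(hyperbolic_inputs)  # forward pass entirely on the Poincare ball
```

Backpropagation and the optimizer step then proceed as in standard PyTorch, in line with the library's first design principle.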
2307.11701
A mean curvature flow method for numerical cosmology
We provide a mean curvature flow method for numerical cosmology and test it on cases of inhomogeneous inflation. The results show (in a proof-of-concept way) that the method can handle even large inhomogeneities that result from different regions exiting inflation at different times.
Matthew Doniere, David Garfinkle
2023-07-21T16:56:04Z
http://arxiv.org/abs/2307.11701v2
# A mean curvature flow method for numerical cosmology ###### Abstract We provide a mean curvature flow method for numerical cosmology and test it on cases of inhomogenous inflation. The results show (in a proof of concept way) that the method can handle even large inhomogeneities that result from different regions exiting inflation at different times. Introduction In recent years there have been several numerical studies of inhomogeneous expanding cosmologies.[1; 2; 3; 4; 5; 6; 7] Ultimately the goal of these simulations is to use a wide enough class of initial data and to evolve long enough to determine whether inflation occurs generically. These simulations use several different slicing conditions, with some using generalized harmonic coordinates, others using the puncture gauge, and still others using constant mean curvature (CMC) slicing. While any of these slicing conditions can evolve for some time, it is not clear whether (in all cases of interest) they can evolve long enough to extract all relevant physics. This issue of long time evolution is addressed from the mathematical side in a recent paper by Wang and Senatore[8] which uses mean curvature flow to study inflationary cosmology. In this paper the authors show the long time existence of their slicing and the asymptotic behavior of the spacetime. Mean curvature flow has been extensively studied by pure mathematicians, but has (so far) seen comparitively little application in physics. We are thus motivated to try mean curvature flow slicing as a numerical method to study expanding cosmologies. The variables, equations of motion, and numerical methods used are described in section II. Our results are presented in section III. Conclusions are given in section IV. ## II Equations of motion The spacetime is described in terms of a coordinate system \((t,x^{i})\) and a tetrad \((\mathbf{e}_{0}^{a},\mathbf{e}_{\alpha}^{a})\), where both \(i\) and \(\alpha\) go from 1 to 3. We choose \(\mathbf{e}_{0}\) to be hypersurface orthogonal with the relation between tetrad and coordinates of the form \(\mathbf{e}_{0}=N^{-1}\partial_{t}\) and \(\mathbf{e}_{\alpha}=e_{\alpha}{}^{i}\partial_{i}\). Here \(N\) is the lapse and the shift is chosen to be zero. Note that this means that for any quantity \(F\) we have \(\partial_{t}F=Ne_{0}(F)\). Choose the spatial triad to be Fermi propagated along the integral curves of \(\mathbf{e}_{0}\). The commutators of the tetrad components are decomposed as follows: \[[\mathbf{e}_{0},\mathbf{e}_{\alpha}] = \dot{u}_{\alpha}\mathbf{e}_{0}\ -\ (H\delta_{\alpha}{}^{\beta}+\sigma_{\alpha}{}^{\beta})\mathbf{e}_{\beta} \tag{1}\] \[[\mathbf{e}_{\alpha},\mathbf{e}_{\beta}] = \left(2a_{[\alpha}\delta_{\beta]}{}^{\gamma}\ +\ \epsilon_{\alpha\beta\delta}n^{\delta\gamma}\right)\mathbf{e}_{\gamma} \tag{2}\] where \(n^{\alpha\beta}\) is symmetric, and \(\sigma^{\alpha\beta}\) is symmetric and trace-free. In physical terms, \(H\) is one third of the mean curvature and is therefore equal to the Hubble constant when the spacetime is Friedmann-Lemaitre-Robertson-Walker (FLRW). The shear \(\sigma_{\alpha\beta}\) gives the extent to which different directions are expanding at different rates. The quantity \(\dot{u}_{\alpha}\) is not an independent variable, but is given in terms of the lapse by \(\dot{u}_{\alpha}=N^{-1}{\bf e}_{\alpha}N\). The matter is a scalar field \(\phi\) with potential \(V\). 
In order to obtain evolution equations for the matter variables that are first order in space and time, we define the quantities \(P\) and \(S_{\alpha}\) by \(P\equiv{\bf e}_{0}(\phi)\) and \(S_{\alpha}\equiv{\bf e}_{\alpha}(\phi)\). Mean curvature flow means that the surfaces of constant time evolve by flowing along their normal vector an amount equal to the mean curvature. In terms of our variables, this means that the lapse \(N\) is given by \(N=3H\). Note that this means that \(\dot{u}_{\alpha}=H^{-1}e_{\alpha}H\). The evolution equations for the tetrad and matter quantities are as follows: \[\partial_{t}e_{\alpha}{}^{i} = -N(H\delta_{\alpha}{}^{\beta}+\sigma_{\alpha}{}^{\beta})e_{\beta }{}^{i} \tag{3}\] \[\partial_{t}H = e^{\alpha}e_{\alpha}H\;-\;2a^{\alpha}e_{\alpha}H\;+\;N\left[-H^ {2}\;-\;{{1\over 3}}\sigma_{\alpha\beta}\sigma^{\alpha\beta}\;-\;{ {1\over 3}}(P^{2}-V)\right]\] (4) \[\partial_{t}a_{\alpha} = N\biggl{[}3{\bf e}_{\alpha}(H)\;-\;{{3\over 2}}{\bf e }_{\beta}(\sigma_{\alpha}{}^{\beta})\;-\;H(\dot{u}_{\alpha}+a_{\alpha})\;+\; \sigma_{\alpha}{}^{\beta}\left({{1\over 2}}\dot{u}_{\beta}+5a_{ \beta}\right)\] (5) \[+ 2\epsilon_{\alpha\beta\gamma}n^{\beta\delta}\sigma_{\delta}{}^{ \gamma}\;+\;2PS_{\alpha}\biggr{]}\] \[\partial_{t}n^{\alpha\beta} = N\left[-\epsilon^{\gamma\delta(\alpha}{\bf e}_{\gamma}(\sigma_{ \delta}{}^{\beta})\right)\;-\;Hn^{\alpha\beta}\;+\;2n^{(\alpha}{}_{\lambda} \sigma^{\beta)\lambda}\;-\;\epsilon^{\gamma\delta(\alpha}\dot{u}_{\gamma} \sigma_{\delta}{}^{\beta)}\right]\] (6) \[\partial_{t}\sigma_{\alpha\beta} = 3{\bf e}_{<\alpha}{\bf e}_{\beta>}H\;+\;N\biggl{[}-\;{\bf e}_{< \alpha}(a_{\beta>})\;-\;3H\sigma_{\alpha\beta}\;+\;a_{<\alpha}\dot{u}_{\beta> }+\;\epsilon_{\gamma\delta(\alpha}{\bf e}^{\gamma}(n_{\beta)}{}^{\delta})\] (7) \[+ \epsilon_{\gamma\delta(\alpha}n_{\beta)}{}^{\delta}(\dot{u}^{ \gamma}-2a^{\gamma})\;-\;2n_{<\alpha}{}^{\gamma}n_{\beta>\gamma}+nn_{<\alpha \beta>}\;+\;S_{<\alpha}S_{\beta>}\biggr{]}\] \[\partial_{t}\phi = NP\] (8) \[\partial_{t}S_{\alpha} = N\left[{\bf e}_{\alpha}(P)\;+\;P\dot{u}_{\alpha}\;-\;HS_{ \alpha}\;-\;\sigma_{\alpha}{}^{\beta}S_{\beta}\right]\] (9) \[\partial_{t}P = N\left[{\bf e}^{\alpha}(S_{\alpha})\;-\;3HP\;+\;S_{\alpha}(\dot{ u}^{\alpha}-2a^{\alpha})\;-\;{dV\over d\phi}\right] \tag{10}\] These variables are also subject to the vanishing of the following constraint quantities: \[{\cal C}_{\rm com} = \epsilon^{\alpha\beta\lambda}\left(e_{\alpha}(e_{\beta}{}^{i})-a_{ \alpha}e_{\beta}{}^{i}\right)-n^{\lambda\gamma}e_{\gamma}{}^{i} \tag{11}\] \[{\cal C}_{u2} = \epsilon^{\alpha\beta\lambda}\left(e_{\beta}(\dot{u}_{\alpha})+a _{\alpha}\dot{u}_{\beta}\right)+n^{\lambda\gamma}\dot{u}_{\gamma}\] (12) \[{\cal C}_{J} = e_{\alpha}(n^{\alpha\delta})+\epsilon^{\alpha\beta\delta}e_{ \alpha}(a_{\beta})-2a_{\alpha}n^{\alpha\delta}\] (13) \[{\cal C}_{C} = e_{\beta}(\sigma_{\alpha}{}^{\beta})-2e_{\alpha}(H)-3\sigma_{ \alpha}{}^{\beta}a_{\beta}-\epsilon_{\alpha\beta\gamma}n^{\beta\delta}\sigma_{ \delta}{}^{\gamma}-PS_{\alpha}\] (14) \[{\cal C}_{G} = 4e^{\alpha}(a_{\alpha})+6H^{2}-6a^{\alpha}a_{\alpha}-n^{\alpha \beta}n_{\alpha\beta}+\frac{1}{2}n^{2}-\sigma_{\alpha\beta}\sigma^{\alpha\beta}\] (15) \[- \left(P^{2}+S^{\alpha}S_{\alpha}+2V\right)\] \[{\cal C}_{S} = S_{\alpha}-e_{\alpha}(\phi) \tag{16}\] Initial data are chosen to solve the constraints of eqns. (11-16) which are then preserved (to within numerical truncation error) under evolution. 
Preservation of the constraints up to truncation error is used as a code test and as a test that the resolution is adequate. The data are evolved using eqns. (3-10). Here eqn. (4) is parabolic, and to obtain a mixed hyperbolic-parabolic system we add a multiple of eqn. (14) to the right hand side of eqn. (5). The hyperbolic equations are evolved using the iterated Crank-Nicholson method with the time step proportional to the space step as required by the Courant condition. Numerical evolution of parabolic systems using explicit methods is usually slow because it requires a time step proportional to the square of the space step. Instead we use an implicit method to treat eqn. (4), which allows us to use the same time step as for the hyperbolic equations. ## III Results To perform simulations quickly and with high resolution, we treat spacetimes with two spatial symmetries. We use Cartesian coordinates \((x,y,z)\) and have dependence only on \(x\). We use periodic boundary conditions with \(0\leq x\leq 2\pi\) with \(0\) and \(2\pi\) identified. Initial data are found using the York method [9]. That is, we write the initial data in terms of a freely specifiable piece and an unknown conformal factor which we solve for numerically. The initial data for the metric variables are the following: \[H = h_{0} \tag{17}\] \[e_{\alpha}{}^{i} = \psi^{-2}\delta_{\alpha}{}^{i}\] (18) \[a_{\alpha} = -2\psi^{-1}e_{\alpha}{}^{i}\partial_{i}\psi\] (19) \[n_{\alpha\beta} = 0\] (20) \[\sigma_{\alpha\beta} = \psi^{-6}Z_{\alpha\beta} \tag{21}\] Here \(h_{0}\) is a constant, which means that our initial data surface is a constant mean curvature surface. The initial data for the matter variables are as follows: \(P=\psi^{-6}Q\) and \(S_{\alpha}={\bf e}_{\alpha}\phi\) where \(Q\) and \(\phi\) are given by \[Q = p_{0}+f_{0}\cos x \tag{22}\] \[\phi = \phi_{0}+f_{1}\cos x \tag{23}\] where \(p_{0}\), \(f_{0}\), \(\phi_{0}\), and \(f_{1}\) are constants. For consistency with the momentum constraint (eqn. (14)) we pick \(Z_{\alpha\beta}\) to be \[Z_{\alpha\beta}={\rm diag}(a_{11},\lambda a_{11},-(1+\lambda)a_{11}) \tag{24}\] where \(\lambda\) is a constant and \(a_{11}\) is given by \[a_{11}=f_{1}\left(p_{0}\cos x+{\frac{1}{4}}f_{0}\cos 2x\right) \tag{25}\] The Hamiltonian constraint (eqn. (15) then becomes the following elliptic equation for \(\psi\) \[\partial^{i}\partial_{i}\psi\;+\;{\frac{1}{4}}(V(\phi)-3H^{2})\psi^{5} \;+\;{\frac{1}{8}}\left(Q^{2}+Z^{ij}Z_{ij}\right)\psi^{-7}\;+\;( \partial^{i}\phi\partial_{i}\phi)\psi=0 \tag{26}\] which we solve numerically. One of the conditions for the theorems of [8] is a potential \(V\) satisfying \(0<\Lambda_{1}\leq V\leq\Lambda_{2}\) where \(\Lambda_{1}\) and \(\Lambda_{2}\) are constants. In order to investigate this case, we choose a potential of the form \[V(\phi)=\frac{\Lambda_{1}e^{-\phi/c}+\Lambda_{2}e^{\phi/c}}{e^{-\phi/c}+e^{ \phi/c}} \tag{27}\] where \(\Lambda_{1}\), \(\Lambda_{2}\), and \(c\) are constants. A plot of the potential with parameters \(\Lambda_{1}=1,\Lambda_{2}=2,c=1\) is shown in figure 1. Note that this potential has two plateaus: the upper one at and the lower one at \(\Lambda_{1}\). We perform runs with \(\Lambda_{1}\) and \(\Lambda_{2}\) greater than zero to compare to the results of [8]. However, as noted in [8] their conditions are only designed to model the onset of inflation. In particular, with \(\Lambda_{1}>0\) one cannot model the exit from inflation. In order to treat this case too, we also perform simulations with \(\Lambda_{1}=0\). 
Potentials of this form are chosen in this preliminary study for purposes of comparison with the results of [8]. However, our mean curvature flow method is compatible with any potential, including more commonly used potentials like \(m^{2}\phi^{2}\). Results for a simulation using the potential of figure 1 are shown in figures 2 and 3. Here figure 2 shows the time development of \(H\) while figure 3 shows the time development of \(\phi\). Since the coordinate \(x\) has \(0\) and \(2\pi\) identified, this means that the left hand side of each panel of each graph is identified with the right hand side. Note in figure 2 that at intermediate times two regions develop with two different values of \(H\), while by the end there is a uniform value of \(H\) corresponding to what was the lower value at intermediate times. This behavior can be understood by looking at the corresponding panels of figure 3: at intermediate times one region has the scalar field \(\phi\) at the top plateau of the potential, while the other region has \(\phi\) at the bottom plateau of the potential. By the final time of the simulation, \(\phi\) is at the bottom plateau everywhere. Figure 2: \(H\) vs. \(x\) for \(t=0,4,8,12,16\) and \(20\). The parameters for the potential are \(\Lambda_{1}=1,\Lambda_{2}=2,c=1\). The parameters for the initial data are \(h_{0}=2,p_{0}=5,f_{0}=0.1,\phi_{0}=0.5,f_{1}=0.8,\lambda=0.5\). In a sense, these two regions are present from the begining, since the field starts out at the top plateau in one region and the bottom plateau in another region. From this point of view, it may seem odd that \(H\) is uniform in the initial data. However, constant \(H\) is a requirement of the York method, since it allows the momentum constraint equation to decouple from the Hamiltonian constraint equation and thus be solved independently. In order to model both inflation and the exit from inflation, we perform simulations with the potential given in figure 4, which has \(\Lambda_{1}=0\) and \(\Lambda_{2}=2\). Results for a simulation with this potential is shown in figures 5 and 6. Other than the change in the potential, this simulation uses the same parameters for initial data as the simulation presented in figures 2 and 3. Note that the overall behavior is very similar to that of the previous simulation: figure 5 shows that at intermediate times there are two regions with different values of \(H\), while by the end of the simulation \(H\) has become fairly uniform and is evolving towards zero. Once again, this behavior can be understood by looking at the behavior of \(\phi\) given in figure 6: at intermediate times there are two regions, one with \(\phi\) at the top plateau of the potential, while the other has \(\phi\) at the bottom plateau of the potential. By the end of the simulation, \(\phi\) is at the bottom plateau everywhere. In order to give the method a more stringent test, we use initial data with a significantly Figure 5: \(H\) vs. \(x\) for \(t=0,4,8,12,16\) and \(20\). The parameters for the potential are \(\Lambda_{1}=0,\Lambda_{2}=2,c=1\). The parameters for the initial data are \(h_{0}=2,p_{0}=5,f_{0}=0.1,\phi_{0}=0.5,f_{1}=0.8,\lambda=0.5\). larger amplitude for the inhomogeneity of the scalar field. Results for this simulation are presented in figures 7 and 8. Here the results are somewhat different from those of the previous two simulations. 
Though this simulation is run for twice as much time as the previous simulations, nonetheless even at the end of the simulation there are two distinct regions. In one region the scalar field has reached the bottom plateau of the potential and thus inflation has ended. However in the other region the scalar field is still at the top plateau of the potential and it is evolving very slowly. We can therefore expect inflation in this region to go on for an extended period of time. Furthermore, the behavior of \(H\) is becoming quite steep in the transiton region between the inflationary region and the region where inflation has ended. In cases in which gradients are steep, one must ensure that there are enough spatial points to provide adequate spatial resolution. To address this issue, we present the results of a convergence test in figure (9). In this figure, we plot the natural logarithm of the \(L_{2}\) norm of the constraint \({\cal C}_{G}\) of eqn. (15) vs. time at two different resolutions. The top curve is at the resolution of the simulations of figures (7) and (8). The bottom curve is with twice as many spatial points. The results show that we are in the convergent regime and thus have adequate spatial resolution. ## IV Conclusion Because the mechanism of inflation is local, inhomogeneous cosmologies remain inhomogeneous even when inflation occurs. This is because different regions may undergo different amounts of inflation and exit inflation at different times. Thus a numerical cosmology method needs to be able to accurately evolve the spacetime for sufficiently long times even in the presence of ongoing and possibly large inhomogeneities. We have shown (in a proof of concept way) that mean curvature flow is a promising method for such robust long term evolution. It would be interesting to do a comparison of robustness (using the same initial data) with the slicing methods used in refs. [1]-[7]. One limitation of our current simulations is the restriction to dependence on one spatial coordinate. In particular this limitation does not allow us to treat the case where black holes form in an inhomogeneous inflating cosmology. It would be interesting to perform simulations using our method in the case of no symmetry. Since under mean curvature Figure 7: \(H\) vs. \(x\) for \(t=0,8,16,24,32\) and \(40\). The parameters for the potential are \(\Lambda_{1}=0,\Lambda_{2}=2,c=1\). The parameters for the initial data are \(h_{0}=2,p_{0}=1,f_{0}=0,\phi_{0}=0,f_{1}=3,\lambda=0.5\). flow the mean curvature remains positive, we expect that in such a simulation the slices would slow down and essentially freeze in the black hole region. That is, we would expect a "collapse of the lapse" phenomenon similar to what occurs in maximal slicing. ###### Acknowledgements. It is a pleasure to thank Frans Pretorius, Paul Steinhardt, and Anna Ijjas for helpful discussions. This work was supported by NSF Grant PHY-2102914.
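To make two ingredients of the scheme in Sections II and III concrete, namely the plateau potential of eq. (27) and the iterated Crank-Nicholson stepping used for the hyperbolic equations, the following minimal Python sketch may be useful. It is illustrative only: the toy right-hand side evolves a homogeneous scalar field with a fixed Hubble-like damping rate rather than the full tetrad system of eqs. (3)-(10), and all parameter values are placeholders.

```python
import numpy as np

def V(phi, lam1=1.0, lam2=2.0, c=1.0):
    """Plateau potential of eq. (27); V -> lam1 as phi -> -inf and V -> lam2 as phi -> +inf."""
    return (lam1 * np.exp(-phi / c) + lam2 * np.exp(phi / c)) / (np.exp(-phi / c) + np.exp(phi / c))

def dV(phi, lam1=1.0, lam2=2.0, c=1.0):
    """dV/dphi, the source term appearing in the scalar-field equation (eq. 10)."""
    return (lam2 - lam1) / (2.0 * c) / np.cosh(phi / c) ** 2

def icn_step(u, dt, rhs, n_iter=2):
    """One iterated Crank-Nicholson step for du/dt = rhs(u): Euler predictor plus averaged correctors."""
    u_new = u + dt * rhs(u)
    for _ in range(n_iter):
        u_new = u + 0.5 * dt * (rhs(u) + rhs(u_new))
    return u_new

# Toy demonstration: a homogeneous field rolling on V with a *fixed* damping rate H0
# (a placeholder, not the evolved mean curvature of eq. 4).
H0 = 2.0
state = np.array([0.5, 0.0])                                   # [phi, P]
rhs = lambda s: np.array([s[1], -3.0 * H0 * s[1] - dV(s[0])])
for _ in range(2000):
    state = icn_step(state, 1e-3, rhs)
print(state)  # phi drifts toward negative values, i.e. toward the lower plateau where V -> lam1
```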
2301.12186
Gravitational billiards -- bouncing inside a paraboloid cavity
In this work the confined domains for a point-like particle propagating within the boundary of an ideally reflecting paraboloid mirror are derived. In particular, it is proven that the foci of all consecutive flight parabolas lie on the surface of a common sphere of radius $R$. The main results are illustrated in various limiting cases and compared to their two-dimensional counterparts.
Daniel Jaud
2023-01-28T13:01:40Z
http://arxiv.org/abs/2301.12186v1
# Gravitational billiards - bouncing inside a paraboloid cavity ###### Abstract. In this work the confined domains for a point-like particle propagating within the boundary of an ideally reflecting paraboloid mirror are derived. Thereby it is proven that all consecutive flight parabola foci points lie on the surface of a common sphere of radius \(R\). The main results are illustrated in various limiting cases and are compared to its two-dimensional counterpart. Gyonnasium Holzxirichen, Germany _E-mail address:_ [email protected]. OrcID: 0000-0002-0163-7586. **Keywords:** billiards, gravity, foci, paraboloid mirror, confined trajectories **MSC-Classification:** 14H81, 37C50, 37N05 ## 1. Introduction Over the last decades, the dynamics of a point-like particle confined to some domain under the influence of a constant gravitational force, shortly called gravitational billiards, has been studied from various perspectives. Starting from [10] who first performed a numerical study of the simplest imaginable system, the wedge, showing that the system can be integrable for certain angles of the wedge. Further study of the system have e.g. been performed in [1, 6]. Extensions to other two dimensional boundaries (circular, elliptic, oval) or potentials have been performed in e.g. [3, 7], who showed that for the quadratic and Coulomb potential the system becomes integrable. In 2015, the first study of the dynamics in a three-dimensional cone were performed by [8, 9] showing that certain quantities of the two-dimensional framework map one-to-one in \(\mathbb{R}^{3}\). In general, the motion of the particle is highly non-trivial and a neat expression for the trajectory at each time is not accessible. For this reason, following [11, 12], the confined domains for a point like particle bouncing in a parabolic, two-dimensional cavity under the influence of a homogeneous gravitational force were derived through a geometric-analytic approach. Recently associated foci curves and confined domains for bouncing inside general two-dimensional boundaries were obtained in [4]. In the following the confined domains for a particle bouncing inside a rotational symmetric paraboloid under the influence of a constant gravitational force parallel to the axis of symmetry is studied. Our analysis will show that some two-dimensional features obtained e.g. in [4, 5, 12] will carry over (in some cases) to the three-dimensional scenario. Due to the additional rotational movement associated to conserved angular momentum along the \(z-\)direction further restrictions compared to the two-dimensional case will emerge. The structure of this work presents as follows: in Section 2 we will briefly introduce all necessary assumptions and general ideas that we will benefit from in our later analysis. Before diving into a general analysis, Section 3 will show that under certain restrictions the three-dimensional particle motion can be reduced to the two dimensional force free case within a circle. In Section 4, we will first show that all consecutive flight parabola foci points lie on a sphere of radius \(R\). With this result, we derive general formulas for the confined regions depending on the system parameters. For a deeper understanding of the general results and related physics, Section 5 will derive the associated envelope curves and therefore two-dimensional sections of the rotational confined regions for different values of the sphere radius \(R\) as well as (reduced) angular momentum \(l_{z}\). 
Finally a conclusion and outlook on possible future research topics related to this work is made in Section 6. ## 2. Generalities for the paraboloid billiard Here some general results for the motion of an particle under the influence of a constant gravitational force within a cavity are stated. All obtained results are direct generalizations from the two-dimensional case already discussed in e.g. [4, 12]. We are considering the movement for a particle of mass \(m\) propagating inside a paraboloid mirror under the influence of the constant gravitational force \(\vec{F}=-mg\vec{e}_{z}\) parallel to the \(z-\)axis. The equation for the boundary of the paraboloid in Cartesian coordinates \((x,y,z)\) reads \[M(x,y,z)=z-\frac{x^{2}+y^{2}}{4f_{M}}+f_{M}=0. \tag{1}\] The focus of the paraboloid is centered at the origin of the coordinate system and \(f_{M}\) denotes the focal length of this ideally reflecting mirror (see Figure 1). Figure 1. Visualization of main quantities for the paraboloid gravitational billiard. For a general point \(P(x,y,z)\) along the mirror boundary, the associated normalized normal-vector pointing inside the mirror domain is given by \[\vec{n}_{0}|_{P}=\frac{1}{|\vec{\nabla}M|}\vec{\nabla}M|_{P}=\frac{1}{\sqrt{1+ \frac{x^{2}+y^{2}}{4f_{M}^{2}}}}\begin{pmatrix}-\frac{\frac{x}{2f_{M}}}{2f_{M} }\\ \frac{y}{2f_{M}}\\ 1\end{pmatrix}. \tag{2}\] As usual, the trajectory of one specific flight parabola can be written as a function of time \(t\) via \[\vec{r}(t)=-\frac{1}{2}gt^{2}\vec{e}_{z}+\vec{v}t+\vec{r}_{0}. \tag{3}\] All flight parabolas posses a focal length \(F\) associated with the velocity components in \(x-\) and \(y-\)direction by \[F=\frac{v_{x}^{2}+v_{y}^{2}}{2g}. \tag{4}\] Conservation of energy \(E=\frac{m}{2}\vec{v}^{2}+mgz\) yields a maximal reachable height \(H\) for all flight parabolas within the paraboloid. Considering the velocities \(\vec{v}_{S}\) and associated heights \(z_{S}\) at the vertex of a flight parabola we find \[H=\frac{E}{mg}=\frac{v_{S}^{2}}{2g}+z_{S}=const.. \tag{5}\] In analogy to the two-dimensional case (see [4, 12]) \(H\) refers to the flight parabola vertex plane (see Figure 1). As a direct consequence, the \(z-\)coordinate of the flight parabola focus point \(\vec{F}\) fulfills \(F_{z}=H-2F\). In Section 4 we will make use of this relation. As a last component we state the law of reflection in vector form whenever the particle hits the boundary of the mirror at a point \(P\) and gets ideally reflected. For the velocities \(\vec{v}\) before and \(\vec{v}^{\prime}\) after the reflection holds \[\vec{v}^{\prime}=\vec{v}-2(\vec{n}_{0}\circ\vec{v})\cdot\vec{n}_{0}|_{P}=\vec {v}-\frac{2}{|\vec{\nabla}M|^{2}}(\vec{v}\circ\vec{\nabla}M)\cdot\vec{\nabla} M|_{P}. \tag{6}\] A direct consequence of the law of reflection is stated in the following Lemma which in Section 4 will be used in order to reduce the number free parameters of our system. **Lemma 2.1**.: _Angular momentum per unit mass along the \(z-\)direction, i.e. \(l_{z}=L_{z}/m\), in the further course to be called reduced angular momentum, is a conserved quantity in particular at any point \(P\) of reflection._ Proof.: It is sufficient to proof the statement at some general point of reflection \(P\) associated with the vector \(\vec{r}\). 
Using the law of reflection (6) for reduced angular momentum in \(z-\)direction results in: \[l_{z}^{\prime}=(\vec{r}\times\vec{v}^{\prime})_{z}=(\vec{r}\times\vec{v})_{z} -\frac{2}{|\vec{\nabla}M|^{2}}(\vec{v}\circ\vec{\nabla}M)\cdot(\vec{r}\times \vec{\nabla}M)_{z}=(\vec{r}\times\vec{v})_{z}=l_{z}.\] Here we used in the last step the fact, that \((\vec{r}\times\vec{\nabla}M)_{z}=0\) in \(P\). Therefore \(l_{z}^{\prime}=x_{0}v_{y}-y_{0}v_{x}=x_{0}v_{y}^{\prime}-y_{0}v_{x}^{\prime}= l_{z}\) is conserved. Note that \(l_{z}\) conservation along the flight parabola is a direct consequence by the properties of the cross product. ## 3. Reduction to reflection along the same circle In this section first we want to discuss the simplified case in which all consecutive points of reflection \(P_{i}\) lie on a common circle of radius \(r_{0}\) (and consequently height \(z_{0}=\frac{1}{4f_{M}}r_{0}^{2}-f_{M}\)) with respect to the \(z-\)axis. Naturally for this section polar coordinates are chosen to describe the dynamics. When viewed from above, the system can uniquely be described by the angle \(\vartheta\) enclosed by two consecutive points of reflection and the 'origin' at height \(z_{0}\) (see Figure 2). Without loss of generality our starting position may be chosen in polar coordinates at \(P_{0}(r_{0},0,z_{0})\) and consequently \(P_{i}(r_{0},i\cdot\vartheta,z_{0})\). We choose our particle, when viewed from above, traveling in counter-clockwise direction. The velocity values for the new flight parabola at the point of reflections are given by \((v_{r,i},v_{\varphi,i},v_{z,i})\). From our setup it is clear that the allowed values for \((v_{r,i},v_{\varphi,i},v_{z,i})\) are restricted by the condition that all point of reflection \(P_{i}\) have to lie on the same circle. In particular, such kind of behavior can only exist if the associated flight parabolas are each a copy of the same fundamental, symmetric, parabola up to an rotation by \(\vartheta\) spanned by the two initial reflection points \(P_{0}\) and \(P_{1}\). Due to rotational symmetry it thus is sufficient to determine the restrictions on \((v_{r,0},v_{\varphi,0},v_{z,0})=:(v_{r},v_{\varphi},v_{z})\). Applying the law of reflection (6) at \(P_{0}\) yields expressions \((v_{r}^{\prime},v_{\varphi}^{\prime},v_{z}^{\prime})\) right before the reflection. Those velocity values correspond to the flight parabola starting at \(P_{-1}\) propagating to \(P_{0}\). For all flight parabolas being the same copy one thus obtains the restriction \[|v_{r}|=|v_{r}^{\prime}|\text{ and }|v_{z}|=|v_{z}^{\prime}|. \tag{7}\] Both conditions are fulfilled if \[v_{r}\vec{e}_{r}+v_{z}\vec{e}_{z}\parallel\vec{\nabla}M|_{P_{0}}, \tag{8}\] i.e. the \((r,z)-\)components of the velocity vector stand perpendicular on the tangent plane at the point of reflection. The flight time \(t=\frac{2v_{z}}{g}\) for reaching the initial height \(z_{0}\) again is uniquely determined by the motion in \(z-\)direction. Within this time Figure 2. Projection to the parallel \(x-y-\)plane at height \(z_{0}\) with relevant system parameter \(\vartheta\) associated to force free billiards inside the circle. the particle starting at \(P_{0}\) has to reach \(P_{1}\). In the \(x-y-\)plane, the angle \(\vartheta\) between two consecutive points of reflection (see Figure 2) is related to the velocity values \((v_{r},v_{\varphi})\) via \[\vartheta=\pi-2\arctan\left(\left|\frac{v_{\varphi}}{v_{r}}\right|\right). 
\tag{9}\] Demanding for the reflection points all to lie on the same circle of radius \(r_{0}\) gives a further restriction on the system within the given flight time \(t\) from \(P_{i}\) to \(P_{i+1}\). Direct calculation shows that the allowed velocity components are completely determined by the angle \(\vartheta\), the radius \(r_{0}\) of the common reflection points circle as well as the focal length \(f_{M}\) of the paraboloid mirror: \[v_{r} =-\frac{r_{0}}{2f_{M}}\cdot\sqrt{gf_{M}\cdot[1-\cos\left(\vartheta \right)]}, \tag{10}\] \[v_{\varphi} =\frac{r_{0}}{2f_{M}}\cdot\sqrt{gf_{M}\cdot[1+\cos\left(\vartheta \right)]},\] (11) \[v_{z} =\sqrt{gf_{M}\cdot[1-\cos\left(\vartheta\right)]}. \tag{12}\] Depending on the values for \(\vartheta\) (see. e.g. [5, 13]) we obtain periodic or non-periodic orbits, where all flight parabolas lie on a common rotational surface around the \(z-\)axis (see Figure 3) whose radial function is purely determined by the mirror parameters \[g(r)=\frac{r_{0}^{2}}{4f_{M}}-\frac{f_{M}\cdot r^{2}}{r_{0}^{2}},\text{ with }r\in[r_{0}\cos(\vartheta/2);r_{0}]. \tag{13}\] Considering the flight parabolas dividing the rotational flight surface \(g(r)\) consecutively into smaller sub regions, one can map this to the circle case as shown in [5] that for specific values of \(\vartheta\) the surface division sequence is given by an integer series. As a remark for \(\vartheta=\pi\) the \(\varphi-\)velocity component equals zero, i.e. \(v_{\varphi}=0\). For this case there is no rotational motion (angular momentum being zero) and the particle bounces along \(g(r)\) forming a two periodic orbit reproducing two-dimensional results obtained in e.g. [4, 7]. Figure 3. _left_: Example for 3-periodic orbit (\(\vartheta=\frac{2\pi}{3}\)) along the same circle. _right_: Swept out flight parabola surface (light red) for non-periodic case and \(\vartheta\neq\pi\). ## 4. General flight parabola domain In this section now we want to derive the confined domains which the particle at a given initial condition can not leave during its motion. We will use the same notations as introduced in Section 2. It is clear that for certain choices of initial conditions the actual flight orbits will not fill out the entire confined domains. In particular we are considering non periodic orbits in which the swept out region becomes dense. A main component for obtaining expressions for the confined domains is stated in the following theorem. **Theorem 4.1**.: _All flight parabola foci \(\vec{F}_{i}\) at given \(H\) and \(l_{z}\) lie on the surface of a sphere with radius \(R\) centered at the origin \(O\), i.e. the focus of the paraboloid._ Proof.: Without loss of generality consider let the point of reflection being located in Cartesian coordinates at \(P(r_{0},0,z_{0})\) and the associated velocity vector right before the reflection take the form \(\vec{v}=(v_{x},v_{y},v_{z})^{T}\). The flight parabola focus then is given by \[\vec{F}=\begin{pmatrix}r_{0}\\ 0\\ 2z_{0}-H\end{pmatrix}+\frac{v_{z}}{g}\begin{pmatrix}v_{x}\\ v_{y}\\ v_{z}\end{pmatrix}. \tag{14}\] Applying the law of reflection in \(P\) yields the velocity vector \(\vec{v}^{\prime}\) after the reflection and consequently \(\vec{F}^{\prime}\). A direct but lengthy calculation shows that \[|\vec{F}|^{2}=|\vec{F}^{\prime}|^{2}=R^{2}=const.,\] i.e. both foci lie on a common sphere of radius \(R\). 
Due to rotational symmetry of the system, conservation of energy as well as reduced angular momentum conservation in \(z-\)direction, all consecutive foci lie on the same sphere of radius \(R\). Since \(\vec{r}_{0}\) and the initial velocity \(\vec{v}_{0}\) have been chosen arbitrarily (under assumption of same total energy) all consecutive foci have to saturate this equality. Note that this is a direct generalization of the two-dimensional case. Now we are in the position to reduce the six-dimensional phase space with the knowledge of Theorem 4.1, conservation of energy and reduced angular momentum as well as rotational invariance, to two free parameters corresponding to confined domains which the particle at given values \((H,l_{z},R)\) cannot leave. The vertex \(\vec{S}\) of each flight parabola in spherical coordinates thus can be written as \[\vec{S}(R,\varphi,\vartheta)=\vec{F}(R,\varphi,\vartheta)+F\vec{e}_{z}=R \begin{pmatrix}\cos(\varphi)\sin(\vartheta)\\ \sin(\varphi)\sin(\vartheta)\\ \cos(\vartheta)\end{pmatrix}+F\vec{e}_{z}=\begin{pmatrix}R\cos(\varphi)\sin( \vartheta)\\ R\sin(\varphi)\sin(\vartheta)\\ \frac{H+R\cos(\vartheta)}{2}\end{pmatrix}. \tag{15}\] Thereby we used that for the focal length of Section 2 holds \(F=\frac{H-R\cos(\vartheta)}{2}\). Note that \(\vartheta\) in this case corresponds to the polar angle (compare Figure 4) in contrast to the definition for \(\vartheta\) of Section 3. Energy conservation yields an expression for the absolute value of the velocity \(|\vec{v}_{S}|\) at the vertex \[H=\frac{\vec{v}_{S}^{2}}{2g}+z_{S}\ \leftrightarrow\ v_{S}=|\vec{v}_{S}|= \sqrt{2g(H-z_{S})}=\sqrt{g(H-R\cos(\vartheta)}, \tag{16}\] where \(H\) is the height of the directrix plane and \(z_{S}\) is the associated vertex height (consider Figure 4). We choose \(v_{S}\) to be positive; negative values simply correspond to a time inverted system. Since the orientation of \(\vec{v}_{S}\) is not fixed by the equation above we may take a general ansatz \(\vec{v}_{S}=v_{S}(\cos(\varphi^{\prime}),\sin(\varphi^{\prime}),0)^{T}\). Reduced angular momentum conservation along the \(z-\)axis (compare Lemma 2.1) \[l_{z}=(\vec{S}\times\vec{v}_{S})_{z}=R\sqrt{g(H-R\cos(\vartheta))}\cdot\sin( \vartheta)\cdot\sin(\varphi^{\prime}-\varphi)=const., \tag{17}\] restricts the allowed values for the orientation of \(\vec{v}_{S}\) related to the rotation angle \(\varphi^{\prime}\) as follows \[\varphi^{\prime}=\varphi+\arcsin\left(\frac{l_{z}}{R\sin(\vartheta)\cdot \sqrt{g(H-R\cos(\vartheta))}}\right). \tag{18}\] Due to rotational symmetry it is sufficient to consider the case \(\varphi=0\) from here on. The corresponding allowed flight parabolas \[\vec{r}(t,\vartheta)=\begin{pmatrix}R\sin(\vartheta)+t\cdot\sqrt{g(H-R\cos( \vartheta))-\frac{l_{z}^{2}}{R^{2}\sin^{2}(\vartheta)}}\\ t\cdot\frac{l_{z}}{R\sin(\vartheta)}\\ -\frac{1}{2}gt^{2}+\frac{H+R\cos(\vartheta)}{2}\end{pmatrix}, \tag{19}\] at fixed \((H,l_{z},R)\) form a one-parameter family of curves in \(\mathbb{R}^{3}\). Note that the \(x-\)component velocity term restricts the allowed values for \(\vartheta\) at a given value of \(l_{z}\) according to (compare Figure 5) \[J(H,R,\vartheta)=gR^{2}\sin^{2}(\vartheta)\cdot(H-R\cos(\vartheta))\geq l_{z} ^{2}\ \leftrightarrow\ \vartheta\in[\vartheta_{0};\vartheta_{1}]. \tag{20}\] Clearly \(l_{z}\) is the main limiting factor to \(\vartheta\) with large \(l_{z}\) associated to a motion farther away from the \(z-\)axis as in the case for \(l_{z}\) being small, resulting in the possibility Figure 4. 
Flight parabola setup. of approaching the \(z-\)axis. Further \(l_{z}\) is bound from above by the maximum of \(J(H,R,l_{z})\) which is saturated for \[\cos(\vartheta_{max})=\frac{H-\sqrt{H^{2}+3R^{2}}}{3R}, \tag{21}\] and therefore takes the value \[J(H,R,\vartheta_{max})=\frac{2}{27}g\left(\sqrt{H^{2}+3R^{2}}+2H\right)\left(H \left(\sqrt{H^{2}+3R^{2}}-H\right)+3R^{2}\right). \tag{22}\] **Theorem 4.2**.: _The rotational symmetric allowed propagation heights \(h\) of the particle at a given radial distance \(r\) and angle \(\vartheta\) are given by_ \[h_{\pm}(r,\vartheta)=\frac{H+R\cos(\vartheta)}{2}-\frac{\vartheta}{2}\left( \frac{\sqrt{r^{2}\cdot g(H-R\cos(\vartheta))-l_{z}^{2}\pm}\sqrt{R^{2}\sin^{2}( \vartheta)\cdot g(H-R\cos(\vartheta))-l_{z}^{2}}}{g(H-R\cos(\vartheta))} \right)^{2},\] _with the restriction on the radius \(r\geq\frac{l_{z}}{\sqrt{g(H-R\cos(\vartheta))}}\)._ Proof.: Equation (19) defines possible trajectories at given \((H,R,l_{z})\). Considering the associated radial distance \(r^{2}=x(t,\vartheta)^{2}+y(t,\vartheta)^{2}\) one can express the \(t\) variable in terms of \(r\) and \(\vartheta\). Inserting this expression into the \(z-\)component of (19) yields the expression for the allowed heights \(h_{\pm}\), where the two different solutions correspond to the left and right parabola arc measured from the minimal distance \(r_{min}=l_{z}/\sqrt{g(H-R\cos(\vartheta))}\). In order to obtain expressions for the associated envelope curves we define a new quantity \[K(z,r,\vartheta,H,R,l_{z}):=z-h_{\pm}(r,\vartheta). \tag{23}\] The envelope curves restricting the confined domains then are obtained eliminating \(\vartheta\) by solving the following system of equations (see [2]) \[K(z,r,\vartheta,H,R,l_{z})=0\text{ and }\frac{\partial K}{\partial\vartheta}=0. \tag{24}\] A computer animated picture of allowed flight parabolas is shown in Figure 6. In the next section our obtained results will be illustrated in various extremal limits. Figure 5. Qualitative restriction of allowed \(\vartheta\) values at given \(l_{z}^{2}\). _left_ for \(H>R>0\) and _right_ for \(0<H<R\). ## 5. Discussion of limiting cases In this section four limiting cases in terms of the reduced angular momentum \(l_{z}\) are discussed. For all cases we determine the associated height function from Theorem 4.2 and calculate, if possible, the corresponding envelope curves restricting the motion of the particle, in general, to a rotational symmetric region. All results of this section are displayed in Figure 7 for illustrative purposes. ### The \(l_{z}=0\) case In the simplest case of no reduced angular momentum (\(l_{z}=0\)) the height functions of Theorem 4.2 significantly simplify to \[h_{\pm}(r,\vartheta)=\frac{H+R\cos(\vartheta)}{2}-\frac{(r\pm R\sin(\vartheta ))^{2}}{2(H-R\cos(\vartheta))}. \tag{25}\] Solving the system (24) yields the envelope curves \[c_{\pm}(r)=\frac{H\pm R}{2}-\frac{r^{2}}{2(H\pm R)}. \tag{26}\] This reproduces the results obtained geometrically in [12] and analytically in [4]. Since the motion lies in a common plane containing the \(z-\)axis it is clear that \(r\) can take values in \(\mathbb{R}\). ### The small \(l_{z}\) case For \(l_{z}\) small the deviation from the \(l_{z}=0\) case is marginal. Thus one can conclude that in first approximation one obtains the same envelope curves \(c_{\pm}(r)\) as before. An additional restriction comes from the fact that the allowed values for \(r\) are bound from below by \(r\geq\frac{l_{z}}{\sqrt{g(H-R\cos(\vartheta))}}\). 
If this inequality is saturated, i.e. we consider the case of minimal radial distance in terms of \(\vartheta\), one can solve \(r=\frac{l_{z}}{\sqrt{g(H-R\cos(\vartheta))}}\) for \(\cos(\vartheta)\) and insert this expression into the Figure 6. Computer animated flight trajectories. height functions of Theorem 4.2 yielding one additional (approximate) envelope curve associated with the angular momentum barrier as \[c_{0}(r)=\frac{(H^{2}-R^{2})g}{2l_{z}^{2}}\cdot r^{2}+\frac{gr^{4}}{2l_{z}^{4}}. \tag{27}\] This envelope curve is reminiscent of the Higgs-potential in particle physics, in which in the cases \(R>H\) one obtains the well known Mexican-hat like function. ### The large \(l_{z}\) case In the large \(l_{z}\) limit the second square root appearing in the height functions of Theorem 4.2 in lowest order can be neglected since \(l_{z}^{2}\approx J(H,R,\vartheta_{max})\). The associated envelope curves thus approximately resemble the height functions for small variations of \(\vartheta\) \[\tilde{c}_{\pm}=\frac{H+R\cos(\vartheta)}{2}-\frac{r^{2}-R^{2}\sin^{2}( \vartheta_{max})}{2(H-R\cos(\vartheta))}, \tag{28}\] where \(\vartheta\in[\vartheta_{max}-\delta;\vartheta_{max}+\delta]\) for \(\delta\) small. ### The maximal \(l_{z}\) case The maximal value for \(l_{z}\) follows from (22) and is given by \[l_{z}=\sqrt{J(H,R,\vartheta_{max})}. \tag{29}\] In these cases, the second square root for the height function of Theorem 4.2 vanishes, resulting in a single height function for \(\vartheta=\vartheta_{max}\) as \[d(r)=\frac{H+R\cos(\vartheta_{max})}{2}-\frac{r^{2}-R^{2}\sin^{2}(\vartheta_{ max})}{2(H-R\cos(\vartheta_{max}))}, \tag{30}\] with \(r\geq R\sin(\vartheta_{max})\). Note that for \(R<H\) this reproduces the results of Section 3. For \(R>H\) it depends on the mirror boundary if the condition \(r\geq R\sin(\vartheta_{max})\) can be saturated, cases exist in which the maximal \(l_{z}-\)value is not accessible due to the mirror boundary. Figure 7. Two-dimensional section of confined domains associated to the four discussed limiting cases. The three-dimensional confined regions are obtained by rotation around the \(z-\)axis. ## 6. Conclusion and Outlook In this work the rotational symmetric confined domains of a point-like particle bouncing inside a paraboloid cavity under the influence of a homogeneous gravitational field in terms of the directrix height \(H\), reduced angular momentum \(l_{z}\) and foci sphere radius \(R\) were derived. It has been shown that some two-dimensional results map one to one to the 3D case. In addition, reduced angular momentum conservation (absent in 2D) yields some additional physics in 3D. For future works it would be interesting to generalize our results to other rotational symmetric domains. Also, the motion in a non-constant, e.g. Coulomb-field, would be of interest. ## Acknowledgements We would like to thank Dan Reznik for the inspiring conversation leading to this work.
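Theorem 4.1 lends itself to a direct numerical check using only the quantities of Section 2: the mirror of eq. (1), the inward normal of eq. (2), the directrix height of eq. (5), the reflection law of eq. (6), and the focus formula of eq. (14). The sketch below is illustrative, with the values of g, f_M and the sampled velocities chosen as arbitrary placeholders; it is not code from the paper.

```python
import numpy as np

g, f_M = 9.81, 1.0  # gravitational acceleration and mirror focal length (placeholders)

def inward_normal(p):
    """Unit normal of M(x,y,z) = z - (x^2+y^2)/(4 f_M) + f_M pointing into the cavity (eq. 2)."""
    n = np.array([-p[0] / (2 * f_M), -p[1] / (2 * f_M), 1.0])
    return n / np.linalg.norm(n)

def reflect(v, p):
    """Ideal reflection law of eq. (6) at the boundary point p."""
    n = inward_normal(p)
    return v - 2 * np.dot(n, v) * n

def focus(p, v):
    """Focus of the flight parabola through p with velocity v (eq. 14); H from energy conservation (eq. 5)."""
    H = np.dot(v, v) / (2 * g) + p[2]
    return np.array([p[0], p[1], 2 * p[2] - H]) + (v[2] / g) * v

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.uniform(-2.0, 2.0, size=2)
    p = np.array([x, y, (x**2 + y**2) / (4 * f_M) - f_M])  # point on the mirror boundary, eq. (1)
    v = rng.uniform(-3.0, 3.0, size=3)                     # velocity right before the reflection
    R_in = np.linalg.norm(focus(p, v))                     # |F| of the incoming flight parabola
    R_out = np.linalg.norm(focus(p, reflect(v, p)))        # |F'| of the outgoing one
    print(f"|F| = {R_in:.6f}, |F'| = {R_out:.6f}")         # equal, as Theorem 4.1 states
```

Within floating-point accuracy the two norms agree for every sampled reflection, which is exactly the statement that consecutive foci share a common sphere of radius R centered at the paraboloid's focus.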
2303.02473
Disparity in the Evolving COVID-19 Collaboration Network
The COVID-19 pandemic has paused many ongoing research projects and unified researchers' attention to focus on COVID-19 related issues. Our project traces 712,294 scientists' publications related to COVID-19 for two years, from January 2020 to December 2021, to detect the dynamic evolution patterns of the COVID-19 collaboration network over time. By studying the collaboration network of COVID-19 scientists, we observe how a new scientific community has been built in preparation for a sudden shock. The number of newcomers grows incrementally, and the connectivity of the collaboration network shifts from loose to tight promptly. Even though every scientist has an equal opportunity to start a study, collaboration disparity still exists. Following the scale-free distribution, only a few top authors are highly connected with other authors. These top authors are more likely to attract newcomers and work with each other. As the collaboration network evolves, the increase rate in the probability of attracting newcomers for authors with higher degrees increases, whereas the increase rate in the likelihood of forming new links among authors with higher degrees decreases. This highlights the interesting trend that the COVID-19 pandemic alters research collaboration patterns: star scientists are starting to collaborate more with newcomers but less with existing collaborators, which, in a certain way, reduces the collaboration disparity.
Huimin Xu, Redoan Rahman, Ajay Jaiswal, Julia Fensel, Abhinav Peri, Kamesh Peri, Griffin M Weber, Ying Ding
2023-03-04T18:07:10Z
http://arxiv.org/abs/2303.02473v1
# Disparity in the Evolving COVID-19 Collaboration Network ###### Abstract The COVID-19 pandemic has paused many ongoing research projects and unified researchers' attention to focus on COVID-19 related issues. Our project traces 712,294 scientists' publications related to COVID-19 for two years, from January 2020 to December 2021, in order to detect the dynamic evolution patterns of the COVID-19 collaboration network over time. By studying the collaboration network of COVID-19 scientists, we observe how a new scientific community has been built in preparation for a sudden shock. The number of newcomers grows incrementally, and the connectivity of the collaboration network shifts from loose to tight promptly. Even though every scientist has an equal opportunity to start a study, collaboration disparity still exists. Following the scale-free distribution, only a few top authors are highly connected with other authors. These top authors are more likely to attract newcomers and work with each other. As the collaboration network evolves, the increase rate in the probability of attracting newcomers for authors with higher degree increases, whereas the increase rate in the probability of forming new links among authors with higher degree decreases. This highlights the interesting trend that the COVID-19 pandemic alters research collaboration patterns: star scientists are starting to collaborate more with newcomers, but less with existing collaborators, which, in a certain way, reduces the collaboration disparity. Keywords:COVID-19 publications, collaboration disparity, collaboration network, dynamic evolution, degree centrality ## 1 Introduction The science of science is a field that studies the structure and evolution of science, and it has offered rich quantitative and qualitative methods to uncover insights about creativity, collaboration, and impact in scientific endeavors. Despite the prominent contributions of science of science researchers, which are deeply rooted in normal science, including scientific collaboration (Leahey, 2016), team composition (Wu et al., 2019), novelty (Uzzi et al., 2013), and funding allocation (Jacob & Lefgren, 2011), studies about scientific activities in abnormal conditions are largely overlooked. However, patterns or findings from studies on normal science cannot be directly applied to abnormal conditions. Normal science was coined by Thomas Samuel Kuhn (1962) as a phase of science during which the scientific community has confidence in what the world is like. Normal science often suppresses fundamental differences/novelties because it favors fitting phenomena into the widely accepted conceptual theories/boxes (Collins, 1994). So, when a novel pandemic occurs, we need to understand how scientists collaborate, what the team dynamics are, and whether out-of-the-box thinking can be supported by scientific communities. This paper studies the scientific collaboration of COVID-19 authors from the perspective of evolving networks. The ongoing COVID-19 pandemic has not only disturbed the normal routines of scientific activities, but also demanded solutions from science to contain the spread. Understanding scientific activities in abnormal times is urgent and imperative (Fry et al., 2020). Studying the patterns of scholarly communication during pandemic times can help us understand how science can bend the trajectories of pandemic spreading and provide guidance for science policy makers to develop better risk management plans for future unexpected disasters. 
## 2 Related Work Barabasi et al. (2002) explored the evolving collaboration networks in the mathematics and neuroscience disciplines covering eight years. They found that the average degree (i.e., degree centrality) increases and the node separation (i.e., the average shortest-path distance between two given nodes) decreases. In addition to uncovering the power law distribution of networks (Barabasi & Albert, 1999), they also revealed two mechanisms to explain the preferential attachment phenomenon - "the rich get richer". In that research, Barabasi et al. (2002) found that a new author is more likely to work with authors who already have many coauthors. Also, authors who already have many coauthors are more likely to build more links as the network evolves. Azondekon et al. (2018) analyzed the connectedness of researchers in malaria research by building a co-authorship network from papers collected from Web of Science. They found that prolific authors have higher probabilities of collaborating with more authors, and that the giant component covers 94% of all vertices, confirming a small-world network. Furthermore, Uddin et al. (2013) examined the relationship between network centrality measures and the impact and productivity of authors. They established a regression model and revealed that the degree centrality and betweenness centrality of authors are positively correlated with the strength of their scientific collaboration (i.e., the number of coauthors of a given author) and their impact (i.e., the citation count of a research article authored by a given author). In our project, we apply these network science measures to the COVID-19 collaboration network to detect the collaboration disparity during the pandemic. ## 3 Methodology **Data** We use the LitCOVID dataset1 as our source of COVID-19 publications. LitCOVID collects COVID-19 publications from the PubMed dataset2 by searching relevant keywords, such as "coronavirus", "ncov", and "2019-nCoV" (Chen et al., 2020; Chen et al., 2021). The results are updated daily and reviewed by humans and machine learning algorithms. By December 23rd, 2021, there were 205,476 COVID-19 papers, each with a specific PubMed id and title information. By tracing the PubMed id in the PubMed dataset, we can get the author list and publication time of each paper. We deal with the author name disambiguation problem with the assistance of the Semantic Scholar dataset3 (Ammar et al., 2018). Xu et al. (2020) evaluated the author name disambiguation in Semantic Scholar, which reaches an F1 score of 96.94%. LitCOVID and Semantic Scholar both keep the PubMed id; thus we can match author names in LitCOVID with unique author ids in Semantic Scholar through the common PubMed id. Finally, we get 186,046 COVID-19 related papers, with complete publication time, author names, and author ids, from January 2020 to December 2021. Among these papers, there are 712,294 unique authors. A majority of papers, 89% (166,126), have more than one author, and 99% of authors (704,164) have collaborators. In order to observe the evolution of the collaboration network, we document eight quarters based on publication time. Table 1 and Figure 1 describe the cumulative nodes and links added to the network over time. We can see that the rate of node growth appears stable, whereas the number of links grows suddenly in the first and second quarters of 2021 and then remains stable. 
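A minimal sketch of this preprocessing step follows, assuming pandas and networkx; the column names `pubmed_id`, `author_ids` and `quarter` are hypothetical placeholders rather than the authors' actual schema. It shows how the cumulative quarterly co-authorship network described above can be assembled, with the degree centrality measure defined in the Measures subsection below.

```python
from itertools import combinations

import networkx as nx
import pandas as pd

# papers: one row per publication; author_ids is a list of disambiguated author ids,
# quarter is a sortable label such as "2020_Q1" (column names are illustrative).
papers = pd.DataFrame(
    {
        "pubmed_id": [1, 2, 3],
        "author_ids": [["a1", "a2"], ["a2", "a3", "a4"], ["a1"]],
        "quarter": ["2020_Q1", "2020_Q1", "2020_Q2"],
    }
)

cumulative = {}          # quarter -> cumulative co-authorship graph up to that quarter
G = nx.Graph()
for quarter, group in papers.sort_values("quarter").groupby("quarter", sort=True):
    for authors in group["author_ids"]:
        G.add_nodes_from(authors)                   # keeps single-author papers as isolated nodes
        G.add_edges_from(combinations(authors, 2))  # edge = at least one co-authored paper
    cumulative[quarter] = G.copy()
    print(quarter, G.number_of_nodes(), "nodes,", G.number_of_edges(), "links")

# Normalized degree centrality, k / (n - 1), for the latest snapshot.
degree_centrality = nx.degree_centrality(cumulative["2020_Q2"])
```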
Footnote 1: [https://www.ncbi.nlm.nih.gov/research/coronavirus/](https://www.ncbi.nlm.nih.gov/research/coronavirus/) Footnote 2: [https://pubmed.ncbi.nlm.nih.gov/download/](https://pubmed.ncbi.nlm.nih.gov/download/) Footnote 3: [https://api.semanticscholar.org/corpus/download/](https://api.semanticscholar.org/corpus/download/) Measures Authors have formed the collaboration network if any given two authors have co-authored at least one paper, there is an edge to connect these two nodes. Degree centrality for an author \(i\) can be defined as: \(Degree\)\(Centrality\)\((a)=\frac{k}{n-1}\), where \(k\) is the degree of author \(a\) (represents the number of authors with whom author \(a\) is directly connected in the co-authorship network), n represents the number of authors in the network. Fig 2 shows how to calculate the probability of attracting external new nodes and forming new internal links. Probability of attracting new authors for an old node with degree \(k_{i}\): \(P(k_{i})=\frac{V(k_{i})}{N(k_{i})}\), where \(V(k_{i})\) means the number of newcomers that authors with degree \(k_{i}\) attract, \(N(k_{i})\) means the number of authors with degree \(k_{i}\). Probability of forming new links among old nodes with degree \(k_{i}\)and \(k_{j}\): \(P\big{(}k_{i},k_{j}\big{)}=\frac{L(k_{i},k_{j})}{N(k_{i})*N(k_{j})}\), \(L\big{(}k_{i},k_{j}\big{)}\) means the number of new links between authors with degree \(k_{i}\) and \(k_{j}\), \(N(k_{i})*N\big{(}k_{j}\big{)}\) means the number of combination pairs between authors with degree \(k_{i}\) and \(k_{j}\). Figure 1: Cumulative number of nodes (left: indicating authors) and links (right: indicating co-authorship) for the COVID-19 collaboration network up to a given time ## 4 Result We observe the evolution of the collaboration network at eight different stages, from the first quarter of 2020 to the fourth quarter of 2021. Firstly, we found the degree distributions of networks up to the indicated time all follow scale-free power law distribution (Fig 3). We can see most of the authors have a relatively small number of collaborations, but a few authors have the ability to connect with many partners. This result is consistent in these eight networks. The degree distributions gradually shift upward as more new authors join the community, and meanwhile move rightward as existing authors enhance their collaboration ties. Figure 2: **The illustration of calculating probability of attracting new nodes and forming new links among old nodes.****a.** At the time t1, there are three kinds of nodes with different degrees (green) in the network. We calculate the number of these nodes, like, N(k=1) means the number of nodes with degree 1. At the next time t2, new nodes (orange) join in (**b**) and new links (orange) among old nodes appear (c). V(new) represents the number of new nodes, and L(new) represents the number of new links. **d.** We calculate the probability of attracting new nodes for an old node with degree \(k_{i}\). f. We calculate the probability of forming a new link for an old node with degree \(k_{i}\) and an old node with degree \(k_{j}\). Given the scale-free distribution, we choose to separately observe the top and tail authors' degree centrality. The tail 20% authors' average degree centrality values have trivial changes over time (Fig 4a). At the very beginning (2020_Q1), the degree centrality is relatively large for top 10% and top 20% authors as the network has a few nodes. 
We also need to note that one year later (2021_Q1), there is an apparent increase in degree centrality for top authors. Meanwhile, the gap in degree centrality between top 10%, top 20% and tail 20% increases in 2021. These patterns suggest that top authors play a more important role in connecting other authors than tail authors, which increases the collaboration disparity. We are also curious about what kind of collaborators top authors connect with. In the first year, the difference in degree centrality between top authors' collaborators and tail authors' collaborators is evident. It indicates that they prefer to work with homogenous authors whose degree is similar to them. Specifically, top authors tend to work with top authors, whereas tail authors tend to work with tail authors. When the COVID-19 pandemic suddenly starts, the powerful alliance among top authors enables them to react quickly to the outbreak. However, one year later, the difference in degree centrality between top authors' collaborators and tail authors' collaborators is less significant (Fig 4b). One possible reason is that although top authors still work with each other, they attract more newcomers as more people pay attention to the COVID-19 pandemic and join the community. Thus, high degree centrality and low degree centrality cancel out each other. Figure 3: Degree distribution for COVID-19 authors, showing the cumulative results up to a given time. The plot is drew using a logarithmic scale for both the x-axis and the y-axis. X axis k represents the number of collaborators an author has, whereas y-axis frequency shows the number of authors in the network with degree k. To explain the phenomenon above, we explore two possible mechanisms. On the one hand, old authors with a high degree have a larger probability of attracting external new authors. On the other hand, old authors with a high degree have a larger probability of forming internal new collaborations. In Fig 5 and Fig 6, we calculate the probability of collaborating with external new authors and forming new internal links among old authors. The slope of the dashed line corresponds to the exponent of power law distribution. The slope values are positive, which signifies that authors with larger k are more likely to connect with new authors (Fig 5). From 2020 to 2021, the increase rates in the probability of attracting newcomers increases for top authors with more collaborators, indicated by the slope changes. In the first quarter of 2021, the second quarter of 2021 and the fourth quarter of 2021, the slopes are above 1. On the whole, the inequality of newcomer distribution is aggravated until 2021. Similarly, the slope values are positive in Fig 6, which indicates that authors with larger k are more likely to publish COVID-19 papers together. But the difference is the increase rate in the probability of building new connections among old nodes with high degree decreases from 2020 to 2021. In the second quarter of 2020, the third quarter of 2020 and the fourth quarter of 2020, the slopes are above 1. Figure 4: **a. Average degree centrality values of COVID-19 authors whose degree centrality values are in the top 10%, top 20% and tail 20%. b. Average degree centrality values of top 10%, top 20% and tail 20% authors’ collaborators.** Figure 5: **The probability of attracting newcomers for existing COVID-19 authors before the given time.****a-g.** The plot is drew using a logarithmic scale for both the x-axis and the y-axis. 
The x-axis and y-axis are calculated as in Fig 2d. We fit the increasing trend with dashed lines and calculate the slope. **h.** The changes of the slopes corresponding to a-g. ## 5 Conclusion The current COVID-19 pandemic has caused huge economic losses, with record-high unemployment, the collapse of industry giants (e.g., large retailers), the bankruptcy of small and medium-sized businesses, and a spiraling decline in spending, traveling, producing, and servicing. The whole world has hit the pause button, and the world economy is in a spiral downturn. Uncertainty lies ahead of us. In our ever more highly connected world, simple infectious diseases can rapidly transform into pandemics, and the threat and damage of future infectious diseases can be immense. Understanding scientific activities in the current and past pandemics/epidemics can help us identify patterns and pinpoint wrongdoings. This paper conducts preliminary research on the connectivity of scientific activities of COVID-19 authors who are working at the frontlines to fight against COVID-19. In addition to describing the static topology of the COVID-19 collaboration network, we also reveal the dynamic evolution of the network. Figure 6: **The probability of forming new links among existing COVID-19 authors before the given time.** a-g. The plot is drawn using a logarithmic scale for both the x-axis and the y-axis. The x-axis and y-axis are calculated as in Fig 2e. We fit the increasing trend with dashed lines and calculate the slope. h. The changes of the slopes corresponding to a-g. We found that the COVID-19 pandemic alters research collaboration patterns: star scientists are starting to collaborate more with newcomers, but less with their peers, which, in a certain way, reduces the collaboration disparity.
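As an illustration of the measures defined in Section 3, the following sketch (NumPy/networkx; the helper names and toy graphs are hypothetical, not the authors' code) computes \(P(k_{i})=V(k_{i})/N(k_{i})\) between two consecutive network snapshots and the log-log slope used in the dashed-line fits of Figs. 5 and 6:

```python
import numpy as np
import networkx as nx

def newcomer_attachment_probability(G_old, G_new):
    """P(k) = V(k) / N(k): average number of newcomers attracted between two
    snapshots by existing authors of degree k (the quantity of Fig. 2d)."""
    newcomers = set(G_new) - set(G_old)
    V, N = {}, {}
    for node in G_old:
        k = G_old.degree(node)
        N[k] = N.get(k, 0) + 1
        V[k] = V.get(k, 0) + sum(1 for nb in G_new.neighbors(node) if nb in newcomers)
    return {k: V[k] / N[k] for k in N}

def loglog_slope(prob_by_k):
    """Slope of log P(k) versus log k, as in the dashed-line fits of Figs. 5-6."""
    pairs = [(k, p) for k, p in prob_by_k.items() if k > 0 and p > 0]
    slope, _ = np.polyfit(np.log([k for k, _ in pairs]), np.log([p for _, p in pairs]), 1)
    return slope

# Toy usage with two consecutive snapshots (real usage would pass quarterly cumulative graphs).
G1 = nx.path_graph(["a", "b", "c", "d"])
G2 = G1.copy()
G2.add_edges_from([("b", "x"), ("c", "y"), ("c", "z")])   # x, y, z are newcomers
print(newcomer_attachment_probability(G1, G2))            # e.g. {1: 0.0, 2: 1.5} (order may vary)
```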
2310.15037
Engineered dissipation to mitigate barren plateaus
Variational quantum algorithms represent a powerful approach for solving optimization problems on noisy quantum computers, with a broad spectrum of potential applications ranging from chemistry to machine learning. However, their performances in practical implementations crucially depend on the effectiveness of quantum circuit training, which can be severely limited by phenomena such as barren plateaus. While, in general, dissipation is detrimental for quantum algorithms, and noise itself can actually induce barren plateaus, here we describe how the inclusion of properly engineered Markovian losses after each unitary quantum circuit layer can restore the trainability of quantum models. We identify the required form of the dissipation processes and establish that their optimization is efficient. We benchmark our proposal in both a synthetic and a practical quantum chemistry example, demonstrating its effectiveness and potential impact across different domains.
Antonio Sannia, Francesco Tacchino, Ivano Tavernelli, Gian Luca Giorgi, Roberta Zambrini
2023-10-23T15:36:00Z
http://arxiv.org/abs/2310.15037v2
# Engineered dissipation to mitigate barren plateaus ###### Abstract Variational quantum algorithms represent a powerful approach for solving optimization problems on noisy quantum computers, with a broad spectrum of potential applications ranging from chemistry to machine learning. However, their performances in practical implementations crucially depend on the effectiveness of quantum circuit training, which can be severely limited by phenomena such as barren plateaus. While, in general, dissipation is detrimental for quantum algorithms, and noise itself can actually induce barren plateaus, here we describe how the inclusion of properly engineered Markovian losses after each unitary quantum circuit layer can restore the trainability of quantum models. We identify the required form of the dissipation processes and establish that their optimization is efficient. We benchmark our proposal in both a synthetic and a practical quantum chemistry example, demonstrating its effectiveness and potential impact across different domains. ## I Introduction While dissipation is generally detrimental to quantum technologies and is, in fact, a limiting factor for currently available noisy quantum devices [1], there are important exceptions. On the one hand, going beyond unitary operations is not only needed to account for quantum measurements [2] but also enables several applications ranging from quantum state preparation and control to error correction [3]. On the other hand, noisy quantum platforms are well suited for computing [4] in a hybrid quantum-classical setting, for instance using the popular class of variational quantum algorithms (VQAs) [5]. In this framework, a parametric quantum circuit is used to prepare complex quantum states and to evaluate, through sampling or with suitable quantum measurements, a cost function that would be otherwise expensive to compute classically. The optimization of the circuit parameters, aimed at minimizing such cost, is instead assigned to a classical optimizer. As an example, variational quantum eigensolvers (VQEs) [6; 7] can be used to approximate the ground state of a given Hamiltonian \(H\) through a trial quantum circuit, also called an ansatz, by minimizing the cost function given by the expectation value of \(H.\) More in general, VQAs can be applied to classical optimization problems [8], linear systems of equations [9; 10; 11], quantum simulations [12; 13; 14; 15], quantum data compression [16], quantum machine learning [17; 18; 19], generative models [20; 21; 22], quantum foundations [23], quantum compiling [24; 25; 26; 27], and quantum error correction [28]. In many of these cases, the VQA can also be formulated as a ground-state problem with a problem-specific (rather than a physically motivated) Hamiltonian, the choice of which is generally non-trivial [5]. Despite their potential advantages, VQAs are known to suffer from a number of bottlenecks that still prevent their implementation at scale. One particularly serious drawback, specific to quantum operations, is the phenomenon known as barren plateaus [29]. These manifest as a vanishing gradient of the loss function with respect to the model parameters and can lead to severe limitations in the training efficiency. Barren plateaus build up as the system size increases, hence directly hindering the scalability of VQAs. 
Specifically, it was found that when a quantum circuit ansatz reaches the 2-design limit the probability of randomly inducing a non-negligible gradient decreases exponentially with the number of qubits [29]. This condition is relatively easy to satisfy [30; 31; 32] implying a generalized loss of all possible quantum speedups, even if the optimization strategy does not include gradient computation [33]. Recent findings indicate that barren plateaus can originate from several factors, including high circuit expressibility [34; 35], concentration of the cost function [36], noise [37], entanglement excess [38], and globality of the cost function [39; 40]. All these barren-plateau sources can be unified by means of a Lie algebra theory that applies to a general (yet unitary) setting [41; 42]. Several techniques to mitigate barren plateaus have been reported, such as initialization strategies [43; 44], transferability of smooth solutions [45], entanglement limitations [46], correlations and restrictions of circuit parameters [34; 47], classical shadows [48], pre-trainings [49; 50], and layerwise learning [51; 52]. These mitigation strategies generally consist in appropriately constraining the unitary ansatz. In this work, we change the focus acting on the problem Hamiltonian without adapting the quantum circuit. Remarkably, the occurrence of barren plateaus is closely related to the unitarity of the ansatz [41; 42], while assessing these phenomena beyond a fully unitary framework presents a promising research avenue. Our goal is therefore to demonstrate the potential of non-unitary, open-system dynamics as a powerful strategy to overcome the trainability barrier and to ensure efficient convergence. To achieve it, it is reasonable to expect that a non-trivial engineering of the dissipation processes will be required. Indeed, while it has been already suggested that non-unitary operations can increase the accuracy of ground-state VQE calculations in quantum chemistry [53; 54], generic noise is known to actually induce barren plateaus [37; 55]. In parallel, dissipation implemented simply by discarding qubit registers, as in the class of so-called dissipative quantum neural network models [56; 57; 58], is also not sufficient to ensure trainability, in general [56]. The proof that a suitable set of operations can in fact be constructed constitutes the main technical contribution of our work. Building on previous results obtained in the field of engineered quantum dissipation [59; 60], we consider a Markov dissipation modeled by a Gorini-Kossakowski-Sudarshan-Lindblad (GKLS) Master Equation [61; 62; 63]. Our strategy to design the proper dissipation is motivated by the recent observation that training a _local_ cost function (a cost function made of local observables) is much more efficient than training a _global_ one [64; 65; 66; 67]. Rigorous analyses have proven that local cost functions, unlike global ones, are immune to barren plateaus in the case of shallow circuits [39; 40]. While formulating variational algorithms locally is preferable, mapping a global problem to a local one is generally a non-trivial task. Here, we propose variational algorithms based on non-unitary ansatzes and we show that this represents an effective strategy to tackle barren plateaus (Sec. III), setting the theory framework (Sec. IV) and discussing both a synthetic and a chemical example (Sec. V and Sec. VI). 
## II General framework In the most general case, VQAs consist of minimizing a cost function whose minimum faithfully corresponds to the solution of a considered problem [5]. In the following, we consider the case where the cost function corresponds to the expectation value of an \(n\)-qubit Hermitian operator \(H\) and, consequently, the solution is its ground-state energy. Given an initial condition \(\rho_{in}\) and a quantum circuit ansatz \(U(\mathbf{\theta})\), where \(\mathbf{\theta}\) is a free parameter vector, the cost function has the form \[C(\mathbf{\theta})=\mathrm{Tr}\{HU(\mathbf{\theta})\rho_{in}U^{\dagger}(\mathbf{\theta})\}.\] The minimization strategy consists of evaluating \(C(\mathbf{\theta})\) by a quantum device. The free parameters are optimized by a classical procedure. A necessary condition for the circuit trainability is that, for a random initialization of \(\mathbf{\theta}\), the probability of finding a non-negligible value of the cost function derivative with respect to a generic parameter \(\theta_{k}\), denoted as \(\partial_{k}C\), is itself not negligible. An upper bound on this probability is known to be proportional to the variance of the derivative, which we write as \(\mathrm{Var}[\partial_{k}C]\), setting the limits to training efficiency [34; 35; 36; 37; 38; 39; 40; 41; 42]. By definition, we say that the cost function landscape presents a barren plateau if \(\mathrm{Var}[\partial_{k}C]\) exponentially decreases with the number of qubits \(n\), i.e., \(\mathrm{Var}[\partial_{k}C]\in\mathcal{O}(e^{-pn})\) where \(p\) is a positive integer. This phenomenon strongly depends on the locality of \(H\) where locality, in this context, refers to the number of qubits on which the Hamiltonian acts non-trivially. We expand the Hamiltonian such that \[H=c_{0}\mathbb{I}+\sum_{i=0}^{N}c_{i}H_{i} \tag{1}\] where \(\mathbb{I}\) is the \(n\)-qubit identity operator, \(H_{i}\) is a generic Hermitian operator and \(c_{i}\) is a real coefficient. By definition, we say that \(H\) is _local_ when all \(H_{i}\) terms act non-trivially on at most \(K\) qubits (where \(K\) does not scale with \(n\)), as is the case of nearest-neighbor interaction Hamiltonians. On the other hand, we call _H global_ if the \(H_{i}\) operators act non-trivially on all qubits. It has been rigorously proved that, under quite general conditions, for _alternating layered ansatzes_ of arbitrary depth, a barren plateau is inevitable in the global case, while in the local case shallow circuits can prevent it [39; 40]. More precisely, if the number of layers (L) does not increase faster than a logarithm in the number of qubits, i.e., \(L=\mathcal{O}(\log(n))\) then: \[\mathrm{Var}[\partial_{k}C]=\Omega\left(\frac{1}{\mathrm{poly}(n)}\right)\] which means that the probability of finding a non-negligible gradient does not decrease faster than a polynomial implying the absence of barren plateaus. It has also been empirically found that when a problem can be encoded with both a global and a local cost function, the training is more effective in the local case and the mitigation techniques work much better [64; 65; 34; 66]. Moreover, it has been also analytically shown that \(\mathrm{Var}[\partial_{k}C]\), in general, increases with the Hamiltonian locality [41; 42]. In the following, we will focus on the case where the Hamiltonian of the problem is originally global and an equivalent local one is unknown. 
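To make the distinction concrete, here is a small sketch (plain NumPy; the three-qubit size, the Pauli-\(Z\) terms, and the product-of-\(R_{x}\) ansatz are illustrative assumptions rather than the paper's model) that builds a 1-local and a global Hamiltonian in the sense of Eq. (1) and evaluates the corresponding cost functions \(C(\mathbf{\theta})\):

```python
from functools import reduce
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

n = 3  # small illustrative system size

# 1-local Hamiltonian in the sense of Eq. (1): every term acts on a single qubit.
H_local = sum(kron_all([Z if j == q else I2 for j in range(n)]) for q in range(n))
# Global Hamiltonian: a single term acting non-trivially on all n qubits.
H_global = kron_all([Z] * n)

def rx(theta):
    """Single-qubit rotation exp(-i theta X / 2)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def cost(H, thetas):
    """C(theta) = Tr{ H U(theta) rho_in U(theta)^dagger } with rho_in = |0...0><0...0|."""
    psi = kron_all([rx(t) for t in thetas])[:, 0]   # first column = U|0...0>
    return np.real(np.conj(psi) @ H @ psi)

thetas = np.random.default_rng(1).uniform(0, 2 * np.pi, n)
print("local cost :", cost(H_local, thetas))
print("global cost:", cost(H_global, thetas))
```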
We will show how the addition of a proper non-unitary layer in the variational scheme allows the problem to be approximated with a local one where barren plateaus are absent or easier to face. ## III Non-unitary ansatz We propose to generalize the unitary framework for VQA to Markovian maps, designing the proper dissipation [59; 60] amenable to experimental implementation, also in near-term quantum devices. Previous non-unitary ansatz in VQA's have been considered for _in silico_ implementations (classical post-processing) to boost precision in chemical problems [53; 54]. The non-unitary ansatz acting on the quantum state is \[\Phi(\mathbf{\sigma},\mathbf{\theta})\rho=\mathcal{E}(\mathbf{\sigma})\circ U(\mathbf{\theta} )\rho U^{\dagger}(\mathbf{\theta})\] where \(U(\mathbf{\theta})\) is, as before, a parametric quantum circuit, and \(\mathcal{E}(\mathbf{\sigma})\) is a non-unitary superoperator, with their respective tunable parameters \(\mathbf{\theta}\) and \(\mathbf{\sigma}\). Under Markovian assumption, the non-unitary part of the ansatz is defined by a parametric Liouvillian \(\mathcal{L}=\mathcal{L}(\mathbf{\sigma})\) such that \[\mathcal{E}(\mathbf{\sigma})=e^{\mathcal{L}(\mathbf{\sigma})\Delta t},\] where \(\Delta t\) is the interaction time with the environment. The Liouvillian \(\mathcal{L}\) is the superoperator that generates the dynamics of the GKLS Master Equation [61, 62, 63]: \[\dot{\rho}=\mathcal{L}\rho\equiv-i[\mathfrak{H},\rho]+\sum_{i}\gamma_{i}(L_{i }\rho L_{i}^{\dagger}-\frac{1}{2}\{L_{i}^{\dagger}L_{i},\rho\}) \tag{2}\] where \(\mathfrak{H}\) is the Hamiltonian responsible of the unitary part of the evolution, \(\{\gamma_{i}\}\) are the damping rates and the operators \(\{L_{i}\}\), called jump operators, identify the environment action on the qubits. For our specific proposal, we set the following conditions on \(\mathcal{L}\): 1. \(\mathcal{L}\) can be expressed as the sum of \(Q\) superoperators \(\mathcal{L}_{q}(\mathbf{\sigma}_{q})\), each of which acts non-trivially on at most \(K\) qubits. This expansion takes the form: \[\mathcal{L}(\mathbf{\sigma})=\sum_{q=1}^{Q}\mathcal{L}_{q}(\mathbf{\sigma}_{q}).\] (3) 2. All the generators \(\mathcal{L}_{q}\) commute with each other. 3. All the generators \(\mathcal{L}_{q}\) have exactly one stationary state, \(\rho_{ss,q}\). 4. The generators \(\mathcal{L}_{q}\) converge to their respective stationary states at the same rate, which we refer to as the mixing time. According to the previous definitions and under the assumptions 1. and 2., the cost function reads: \[C(\mathbf{\theta},\mathbf{\sigma}) =\mathrm{Tr}\biggl{\{}He^{\mathcal{L}(\mathbf{\sigma})\Delta t}U(\bm {\theta})\rho_{in}U^{\dagger}(\mathbf{\theta})\biggr{\}}\] \[=\mathrm{Tr}\Biggl{\{}H\prod_{q=1}^{Q}e^{\mathcal{L}_{q}(\mathbf{ \sigma}_{q})\Delta t}U(\mathbf{\theta})\rho_{in}U^{\dagger}(\mathbf{\theta})\Biggr{\}}. \tag{4}\] ### Illustrative example Before delving into the formal characterization, we present a simple example to illustrate the strategy of tackling the barren plateaus with non-unitary VQAs. Following [39], we consider a state preparation task formulated through the optimization of a global Hamiltonian. In particular, we are interested in minimizing \(H=\mathbb{I}-\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\) supposing that all the qubits are initialized in state \(\left|0\right\rangle\). 
By applying the unitary ansatz \(U(\mathbf{\theta})=\bigotimes_{j=1}^{n}e^{-i\theta_{j}\sigma_{x}/2}\), we can easily detect the occurrence of a barren plateau (see Fig. 1 a). Figure 1: Cross sections of the warm-up example cost function landscapes of Sec. III.1 for a 20-qubit system. (a) Fully unitary ansatz \(C_{u}\), (b) noisy landscape \(C_{n}\) with depolarizing probability \(p=0.5\), and (c) engineered-dissipation ansatz \(C_{ed}\) in the case of \(\Delta t\simeq 2.33\), which corresponds to the maximum of \(\mathrm{Var}[\partial C_{ed}/\partial\theta_{j}]\) (see main text). In fact, the cost function takes the form \(C_{u}=1-\prod_{j=1}^{n}\cos^{2}(\frac{\theta_{j}}{2})\) and a direct calculation shows that \(\text{Var}[\frac{\partial C_{u}}{\partial\theta_{j}}]=\frac{1}{8}(\frac{3}{8})^{n-1}\). In addition, the derivative values are unbiased, i.e. \(\left\langle\frac{\partial C_{u}}{\partial\theta_{j}}\right\rangle=0\), implying an exponential suppression of the probability of sampling a non-negligible gradient as a consequence of Chebyshev's inequality. We now emphasize that, as shown in [37], generic noisy non-unitary interactions are not able to mitigate barren plateaus and can actually induce them. For example, if we model the noise effect with a depolarizing channel of probability \(p\), the resulting cost function will be \(C_{n}=p(1-\frac{1}{2^{n}})+(1-p)C_{u}\) (see Fig. 1 b). This non-unitary model, in addition to clearly having the same trainability problems as the cost function \(C_{u}\), also precludes the possibility of finding the correct ground state energy. We now introduce an engineered dissipation through a non-unitary layer, ensuring that each \(\mathcal{L}_{q}\) operator acts dissipatively on a single qubit as per Eq. (2). We assume that the \(\mathcal{L}_{q}\) structure lacks a Hamiltonian part and instead is determined by a single jump operator \(L_{q}=\left|0\right\rangle_{q}\left\langle 1\right|\) with a corresponding damping rate normalized to \(1\). This situation is advantageous because the stationary state of the whole Liouvillian is the ground state of the sought-after problem. The cost function in the presence of such an engineered dissipation is now \(C_{ed}=1-\prod_{j=1}^{n}[1-\sin^{2}(\frac{\theta_{j}}{2})e^{-\Delta t}]\), for which the unbiased condition on the gradient still holds. However, in this case, it is possible to efficiently avoid the barren plateau (see the broader and still deep minimum in Fig. 1 c). In fact, \(\text{Var}[\frac{\partial C_{ed}}{\partial\theta_{j}}]=\frac{1}{8}e^{-2\Delta t}(1+\frac{3}{8}e^{-2\Delta t}-e^{-\Delta t})^{n-1}\), and if \(\Delta t\sim\mathcal{O}(\log(n))\) the desired polynomial scaling is achieved. We observe that the \(C_{ed}\) landscape takes a shape similar to the one in [39], where the unitary ansatz was the same as the one considered here, but the global Hamiltonian was replaced by an equivalent local one. In the following, we will show that this landscape similarity is not a coincidence, because the considered non-unitary ansatz makes it possible, in general, to localize a problem that was originally formulated as a global one. Moreover, we will also prove that the logarithmic \(\Delta t\) scaling is a general feature of the procedure. ## IV Theoretical results Now we will show how a non-unitary layer that respects the constraints introduced in Sec. III is able to make the Hamiltonian of the cost function local. 
First, we note that the diagonalizability of the \(\mathcal{L}_{q}\) superoperators is a mild condition that can be assumed to hold. In fact, if \(\mathcal{L}_{q}\) has a holomorphic dependence on the parameters \(\mathbf{\sigma}_{q}\) and if it is diagonalizable for a subspace of them, then it will be diagonalizable in general, apart from the possible appearance of countable exceptional points, which can be ignored in the discussion [68]. Anyway, in all the cases discussed in the following, diagonalizability will always be ensured. Consequently, it is useful to expand the \(\mathcal{L}_{q}\) superoperators through their dual basis. To keep the notation simple, we will assume that each \(\mathcal{L}_{q}\) acts non-trivially on exactly \(K\) qubits [69]. Introducing the Liouville notation for a generic matrix, \(\rho\rightarrow\left|\rho\right\rangle\!\rangle\), and considering the dot product \(\langle\!\langle\tau|\rho\rangle\!\rangle=\text{Tr}\{\tau^{\dagger}\rho\}\), then \[e^{\mathcal{L}_{q}(\mathbf{\sigma}_{q})\Delta t}=\sum_{i=0}^{4^{K}-1}e^{\lambda_{q,i}\Delta t}\frac{\left|r_{q,i}\right\rangle\!\langle\!\langle l_{q,i}|}{ \left\langle\!\langle l_{q,i}|r_{q,i}\right\rangle\!\rangle}, \tag{5}\] where \(\{\lambda_{q,i}\}_{i}\) is the set of the eigenvalues of \(\mathcal{L}_{q}\) and \(\{\left|r_{q,i}\right\rangle\!\rangle\}_{i}\) and \(\{\left|l_{q,i}\right\rangle\!\rangle\}_{i}\) are the corresponding set of orthogonal and normalized right and left eigenvectors [70]. Ordering the indexes as a function of the real parts of the eigenvalues such that \(\text{Re}\{\lambda_{q,0}\}>\text{Re}\{\lambda_{q,1}\}\geq\cdots\geq\text{Re }\big{\{}\lambda_{q,4^{K}-1}\big{\}}\), and assuming the uniqueness of the steady state \(\left|\rho_{ss,q}\right\rangle\!\rangle\) (this is also a mild condition to satisfy), we identify \(\lambda_{0}=0\), \(\left|r_{q,0}\right\rangle\!\rangle=\left|\rho_{ss,q}\right\rangle\!\rangle\) and \(\left|l_{q,0}\right\rangle\!\rangle=\left|\mathbb{I}\right\rangle\!\rangle\). Equation 5 can be used to calculate the order of magnitude of the mixing times \(\Delta t_{mix,q}\). Indeed, excluding the occurrence of singular phenomena such as the skin effect [71], we have \(\Delta t_{mix,q}\sim\mathcal{O}(1/|\text{Re}\{\lambda_{q,1}\}|)\), where the quantity \(|\text{Re}\{\lambda_{q,1}\}|\) represents the spectral gap. Our constraint of a common mixing time for all \(\mathcal{L}_{q}\) is, then, satisfied if the spectral gaps are equal and do not vary with \(\mathbf{\sigma}_{q}\). Then, we deal with a unique mixing time, which we will refer to as \(\Delta t_{mix}\). Defining \(\left|\rho(\mathbf{\theta})\right\rangle\!\rangle\equiv\left|U(\mathbf{\theta})\rho_{in }U^{\dagger}(\mathbf{\theta})\right\rangle\!\rangle\), we can rewrite Eq. (4) with the introduced notation: \[C(\mathbf{\theta},\mathbf{\sigma})=\langle\!\langle H|e^{\mathcal{L}(\mathbf{ \sigma})\Delta t}[\rho(\mathbf{\theta})]\!\rangle\] \[=\langle\!\langle H|\sum_{i_{1},\ldots,i_{Q}=0}^{4^{K}-1}e^{(\sum_ {q=1}^{Q}\lambda_{q,i_{q}})\Delta t}\bigotimes_{q=1}^{Q}\frac{\left|r_{q,i_{q }}\right\rangle\!\rangle\langle\!\langle l_{q,i_{q}}\right|}{\left\langle\! 
\langle l_{q,i_{q}}\right|r_{q,i_{q}}\!\rangle\!\rangle}|\rho(\mathbf{\theta}) \rangle\!\rangle.\] For a value of the time such that \(\Delta t\ Writing \(H\) as a linear combination of the \(\mathcal{L}\) left eigenvectors (which form a complete set) \[H=\sum_{j_{1},\ldots,j_{Q}}c_{j_{1},\ldots,j_{Q}}\bigotimes_{q=1}^{Q}|l_{q,i_{q}} \rangle\!\rangle\] and assuming that higher order terms in Eq. (6) can be disregarded, we arrive at the following approximate expression for \(H^{{}^{\prime}}\): \[H^{{}^{\prime}}\simeq c_{0}^{\prime}(\mathbf{\sigma})\mathbb{I}+\sum_{q=1}^{Q}c_{ q}^{\prime}(\mathbf{\sigma})|\mathbb{I}_{q}\rangle\!\rangle\otimes|l_{q,1}\rangle\!\rangle, \tag{8}\] where \(\mathbb{I}_{q}\) refers to the identity operator in all qubit Hilbert spaces except the \(q\)-th subspace. At this point, we observe that \(H^{{}^{\prime}}\) takes the local form of Eq. (1) and is Hermitian, because the time evolution of \(H\), according to (7), can always be written as a sum of time-decaying Hermitian matrices [73]. Therefore, neglecting terms in the expansion does not preclude the hermiticity of the operator. Consequently, all the results concerning the training of the hyperparameter vector \(\mathbf{\theta}\) for local Hamiltonians can be applied. We emphasize that the condition imposed on the mixing times guarantees that each \(\mathcal{L}_{q}\) operator contributes equally to the expansion of the slowest terms of Eq. 6, resulting in the definition of \(H^{\prime}\). It is also easy to verify that the emergence of an effective Hamiltonian with local character holds even when faster modes are included. Our strategy is effective only if the \(c^{\prime}\) coefficients are not negligible. This property is achieved when the slowest decay right eigenvectors of \(\mathcal{L}\) have a significant overlap with \(H\) and, consequently, the choice of the Liouvillan has to be designed according to the known Hamiltonian properties. In addition to avoiding the problem of the barren plateau, we have to make sure that the minimum energy of \(H^{{}^{\prime}}\) and the minimum energy of \(H\) are close enough. Such a closeness will depend, in general, on the value of the parameters \(\mathbf{\sigma}\), and, consequently, the trainability of the non-unitary layer is a property to take into account. To this end, we study how the derivative of the cost function varies as a function of a generic component of \(\mathbf{\sigma}\), which we call \(\sigma_{k}\). As in the previous case of the unitary parameters, a key quantity to calculate is the variance of \(\sigma_{k}\), which is related to the probability of sampling a non-negligible value of such parameter. If we call \(d\mu(\mathbf{\theta})\) and \(d\mu(\mathbf{\sigma})\) the distribution volume elements of the free parameters, we obtain \[\text{Var}\left[\frac{\partial C}{\partial\sigma_{k}}\right] =\text{Var}\left[\text{Tr}\Bigg{\{}\frac{\partial H^{{}^{\prime} }}{\partial\sigma_{k}}\rho(\mathbf{\theta})\Bigg{\}}\right]\] \[=\text{Var}\left[\text{Tr}\Big{\{}\tilde{H}(\mathbf{\sigma})\rho(\bm {\theta})\Big{\}}\right]\] \[=\int_{\mathbf{\sigma}}d\mu(\mathbf{\sigma})\int_{\mathbf{\theta}}d\mu(\mathbf{ \theta})\left(\tilde{C}-\left\langle\tilde{C}\right\rangle\right)^{2} \tag{9}\] where \(\tilde{H}\) is a local Hamiltonian that can be written in the form of Eq. (8) and \(\tilde{C}\) is its relative cost function computed with respect to the unitary ansatz considered. From Eq. 
(9) we learn that an exponential suppression of \(\text{Var}[\partial C/\partial\sigma_{k}]\)), according to [36], can occur if and only if the unitary ansatz presents a barren plateau in the optimization of \(\tilde{H}\). Since \(\tilde{H}\) and \(H^{\prime}\) have the same local character, considering a quantum circuit that is not affected by barren plateaus in the case of local Hamiltonians (see Sec. II) directly excludes the presence of barren plateaus in the non-unitary layer. Moreover, we want to remark that the value of \(\Delta t\) that ensures the validity of the approximation in Eq. (6) scales efficiently with the number of generators that compose the model, that is, \(\mathcal{O}(\log(Q))\), independent on the specific Hamiltonian \(H\) of the problem. Furthermore, the condition \(Q=\mathcal{O}(\text{poly}(n))\) ensures an efficient logarithmic scaling even in the number of qubits, as required. Let us also emphasize that the condition of Eq. (6) can be alleviated by including faster decay terms while still preserving a local structure for \(H^{{}^{\prime}}\). This means that for practical purposes we can choose a value of \(\Delta t\) significantly smaller than the theoretical threshold discussed above. Finally, our analysis extends seamlessly to considering a convex combination of non-unitary maps generated by a Liouvillian that satisfies the constraints introduced in Sec. III. Indeed, a non-unitary layer given by the convex combination \(\mathcal{E}=\sum_{j}\beta_{j}e^{\mathcal{L}_{j}\Delta t}\), in addition to being a suitable quantum channel, will still transform a global Hamiltonian to a local one. Importantly, if we compute the variance of the derivative of the cost function with respect to either the \(\beta_{j}\) coefficients or the \(\mathcal{L}_{j}\) free parameters, we find that the result reduces to the form shown in Eq. 9. This observation generalizes the conclusions drawn from our previous analysis. ## V Random Hamiltonian example We will now provide a numerical demonstration of the scaling calculated analytically in Sec. IV in a synthetic example. We consider a random Hamiltonian whose ground state in each realization can be either in the neighborhood of the state \(|\mathbf{0}\rangle\) or of \(|\mathbf{1}\rangle\). Assuming lack of knowledge of the ground state, the initial guess for the ansatz is random. For the sake of definiteness, we have fixed the minimum energy to the value of -1.1 (see Appendix A for more details). Moreover, as in Refs. [29; 34], we will employ a hardware-efficient, layered quantum circuit as the unitary component of our ansatz (its form is shown in Appendix B). Considering the known properties of the Hamiltonian, we define the non-unitary layer as follows: \[\mathcal{E}(\sigma)=s(\sigma)e^{\mathcal{L}^{(1)}\Delta t}+\left[1-s(\sigma) \right]e^{\mathcal{L}^{(2)}\Delta t},\] where \(\sigma\) is a real free parameter, \(s(\sigma)\) is the sigmoid function that ensures the sum is a convex combination, and \(\mathcal{L}^{(1)}\) and \(\mathcal{L}^{(2)}\) are Liouvillians made up of single-qubit dissipators such that their corresponding stationary states are \(\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\) and \(\left|\mathbf{1}\right\rangle\left\langle\mathbf{1}\right|\), respectively. These two Liouvillians can be easily derived considering the superoperators family identified in Appendix C in such a way that they will respect the needed constraints presented in Sec. III. 
This particular choice ensures a non-negligible overlap with the ground state of the target Hamiltonian. Moreover, by training the parameter \(\sigma\), we can transform the outputs of the quantum circuit into states close to the desired ground state. This convex combination can be implemented by considering a stochastic Liouvillian, sampling \(\mathcal{L}^{(1)}\) with probability \(s(\sigma)\) and \(\mathcal{L}^{(2)}\) with complementary probability. In our numerical experiment, we first consider computing the derivative variance scaling with respect to the unitary ansatz parameters, with and without the addition of the non-unitary layer. In the case of the non-unitary ansatz, we have spanned \(\Delta t\) from \(0.1\) to \(3\) with a resolution of \(0.1\) to find its optimal values. In Fig. 2(a) we can clearly observe that, for a fixed number of layers of the quantum circuit, dissipation enables the prevention of an exponential scaling of the gradient variance, which indicates the absence of the barren plateau, as we predicted. We also remark that the optimal values of \(\Delta t\) found are always below the characteristic time threshold indicated in Sec. IV. To give an idea of the role of the dissipation strength, we plot the trend of the variance as a function of \(\Delta t\) in Fig. 2(b). We find a non-monotonic trend that is the result of the competition between the constructive role of dissipation, leading to an increase of locality, but also the drawbacks of a contractive dynamics. The latter at long time would lead to a full Hamiltonian erasure (large \(\Delta t\) in Fig. 2 2 (b)), while in the initial transient can rescale the Hamiltonian with a consequent reduction of the variance (more visible for 5-qubits). For the non-unitary layer to provide an advantage, the number of qubits has to be large enough to obtain a maximum that overcomes the initial value, which corresponds to the fully unitary case. In Fig. 2(a), we also show the behavior of the derivative of the variance with respect to the free parameter of the unitary layer as the number of qubits is changed. We can clearly exclude the presence of an exponential scaling, in agreement with the theoretical results of Sec. IV. Finally, it is also important to test the accuracy of the ground-state energy estimation for both the unitary and the non-unitary ansatz. To this end, we applied 1000 iterations of the gradient descent algorithm, with a learning rate of \(0.1\), to \(10\) random initial conditions in the ideal case where the gradient can be exactly estimated. The results, presented in Fig. 2(c), show that the non-unitary circuit allows a faster convergence, with a relative average error of \(2.2\%\), while the fully unitary circuit has a slower convergence but a higher accuracy of \(1.5\%\). This is expected since the fully unitary circuit preserves the original Hamiltonian and hence its ground state. To take advantage of both properties, we propose a hybrid approach where the non-unitary ansatz is applied to the initial state for the first 500 iterations, and then only the fully unitary circuit is used for the last 500 iterations by setting \(\Delta t\) to zero. This method serves as a novel initialization strategy, and from Fig. 2 (d), we observe a significant improvement in convergence time compared to the fully unitary case. Additionally, the average accuracy is the best found with a value of \(1.2\%\). 
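As a lightweight numerical cross-check of the closed-form variances quoted in Sec. III.1, the following Monte-Carlo sketch (plain NumPy; illustrative, not the authors' code) estimates \(\mathrm{Var}[\partial C_{u}/\partial\theta_{j}]\) and \(\mathrm{Var}[\partial C_{ed}/\partial\theta_{j}]\) for the product ansatz and compares them with the analytic expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, samples = 8, 1.0, 200_000
thetas = rng.uniform(0, 2 * np.pi, size=(samples, n))

# Analytic partial derivatives with respect to theta_0 of
# C_u  = 1 - prod_j cos^2(theta_j / 2)              and
# C_ed = 1 - prod_j [1 - sin^2(theta_j / 2) e^{-dt}].
grad_u = 0.5 * np.sin(thetas[:, 0]) * np.prod(np.cos(thetas[:, 1:] / 2) ** 2, axis=1)
grad_ed = (0.5 * np.sin(thetas[:, 0]) * np.exp(-dt)
           * np.prod(1 - np.sin(thetas[:, 1:] / 2) ** 2 * np.exp(-dt), axis=1))

# Sample variances versus the closed-form expressions of Sec. III.1.
print(grad_u.var(), (1 / 8) * (3 / 8) ** (n - 1))
print(grad_ed.var(), (1 / 8) * np.exp(-2 * dt)
      * (1 - np.exp(-dt) + (3 / 8) * np.exp(-2 * dt)) ** (n - 1))
```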
## VI Quantum Chemistry Example Moving to a more practical application, we show the results of the beneficial effects of our non-unitary model in the context of quantum chemistry. In particular, we have studied the problem of finding the electronic ground state of the \(H_{2}\) molecule. In our numerical experiments, we studied the molecule in its equilibrium configuration where the two hydrogen atoms are separated by \(0.74\) A. The qubit Hamiltonian \(H\) was obtained from a Jordan-Wigner transformation [74], while the electronic orbitals were generated from the STO-3G basis set [75]. Under these conditions, \(H\) spans a four-qubit Hilbert space. Following the previous example, the quantum circuit that defines the unitary part of the ansatz is the one presented in Appendix B with a number of layers fixed to \(20\). As for the non-unitary layer, the nature of the problem suggests choosing a Liouvillian whose corresponding stationary state is the Hartree-Fock state [76]. Interestingly, the class of Liouvillians presented in Appendix C can always accomplish this task by simply dissipating along the \(z\) direction, because the Hartree-Fock state is generally a computational basis state. As in the previous example of Sec. V, we have applied the gradient descent algorithm to ten random initial conditions both in the case of a fully unitary ansatz, which uniquely consists of a quantum circuit, and in the case of a non-unitary one. We found that a value of \(\Delta t=0.5\) is sufficient to significantly improve the convergence time and the ground state quality as shown in Fig. 3. In particular, we have found that in the non-unitary case, the algorithm is able to converge by fixing a learning rate of \(1\), while in the fully unitary case, we need to set the learning rate to \(0.1\) to achieve faster convergence time and a lower value of the final ground state. In all the considered examples, we have fixed the number of iterations to \(300\). As in the random Hamiltonian example, we observe that the introduction of the non-unitary layer makes it possible to significantly reduce the algorithm time of almost two orders of magnitude, at the cost of convergence to a less accurate ground state. As done in the previous section, we propose a hybrid approach in which, we first apply the non-unitary ansatz, and after half of the total iteration steps, we remove the dissipation and apply only the unitary ansatz to complete the experiment. As shown in Fig. 3, this strategy is successful both in reducing the convergence time and in improving the quality of the final ground state. We have set an error threshold with respect to the ground state energy calculated using a numerical diagonalization equal to \(0.00159\) Hartree, in line with the standard threshold for defining chemical accuracy. Notably, only the hybrid approach can estimate the ground state energy within this energy interval. ## VII Discussion Extensive research in the field of variational quantum algorithms is devoted to exploring strategies and techniques aimed at circumventing or minimizing the occurrence of barren plateaus. This is essential for improving scalability, algorithmic efficiency, and applicability in a wide range of contexts. Here we have proposed and demonstrated the effectiveness of a strategy based on engineered dissipation, that can be understood as a localization of a global cost function argument. This dissipation-based analysis goes beyond the general scaling laws recently presented in Refs. 
[41, 42], which are limited to the use of a unitary ansatz. Indeed, the presented mitigation strategy has a time complexity that scales logarithmically as the number of qubits is increased, which makes it possible to estimate the ground state energy of systems that would be intractable with previous methods. To assess the effectiveness of our approach, we have first examined the disappearance of the barren plateau resulting from the presence of engineered dissipation in a simple illustrative model that admits an analytical solution. This toy model also serves as a useful point of comparison for the scenario with general noise, which worsens, rather than attenuates, the barren plateau problem. Interestingly, handling Hamiltonians such as \(H=\mathbb{I}-\left|\psi\right\rangle\left\langle\psi\right|\), where \(|\psi\rangle\) represents a generic pure state, can be challenging with alternative proposals, such as perturbative gadgets [77]. While both these approaches map a global Hamiltonian into a local one, our engineered dissipation strategy does not require increasing the Hilbert space dimension, and its time complexity is entirely unrelated to the decomposition of the Hamiltonian using Pauli matrices. The formal theoretical framework of this proposal rests on the analytical proofs in Sec. IV, where we first demonstrate that the presence of a tailored non-unitary layer makes it possible to map the cost function into an effective one that corresponds to a local Hamiltonian. Moreover, we also prove that this type of non-unitary layer can be efficiently trained. This theoretical proposal opens a new avenue for implementations in noisy quantum processors. Figure 2: Results of the random Hamiltonian example. (a) Scaling of the partial derivative variances with respect to \(\theta_{1}^{1}\) and \(\sigma\). Each point was determined from statistics over \(1000\) different random samples, including different rotation directions, Hamiltonian realizations, and free parameter realizations. Moreover, the \(x\) parameter was selected at random in the range \([-5,5]\), while all the angles were also selected at random in the compact set \([0,2\pi]\). For the non-unitary ansatz, we varied \(\Delta t\) from \(0.1\) to \(3\) in steps of \(0.1\), selecting the largest variance for each value of \(n\). The variances with respect to \(\theta_{1}^{1}\) are depicted by dashed lines for the fully unitary ansatz, whereas, for the non-unitary ansatz case, they are represented by continuous lines. The partial derivative variance with respect to \(x\) is depicted using the black dotted lines in the upper part of the plot. We only plot the cases where the non-unitary ansatz is beneficial for training (for very short chains no improvement is observed compared to the unitary ansatz). (b) Variance averages with respect to \(\theta_{1}^{1}\) as a function of \(\Delta t\) in the case of a \(5\)-layer quantum circuit, for \(5\) and \(8\) qubits. (c)-(d) Evolution of the cost function under the gradient descent algorithm with a learning rate of \(0.1\) and for \(10\) random initial conditions. The light lines represent single realizations, while the corresponding averages are reported in a darker color. (c) The performances of the two different cases (unitary vs non-unitary ansatz) are compared. (d) Cost function calculated through a hybrid approach: the first \(500\) iterations involve non-unitary ansatzes, while in the remaining \(500\) we impose the condition \(\Delta t=0\). 
Actually, the form of engineered dissipation here presented can be efficiently implemented on experimental platforms, for instance through collision models [78], as discussed in Appendix D. We then tested the use of nonunitary ansatzes in two relevant numerical examples. The first one addressed a random synthetic Hamiltonian, while the second tackled the realistic task of determining the lower energy of a Hydrogen molecule. Both problems display the effectiveness of our method in speeding up the convergence and also the precision reached in the estimation of the ground state energy. Indeed, the non-unitary approach can also be seen as an efficient initialization protocol for a unitary ansatz suggesting a powerful hybrid strategy where a unitary ansatz follows a non-unitary one. Our results are consistent with other quantum machine learning protocols, where the presence of losses can enhance performance. In a recent work, a dissipative optimization algorithm, able to efficiently find Hamiltonian local minima, has been reported [79]. Another relevant example can be found in the realm of quantum reservoir computing [80], where dissipation can be transformed into a constructive resource [81, 82, 83]. To stay within the context of VQAs, it was demonstrated that incorporating stochastic noise can prevent the occurrence of saddle points, which are detrimental to efficient optimization [84]. Finally, we proposed a simple dissipation model that proved very effective in mitigating barren plateaus. This indicates the potential for devising more comprehensive strategies that fully explore the potential of non-unitary architectures in variational quantum circuits but also in the broader arena of quantum neural networks. ## Acknowledgements We acknowledge the Spanish State Research Agency, through the Maria de Maeztu project CEX2021-001164-M funded by the MCIN/AEI/10.13039/501100011033 and through the QUARESC project (PID2019-10904GB-C21/AEI/10.13039/501100011033), MINECO through the QUANTUM SPAIN project, and EU through the RTRP - NextGenerationEU within the framework of the Digital Spain 2025 Agenda. We also acknowledge funding by CAIB through the QUAREC project (PRD2018/47). The CSIC Interdisciplinary Thematic Platform (PTI) on Quantum Technologies in Spain is also acknowledged. GLG is funded by the Spanish Ministerio de Educacion y Formacion Professional/Ministerio de Universidades and co-funded by the University of the Balearic Islands through the Beatriz Galindo program (BG20/00085). The project that gave rise to these results received the support of a fellowship from the "la Caixa" Foundation (ID 100010434). The fellowship code is LCF/BQ/DI23/11990081. IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. The current list of IBM trademarks is available at [https://www.ibm.com/legal/copytrade](https://www.ibm.com/legal/copytrade). ## Appendix A Random Hamiltonian structure In Sec. V, we have proposed the optimization of a random Hamiltonian \(H\) from which we have only partial information about a possible ground state. We will now show the criteria used to randomly generate \(H\) in such a way that it is constrained to this incomplete ground state knowledge. We point out that the non-unitary optimization algorithm shown in Sec. 
V only has access to this Hamiltonian property and, as a consequence, the key idea can be applied in a general context. Going now into the details of \(H\), for the sake of definiteness, we will impose the condition that all its eigenvalues \(\lambda_{i}\) satisfy \(|\lambda_{i}|<1\), except for the maximum eigenvalue, which is fixed at 1.1, and the minimum eigenvalue, which is set at -1.1. Figure 3: Results of the gradient descent algorithm for the example of \(H_{2}\). (a) First 150 iterations of the algorithm. The blue line refers to the non-unitary ansatz optimization with \(\Delta t=0.5\) and a learning rate equal to 1, the red dotted one to the unitary ansatz optimized with a learning rate equal to 1, while, for the red continuous one, the unitary ansatz takes a learning rate equal to 0.1. The vertical black line indicates the ground-state energy computed by a numerical diagonalization of the Hamiltonian. (b) Last 150 iterations. The green line shows the hybrid ansatz optimization and the grey region refers to energy values falling within the considered error threshold. The hybrid ansatz learning rate is set to 0.1. Additionally, we have constrained the eigenvectors associated with these last two values to be one of two states (up to a normalization factor): \(\ket{\psi_{1}}=\ket{\mathbf{0}}+0.1\ket{\phi_{1}}\) and \(\ket{\psi_{2}}=\ket{\mathbf{1}}+0.1\ket{\phi_{2}}\), selected randomly and exclusively, where \(\ket{\phi_{1}}\) and \(\ket{\phi_{2}}\) are Haar-distributed states. We stress that for \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\) to be, in general, viable eigenvectors, they must undergo a Gram-Schmidt orthonormalization. Finally, we underline that the algorithm of Sec. V is built from two possible ground-state neighborhood guesses, which refer to two opposite cases: the maximum and the minimum eigenvalue. The numerical results obtained indicate that the algorithm was able to successfully discriminate between them. ## Appendix B Quantum circuit We now show in detail the form of the quantum circuit considered in Sections V and VI. Following Refs. [29; 34], it is a hardware-efficient, layered quantum circuit structured as follows: \[U(\mathbf{\theta},\mathbf{d})=\prod_{l=1}^{D}WU_{l}(\mathbf{\theta}_{l},\mathbf{d}_{l}).\] This comprises \(D\) layers of rotations and entangling gates, with the entangler \(W\) consisting of a series of controlled-\(Z\) gates that correlate adjacent qubits: \[W=\prod_{i=1}^{n-1}CZ_{i,i+1}.\] A \(U_{l}\) gate, on the other hand, is formed by single-qubit rotations in random directions: \[U_{l}(\mathbf{\theta}_{l},\mathbf{d}_{l})=\prod_{i=1}^{n}R_{d_{l}^{i}}(\theta_{l}^{i}),\] where \(R_{d_{l}^{i}}(\theta_{l}^{i})\) is a rotation gate that applies an angle of \(\theta_{l}^{i}\) around the direction \(d_{l}^{i}\) to the \(i\)-th qubit, and \(d_{l}^{i}\) is randomly chosen from the \(x\), \(y\), or \(z\) axis, while \(\theta_{l}^{i}\) takes values in the set \([0,2\pi]\). To avoid any preferential direction, the qubits are initially prepared in the state \(\rho_{in}=\ket{\psi_{0}}\bra{\psi_{0}}^{\otimes n}\), where \(\ket{\psi_{0}}=R_{y}(\frac{\pi}{4})\ket{0}\). The whole circuit structure is summarized in Fig. 4. ## Appendix C Liouvillians based on single-qubit dissipators We now present a simple class of Liouvillians that satisfy the constraints required for our purposes (see Sec. III), from which we have built the non-unitary layer considered in Sections V and VI. 
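As a concrete illustration of the circuit just described in Appendix B (before moving on to the dissipative layer), the following is a minimal NumPy sketch, not the authors' code, that assembles the layered unitary \(U(\mathbf{\theta},\mathbf{d})\) from random-axis single-qubit rotations and a chain of controlled-\(Z\) entanglers, together with the initial state \(\rho_{in}\). The function names and the dense-matrix representation are illustrative choices, adequate only for a handful of qubits.

```python
import numpy as np

# Pauli matrices used as rotation generators.
PAULI = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def rotation(theta, axis):
    """Single-qubit rotation R_d(theta) = exp(-i theta sigma_d / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * PAULI[axis]

def kron_all(factors):
    """Tensor product of a list of matrices (qubit 1 is the leftmost factor)."""
    out = np.eye(1, dtype=complex)
    for f in factors:
        out = np.kron(out, f)
    return out

def cz_chain(n):
    """Entangler W = prod_{i=1}^{n-1} CZ_{i,i+1}, built as a dense diagonal matrix."""
    diag = np.ones(2**n, dtype=complex)
    for b in range(2**n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        for i in range(n - 1):
            if bits[i] == 1 and bits[i + 1] == 1:
                diag[b] *= -1
    return np.diag(diag)

def ansatz(thetas, axes):
    """Layered circuit U = prod_l W U_l, with U_l a layer of random-axis rotations.
    thetas: (D, n) angles in [0, 2*pi); axes: (D, n) entries in {x, y, z}."""
    D, n = thetas.shape
    W = cz_chain(n)
    U = np.eye(2**n, dtype=complex)
    for l in range(D):
        U_l = kron_all([rotation(thetas[l, q], axes[l][q]) for q in range(n)])
        U = W @ U_l @ U  # apply layer l, then the entangler
    return U

rng = np.random.default_rng(1)
n, D = 4, 5
thetas = rng.uniform(0, 2 * np.pi, size=(D, n))
axes = rng.choice(["x", "y", "z"], size=(D, n))
U = ansatz(thetas, axes)

# Initial state: rho_in = |psi_0><psi_0|^{x n}, with |psi_0> = R_y(pi/4)|0>.
psi0 = rotation(np.pi / 4, "y") @ np.array([1, 0], dtype=complex)
state = kron_all([psi0.reshape(2, 1) for _ in range(n)]).ravel()
print(np.allclose(U.conj().T @ U, np.eye(2**n)))  # unitarity check -> True
```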
We consider a generator \(\mathcal{L}\) that takes the form of Eq. (3), where each operator \(\mathcal{L}_{q}\) implements single-qubit dissipation along a desired direction. More precisely, the action of each \(\mathcal{L}_{q}\) is described by the following relation: \[\mathcal{L}_{q}\rho=d_{q}\rho d_{q}^{\dagger}-\frac{1}{2}\{d_{q}^{\dagger}d_{ q},\rho\},\] where \(d_{q}\) is a jump operator acting on the \(q\)-th qubit Hilbert space: \[d_{q}=\ket{\psi_{-}(\alpha_{q},\phi_{q})}_{q}\bra{\psi_{+}(\alpha_{q},\phi_{q })},\] with \[\ket{\psi_{+}(\alpha_{q},\phi_{q})} =\cos\!\left(\frac{\alpha_{q}}{2}\right)\ket{0}+e^{i\phi_{q}}\sin \!\left(\frac{\alpha_{q}}{2}\right)\ket{1},\] \[\ket{\psi_{-}(\alpha_{q},\phi_{q})} =\sin\!\left(\frac{\alpha_{q}}{2}\right)\ket{0}-e^{i\phi_{q}}\cos \!\left(\frac{\alpha_{q}}{2}\right)\ket{1}.\] Here the parameters \(\alpha_{q}\) and \(\phi_{q}\) identify the direction of the dissipation. This choice satisfies the requirements of Sec. III because the spectral gap of each \(\mathcal{L}_{q}\) is identically equal to \(1/2\). Moreover, each \(\mathcal{L}_{q}\) has \(\ket{\psi_{-}(\alpha_{q},\phi_{q})}_{q}\bra{\psi_{-}(\alpha_{q},\phi_{q})}\) as its unique stationary state, and this information can be used to study the overlap with the target Hamiltonian to ensure the effectiveness of the approach. ## Appendix D Non-unitary layer implementation Non-unitary operations can be effectively realised on digital quantum computing architectures in several ways. This task is in fact closely related to the general problem of implementing quantum simulation algorithms for open systems [85; 86; 87; 88; 89]. The most direct approaches employ, for instance, dilation theorems allowing one to recast dissipative dynamics into a unitary evolution on an extended system [90; 91; 92]. Alternatively, or in combination with the latter, one could in principle make use of engineered dissipations leveraging intrinsic qubit noise [93; 94; 95; 59], randomized schemes [96; 97] or controlled classical environments [98]. To give a concrete example, we now present an efficient implementation of the non-unitary layer by means of the so-called collision models (CMs) [99]. The general idea of CMs is to approximate the Markovian dynamics given by Eq. (2) by letting the system qubits sequentially interact in a unitary way with a set of ancillary qubits. In particular, the following approximation is considered: \[e^{\mathcal{L}\Delta t}\approx(\phi_{\Delta t/M})^{M},\] which consists of \(M\) alternating steps in which a quantum channel \(\phi_{\Delta t/M}\) approximates the evolution for a finite time \(\Delta t/M\). By definition, \[\phi_{\Delta t/M}[\rho]=\mathrm{Tr}_{a}\{V(\Delta t/M)\rho\otimes\rho_{a}V^{ \dagger}(\Delta t/M)\} \tag{21}\] where the partial trace \(\mathrm{Tr}_{a}\) is taken over the ancillary qubits' Hilbert space, \(\rho_{a}\) is their state, which has to be properly prepared at each step, and \(V(\Delta t/M)\) is a unitary operator which, depending on the particular dynamics of interest, acts non-trivially on the system and ancillary qubits' space. Importantly, for local Liouvillians as required by the criteria of Sec. III, the number of resources needed by the collision-model strategy always scales efficiently [92]. More quantitatively, let us call \(\epsilon\) the difference between the exact Liouvillian dynamics and the \(M\)-step collision model: \[\|e^{\mathcal{L}\Delta t}-(\phi_{\Delta t/M})^{M}\|_{1\to 1}=\epsilon,\] where \(\left\|\cdot\right\|_{1\to 1}\) indicates the trace norm. 
Then, it can be shown that the number of gates necessary to have an error \(\epsilon\) is always a polynomial function of the number of qubits \(n\). According to Ref. [78], the class of models defined in Appendix Sec. C can be efficiently implemented by coupling each qubit with an ancillary one initialized in the state \(\left|0\right\rangle\). Going more into details, in the definition of \(\phi_{\Delta t/M}\), according to Eq. 12, we will have \(\rho_{a}=\left|0\right\rangle\left\langle 0\right|^{\otimes n}\) and \(V(\Delta t/M)=\prod_{q=1}^{n}V_{q}(\Delta t/M)\) with \[V_{q}(\Delta t/M)=\exp\Bigl{(}-i\sqrt{\Delta t/M}(d_{q}\sigma_{q^{(a)}}^{+}+h.c.)\Bigr{)}\] where the index \(q^{(a)}\) refers to the q-th ancillary qubit. In view of a possible experimental implementation, we emphasize as a positive note that each unitary \(V_{q}(\Delta t/M)\) can be easily realized through a single-qubit gate and a CNOT gate and that the qubit reset operation, currently available on IBM Quantum devices [100], allows the number of ancillary qubits to be fixed to \(n\) for the entire execution of the collision-model algorithm.
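As a quick numerical sanity check of the construction above, the sketch below (an illustration under the stated formulas, not the authors' implementation) compares, for a single qubit, the \(M\)-step collision channel built from \(V_{q}(\Delta t/M)\) with the exact evolution \(e^{\mathcal{L}\Delta t}\) generated by the dissipator of Appendix C; it also prints the Liouvillian eigenvalues, whose nonzero real parts reflect the spectral gap of \(1/2\). The vectorization convention and helper names are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def jump_operator(alpha, phi):
    """d = |psi_-><psi_+| for the single-qubit dissipator of Appendix C."""
    psi_p = np.array([np.cos(alpha / 2), np.exp(1j * phi) * np.sin(alpha / 2)])
    psi_m = np.array([np.sin(alpha / 2), -np.exp(1j * phi) * np.cos(alpha / 2)])
    return np.outer(psi_m, psi_p.conj())

def liouvillian_superop(d):
    """Matrix of L[rho] = d rho d^dag - {d^dag d, rho}/2 acting on vec(rho),
    with the column-stacking convention vec(A X B) = (B^T kron A) vec(X)."""
    I = np.eye(2)
    dd = d.conj().T @ d
    return np.kron(d.conj(), d) - 0.5 * (np.kron(I, dd) + np.kron(dd.T, I))

def collision_channel(rho, d, tau, steps):
    """(phi_{tau/steps})^steps, with a fresh ancilla in |0><0| at every step."""
    sp = np.array([[0, 0], [1, 0]], dtype=complex)          # sigma^+ on the ancilla
    H = np.kron(d, sp) + np.kron(d, sp).conj().T            # d sigma^+ + h.c.
    V = expm(-1j * np.sqrt(tau / steps) * H)
    anc = np.array([[1, 0], [0, 0]], dtype=complex)
    for _ in range(steps):
        joint = V @ np.kron(rho, anc) @ V.conj().T
        rho = np.trace(joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out ancilla
    return rho

d = jump_operator(alpha=0.7, phi=0.3)
L = liouvillian_superop(d)
print(np.round(np.linalg.eigvals(L), 6))   # expected spectrum: {0, -0.5, -0.5, -1}

rho0 = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]], dtype=complex)
dt = 1.0
exact = expm(L * dt) @ rho0.reshape(-1, order="F")
exact = exact.reshape(2, 2, order="F")
for M in (1, 5, 50):
    approx = collision_channel(rho0, d, dt, M)
    print(M, np.linalg.norm(approx - exact))   # error shrinks as M grows
```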
2307.00793
The Building Data Genome Directory -- An open, comprehensive data sharing platform for building performance research
The building sector plays a crucial role in the worldwide decarbonization effort, accounting for significant portions of energy consumption and environmental effects. However, the scarcity of open data sources is a continuous challenge for built environment researchers and practitioners. Although several efforts have been made to consolidate existing open datasets, no database currently offers a comprehensive collection of building data types with all subcategories and time granularities (e.g., year, month, and sub-hour). This paper presents the Building Data Genome Directory, an open data-sharing platform serving as a one-stop shop for the data necessary for vital categories of building energy research. The data directory is an online portal (http://buildingdatadirectory.org/) that allows filtering and discovering valuable datasets. The directory covers meter, building-level, and aggregated community-level data at the spatial scale and year-to-minute level at the temporal scale. The datasets were consolidated from a comprehensive exploration of sources, including governments, research institutes, and online energy dashboards. The results of this effort include the aggregation of 60 datasets pertaining to building energy ontologies, building energy models, building energy and water data, electric vehicle data, weather data, building information data, text-mining-based research data, image data of buildings, fault detection diagnosis data and occupant data. A crowdsourcing mechanism in the platform allows users to submit datasets they suggest for inclusion by filling out an online form. This directory can fuel research and applications on building energy efficiency, which is an essential step toward addressing the world's energy and environmental challenges.
Xiaoyu Jin, Chun Fu, Hussain Kazmi, Atilla Balint, Ada Canaydin, Matias Quintana, Filip Biljecki, Fu Xiao, Clayton Miller
2023-07-03T07:21:53Z
http://arxiv.org/abs/2307.00793v1
The Building Data Genome Directory - An open, comprehensive data sharing platform for building performance research ###### Abstract The building sector plays a crucial role in the worldwide decarbonization effort, accounting for significant portions of energy consumption and environmental effects. However, the scarcity of open data sources is a continuous challenge for built environment researchers and practitioners. Although several efforts have been made to consolidate existing open datasets, no database currently offers a comprehensive collection of building data types with all subcategories and time granularities (e.g., year, month, and sub-hour). This paper presents the Building Data Genome Directory, an open data-sharing platform serving as a one-stop shop for the data necessary for vital categories of building energy research. The data directory is an online portal (buildingdatadirectory.org/) that allows filtering and discovering valuable datasets. The directory covers meter, building-level, and aggregated community-level data at the spatial scale and year-to-minute level at the temporal scale. The datasets were consolidated from a comprehensive exploration of sources, including governments, research institutes, and online energy dashboards. The results of this effort include the aggregation of 60 datasets pertaining to building energy ontologies, building energy models, building energy and water data, electric vehicle data, weather data, building information data, text-mining-based research data, image data of buildings, fault detection diagnosis data and occupant data. A crowdsourcing mechanism in the platform allows users to submit datasets they suggest for inclusion by filling out an online form. This directory can fuel research and applications on building energy efficiency, which is an essential step toward addressing the world's energy and environmental challenges. ## 1 Introduction The rise of artificial intelligence as a tool for built environment applications has the potential to impact several industries significantly. However, data availability in the built environment domain remains a critical bottleneck due to privacy concerns and acquisition costs [1]. Open data sources are essential for understanding energy consumption patterns, identifying areas for improvement, and testing energy-saving strategies, especially in the absence of in situ measurements. Yet, access to open data sources in the built environment domain lags behind other communities [2], posing limitations for researchers and practitioners in developing effective energy-saving solutions [3]. In addition to limited accessibility, available open datasets are often dispersed and require labor-intensive and time-consuming collation due to varying formats and sources [4]. Efforts have been made to aggregate open datasets and share them through platforms or directories such as the Building Performance Database (BPD) [5], the Building Data Genome (BDG) projects [6, 7], and the Directory of Buildings Energy Consumption Datasets (DBED) [8]. However, these projects have limitations in the diversity of data types, lack of user contributions, and missing data. This paper outlines the development of a comprehensive data-sharing platform for building performance research. This effort is achieved by creating a data directory that is publicly available and includes functions for filtering, visualization, and uploading new data sets. 
The Building Data Genome Directory is a lightweight web app that links to a wide range of open datasets, offering users easy access to comprehensive coverage of relevant information. In subsequent sections, the paper will introduce the data sources, data category definitions, reasons for inclusion, critical functions of the web app, and some application cases. ## 2 Data sources The directory focuses on collecting information about open building performance datasets that are widely dispersed and fragmented, which conventionally would require a rigorous data collection process. Metadata for the directory was gathered from various open data sources, including government disclosure programs, research projects, institutes, and publicly available dashboards. Details on each of these data source categories are discussed in the following subsections. The directory data sources are divided according to category and type of data based on the format (e.g., tabular, image) and process of the system that created the data (e.g., HVAC, occupants, sensors). Figure 1 shows an overview of the data set categories, which will be outlined in the following subsections. Figure 1: Schematic of the categories of datasets included in Building Data Genome Directory ### Government disclosure data Data from government disclosure programs is a significant source for built environment data. One example is the Local Law 84 (LL84) of New York City (NYC) in the United States, which requires building owners to disclose their energy and water consumption data through benchmarking annually [9]. This directive has led to the publication of the Energy and Water Data Disclosure dataset for Local Law 84 by the NYC government. These city-level datasets can contain many samples, with some featuring tens of thousands of buildings, although they may have coarse-grained time intervals of a year or a month. To collect these datasets, a comprehensive review of relevant literature and examination of laws pertaining to data disclosure was conducted [1]. Open data portals provided by city governments [10], such as the NYC open data portal ([https://opendata.cityofnewyork.us/](https://opendata.cityofnewyork.us/)), were also browsed to gather available datasets, ensuring the comprehensiveness of the data directory. ### Open research data Research institutes and organizations have published various datasets for building performance research. Some datasets are available on websites, such as the Building Data Genome dataset on Kaggle [6] or the 3D city model of Singapore public housing buildings on GitHub [11]. Other datasets are published through journals, with _Scientific Data_ being a significant venue. A recent review has also listed open-source datasets for building energy demand [2]. These datasets typically provide detailed information about individual buildings but may not have large numbers of samples (generally less than 5,000). A common differentiator of these types of data sets is that the time-series frequency may be higher, sometimes even at the minute level, offering a more granular view of a building's energy usage. Some datasets also provide detailed information about building characteristics, solar installations [12], morphological indicators [13], or sensor locations and building structure [14]. Accessing and leveraging these datasets allows researchers to gain comprehensive insights into individual buildings and their energy usage. 
To collect these datasets, relevant reviews and research papers were examined, including platforms that provide access to datasets referenced in articles. ### Data collected from open, online dashboards In response to the growing emphasis on net-zero and sustainability goals in the higher education sector, many educational institutes and universities, such as the University of California, Berkeley, Cornell University, and Princeton University, have public energy management dashboards that provide access to energy usage data for further study and analysis. For these datasets, a data acquisition pipeline can be built using scripts to automate the process of extraction from these dashboards, enabling batch downloads of performance data from thousands of buildings. The directory includes several datasets that were retrieved from these types of public web-based energy management dashboards. For many of these dashboards, the API of the data source can usually be found using built-in web browser developer tools. Once the data API is identified, an automated process can be configured with the required data parameters, such as building ID and specific time period, to enable batch downloading of performance data from a web-based dashboard (an illustrative sketch of such a pipeline is given at the end of this article). ## 3 Overview of the directory interface The Building Data Genome Directory can be found online at: buildingdatadirectory.org/. The interface comprises a main page, referred to as the _Meta Directory_, which provides an overview of all available datasets, and several sub-pages presenting datasets by type. The _Meta Directory_ page introduces the Building Data Genome Directory and outlines the scope of the collected datasets. As a web app, it has filtering, visualization, and uploading functions for the datasets. Datasets pertaining to buildings, such as _Building Energy and Water_ and _Building Information_, provide geospatial granularity levels that correspond to individual buildings or, at the very least, communities, instead of the aggregated data of an entire city. The _Meta Directory_ includes a schematic diagram showcasing the various datasets available in the Building Data Genome Directory, as shown in Figure 1. Each black label in the diagram represents a specific data type and has a corresponding subpage, with its link conveniently located on the left column of the web page. The scope description for these types and the representative datasets are presented in Table 1. The _Add New Dataset_ uploading function is located at the bottom of the left-hand column. Users must fill in the _Dataset Name_, _URL_, and _Dataset Type_ items to submit a possible contribution to the directory. The datasets submitted by the users will be stored and displayed at the bottom of the _Meta Directory_ page, and they will be added to the directory after undergoing a review process. The category with the highest number of data sets is _Building Energy and Water_, which includes over 30 datasets at the moment. A metadata table that provides essential information about the datasets is displayed on this page, including disclosure status (e.g., data opening level, license availability, organization) and information on the building samples. Figure 2 shows the filtering and visualization functions. The filtering functions enable users to select datasets by location, time interval, and building type. 
The visualization functions include bar plots with adjustable axes to visualize numerical information, bubble plots to display sample and variable numbers with the size of circles denoting sample sizes and variable quantities, and heatmaps to visualize variable categories.

\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline
**Category Name** & **Data Scope Description** & **Representative Dataset** \\ \hline
Building Energy Ontologies & Haystack, Brick Schema, and datasets supporting the semantic model development & \\
Building Energy Models & Building energy model information and simulation data of prototype buildings & A Synthetic Building Operation Dataset [15] \\
Building Energy and Water Data & Time-series energy data and water consumption data of buildings & Energy and Water Data Disclosure for Local Law 84 [9] \\
Electric Vehicle Data & Charging infrastructure datasets and time-series energy consumption data from charging sessions & Campus Electric Vehicle Charging Stations Behavior [17] \\
Weather Data & Historical data from environmental sensors and weather prediction & \\
Building Information Data & Stock characteristics, GIS data, and project management data for a large number of buildings & European Building Stock Characteristics Dataset [18] \\
Image Data of Buildings & Image data of a large number of buildings & Annotated Image Database of Architecture [19] \\
Text Mining Based Research Data & Text-mining data from previous research and communities & A Comprehensive Text-mining Driven Review of Scientific Literature [20] \\
Fault Detection and Diagnosis Data & Ground-truth and simulated datasets for anomalies in the built environment and building systems & Large-scale Energy Anomaly Detection (LEAD) Dataset [21] \\
Occupant Data & The thermal comfort data of occupants collected from experiments & Cozie smartwatch application [22] \\ \hline
\end{tabular}
\end{table} Table 1: Categories of the data in the directory with short descriptions and an example representative dataset of each type.

## 4 Conclusion and future works The Building Data Genome Directory is a potentially valuable resource for building energy research, providing comprehensive datasets and web app functions for filtering, visualization, and uploading. This directory can be a starting point for researchers and analysts who want to start the exploration process for applicable open data sets for their studies. Numerous research endeavors are anticipated to emerge as branches stemming from this directory. As highlighted by Jin et al. [1], the availability of comprehensive datasets will significantly expedite research in building energy, encompassing areas such as building energy management, grid management, and socio-economic analysis. The team is developing a sub-branch within the Building Data Genome Directory focusing on time-series feature analysis utilizing energy consumption data. ### Future expansion and data quality considerations Future work can optimize the directory by improving functions such as allowing brief dataset descriptions during uploading and incorporating semantic searching capabilities. Enhancing search capabilities for different data types, such as geographic location, would also improve usability, as would considering unconventional data sources such as scraping relevant data on buildings from property websites [23] and volunteered geographic information such as OpenStreetMap [24] in locations that have data of reliable quality. 
Finally, to strengthen the crowdsourcing aspect of our platform, we plan to implement a functionality to allow users to flag erroneous information and allow trusted users to edit the database. Building a community around the directory would foster user communication and optimize the web app. Collecting feedback and insights through discussions and forums would provide valuable inputs for enhancing features and usability. By actively engaging with users, the directory can continue to evolve and serve as a valuable resource for building energy researchers. Figure 2: Building Data Genome Directory interface showcasing the filtering and visualization functions. ## Acknowledgments The authors gratefully acknowledge the support for this research from the National Key Research and Development Program of China (2021YFE0107400), the Research Grants Council of the Hong Kong SAR (C5018-20GF), and the Singapore Ministry of Education (MOE) Tier 1 Grants: A-0008301-01-00 and A-8000139-01-00.
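As a complement to the data-acquisition workflow described in Section 2.3, the following is a minimal, purely illustrative Python sketch of a batch-download pipeline. The endpoint URL, query parameters, and response format are hypothetical placeholders: real dashboards expose different APIs, and their terms of use and rate limits should always be respected.

```python
import pandas as pd
import requests

# Hypothetical endpoint and parameters -- in practice these are discovered with
# the browser's developer tools, as described in Section 2.3. Not a real API.
BASE_URL = "https://energy-dashboard.example.edu/api/meter-data"

def fetch_building(building_id: str, start: str, end: str) -> pd.DataFrame:
    """Download one building's time-series readings for the interval [start, end]."""
    params = {"building": building_id, "start": start, "end": end, "format": "json"}
    resp = requests.get(BASE_URL, params=params, timeout=30)
    resp.raise_for_status()
    # Assumes the dashboard returns a list of {"timestamp": ..., "kwh": ...} records.
    return pd.DataFrame(resp.json())

def batch_download(building_ids, start="2022-01-01", end="2022-12-31") -> pd.DataFrame:
    """Loop over building IDs and concatenate the results into one table."""
    frames = []
    for bid in building_ids:
        df = fetch_building(bid, start, end)
        df["building_id"] = bid
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# Example usage with hypothetical campus building identifiers:
# data = batch_download(["BLDG-001", "BLDG-002", "BLDG-003"])
# data.to_csv("campus_meter_data.csv", index=False)
```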
2305.16788
Spectral convergence in large finite resonator arrays: the essential spectrum and band structure
We show that resonant frequencies of a system of coupled resonators in a truncated periodic lattice converge to the essential spectrum of corresponding infinite lattice. We use the capacitance matrix as a model for fully coupled resonators with long-range interactions in three spatial dimensions. For one-, two- or three-dimensional lattices embedded in three-dimensional space, we show that the (discrete) density of states for the finite system converge in distribution to the (continuous) density of states of the infinite system. We achieve this by proving a weak convergence of the finite capacitance matrix to corresponding (translationally invariant) Toeplitz matrix of the infinite structure. With this characterization at hand, we use the truncated Floquet transform to introduce a notion of spectral band structure for finite materials. This principle is also applicable to structures that are not translationally invariant and have interfaces. We demonstrate this by considering examples of perturbed systems with defect modes, such as an analogue of the well-known interface Su-Schrieffer-Heeger (SSH) model.
Habib Ammari, Bryn Davies, Erik Orvehed Hiltunen
2023-05-26T09:59:22Z
http://arxiv.org/abs/2305.16788v1
# Spectral convergence in large finite resonator arrays: the essential spectrum and band structure ###### Abstract We show that resonant frequencies of a system of coupled resonators in a truncated periodic lattice converge to the essential spectrum of corresponding infinite lattice. We use the capacitance matrix as a model for fully coupled resonators with long-range interactions in three spatial dimensions. For one-, two- or three-dimensional lattices embedded in three-dimensional space, we show that the (discrete) density of states for the finite system converge in distribution to the (continuous) density of states of the infinite system. We achieve this by proving a weak convergence of the finite capacitance matrix to corresponding (translationally invariant) Toeplitz matrix of the infinite structure. With this characterization at hand, we use the truncated Floquet transform to introduce a notion of spectral band structure for finite materials. This principle is also applicable to structures that are not translationally invariant and have interfaces. We demonstrate this by considering examples of perturbed systems with defect modes, such as an analogue of the well-known interface Su-Schrieffer-Heeger (SSH) model. **Mathematics Subject Classification (MSC2010):** 35J05, 35C20, 35P20. **Keywords:** finite periodic structures, essential spectrum convergence, edge effects, subwavelength resonance, density of states, multilevel Toeplitz matrix, van Hove singularity ## 1 Introduction The spectra of periodic elliptic operators have significant implications for many physical problems and have been studied extensively, as a result. In most cases, this analysis is relatively straightforward, since the spectrum can be decomposed into a sequence of continuous bands using Floquet-Bloch theory [9, 16]. Meanwhile, the spectra of elliptic operators on finite domains are quite different in nature but are similarly convenient to handle, in most cases. Such problems typically have a discrete spectrum that can be described using a variety of standard techniques. A more subtle question, however, is how to relate these two spectra. In the physical and experimental literature on waves in periodic structures, the link between the spectra of finite and infinite structures is made routinely. For example, Floquet-Bloch analysis of infinite structures is commonly used to predict the behaviour of the equivalent truncated version, which can be realised in experiments or simulations. Similarly, measurements from experiments or simulations are often used to recreate the Floquet-Bloch spectra band structure by taking Floquet transforms of spatially distributed data. This work will make the link between these two systems precise, by clarifying how the spectrum of the finite structure converges to that of the corresponding infinite structure, as its size becomes arbitrarily large. In this work, we will study the capacitance matrix as a model for a system of coupled resonators. This is a model that describes the resonant modes of a system of \(N\) resonators in terms of the eigenstates of an \(N\times N\) matrix. This matrix is defined in terms of Green's function operators, posed on the boundaries of the resonators. This use of boundary integral formulations allows the model to describe a broad class of resonator shapes [7, 10]. 
A crucial feature of the model is that it takes into account long-range interactions between the resonators; interactions between a pair of resonators scale in inverse proportion to the distance between them. This is in contrast to many tight-binding Hamiltonian formulations, which often use nearest-neighbour approximations. The capacitance matrix model was first introduced by Maxwell to model the relationship between the distributions of potential and charge in a system of conductors [17]. More recently, it has been shown that the capacitance matrix model also captures the subwavelength resonant modes of a system of high-contrast resonators [1]. Our motivation for using it as the basis for this work is that it serves as a canonical model for a fully coupled system of resonators, which has long-range interactions between the resonators decaying in proportion to the distance. An important subtlety of the model considered in this work is that there are no energy sources or damping mechanisms, such that the system is time-reversal symmetric. This means that the capacitance matrix model used in this work is a Hermitian matrix. While generalized capacitance matrix models have been developed for non-Hermitian systems [4], for non-Hermitian models we generally expect drastically different behaviour, whereby the finite and infinite systems have fundamentally different spectra. A symptom of this is the non-Hermitian _skin effect_, whereby all eigenmodes are localised at one end of a finite-sized structure, in certain systems [23]. As the system of finitely many resonators becomes large, the corresponding capacitance matrix also grows in size. Thus, the problem at hand is to understand the asymptotic distribution of the eigenvalues of the capacitance matrix, in the limit that its size becomes arbitrarily large. While this is an open question for the capacitance matrix, similar results exist for other classes of matrices. For example, the asymptotic distribution of the eigenvalues of banded matrices has been studied [8, 14]. The capacitance matrix is not banded, but it is "almost" banded in the sense that the entries decay in successive off-diagonals. Similarly, there is an established theory describing the limiting spectra of Toeplitz matrices [15, 19, 21]. Once again, the capacitance matrix is not Toeplitz, but for a periodic resonator array it is known to converge to a doubly infinite matrix that has constant (block) diagonals [2]. The crux of this work is to use the properties of the capacitance matrix, as summarised in _e.g._[1, 11], to develop an analogous asymptotic eigenvalue distribution theory. In many of the existing theories on asymptotic eigenvalue distributions, the crucial quantity is the _density of states_ (DOS) [8, 14]. This is a measure that describes the distribution of eigenmodes in frequency space and is given, for a finite array, by \[D_{\mathrm{f}}(\omega)=\frac{1}{M}\sum_{j=1}^{M}\delta\Big{(}\omega-\omega_{j} ^{(M)}\Big{)}, \tag{1.1}\] where \(\omega_{1}^{(M)},\omega_{2}^{(M)},\ldots,\omega_{M}^{(M)}\) are the \(M\) eigenvalues of the \(M\)-resonator system. One of the main results of this work is showing that \(D_{\mathrm{f}}\) converges (in the sense of distributions) to the density of states \(D\) for the corresponding infinite system (which can be obtained via Floquet-Bloch analysis): \[D_{\mathrm{f}}\to D. 
\tag{1.2}\] The proof of this result is based on comparing the capacitance matrix for a finite system of resonators with the finite-sized matrix obtained by taking the matrix arising from the infinite array of resonators and truncating it to be \(M\)-by-\(M\). The key insights are that (i) these two finite-sized matrices converge to the same limit (under a normalised Frobenius norm) as the size becomes large and (ii) the truncated infinite matrix is a block Toeplitz matrix, meaning we can use existing theory. This approach proves to be immensely useful and will allow us to prove not only the convergence of the density of states, but also a theorem demonstrating that this spectral convergence is in fact pointwise. That is, given an eigenvalue \(\omega\) of the infinite structure and a positive number \(\varepsilon\), any sufficiently large finite structure will have an eigenvalue \(\omega_{\mathrm{f}}\) such that \[|\omega_{\mathrm{f}}-\omega|<\varepsilon. \tag{1.3}\] Further, we will see that a similar convergence result holds for the corresponding eigenvectors, also. This paper will begin by introducing the capacitance matrix model in Section 2. This is accompanied by an asymptotic derivation of the model from a three-dimensional differential problem (with high-contrast subwavelength resonators) in Appendix A, for context. In Section 3, we will develop the theory needed to prove the convergence of the density of states. This is followed by results on pointwise convergence in Section 4, which are accompanied by a demonstration of how the Floquet-Bloch spectral bands can be reconstructed from a finite structure using the Floquet transform. Finally, in Section 5 we present some open questions and possible avenues for future work. ## 2 Capacitance matrix model Throughout this work, we will use a capacitance matrix model for a system coupled resonators. We can view this as a canonical model for coupled resonators with long-range interactions (the interactions are inversely proportional to the distance between the resonators). This model can be derived from first principles to describe either a system of conductors [17] or a system of high-contrast resonators, which is summarised in Appendix A. We consider a periodically repeating system in \(\mathbb{R}^{3}\). We take a lattice \(\Lambda\) of dimension \(d\), where \(0<d\leq 3\), generated by the lattice vectors \(l_{1},\ldots,l_{d}\in\mathbb{R}^{3}\). For simplicity, we take \(\Lambda\) to be aligned with the first \(d\) coordinate axes. We will refer to the three possible lattice dimensions as, respectively, a _chain_ of resonators (\(d=1\)), a _screen_ of resonators (\(d=2\)), and a _crystal_ of resonators (\(d=3\)). We take \(Y\subset\mathbb{R}^{3}\) to be a single unit cell, \[Y=\begin{cases}\{c_{1}l_{1}+x_{2}e_{2}+x_{3}e_{3}\mid 0\leq c_{1}\leq 1,x_{2}, x_{3}\in\mathbb{R}\},&d=1,\\ \{c_{1}l_{1}+c_{2}l_{2}+x_{3}e_{3}\mid 0\leq c_{1},c_{2}\leq 1,x_{3}\in\mathbb{R }\},&d=2,\\ \{c_{1}l_{1}+c_{2}l_{2}+c_{3}l_{3}\mid 0\leq c_{1},c_{2},c_{3}\leq 1\},&d=3. \end{cases}\] To define the finite lattice, we let \(I_{r}\subset\Lambda\) be all lattice points within distance \(r\) from the origin; \[I_{r}=\{m\in\Lambda\mid|m|<r\}.\] The resonators are given by inclusions of a heterogeneous material surrounded by some heterogeneous background medium. 
We let \(D\subset Y\) be a collection of \(N\) resonators contained in \(Y\), \[D=\bigcup_{i=1}^{N}D_{i},\] where \(D_{n}\) are disjoint domains in \(Y\) with boundary \(\partial D_{i}\in C^{1,s}\) for \(s>0\). We will use \(D\) to denote the collection of resonators contained within a single unit cell of the periodic lattice. We can subsequently define the _periodic_ system \(\mathcal{D}\) and the _finite_ system \(\mathcal{D}_{\mathrm{f}}\) of resonators, respectively, as \[\mathcal{D}=\bigcup_{m\in\Lambda}D+m,\quad\text{and}\quad\mathcal{D}_{\mathrm{ f}}(r)=\bigcup_{m\in I_{r}}D+m.\] Here, \(\mathcal{D}\) is the full lattice of resonators while \(\mathcal{D}_{\mathrm{f}}\) is a finite lattice of resonators of width \(r\). For \(i=1,...,N\) and \(m\in\Lambda\), we let \(D_{i}^{m}\) denote the \(i^{\mathrm{th}}\) resonator inside the \(m^{\mathrm{th}}\) cell: \[D_{i}^{m}=D_{i}+m.\] Next, we will define the capacitance coefficients associated to \(\mathcal{D}\) and \(\mathcal{D}_{\mathrm{f}}\), starting with the finite structure \(\mathcal{D}_{\mathrm{f}}\). Let \(G\) be the Green's function for Laplace's equation in three dimensions: \[G(x)=-\frac{1}{4\pi|x|}.\] Given a smooth, bounded domain \(\Omega\subset\mathbb{R}^{3}\), the _single layer potential_\(\mathcal{S}_{\Omega}:L^{2}(\partial\Omega)\to H^{1}(\partial\Omega)\) is defined as \[\mathcal{S}_{\Omega}[\varphi](x):=\int_{\partial\Omega}G(x-y)\varphi(y)\; \mathrm{d}\sigma(y),\quad x\in\partial\Omega.\] Crucially for the analysis that will follow, \(\mathcal{S}_{\Omega}\) is known to be invertible [7]. For the finite lattice \(\mathcal{D}_{\mathrm{f}}\), we define the capacitance coefficients as \[C_{\mathrm{f},ij}^{mn}(r)=\int_{\partial D_{i}^{m}}\mathcal{S}_{\mathrm{f}_{ \mathrm{f}}}^{-1}[\chi_{\partial D_{j}^{m}}]\,\mathrm{d}\sigma, \tag{2.1}\] for \(1\leq i,j\leq N\) and \(m,n\in I_{r}\), where \(\chi_{A}(x)\) denotes the indicator function of the set \(A\subset\mathbb{R}^{3}\). Here, we explicitly indicate the dependence of the size \(r\) of the truncated lattice. For \(m,n\in I_{r}\), we observe that \(C_{\mathrm{f}}^{mn}(r)\) is a matrix of size \(N\times N\), while the block matrix \(C_{\mathrm{f}}=(C_{\mathrm{f}}^{mn})\) is a matrix of size \(N|I_{r}|\times N|I_{r}|\). We can define analogous capacitance coefficients for the infinite structure \(\mathcal{D}\). We begin by defining the dual lattice \(\Lambda^{*}\) of \(\Lambda\) as the lattice generated by the dual lattice vectors \(\hat{\alpha}_{1},...,\hat{\alpha}_{d}\) satisfying \(\hat{\alpha}_{i}\cdot l_{j}=2\pi\delta_{ij}\) for \(i,j=1,...,d\) and whose projection onto the orthogonal complement of \(\Lambda\) vanishes. We define the _Brillouin zone_\(Y^{*}\) as \(Y^{*}:=\big{(}\mathbb{R}^{d}\times\{\mathbf{0}\}\big{)}/\Lambda^{*}\), where \(\mathbf{0}\) is the zero-vector in \(\mathbb{R}^{3-d}\). We remark that \(Y^{*}\) can be written as \(Y^{*}=Y^{*}_{d}\times\{\mathbf{0}\}\), where \(Y^{*}_{d}\) has the topology of a torus in \(d\) dimensions. When \(\alpha\in Y^{*}\setminus\{0\}\), we can define the quasi-periodic Green's function \(G^{\alpha}(x)\) as \[G^{\alpha}(x):=\sum_{m\in\Lambda}G(x-m)e^{\mathrm{i}\alpha\cdot m}. \tag{2.2}\] The series in (2.2) converges uniformly for \(x\) and \(y\) in compact sets of \(\mathbb{R}^{d}\), with \(x\neq y\) and \(\alpha\neq 0\). 
Given a bounded domain \(\Omega\subset Y\), the _quasi-periodic_ single layer potential \(\mathcal{S}^{\alpha}_{\Omega}:L^{2}(\partial\Omega)\to H^{1}(\partial\Omega)\) is then defined as \[\mathcal{S}^{\alpha}_{\Omega}[\varphi](x):=\int_{\partial\Omega}G^{\alpha}(x- y)\varphi(y)\;\mathrm{d}\sigma(y),\quad x\in\partial\Omega. \tag{2.3}\] For \(\alpha\in Y^{*}\) and for \(1\leq i,j\leq N\), we have a "dual-space" representation of the infinite capacitance matrix as the \(N\times N\)-matrix \[\widehat{C}^{\alpha}_{ij}=\int_{\partial D_{i}}(\mathcal{S}^{\alpha}_{D})^{-1 }[\chi_{\partial D_{j}}]\;\mathrm{d}\sigma. \tag{2.4}\] This is a "dual-space" representation in the sense that it is parametrised by \(\alpha\) which is the Floquet-Bloch parameter that describes the frequency of spatial oscillation of the eigenmodes. Thus, we can equivalently define a "real-space" representation of the capacitance coefficients through an appropriate transformation. That is, for \(1\leq i,j\leq N\), the "real-space" capacitance coefficients at the lattice point \(m\) are given by \[C^{m}_{ij}=\frac{1}{|Y^{*}|}\int_{Y^{*}}\widehat{C}^{\alpha}_{ij}e^{-\mathrm{i }\alpha\cdot m}\;\mathrm{d}\alpha. \tag{2.5}\] Here, \(C^{0}_{ij}\) corresponds to the diagonal block which contains the capacitance coefficients of the resonators within a single unit cell. We use the notation \(\mathfrak{C}\) to denote the infinite matrix that contains all the \(C^{m}_{ij}\) coefficients, for all \(1\leq i,j\leq N\) and all \(m\in\Lambda\). The main results in this work are based on relating the finite capacitance matrix \(C_{\mathrm{f}}\) to the _truncated_ capacitance matrix \(C_{\mathrm{t}}\). This matrix is obtained by truncating \(\mathfrak{C}\) to the centre block of size \(N|I_{r}|\times N|I_{r}|\), to give a matrix of the same dimensions as \(C_{\mathrm{f}}\). The main technical result of this work is Lemma 3.1, which ascertains a type of weak convergence of \(C_{\mathrm{f}}\) to \(C_{\mathrm{t}}\) as \(r\to\infty\). The spectra of the infinite structure and the finite structure, respectively, are given by the solutions \(\omega\) to the spectral problems \[\mathfrak{C}\mathfrak{u}=\omega^{2}\mathfrak{u}\quad\text{and}\quad C_{\mathrm{ f}}u=\omega^{2}u. \tag{2.6}\] The goal of this work is to compare spectral properties of the infinite structure and the finite structure. Specifically, this work will focus on the convergence of eigenvalues of \(C_{\mathrm{f}}\) to the _essential_ spectrum of \(\mathfrak{C}\); the convergence of pure-point spectra (defect modes) has already been treated in [3]. Throughout, we let \(\widehat{\omega}_{k}(\alpha)\), for \(k=1,...,N\) denote the positive eigenvalues of the quasi-periodic capacitance matrix problem \[\widehat{C}^{\alpha}u=(\widehat{\omega}_{k}(\alpha))^{2}\,u,\quad k=1,...,N,\] and let \(\omega_{i}\), for \(i=1,...,N|I_{r}|\) denote the positive eigenvalues of the finite capacitance matrix problem \[C_{\mathrm{f}}u=(\omega_{i})^{2}\,u,\quad i=1,...,N|I_{r}|.\] We conclude this section with the following convergence result of the capacitance coefficients, which was proved in [3]. 
**Theorem 2.1**.: _For fixed \(m,n\in\Lambda\), we have as \(r\to\infty\),_ \[\lim_{r\to\infty}C^{mn}_{\mathrm{f}}(r)=C^{m-n},\] _where \(C^{mn}_{\mathrm{f}}=(C^{mn}_{\mathrm{f},ij})^{N}_{i,j=1}\) and \(C^{m-n}=(C^{m-n}_{ij})^{N}_{i,j=1}\) denote, respectively, the \(N\times N\) matrices defined in (2.1) and (2.5)._ ## 3 Eigenvalue distribution and essential spectral convergence The main goal of this section is to prove the distributional convergence of the density of states of the finite and infinite materials. The main technical result is Lemma 3.1, which establishes the convergence of the finite and truncated capacitance matrices in a certain weak "averaged" norm. We emphasise that the finite and truncated capacitance matrices are not expected to converge strongly in the matrix operator norm. This is due to the fact that, regardless of its size, the finite structure will always exhibit edge effects. ### Density of states For an \(N\)-level system with band functions \(\widehat{\omega}_{k}(\alpha)\), \(k=1,...,N\), we define the "finite-material" density of states \(D_{\mathrm{f}}(\omega)\) and "infinite-material" density of states \(D(\omega)\) as the distributions (see, for example, [12, 24]) \[D_{\mathrm{f}}(\omega)=\frac{1}{N|I_{r}|}\sum_{i=1}^{N|I_{r}|}\delta(\omega- \omega_{i})\quad\text{and}\quad D(\omega)=\frac{1}{(2\pi)^{d}}\int_{Y^{*}} \frac{1}{N}\sum_{k=1}^{N}\delta\big{(}\omega-\widehat{\omega}_{k}(\alpha) \big{)}\,\mathrm{d}\alpha. \tag{3.1}\] Let \(L(\omega)=\{\alpha\in Y^{*}\mid\widehat{\omega}_{k}(\alpha)=\omega\text{ for some }k\}\) be the level set of \(\widehat{\omega}_{k}(\alpha)\) at \(\omega\). Carrying out the integral in (3.1), we can rewrite \(D\) as \[D(\omega)=\frac{1}{(2\pi)^{d}}\int_{L(\omega)}\frac{1}{N}\sum_{k=1}^{N}\frac{ 1}{|\nabla_{\alpha}\widehat{\omega}_{k}(\alpha)|}\,\mathrm{d}\sigma(\alpha), \tag{3.2}\] where \(\,\mathrm{d}\sigma(\alpha)\) is the surface measure on \(L(\omega)\). Points \(\alpha\) where \(|\nabla_{\alpha}\widehat{\omega}_{k}(\alpha)|=0\) are known as _van Hove singularities_ and occur around any band edge [22]. ### Finite-material eigenvalue distribution The truncated matrix \(C_{\mathrm{t}}\) of size \(m\in\mathbb{Z}^{d}\) is a multilevel block Toeplitz matrix, with known asymptotic eigenvalue distribution in terms of the eigenvalues \(\widehat{\omega}_{k}(\alpha)\) of the quasi-periodic capacitance matrix \(C^{\alpha}\) (see, e.g. [13, 15] for the one-dimensional case and [20, 21] for the two- and three-dimensional cases). Based on these results, we will show that the finite capacitance matrix \(C_{\mathrm{f}}\) has an eigenvalue distribution identical to that of \(C_{\mathrm{t}}\) as the size tends to infinity. For an \(n\times n\) matrix \(M\), let \(|M|\) denote the normalized Frobenius norm \[|M|^{2}=\frac{1}{n}\sum_{i,j=1}^{n}|m_{i,j}|^{2}. \tag{3.3}\] We will use \(\|M\|_{2}\) for the standard Euclidean matrix norm. The following lemma is the main technical result we will need. **Lemma 3.1**.: _As \(r\to\infty\), the matrices \(C_{\mathrm{t}}\) and \(C_{\mathrm{f}}\) are asymptotically equivalent, in other words, it holds that_ * \(\lim_{r\to\infty}|C_{\mathrm{f}}-C_{\mathrm{t}}|=0\)_;_ * \(\|C_{\mathrm{f}}\|_{2}\) _and_ \(\|C_{\mathrm{t}}\|_{2}\) _are uniformly bounded as_ \(r\to\infty\)_._ Proof.: \(\|C_{\mathrm{f}}\|_{2}\) is uniformly bounded by [3, Lemma 3.4], while \(\|C_{\mathrm{t}}\|_{2}\) is uniformly bounded since it is the Toeplitz matrix of an essentially bounded symbol. 
Let \(\mathcal{D}_{\mathrm{f}}\) denote the finite lattice of width \(r\). Theorem 2.1 tells us how to extend this finite lattice \(\mathcal{D}_{\mathrm{f}}\) to a larger lattice \(\tilde{\mathcal{D}}_{\mathrm{f}}\), which has width \(\tilde{r}>r\). In particular, Theorem 2.1 shows that we can make this extension in such a way that the corresponding "\(r\)-sized" block \(\tilde{C}_{\mathrm{f},0}\) of the finite capacitance matrix corresponding to \(\tilde{\mathcal{D}}_{\mathrm{f}}\) is arbitrarily close to the "\(r\)-sized" truncated matrix \(C_{\mathrm{t}}\). That is, given an \(\varepsilon>0\), we can make a sufficiently large extension such that \[\|C_{\mathrm{t}}-\tilde{C}_{\mathrm{f},0}\|_{2}<\varepsilon. \tag{3.4}\] Observe that \[\left|C_{\mathrm{f}}-\tilde{C}_{\mathrm{f},0}\right|^{2}=\frac{1}{N|I_{r}|}\sum_{ \begin{subarray}{c}m,n\in I_{r}\\ 1\leq i,j\leq N\end{subarray}}\left(\int_{\partial D_{i}^{m}}\left(\mathcal{S }_{\mathcal{D}_{f}}^{-1}-\mathcal{S}_{\tilde{\mathcal{D}}_{f}}^{-1}\right) \left[\chi_{\partial D_{j}^{m}}\right]\mathrm{d}\sigma\right)^{2}. \tag{3.5}\] We define the "tail" \(\mathcal{D}_{0}\) of the extended lattice as \[\mathcal{D}_{0}=\tilde{\mathcal{D}}_{\mathrm{f}}\setminus\mathcal{D}_{\mathrm{ f}}.\] Observe that we have a block-structure of the single-layer potential on the extended lattice: \[\mathcal{S}_{\tilde{\mathcal{D}}_{\mathrm{f}}}=\begin{pmatrix}\mathcal{S}_{ \mathcal{D}_{f}}&\mathcal{S}_{\mathcal{D}_{0}}|_{\mathcal{D}_{\mathrm{f}}}\\ \mathcal{S}_{\mathcal{D}_{\mathrm{f}}}|_{\mathcal{D}_{0}}&\mathcal{S}_{ \mathcal{D}_{0}}\end{pmatrix}.\] We then have a block inverse of \(\mathcal{S}_{\tilde{\mathcal{D}}_{\mathrm{f}}}\) as follows: \[\mathcal{S}_{\tilde{\mathcal{D}}_{\mathrm{f}}}^{-1}=\begin{pmatrix}\left( \mathcal{S}_{\mathcal{D}_{\mathrm{f}}}-\mathcal{S}_{\mathcal{D}_{0}}|_{ \mathcal{D}_{\mathrm{f}}}\mathcal{S}_{\mathcal{D}_{0}}^{-1}|_{\mathcal{D}_{0} }\right)^{-1}&A_{1}\\ A_{2}&A_{3}\end{pmatrix}, \tag{3.6}\] where \(A_{i}\) are bounded operators that are immaterial for our analysis. We are now ready to start estimating the difference between \(C_{\mathrm{f}}\) and \(C_{\mathrm{t}}\). We want to estimate the term \[\mathcal{S}_{\mathcal{D}_{0}}|_{\mathcal{D}_{\mathrm{t}}}\mathcal{S}_{ \mathcal{D}_{0}}^{-1}|_{\mathcal{D}_{\mathrm{f}}}|_{\mathcal{D}_{0}}\mathcal{S }_{\mathcal{D}_{\mathrm{f}}}^{-1}|\chi_{\partial D_{j}^{m}}|.\] Define \(U(x)=\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}\mathcal{S}_{\mathcal{D}_{\mathrm{ f}}}^{-1}|\chi_{\partial D_{j}^{m}}|\) and \(V=\mathcal{S}_{\mathcal{D}_{0}}\mathcal{S}_{\mathcal{D}_{0}}^{-1}|U|_{ \partial\mathcal{D}_{0}}|\); these functions satisfy the systems of equations \[\begin{cases}\Delta U=0,\quad x\in\mathbb{R}^{3}\setminus\mathcal{D}_{ \mathrm{f}},\\ U|_{\partial\mathcal{D}_{\mathrm{f}}}=\chi_{\partial D_{i}^{m}},\\ U(x)\sim\frac{1}{|x|},\end{cases}\quad\text{ and }\quad\begin{cases}\Delta V=0,&x\in \mathbb{R}^{3}\setminus\mathcal{D}_{0},\\ V(x)=U(x),&x\in\partial\mathcal{D}_{0},\\ V(x)\sim\frac{1}{|x|}.\end{cases} \tag{3.7}\] Observe that the boundary conditions satisfied by \(U\) are imposed on \(\partial\mathcal{D}_{\mathrm{f}}\) while the boundary conditions satisfied by \(V\) are imposed on \(\partial\mathcal{D}_{0}\). In particular, \(U(x)\) scales like \(|m|^{-1}\) for \(x\in\partial\mathcal{D}_{0}\) while \(V(x)\) scales like \((|m||n|)^{-1}\) for \(x\in\partial D^{n}\). 
As \(r\to\infty\), we therefore have \[\int_{\partial D_{i}^{m}}\left(\mathcal{S}_{\mathcal{D}_{f}}^{-1}-\mathcal{S} _{\tilde{\mathcal{D}}_{f}}^{-1}\right)\left[\chi_{\partial D_{j}^{m}}\right] \mathrm{d}\sigma\leq\frac{K_{1}}{(1+|m|)(1+|n|)},\] for some constant \(K_{1}\). From (3.5) and (3.6), we use the Neumann series to find that \[\left|C_{\mathrm{f}}-\tilde{C}_{\mathrm{f},0}\right|^{2} \leq\frac{K_{2}}{r^{d}}\sum_{m,n\in I_{r}}\frac{1}{(1+|m|)^{2}(1+| n|)^{2}}\] \[\leq K_{3}r^{d-4},\] where \(d\in\{1,2,3\}\). In other words, \(\left|C_{\mathrm{f}}-\tilde{C}_{\mathrm{f},0}\right|\to 0\), which together with (3.4) concludes the proof. From [15] we know that asymptotically equivalent matrices have identical eigenvalue distributions as their sizes tend to infinity. This gives the following result on distributional convergence of the discrete density of states \(D_{\mathrm{f}}(\omega)\) to the continuous density of states \(D(\omega)\). **Theorem 3.2**.: _As \(r\to\infty\), \(D_{\mathrm{f}}(\omega)\) converges to \(D(\omega)\) in the sense of distributions. In other words, for any smooth function \(F\) with compact support, we have_ \[\lim_{r\to\infty}\int_{-\infty}^{\infty}D_{\mathrm{f}}(\omega)F(\omega)\, \mathrm{d}\omega=\int_{-\infty}^{\infty}D(\omega)F(\omega)\,\mathrm{d}\omega.\] Proof.: Since \(C_{\mathrm{t}}\) and \(C_{\mathrm{f}}\) are asymptotically equivalent, we have from [15] that \[\lim_{r\to\infty}\frac{1}{N|I_{r}|}\sum_{i=1}^{N|I_{r}|}F(\omega_{i})=\frac{1} {(2\pi)^{d}}\int_{Y^{*}}\frac{1}{N}\sum_{k=1}^{N}F\big{(}\widehat{\omega}_{k}( \alpha)\big{)}\,\mathrm{d}\alpha,\] from which the theorem follows. From Theorem 3.2 we have that the frequencies \(\omega_{i},i=1,...,N|I_{r}|\), of the finite capacitance matrix are distributed according to \[\omega_{i}\sim\widehat{\omega}_{k}(\alpha),\] where \(\alpha\) is uniformly distributed on the Brillouin zone \(Y^{*}\) and \(\widehat{\omega}_{k}(\alpha),k=1,...,N,\) are the eigenvalues of the quasi-periodic capacitance matrix. The proportion of modes with eigenfrequencies in the infinitesimal interval between \(\omega\) and \(\omega+\,\mathrm{d}\omega\) is then approximated by \[D_{\mathrm{f}}(\omega)\,\mathrm{d}\omega\approx\frac{\mathrm{d}\omega}{(2\pi) ^{d}}\int_{L(\omega)}\frac{1}{N}\sum_{k=1}^{N}\frac{1}{\left|\nabla_{\alpha} \omega_{k}(\alpha)\right|}\,\mathrm{d}\sigma(\alpha).\] For a one-dimensional chain of single resonators (\(d=1,N=1\)) we obtain \[D_{\mathrm{f}}(\omega)\,\mathrm{d}\omega\approx\frac{\mathrm{d}\omega}{2\pi} \frac{1}{\left|\widehat{\omega}^{\prime}(\alpha)\right|}\Bigg{|}_{\alpha\in \alpha(\omega)}, \tag{3.8}\] where \(\alpha(\omega)=\{\alpha\in Y^{*}\mid\widehat{\omega}_{1}(\alpha)=\omega\}\), which is shown by the solid lines in Figure 1a. ### Numerical results The convergence of the distribution of the resonant frequencies can be studied numerically. In Figure 1 we plot histograms of the discrete, finite set of subwavelength resonant frequencies for truncated structures and compare this to the density of states (DOS) for the infinite array, given by (3.8). In each case, the histograms and the DOS are normalised so that the area under the curves is equal to \(1\). We can see that the distribution of the truncated eigenvalues closely resembles the DOS as the structure becomes sufficiently large (for \(1000\) resonators, the curve is difficult to distinguish from the histogram in our plot). 
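The convergence statement above can be explored numerically even without the boundary-integral machinery. The following is a minimal sketch, assuming a toy symmetric Toeplitz surrogate whose off-diagonal entries decay like the reciprocal of the distance (standing in for a one-dimensional, single-resonator capacitance matrix; it is not the capacitance matrix itself). It compares the histogram of \(\omega_{i}=\sqrt{\lambda_{i}}\) for the truncated matrix with the density of states obtained by sampling the band function of the corresponding symbol, in the spirit of Theorem 3.2 and Eq. (3.8). The diagonal constant is chosen large enough that the symbol stays positive.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz

# Toy surrogate for a 1D (d = 1, N = 1) capacitance matrix: a symmetric Toeplitz
# matrix with 1/(1 + distance) off-diagonal decay, mimicking long-range coupling.
def coeff(m):
    return 4.0 if m == 0 else 1.0 / (1.0 + m)   # diagonal chosen to keep positivity

M = 400                                          # number of "resonators"
C_f = toeplitz(np.array([coeff(m) for m in range(M)]))   # truncated matrix stand-in
omega_finite = np.sqrt(np.linalg.eigvalsh(C_f))  # C u = omega^2 u  =>  omega = sqrt(eig)

# Band function of the corresponding infinite structure: the symbol of the
# Toeplitz operator, evaluated on the Brillouin zone alpha in [-pi, pi).
alpha = np.linspace(-np.pi, np.pi, 4001, endpoint=False)
m_max = 2000                                     # truncation of the symbol series
symbol = coeff(0) + 2 * sum(coeff(m) * np.cos(m * alpha) for m in range(1, m_max))
omega_band = np.sqrt(symbol)

# Compare the discrete spectrum with the sampled density of states of the band,
# both normalised to unit area, in the spirit of Theorem 3.2.
bins = np.linspace(omega_band.min() - 0.05, omega_band.max() + 0.05, 60)
plt.hist(omega_finite, bins=bins, density=True, alpha=0.6, label="finite array")
plt.hist(omega_band, bins=bins, density=True, histtype="step", label="band DOS (sampled)")
plt.xlabel(r"$\omega$"); plt.ylabel("density of states"); plt.legend(); plt.show()
```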
We can quantify the error by computing the area between the two curves (the histograms being viewed as curves for these purposes). This is shown in the lower plot of Figure 1 and we observe linear convergence as the size of the array increases. In other words, the discrete DOS of the truncated resonant frequencies is converging in distribution to the DOS, at a linear rate. Similar histograms can be produced for multi-dimensional lattices. For example, in Figure 2 we plot the same histograms for successively larger square (two-dimensional) lattices. Once again, we can see that the distribution of the eigenfrequencies converges to a fixed distribution as the size of the finite lattice increases. One notable difference from the one-dimensional lattice shown in Figure 1 is that the distribution is not singular at the edge of the first band. ## 4 Pointwise convergence and discrete band structure Since any edge effects persist in the limit \(r\to\infty\), we should not expect all eigenvalues of \(C_{\mathrm{f}}\) to converge to the spectrum of \(\mathfrak{C}\). Nevertheless, for any point in the continuous spectrum, there will always be eigenvalues arbitrarily close. We can repeat the arguments used in the proof of Lemma 3.1 to obtain the following theorem on pointwise convergence of the essential spectrum and Bloch modes. **Theorem 4.1**.: _Let \(\omega=\widehat{\omega}_{k}(\alpha)\) be an eigenvalue of the quasi-periodic capacitance matrix \(C^{\alpha}\) for some \(\alpha\in Y^{*}\), corresponding to the normalized eigenvector \(u\in\mathbb{R}^{N}\). Then, for any \(\varepsilon>0\) we can choose \(r>0\) such that the finite capacitance matrix \(C_{\mathrm{f}}(r)\) has a family of eigenvalues \(\omega_{i}^{2},i\in I\) and associated normalized eigenvectors \(u_{i}\in\mathbb{R}^{N|I_{r}|}\) satisfying_ \[|\omega^{2}-\omega_{i}^{2}|<\varepsilon,\qquad\left\|\tilde{u}-\sum_{i\in I}c _{i}u_{i}\right\|_{2}<\varepsilon,\] _for some \(c_{i}\in\mathbb{C}\), where \(\tilde{u}\in\mathbb{R}^{N|I_{r}|}\) is the normalized quasi-periodic extension of \(u\)._ **Remark 4.2**.: Although any eigenvalue and eigenvector of the quasi-periodic capacitance matrix can be approximated by eigenvalues and eigenvectors of the finite capacitance matrix, the converse need not hold. Indeed, due to edge effects, \(C_{\mathrm{f}}\) might have eigenvalues which do not approach those of \(C_{\mathrm{t}}\) in the limit \(r\to\infty\). Although \(C_{\mathrm{f}}\) converges to \(C_{\mathrm{t}}\) in the (weak) norm \(|\cdot|\), it does not converge in the (strong) Euclidean operator norm. Figure 1: Convergence in distribution of the resonant frequencies of the truncated linear array to the density of states (DOS) for the infinite array. (a) The resonant frequencies of the truncated arrays are shown in histograms and the DOS for the infinite array (3.2) as a solid red line. Both the histograms and the DOS have been normalised to have unit area under the curves. (b) The \(L^{1}\) error between the histogram plots and the DOS curve, which converges to zero as the size of the finite array increases. Figure 2: Distribution of the resonant frequencies of a truncated square (two-dimensional) array, plotted as histograms. The discrete band function calculation introduced in [3] provides a notion of how well an eigenmode of \(C_{\rm f}\), for large \(r\), is approximated by Bloch modes of the infinite structure. 
Given an eigenmode \(u_{j}\), we can take the truncated Floquet transform of \(u_{j}\) as \[(\widehat{u}_{j})_{\alpha}=\sum_{m\in I_{r}}(u_{j})_{m}e^{{\rm i}\alpha\cdot m}, \qquad\alpha\in Y^{*}. \tag{4.1}\] Here we denote by \((u_{j})_{m}\) the vector of length \(N\) associated to cell \(m\in\Lambda\). Observe that \(u_{j}\) is a vector of length \(N|I_{r}|\) while \((\widehat{u}_{j})_{\alpha}\) is a vector of length \(N\). Looking at the Euclidean 2-norm \(\|(\widehat{u}_{j})_{\alpha}\|_{2}\) as a function of \(\alpha\), this function has distinct peaks, which are the quasi-periodicities \(\alpha\) associated with \(u_{j}\). An example is shown in Figure 3. We can then define a discrete band structure whereby the eigenvalues \(\omega_{j}\) of \(C_{\rm f}\) are associated to a quasi-periodicity \(\alpha_{j}\) given by \[\alpha_{j}=\operatorname*{argmax}_{\alpha\in Y^{*}}\|(\widehat{u}_{j})_{ \alpha}\|_{2}. \tag{4.2}\] Note that the symmetry of the problem means that if \(\alpha\) is an approximate quasi-periodicity then so will \(-\alpha\) be. In cases of additional symmetries of the lattice, we expect additional symmetries of the quasi-periodicities. As a demonstrative example of this process, we consider the case of a single resonator repeating in one direction. This has a single continuous band of eigenfrequencies, as shown in Figure 4a. For a truncated version of the structure, the approximate band functions can be reconstructed as above. In Figure 3 we show the norm of the truncated Floquet transform of the \(10^{\rm th}\), \(20^{\rm th}\) and \(30^{\rm th}\) eigenmodes in an array of 50 resonators. In each case the function is even about zero and has a clear peak, which allows us to identify an appropriate quasi-periodicity \(\alpha_{j}\) via (4.2). These values can be used to plot the 50 resonant frequencies alongside the continuous bands of the limiting infinite structure, which is shown in Figure 4a. We see that, even for a set of 50 resonators, the approximate band function closely resembles that of the infinite structure. In Figure 4b, we compare the continuous and truncated spectra of an array of resonators arranged in pairs (dimers). The truncated structure has 100 resonators arranged in 50 pairs. This geometry is an example of the famous Su-Schrieffer-Heeger (SSH) chain [18] which has been shown to have fascinating topological properties [5]. This system has two subwavelength spectral bands and the truncated modes are split evenly between approximating the two bands. Additionally, we can consider this method for lattices of higher dimension. Figure 5a shows the case of a square lattice of resonator dimers. Similarly to Figure 4b, there is a band gap between the first and the second bands, and we see a close agreement between the discrete and the continuous band structure. Figure 5b shows a similar figure in the case of a honeycomb lattice, where the finite lattice is truncated along zig-zag edges of the lattice. As shown in [6], there are Dirac cones on each corner of the Brillouin zone. In the truncated structure, in addition to the "bulk modes" whose frequencies closely agree with the continuous spectrum, there are "edge modes" which are localized around the edges and whose points in the band structure lie away from the continuous bands. Figure 3: A given eigenmode \(u_{j}\) of a truncated periodic structure can be associated with a quasi-periodicity \(\alpha\). 
Here, we plot the norm of the truncated Floquet transform of the \(10^{\rm th}\), \(20^{\rm th}\) and \(30^{\rm th}\) eigenmodes of an array of 50 resonators. In each case, there are clear peaks that we can assign as the quasi-periodicities of the eigenmodes. These values can be used to reconstruct approximate spectral band functions.

Figure 4: The continuous spectrum of the infinite structure and the discrete spectrum of the truncated structure for one-dimensional lattices. (a) Single periodic resonators (\(N=1\)) with a truncated structure consisting of 50 resonators. (b) Periodic pairs of resonators (\(N=2\)) with a truncated structure containing 100 resonators. In both cases, the truncated Floquet transform (4.1) is used to approximate the quasi-periodicity of the truncated modes.

Figure 5: Examples of continuous and discrete spectra of the infinite and truncated structures, respectively. (a) A square lattice with two resonators per unit cell, resulting in two bands separated by a gap. (b) A honeycomb lattice with Dirac cones at the vertices of the Brillouin zone. In both cases, the truncated structures have 800 resonators and the truncated Floquet transform is used to approximate the quasi-periodicity of the truncated modes.

## 5 Nonperiodic band structure and topological invariants

The method described in Section 4 can also be applied to aperiodic structures that have been perturbed to introduce defects. Two examples of this are shown in Figure 6. In each case, the defects have induced localised eigenmodes which do not have well-defined associated quasi-periodicities. These are shown with dashed lines. The rest of the truncated spectrum still agrees well with the continuous spectrum of the limiting infinite operator. The example shown in Figure 6(a) is that of a local defect, where the material parameters are changed on the central resonator. As was studied in [2, 3], this corresponds to multiplying the capacitance matrix by a diagonal matrix that is equal to the identity matrix except for a value greater than 1 in the central entry. A formula for the eigenfrequency of this defect mode in the infinite structure was derived in [2], and it was proved in [3] that the eigenfrequencies of the localised modes in the truncated arrays converge to that value. In Figure 6(b) we study the famous example of an interface mode in the Su-Schrieffer-Heeger (SSH) chain. This localised eigenmode exhibits enhanced robustness with respect to imperfections, a property it inherits from the underlying topological properties of the periodic structure via the notion of _topological protection_ [5]. Even though the periodicity of the structure is broken due to the interface, we can still visualise the spectrum as a discrete band structure through (4.1) and (4.2).

## 6 Concluding remarks

In this work, we have shown the convergence of the resonant frequencies of systems of coupled resonators in truncated periodic lattices to the essential spectrum of the corresponding infinite lattice. We have studied this using the capacitance matrix model for coupled resonators with long-range interactions.

Figure 6: The discrete spectrum of a truncated array with defects can be related to the continuous spectrum of the unperturbed periodic structure. (a) A local defect in a periodic array of resonators. The truncated array has 51 resonators with the defect on the 26\({}^{th}\). The continuous spectrum of the infinite array is the same as that shown in Figure 4(a). (b) A topological (non-compact) defect in an array of resonator pairs.
The truncated array has 101 resonators with the defect on the 51\({}^{st}\). The continuous spectrum of the infinite array is the same as that shown in Figure 4(b). In both cases, the defect introduces a localised eigenmode, which does not have a well-defined associated quasi-periodicity and is shown with a dashed line.

We emphasise that our conclusions extend to other long-range models, since it is the decay of the coupling which is the main feature of the analysis. The discrete band structure calculations in Section 4 give a concrete way to associate band structures to practically realizable materials, which may be finite and aperiodic. Notably, for the field of topological insulators, this opens the possibility of defining invariants _discretely_, in terms only of the eigenvalues and eigenmodes of the finite interface structures rather than the Bloch eigenvalues and eigenmodes of the corresponding infinite, periodic structures. We emphasise that the matrix model adopted in this work is a Hermitian model with time-reversal symmetry. For non-Hermitian models, similar convergence theorems are not expected to hold, and the spectra of the finite and infinite models might be vastly different. For the discrete band structure calculations in Section 4, we expect the finite modes to be associated with a _complex_ momentum; to adequately describe these, we need to extend the Brillouin zone to the complex plane. A precise study of this setting would be a highly interesting direction for future work.

## Appendix A Continuous PDE model

In this appendix, we summarize how the generalized capacitance matrix gives an asymptotic characterisation of a system of coupled high-contrast resonators. In particular, it can be used to characterize the subwavelength (_i.e._ asymptotically low-frequency) resonance of the system. We refer the reader to [1] for an in-depth review and extension to other settings. We will consider an array of finitely many resonators here, but the modification to infinite periodic systems is straightforward, through an appropriate modification of the Green's function [1]. As previously considered, we suppose that the resonators are given by \(D_{i}\subset\mathbb{R}^{3}\). We consider the scattering of time-harmonic waves with frequency \(\omega\) and will solve a Helmholtz scattering problem in three dimensions. This Helmholtz problem, which can be used to model acoustic, elastic and polarized electromagnetic waves, represents the simplest model for wave propagation that still exhibits the rich phenomena associated with subwavelength physics. We let \(v_{i}\) denote the wave speed in each resonator \(D_{i}\), so that \(k_{i}=\omega/v_{i}\) is the wave number in \(D_{i}\). Similarly, the wave speed and wave number in the background medium are denoted by \(v\) and \(k\). The crucial asymptotic parameters are the contrast parameters \(\delta_{1},\dots,\delta_{N}\). For example, in the case of an acoustic system, \(\delta_{i}\) is the ratio of the densities inside and outside the resonator, respectively. In the current formulation, the material parameters may take any values (for example, complex parameters correspond to non-Hermitian systems with energy gain and loss). In the setting of Section 2, we take all parameters to be positive and equal.
Subwavelength resonance will occur in the high-contrast limit \[\delta_{i}\to 0.\] We define \(D\) as the collection of resonators: \[D=\bigcup_{m\in I_{r}}\bigcup_{i=1}^{N}(D_{i}+m),\] and consider the Helmholtz resonance problem in \(D\) \[\left\{\begin{array}{ll}\Delta u+k^{2}u=0&\mbox{in }\mathbb{R}^{3}\setminus \overline{D},\\ \Delta u+k_{i}^{2}u=0&\mbox{in }D_{i}+m,\mbox{ for }i=1,\dots,N,\ m\in I_{r}, \\ u|_{+}-u|_{-}=0&\mbox{on }\partial D,\\ \delta_{i}\frac{\partial u}{\partial\nu}\Big{|}_{+}-\frac{\partial u}{ \partial\nu}\Big{|}_{-}=0&\mbox{on }\partial D_{i}+m\mbox{ for }i=1,\dots,N,\ m\in I_{r}, \\ u(x)&\mbox{satisfies the Sommerfeld radiation condition,}\end{array}\right.\] (A.1) where the Sommerfeld radiation condition is given by \[\lim_{|x|\to\infty}|x|\left(\frac{\partial}{\partial|x|}-\mathrm{i}\,k\right)u =0,\quad\mbox{uniformly in all directions }x/|x|,\] (A.2) and guarantees that energy is radiated outwards by the scattered solution. As mentioned, we take the limit of small contrast parameters while the wave speeds are all of order one. In other words, we take \(\delta>0\) such that \[\delta_{i}=O(\delta)\quad\text{and}\quad v,v_{i}=O(1)\quad\text{as}\quad\delta \to 0,\text{ for }i=1,\dots,N.\] (A.3) Within this setting, we are interested in solutions \(\omega\) to the resonance problem (A.1) that are _subwavelength_ in the sense that \[\omega\to 0\quad\text{as}\quad\delta\to 0.\] (A.4) To be able to characterize the subwavelength resonant modes of this system, we must define the _generalized_ capacitance coefficients. Recall the capacitance coefficients \((C_{\text{f}}^{mn})_{ij}\) from (2.1). Then, we define the corresponding generalized capacitance coefficient as \[(\mathcal{C}_{\text{f}}^{mn})_{ij}=\frac{\delta_{i}v_{i}^{2}}{|D_{i}^{m}|}(C_{ \text{f}}^{mn})_{ij},\] (A.5) where \(|D_{i}^{m}|\) is the volume of the bounded subset \(D_{i}^{m}\). Then, the eigenvalues of \(\mathcal{C}_{\text{f}}\) determine the subwavelength resonant frequencies of the system, as described by the following theorem [1]. **Theorem A.1**.: _Consider a system of \(N|I_{r}|\) subwavelength resonators in \(\mathbb{R}^{3}\). For sufficiently small \(\delta>0\), there exist \(N|I_{r}|\) subwavelength resonant frequencies \(\omega_{1}(\delta),\dots,\omega_{N|I_{r}|}(\delta)\) with non-negative real parts. Further, the subwavelength resonant frequencies are given by_ \[\omega_{n}=\sqrt{\lambda_{n}}+O(\delta)\quad\text{as}\quad\delta\to 0,\] _where \(\{\lambda_{n}:n=1,\dots,N|I_{r}|\}\) are the eigenvalues of the generalized capacitance matrix \(\mathcal{C}_{\text{f}}\), which satisfy \(\lambda_{n}=O(\delta)\) as \(\delta\to 0\)._ A similar result exists for an infinite periodic structure, in terms of the eigenvalues of the generalized quasi-periodic capacitance matrix, as defined in (2.4); see [1] for details. ## Acknowledgements The work of HA was supported by Swiss National Science Foundation grant number 200021-200307. The work of BD was supported by a fellowship funded by the Engineering and Physical Sciences Research Council under grant number EP/X027422/1.
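As a concrete illustration of Theorem A.1, the following sketch assembles the generalized capacitance matrix (A.5) from a given capacitance matrix and material parameters, and reads off the leading-order subwavelength resonant frequencies as the square roots of its eigenvalues. The capacitance matrix used here is a small symmetric, diagonally dominant placeholder; in an actual computation it would be obtained from the resonator geometry via (2.1).

```python
import numpy as np

def generalized_capacitance_matrix(C, delta, v, volumes):
    """Rescale row i of the capacitance matrix C by delta_i * v_i^2 / |D_i|, following (A.5).
    C is (N*R) x (N*R); delta, v and volumes have one entry per resonator (cells flattened)."""
    scale = delta * v**2 / volumes
    return scale[:, None] * C

def subwavelength_frequencies(C, delta, v, volumes):
    """Leading-order resonant frequencies omega_n = sqrt(lambda_n) + O(delta), with lambda_n
    the eigenvalues of the generalized capacitance matrix (Theorem A.1)."""
    calC = generalized_capacitance_matrix(C, delta, v, volumes)
    lam = np.linalg.eigvals(calC).real          # real and positive for positive parameters
    return np.sort(np.sqrt(np.clip(lam, 0.0, None)))

# Placeholder data: a small symmetric, diagonally dominant surrogate for the capacitance
# matrix, used purely to exercise the formulas above.
rng = np.random.default_rng(0)
n = 12                                          # N * |I_r| resonators in total
C = -np.abs(rng.normal(0.1, 0.02, (n, n)))
C = 0.5 * (C + C.T)
np.fill_diagonal(C, np.abs(C).sum(axis=1) + 1.0)

delta   = np.full(n, 1e-3)                      # contrast parameters, small as in (A.3)
v       = np.ones(n)                            # wave speeds of order one
volumes = np.full(n, 4.0 / 3.0 * np.pi)         # volumes |D_i| of unit balls

print(subwavelength_frequencies(C, delta, v, volumes))
```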
2302.05035
Application of Machine Learning in Identification of Best Teaching Method for Children with Autism Spectrum Disorder
A good teaching method is incomprehensible for an autistic child. The autism spectrum disorder is a very diverse phenomenon. It is said that no two autistic children are the same. So, something that works for one child may not be fit for another. The same case is true for their education. Different children need to be approached with different teaching methods. But it is quite hard to identify the appropriate teaching method. As the term itself explains, the autism spectrum disorder is like a spectrum. There are multiple factors to determine the type of autism of a child. A child might even be diagnosed with autism at the age of 9. Such a varied group of children of different ages, but specialized educational institutions still tend to them more or less the same way. This is where machine learning techniques can be applied to find a better way to identify a suitable teaching method for each of them. By analyzing their physical, verbal and behavioral performance, the proper teaching method can be suggested much more precisely compared to a diagnosis result. As a result, more children with autistic spectrum disorder can get better education that suits their needs the best.
Zarin Tassnim Zoana, Mahmudul Wahed Shafeen, Nasrin Akter, Tanvir Rahman
2023-02-10T03:24:32Z
http://arxiv.org/abs/2302.05035v1
Application of Machine Learning in Identification of Best Teaching Method for Children with Autism Spectrum Disorder ###### Abstract A good teaching method is incomprehensible for an autistic child. The autism spectrum disorder is a very diverse phenomenon. It is said that no two autistic children are the same. So, something that works for one child may not be fit for another. The same case is true for their education. Different children need to be approached with different teaching methods. But it is quite hard to identify the appropriate teaching method. As the term itself explains, the autism spectrum disorder is like a spectrum. There are multiple factors to determine the type of autism of a child. A child might even be diagnosed with autism at the age of 9. Such a varied group of children of different ages, but specialized educational institutions still tend to them more or less the same way. This is where machine learning techniques can be applied to find a better way to identify a suitable teaching method for each of them. By analyzing their physical, verbal and behavioral performance, the proper teaching method can be suggested much more precisely compared to a diagnosis result. As a result, more children with autistic spectrum disorder can get better education that suits their needs the best. Autism Spectrum Disorder, Teaching methods, Special Educational Needs, Machine Learning, Autistic children 978-1-6654-6159-7/22/$31.00 0202 IEEE 6-7 October 2022, Barcelona, Spain ## I Introduction Autism Spectrum Disorder or ASD is a disorder of neurological development that mostly occurs in the early stage of a child's life, that can be distinguished by a persistent deficiency in social interaction and communication and repetitive behaviors[1]. According to the World Health Organization, about one in 270 people globally is estimated to have ASD[2]. And 31 percent of children with ASD have an intellectual disability, that is, their intelligence quotient (IQ) is less than 70[3]. That is why students with ASD do not reach the same academic outcomes as opposed to other students[4]. Article 26 of the Universal Declaration of Human Rights states that everyone has the right to education[5]. That means all types of students, like children with disabilities and fromminority groups from all corners of society have the right to get educated, and that includes children with autism spectrum disorder. In the last two decades, there has been a huge increase in researches on autism spectrum disorder[6] and following this, the tendency of including autistic children in the general and specialized education system has also ascended. However, education for children with autism spectrum disorderis far from perfect. Students with ASD require a special form of education and are thus considered as students with "special educational needs" or SEN[7]. Arranging this SEN is mostly hard not only due to the student lacking communication and interaction skills, but also due to lack of collaboration between teachers and parents, and a very limited amounts of practical suggestions and proper resources to guide them[7]. Each year more researches are done on autism spectrum disorder and specialists are able to diagnose children with ASD much more successfully than before. In the last decade, the number of children diagnosed has increased an incredible five times[6]. Based on their diagnosis, the children are categorized into their own sections of special education. 
But quite often, they are rejected in general schools, and even when they get their primary education in a specialized school, later in life, they drop out during their higher studies, due to anxiety and poor academic achievement. Most researches on autism spectrum disorder are done using large-scale statistics. Since diagnosis is done based on statistics and not persistent data, there might be some inaccuracies. And thus, identifying their special educational needs is not so accurate as well. But this can be done more efficiently using Machine Learning algorithms that can solve large amounts of variables more precisely. ## II Related Work This section will evaluate previous relevant machine learning work in the context of determining the optimum teaching method for autistic children. We examine the different strategies utilized to accomplish the major conclusions and explain how matching learning has its own set of challenges and limitations this regard. Since the past decade machine learning approaches have been very convenient to diagnose ASD in children. For instance, Virginia Tech Center along with Virginia Tech Institute for Society, Culture and Environment (ISCE), did research where they applied fast Artificial Neural Network (fANN) technique using data from 14,995 infants (16-30 months, 46.51% male) taking twenty inputs and produce an assertive or negative output if the kid has ASD. The sample was then clustered into groups according to their race, sex, and maternal education (i.e., mother has completed Associate Degree or not). The results yielded 99.72% accurate, with 99.92% accuracy for white children, and 99.79% for those black. The results were 99.64% correct for the boys, while it was 99.95% for the girls. In case of maternal education, the results were 99.75% correct for mothers having completed the degree and 99.70% for those who have not[8]. However, very recently, focus has been shifted to use of technology in educational needs of special children. Department of Computer Science and Institute of Education, University College London used machine learning techniques to figure out how students respond to various forms of communication used by their teachers inside controlled classroom conditions considering their specific attributes[9]. There is a lot of space for further researches in this field of taking machine learning approaches to find the proper teaching method for children with ASD. ## III Background Supervised Learning is one of the three types of machine learning, which in turn is a subcategory of artificial intelligence. This type of machine learning is known for using labeled datasets to determine patterns, create classification of the data, and calculate estimated results as precisely as possible. It takes a set of data to learn patterns and educate particular models and then yields the desired output. The model later incorporates more inputs to the datasets and compare outputs, which allows it to learn over time and become more accurate in giving outputs. The model measures its accuracy through the loss function, updating itself until the error has been adequately reduced to the point where it is negligible[10]. We decided to use Supervised learning algorithm as it permits collecting information and produces output from past encounters. It makes a difference to optimize execution criteria with the assistance of past experience. Naive Bayes is a probabilistic classifier, that is based on the Bayes hypothesis with presumption of independence among predictors. 
Bayes hypothesis depicts the probability of an event to occur by comparing it to the previous records of the event's probability. It is a classification approach that uses the theory of class conditional independence from the Bayes Hypothesis. This implies that the existence of one feature does not impactthe existence of another within the probability of a given result, and each predictor has a rise to impact on that result[11]. Confusion Matrix represents the performance estimation of two or more classes of outputs from classification done with a machine learning model. It is a matrix used to portray the execution of a classification model by comparing a set of test data, whose true values are known, and a similar set of predicted data. Confusion matrix can be very useful for measuring Review, Accuracy, Specificity, Precision, and most imperatively AUC-ROC bends[12]. Random Forest is a popular supervised machine learning algorithm. It can be used as a Classification model as well as a Regression model. It is based on the notion of supervised learning, which combines multiple classifiers and solves complex problems and moves forward to execute the model. It is a classifier that combines a bunch of decision trees on different parts of a particular datasets and uses the normal to improve predictive precision of the dataset. Rather than relying on a single decision tree, the random forest takes the output from each decision tree and predicts the final output based on the majority of predictions. The larger number of trees within the forest, the more precise it is and avoids any complication of over fitting[13]. Decision tree is known as an effective algorithm in the case of prediction as well as classification. It can be used in the supervised learning method. A Decision tree looks like a flowchart type of tree structure and on each inner hub indicates a test on an attribute. In the decision tree algorithm, each branch represents an output of the given input-based testing, and each leaf node, also we call it terminal node holds a class label. In a Decision tree, there are two nodes that are being used, first one is Decision Node and the other one is Leaf Node. Mainly, Decision nodes are utilized to make any decision. Decision tree algorithm has numerous branches, while its Leaf nodes are the output of those decisions and do not contain any further branches[14]. On the premise of features of the given datasets, here almost all decisions are performed. It is a graphical representation for getting all the conceivable outputs to a problem based on given conditions. There is a specific reason to call it a decision tree because it begins with the root node that extends on further branches comparable to a tree and builds a tree-like structure. A decision tree basically inquires, and according to the reply (Yes/No), it assists parting the tree into sub trees. The K-Nearest Neighbor technique assumes that the unused data and accessible cases are close in proximity, and places the new data in the category that is most similar to the available categories. The K-NN algorithm saves all available data and categorizes unused data points depending on their proximity. This means that as new information appears, it may be quickly sorted into a suitable category using the K-NN algorithm. The K-NN algorithm can be used for both regression and classification, but it is more commonly used for classification tasks. 
It is also known as a lazy learner algorithm since it does not learn from the training set right away; instead, it saves the datasets and performs an action unit when it comes time to classify it. At the training stage, the K-NN algorithm simply saves the datasets, and when it receives new data, it classifies it into a category that is significantly more comparable to the new data[15]. ## IV Proposed Approach The goal of the proposed model is to apply machine learning to determine the best teaching method for children with ASD. To accomplish said goal, the model needs to design a process that accepts data as an input, process it, apply the machine learning algorithms and find the best fit. The figure below provides an effective view of the model design. First, we have to collect data relevant to the research. The input data preprocessing stage is concerned with dropping unnecessary data and convert the datatype of necessary data to make it easy to process in the model. The preprocessed input data goes into the train and test splitting stage and build two segments one used to train the model and the other used to compare with the predicted outputs. After that, selected machine learning algorithms are used and run on the preprocessed input data for selecting the best fit algorithm and apply them in finding the appropriate teaching method for children with ASD. Among the many education methods specialized for children with autism, six teaching methods have been selected for the project. The first one is Technology-aided instruction and intervention, which is a learning system that uses technology as its central feature. This is designed for children who need some extra time to go over an exercise and adjust to their own pace of learning[16]. As a result, teachers and other students needing less support can continue ahead. Secondly, Antecedent based intervention is another evidence-based practice that identifies what causes interference to a child and modifies the environment to remove said interfering behavior of the child so that he/she can regain focus on the exercise[17]. Thethird teaching method used in the project is Pivotal response training. This is a therapy type training system that increases a child's motivation to learn, start communication with someone, and monitor their own behaviors[18]. On the fourth spot, we have Peer-mediated instruction and intervention. This method involves another child without disabilities to take on a role in the teaching beside the teacher or therapist[19]. The peer not only plays the role of a tutor, but also teach critical social skills along the way. Next is the Picture Exchange Communication method, which encourages verbally challenged children to use visual symbols to communicate with parents, teachers and peers. It is a kind of complementary and alternate system that is used for intentional and functional communication to tell what they want or need[20]. Lastly, we have selected Task Analysis, which breaks down complex tasks into sequential smaller steps. This teaching method is applicable for individuals who find even simple tasks to be difficult and give up[21]. Fig. 1: _Proposed Approach_ ### _Input Data_ As for input data to determine the best teaching method for children with autism, it was quite difficult to obtain data, since such data are confidential, and very few researches have been done on this. We are using supervised Machine Learning method for our research and so, we have searched for labeled data. 
Since there was no dataset large enough to be used for Machine Learning, we have used a total 4 datasets for our research and run our algorithms on a merged dataset with same attributes. We have used the dataset named 'Auistic Spectrum Disorder Screening Data for Toddlers' documented by Dr. Fadi Fayez Thabtah, Principal Lecturer of DigitalTechnology department of Manukau Institute of Technology in New Zealand[22].Download Link: [https://www.kaggle.com/fabdelja/autism-screening-for-toddlers?select=Todlder+data+description.docx](https://www.kaggle.com/fabdelja/autism-screening-for-toddlers?select=Todlder+data+description.docx) Second dataset that we used named Behavior Analysis of Autism. Download Link: [https://www.kaggle.com/iashiqul/behavior-analysis-of-autism](https://www.kaggle.com/iashiqul/behavior-analysis-of-autism) The other two datasets we have used the data set named Autism screening child consists of two version. It has 2249 instances, 18 attributes.Download Link: [https://www.kaggle.com/basmarg/autism-screening-child-two-version](https://www.kaggle.com/basmarg/autism-screening-child-two-version) Datasets regarding clinical or screening autism spectrum disorder are presently very limited and also generic. That is why we used this merged dataset because it is based on influential features that can be useful for further research in improving the classification process and determining ASD cases.The dataset contains categorical, binary and continuous data. The information is focused on medical, health and social science areas. The dataset holds 3043 samples of different children with autism spectrum disorder. The dataset contains 12 integer type data including case number, closed questions whose answers are represented by binary 1's and 0's and Q-chat-10 score. The other 7 data are object type. The total size of our merged dataset is 451.8 kB. We have used a merged dataset that consists of 4 individual datasets that has 3043 instances in total. We decided to use this dataset because the problem with scarcity of data is a crucial drawback, as a good amount of data is integral to any project done with machine learning algorithm. If a dataset is inadequate, it might as well be the agent of poor performances in the project. Very often, drawbacks like these are the main reason why significant machine learning projects remain unaccomplished. Most supervised learning algorithms are strongly reliant on the amount of training data provided. It can be challenging to create large enough training datasets in many circumstances. With smaller datasets algorithms tend to learn the detail of the noises in the training data to such extent, that the model's performance on newer data can have a negative impact. In general, the smaller the dataset, the better it is to use the simpler the machine learning algorithm. Small data requires low-complexity models in machine learning in order to prevent overfitting the model to the data. The Naive Bayes for example, is one of the most basic algorithms, and as a result, it's performance on learning from comparatively minor dataset can indeed be remarkably well. Furthermore, other simpler algorithms such as decision tree, random forest and K-NN can also learn really well from small datasets. These algorithms are essentially better than more complicated algorithms in case of learning from smaller datasets, as they actually try less to learn from exceptional cases or noises. 
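A minimal sketch of how these four source files might be loaded and merged with pandas is shown below. The file names and column spellings are illustrative assumptions, since they vary between Kaggle downloads, and only the attributes shared by all four datasets are kept, yielding the merged table of roughly 3043 samples described above.

```python
import pandas as pd

# Hypothetical file names for the four Kaggle datasets listed above; the actual
# file names may differ from download to download.
files = [
    "Toddler Autism dataset July 2018.csv",     # ASD screening data for toddlers
    "behavior_analysis_of_autism.csv",
    "autism_screening_child_v1.csv",
    "autism_screening_child_v2.csv",
]

# Attributes assumed to be shared by all four datasets (spellings are illustrative).
common_columns = [
    "A1", "A2", "A3", "A4", "A5", "A6", "A7", "A8", "A9", "A10",
    "Qchat-10-Score", "Age_Mons", "Sex", "Ethnicity", "Jaundice",
    "Family_mem_with_ASD", "Class/ASD Traits",
]

frames = []
for path in files:
    df = pd.read_csv(path)
    df.columns = [c.strip() for c in df.columns]   # normalise header whitespace
    frames.append(df[common_columns])              # keep only the shared attributes

merged = pd.concat(frames, ignore_index=True)      # ~3043 samples in total
merged.to_csv("merged_asd_dataset.csv", index=False)
print(merged.shape)
```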
### _Data Preprocessing_ Now that we have the merged dataset, we initially tried to run four Machine Learning algorithms through the dataset without any kind of pre-processing techniques. That means, we only got to use the integer datatypes for training the algorithms. These were the A1-A10 answers and Qchat-10 score. However, three out of four algorithms gave 100 percent accuracy and the other one gave 99.8 percent accuracy. This case can be interpreted as such that there was no point in using Machine Learning and this could be done only with a bunch of if-else conditions. That is when we decided to include more features of object datatype. So, at first, we used Label Encoding pre-processing method to convert the object datatypes into integer datatype so that the algorithms can read those features as well. By doing so, we trained the algorithms with 16 features instead of 11 features done previously. With more input data, a machine learning algorithm can learn patterns more efficiently. Fig. 2: _Label Encoding_ However, due to Label Encoding, some data became unbalanced, so after splitting the training and testing sets, we used Feature Scaling pre-processing on the training set to achieve a higher accuracy rate. The objective of our project is to use a Machine Learning Algorithm that will learn from patterns in autistic traits, sex, and other factors responsible for autism and then predict a preferred educational method for an individual. Due to its access to many built-in libraries such as numpy, pandas, matplotlib, sklearn, the most preferred language Python has been used in the project. The project is done in Google Colaboratory. The datasets used for the project have been stored in Google Drive in the same folder as the Colab Notebooks. ### _Implementation_ The project file starts by importing the necessary libraries and frameworks. At the next step Google Drive is mounted with Colab and the dataset is read by the project file using pandas library, which is the best way to handle large data frames in a Python program. For simplicity and efficiency, we preprocess the dataset with the help of two methods. We used Label Encoding to include more features to train the algorithms. The binary data A1 to A10 and Q-chat-10 Score are kept as is, while the rest of the columns that had object datatype are converted to integer datatype and added as new columns. We also used Feature Scaling on the training data to balance out irregularly encoded data. Another new column namely "Preferred Education" is added using the numpy library. This column consists of integers 1 to 6 to represent the six special teaching methods, and 0 to represent that no special education is required. These six teaching methods are selected based on some conditions set upon the binary datatype autism traits A1-A10, as followed: ### _Result_ At first, the columns of dataset are divided to 2 sets, one as independent variable x, which includes A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, Q-chat-10 Score, Age Mon's enc Sex, Ethnicity enc Jaundice enc, and Family mem with ASD enc and Class/ASD Traits enc. The other set of dependent variable y consists of just the "Preferred Education' column. In the next step we split both x and y part of the dataset to the training and testing set. To be more accurate, the testing part is set to be 0.05, i.e. 5% of the total data. That means that the algorithm will learn from 95% of the data. It will find patterns from the x and y of the training set and then predict y from the x of the testing set. 
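The preprocessing and splitting steps described above can be sketched as follows; the column names and the rule used to fill the "Preferred Education" target are illustrative placeholders, since the exact conditions on the A1-A10 traits that define the six teaching methods are only summarised in the figures.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("merged_asd_dataset.csv")

# Label-encode every object-typed column into a new *_enc column, as described above.
for col in df.select_dtypes(include="object").columns:
    df[col + "_enc"] = LabelEncoder().fit_transform(df[col].astype(str))

# Placeholder rule for the "Preferred Education" target (integers 0-6). The actual rule
# is derived from conditions on the A1-A10 answers, which are not reproduced here.
df["Preferred Education"] = df[[f"A{i}" for i in range(1, 11)]].sum(axis=1) * 6 // 10

feature_cols = ([f"A{i}" for i in range(1, 11)] + ["Qchat-10-Score", "Age_Mons"]
                + [c for c in df.columns if c.endswith("_enc")])
X, y = df[feature_cols].values, df["Preferred Education"].values

# 95% of the data is used for training and 5% for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=42)

# Feature scaling is fitted on the training set only, then applied to both splits.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
```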
Finally, the predicted y values are checked with y values of the testing set to calculate accuracy of the algorithms. Here are the Confusion matrices and Accuracy score of all four algorithms. It is easily understandable that both Random Forest and Decision Tree have the highest accuracy of approximately 98.69 percent. But Random Forest has the highest precision Fig. 3: _Label Encoding_ score of 99.09 percent and Decision Tree has the highest recall score. Now the highest F1 score of Random Forest, i.e. 98.47 percent breaks the tie. Thus by comparing the accuracy, precision, recall and F1 score, the best fit for our approach has to be Random Forest. ## VI Conclusion It is effectively necessary to find a suitable teaching method for the children having autism spectrum disorder because every child with ASD has different characteristics and they should have specific education which is appropriate to their needs. Based on a report of World Health Organization (WHO), among every 160 children, at least one has been diagnosed with autism spectrum disorder (ASD). According to Paulette Delgado in her article Autism Spectrum Disorder (ASD) in Education, Schools fail to fulfill the expectations of guardians by not recognizing or supporting the requirements of their children who need special treatment[23]. Social interactions are everywhere and changing continuously in schools. Moreover, some activity in the classroom may be suitable for generalstudents but it can be unfitting for the children with ASD. One the other hand some social prompts or signs which indicate a child to change their certain behaviors are usually difficult to follow for an autistic child. There is confusion between teachers on how to treat an autistic child with special educational needs as there is a deficiency of the depth of knowledge. This might have a negative consequence on their education. Earlier studies have anticipated the challenges of teachers regarding the right education method. Not much research has been done previously to implement machine learning in identification of proper education needs of a child with ASD[24]. So, this research can be helpful for teachers, parents as well as education institutions to determine an effective learning method for an autistic child. We have suggested six education systems according to their necessity. After that four machine learning algorithms named Naive Bayes, Random Forest, Decision Tree and K-Nearest Neighbors have been used in this research. According to the result we have found that the Random Forest algorithm gave approximately 98.69% accuracy, 99.10% precision, 97.95% recall and 98.48% f1 score, which is the best fit in finding the most appropriate teaching approach based on the children's characteristics. ## Acknowledgment The authors would like to thank Tanvir Rahman, Lecturer of CSE Department, BRAC University for his unwavering support, guidance, and encouragement throughout the process. We would also like to express our gratitude to Brac University for providing us with the chance and support we needed to complete this research.
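For completeness, the comparison reported in the Result section (four classifiers evaluated with confusion matrices, accuracy, precision, recall and F1 score) could be reproduced along the following lines. This sketch reuses the train/test split from the preprocessing sketch above, relies on scikit-learn's default hyperparameters, and macro-averages the multi-class scores, which is an assumption since the averaging mode is not stated above.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# X_train, X_test, y_train, y_test come from the preprocessing sketch above.
models = {
    "Naive Bayes":   GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "K-NN":          KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(name)
    print(confusion_matrix(y_test, y_pred))
    print("accuracy :", accuracy_score(y_test, y_pred))
    print("precision:", precision_score(y_test, y_pred, average="macro", zero_division=0))
    print("recall   :", recall_score(y_test, y_pred, average="macro", zero_division=0))
    print("f1 score :", f1_score(y_test, y_pred, average="macro", zero_division=0))
```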
2310.05224
Generative Spoken Language Model based on continuous word-sized audio tokens
In NLP, text language models based on words or subwords are known to outperform their character-based counterparts. Yet, in the speech community, the standard input of spoken LMs are 20ms or 40ms-long discrete units (shorter than a phoneme). Taking inspiration from word-based LM, we introduce a Generative Spoken Language Model (GSLM) based on word-size continuous-valued audio embeddings that can generate diverse and expressive language output. This is obtained by replacing lookup table for lexical types with a Lexical Embedding function, the cross entropy loss by a contrastive loss, and multinomial sampling by k-NN sampling. The resulting model is the first generative language model based on word-size continuous embeddings. Its performance is on par with discrete unit GSLMs regarding generation quality as measured by automatic metrics and subjective human judgements. Moreover, it is five times more memory efficient thanks to its large 200ms units. In addition, the embeddings before and after the Lexical Embedder are phonetically and semantically interpretable.
Robin Algayres, Yossi Adi, Tu Anh Nguyen, Jade Copet, Gabriel Synnaeve, Benoit Sagot, Emmanuel Dupoux
2023-10-08T16:46:14Z
http://arxiv.org/abs/2310.05224v1
# Generative Spoken Language Model based on continuous ###### Abstract In NLP, text language models based on words or subwords are known to outperform their character-based counterparts. Yet, in the speech community, the standard input of spoken LMs are 20ms or 40ms-long discrete units (shorter than a phoneme). Taking inspiration from word-based LM, we introduce a Generative Spoken Language Model (GSLM) based on word-size continuous-valued audio embeddings that can generate diverse and expressive language output. This is obtained by replacing lookup table for lexical types with a Lexical Embedding function, the cross entropy loss by a contrastive loss, and multinomial sampling by k-NN sampling. The resulting model is the first generative language model based on word-size continuous embeddings. Its performance is on par with discrete unit GSLMs regarding generation quality as measured by automatic metrics and subjective human judgements. Moreover, it is five times more memory efficient thanks to its large 200ms units. In addition, the embeddings before and after the Lexical Embedder are phonetically and semantically interpretable. 1 Footnote 1: Audio examples are available at our anonymous website. ## 1 Introduction Recent work has opened up the possibility of learning generative language models directly from the raw audio signals, without using either text or Automatic Speech Recognition (ASR) Lakhotia et al. (2021); Kharitonov et al. (2021); Nguyen et al. (2022); Borsos et al. (2022). The basic idea of these model is to rely on traditional text-based language models (LM), but replace the text input with some other discrete tokens directly learned from audio in an unsupervised fashion. The advantage of learning units from speech instead of relying on ASR is that this procedure can capture non-verbal vocalizations (like laughter) or intonation and rhythm, which are typically not transcribed, resulting in more expressive generations Kreuk et al. (2021); Kharitonov et al. (2021). In addition, ASR may not be available in many languages that have insufficient textual resources and can make errors, which may then perturb the learning of the LM. The problem of using self-discovered units, however, is that these units are typically very small, in fact, usually smaller than phonemes Lakhotia et al. (2021); Borsos et al. (2022). We think that increasing the size of the units will favourably impact the semantic capabilities of a downstream spoken LM. This intuition comes from the NLP literature. Among others, Graves (2013); Mikolov et al. (2011); Bojanowski et al. (2015); Nguyen et al. (2022) have shown a performance gap between character-based LM and word-based LM. The main reason is that at the level of characters, it is difficult for a text LM to extract long-range syntactic and semantic relationships. This is one of the reasons why recent state-of-the-art text-based LM Radford et al. (2019) typically use a tokenizer representing word or subword units Byte Pair Encoding Gage (1994), WordPiece Wu et al. (2016), Unigram Kudo (2018)). Another advantage of large units is to save GPU memory at training time that enables to use both larger batches and longer sequences. In speech, building the equivalent of a text-based tokenizer is hampered by two difficulties. First, the _boundary problem_ is that contrary to text in most orthographic systems, speech does not have spaces and punctuation to delimit between word units. Finding word boundaries from raw audio is itself a difficult challenge Dunbar et al. 
(2022). Second, the _clustering problem_, is that even if boundaries were available, the clustering of speech fragments is challenging because the same word may surface in a variety of forms depending on speaker, accent, speech rate, etc. This problem may be even more difficult to solve than the first one Dunbar et al. (2022) because of the highly skewed distribution of word frequencies Alagyres et al. (2022). Here, we investigate the possibility of building a _continuous tokenizer_ that sidesteps these two problems by using tokens that have neither perfect boundaries nor require a clustering step. In Appendix B, we explain in more detail why we wish to avoid the clustering of speech fragments and what methods have been applied to tackle this problem so far. Having a continuous tokenizer instead of a discrete one results in drastic changes from the point of view of the downstream LM. With a discrete tokenizer, one can define a finite list of tokens over which the LM can learn a lookup embedding table at the input of the model and use a softmax layer at the output of the model. The softmax is used in training mode to compute the loss function through a cross-entropy with the target token and at inference time to sample sentences. With continuous representations, the list of tokens is unbounded, making these computations intractable. We tackle this problem with a _Lexical Embedder_, a semi-learnable function that maps continuous tokens to a practically infinite list of embeddings. The key question addressed in this paper is whether it is possible to generate speech using large (word-size) continuous units instead of short discrete ones. Our major technical contribution is to replace the three standard elements of a text-based LM (lookup table, cross-entropy loss function, multinomial sampling) with elements adapted to a virtually infinite list of continuous embeddings. We show that with these changes, it is possible to generate speech of the same quality as discrete unit models. This is interesting because our units are 200ms long which amounts to a 5-time memory reduction compared to regular discrete units Lakhotia et al. (2021); Borsos et al. (2022), opening up the possibility to train spoken LMs on longer speech sequences. In addition, our model builds interpretable representations thanks to the Lexical Embedder which learns a mapping between an acoustic space, with phonetic properties, to a lexical space, with semantic and syntactic properties. We call the resulting model tGSLM (token-based GSLM). ## 2 Related work **Unsupervised speech representations** like CPC, Wav2vec2.0 and HuBERT van den Oord et al. (2018); Baevski et al. (2020); Hsu et al. (2021) are fixed-size representations (10 to 20ms long) that outperform traditional features, like mel-filterbanks and MFCCs, in many applications Yang et al. (2021). In parallel to these works, there is a growing literature on variable-length acoustic encoding called speech sequence embeddings (SSE) Peng et al. (2020); Alagyres et al. (2022); Jacobs et al. (2021); Kamper (2018); Settle and Livescu (2016). SSE models take a sequence of speech of any length and return a fixed-size vector. These models encode speech by maximizing phonetic information while minimizing speaker identity and recording conditions. SSEs are used for spoken term discovery Thual et al. (2018), speech segmentation into phones or words Kamper (2022); Alagyres et al. (2022) but also as input to a BERT model Alagyres et al. (2022) for spoken language modelling. 
**Speech generation** is often performed with a neural vocoder conditioned on mel-filterbanks van den Oord et al. (2016); Kumar et al. (2019); Kong et al. (2020); Prenger et al. (2018). In a text-to-speech pipeline, the mel-filterbanks are obtained with another neural network, which is conditioned on text Ping et al. (2017); Shen et al. (2018). In the next step, the mel-filterbanks are decoded into natural-sounding speech by a neural vocoder van den Oord et al. (2016); Kumar et al. (2019); Kong et al. (2020); Prenger et al. (2018). For the Zerospeech Challenge 2019, Dunbar et al. (2019) proposed to remove text and replace it with unsupervised discrete units. This challenge has fueled a large body of works on learning low bitrate speech representations for speech compression, voice conversion and spoken language modelling Chen and Hain (2020); Liu et al. (2019); Feng et al. (2019); Baevski et al. (2019); Tjandra et al. (2019); Kharitonov et al. (2021); Lakhotia et al. (2021); Nguyen et al. (2020). For evaluation, the Zero-Resource challenge used bitrate and human evaluation. **Spoken Language Model** are neural networks trained to predict missing parts of a spoken sentence with predictive or contrastive losses. GSLM Lakhotia et al. (2021) is the first spoken LM able to generate expressive and consistent spoken sentences in a pure textless fashion. It uses a causal transformer LM trained with NLL loss on sequences of discrete units obtained with a \(k\)-means clustering (with \(k\)=100) of HuBERT frames. Once trained, GSLM can generate a sequence of discrete units by multinomial sampling that is decoded into speech with a separate vocoder. Specif ically, the sampled HuBERT units are mapped to mel-filterbanks with Tacotron2.0 and decoded into speech with _WaveGlow_(Prenger et al., 2018), a neural vocoder. Lakhotia et al. (2021) also provides a way to evaluate their spoken LM using an ASR to transcribe their spoken generations and an external LM to compute the perplexity of the resulting transcriptions. In addition, the Zerospeech Challenge 2021 (Nguyen et al., 2020) designed a set of zero-shot metrics to probe what spoken LMs learn. A recent paper (Borsos et al., 2022), audioLM, came to our attention, which we did not have the time to include in our experiments. AudioLM works similarly to GSLM yet with the ability to generate speech that preserves the identity of the speaker. In another line of work, Algayres et al. (2022) trained a BERT model with a contrastive loss function on sentences represented as a series of SSEs. They showed the resulting BERT is able to model semantics and syntax. This work suggests that discrete tokenizers and the NLL loss are not necessary to tackle language modelling on speech. We take inspiration on their work to design our approach. ## 3 Approach ### tGSLM: training The general structure of tGSLM is presented in Figure 1. It is composed of an **encoder** which segments the input speech into sequences of possibly varying size, and computes a fixed-sized Speech Sequence Embedding (SSE), which we call acoustic tokens (Section 3.1.1). These tokens are turned into lexical tokens through a learnable **Lexical Embedder** (Section 3.1.2), and fed into a causal **Language Model** that has been modified to deal with continuous inputs (Section 3.1.3). #### 3.1.1 Acoustic tokens In Figure 1, a speech sequence, \(S\), is turned into \(n\) acoustic tokens, \((a_{0},...,a_{n})\), after applying speech segmentation and an SSE model. 
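To make this two-step pipeline concrete before detailing each component, the following minimal sketch segments an utterance into fixed-length chunks of Wav2vec2.0 frames and pools each chunk into a single vector. The mean-pooling step is only a stand-in for the actual SSE encoder, and loading the model through torchaudio is an assumption of convenience rather than the exact setup used in our experiments.

```python
import torch
import torchaudio

# Frames are taken from the 8th transformer layer of a Wav2vec2.0 Base model,
# loaded here through torchaudio for convenience.
bundle = torchaudio.pipelines.WAV2VEC2_BASE
w2v2 = bundle.get_model().eval()

def acoustic_tokens(waveform: torch.Tensor, sample_rate: int, chunk_ms: int = 200):
    """waveform: tensor of shape (1, num_samples). Returns (num_chunks, feature_dim)."""
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)
    with torch.inference_mode():
        layers, _ = w2v2.extract_features(waveform, num_layers=8)
    frames = layers[-1][0]                         # 8th-layer frames, one every ~20 ms
    frames_per_chunk = max(1, chunk_ms // 20)      # naive fixed 200 ms segmentation
    chunks = frames.split(frames_per_chunk, dim=0)
    return torch.stack([c.mean(dim=0) for c in chunks])  # placeholder for SSE(chunk)
```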
Speech segmentation consists in finding word boundaries in a speech sentence (Algayres et al., 2022; Kamper, 2022; Kreuk et al., 2020). In this work, we rely on a naive method by placing a boundary every 200 ms, regardless of the content of the speech signal. In Appendix A.1, we show that this method leads to better results than recent, more complex speech segmentation systems. The acoustic tokens \((a_{i})_{i\leq n}\) are built by first encoding the speech sentence \(S\) into a series of \(n^{\prime}\) frames \((f_{i})_{i\leq n^{\prime}}\) with the 8th layer of Wav2vec2.0 Base from Baevski et al. (2020). For any two boundaries \((k,l)\), \(a_{i}=SSE([f_{k},...,f_{l}])\), where \(SSE\) is a self-supervised system from Algayres et al. (2022) trained with contrastive learning. This model has state-of-the-art performance on phonetic representation of pre-segmented words, as measured by the Mean-Average-Precision metric. The acoustic tokens are extracted in a preprocessing step and stored before the training of the subsequent LM.

Figure 1: Speech is encoded into Wav2vec2.0 frames and segmented into chunks. The latter are converted into acoustic tokens with an SSE model and turned into lexical tokens by applying the function _LexEmb_. Finally, lexical tokens are fed to a causal transformer LM which attempts to predict the first, second, and third following tokens using parallel output heads. The acoustic tokens are pre-extracted before training the learnable modules (\(LexEmb\), the transformer and the final fully connected layers) with the NCE loss. The negative samples are chosen randomly from other utterances of the same speaker.

#### 3.1.2 Lexical tokens

In a text-based transformer LM, there is often an embedding lookup table before the transformer that has the size of the vocabulary and maps discrete word tokens to lexical tokens (Vaswani et al., 2017). These lexical tokens, also known as word embeddings (Mikolov et al., 2013), learn semantic and syntactic properties during training that have been studied extensively in the NLP literature. In our case, the situation is different. First, instead of discrete word tokens, our LM takes as input continuous acoustic tokens whose latent vocabulary size is unknown. Second, the mapping between acoustic and lexical space cannot be linear, as two speech segments may sound the same, i.e. be close in the acoustic space, while being semantically/syntactically different, i.e. far in the lexical space. This highly non-linear function between the acoustic and lexical spaces is learned by our lexical embedder: the \(LexEmb=L\circ q\) function. \(L\) is a stack of non-linear fully connected layers learned jointly with the LM. \(q\) is an information bottleneck quantization function that we had to introduce to minimize the presence of low-level non-linguistic acoustic information. For a speech sequence \(S\) composed of \(n\) acoustic tokens \((a_{i})_{i\leq n}\), we denote the sequence of lexical tokens \((l_{i})_{i\leq n}\) such that \(\forall i\leq n,\)\(l_{i}=LexEmb(a_{i})\). To understand why we need \(q\), we have to go back to the input of the \(LexEmb\) function: the acoustic tokens. The acoustic tokens are derived from Wav2vec2.0, which is a transformer architecture whose attention mechanism covers the whole sentence. Each Wav2vec2.0 frame therefore contains potential information about relative positions (through the transformer's positional embeddings), adjacent acoustic material (through self-attention) or global properties like speaker.
What we have found in preliminary experiments is that this information may leak into the acoustic tokens and be amplified by the prediction or contrastive loss of the downstream causal LM. Fortunately, it turns out that this information has low variance and can be partially removed by slightly degrading the quality of the acoustic tokens. Degrading the acoustic tokens is the role of the function \(q\). \(q\) is composed of a PCA reduction and a quantization step that we call _d-k-means_, which stands for per-dimension k-means. Specifically, given a speech database that has been segmented and encoded into \(N\) acoustic tokens, \((a_{i})_{i\leq N}\), we reduce their dimension to \(d\) with a PCA. Then, we train \(d\) different k-means, one for each dimension of the PCA. In other words, for each \(j\leq d\), we train a k-means on \((PCA(a_{i})[j])_{i\leq N}\). We choose the number of centroids per k-means to be proportional to the explained variance of each of the PCA dimensions. Once the k-means are trained, each dimension of each acoustic token is mapped to its cluster id. Finally, the cluster ids are turned into one-hot vectors and concatenated into one vector (see Appendix A.2 for more detailed explanations). d-k-means is inspired by multi-stage vector quantizers (VQ) (Vasuki and Vanathi, 2006), where several VQ codebooks are learned in parallel, as in Baevski et al. (2020); Zeghidour et al. (2021). The PCA and the d-k-means are trained over the whole training set as a preprocessing step, before the transformer LM. We ablate the use of \(q\) in Appendix A.2 and show that it is necessary for the LM to generate sentences.2 Footnote 2: Due to this quantization step, the resulting vectors (PCA + d-k-means) could in principle be mapped to a finite dictionary of tokens, but, in practice, there is little or no collision and the number of classes remains identical to the number of tokens, i.e., way too high to apply a softmax.

#### 3.1.3 Causal language model

The LM is a standard causal transformer with two modifications: the loss function and the prediction heads. First, in a standard LM, the number of possible types is fixed beforehand and remains tractable even for a very large corpus (10k to 100k). Here, because the number of different lexical tokens is virtually infinite, we cannot use a standard softmax and cross-entropy loss. We first tried a simple L2 reconstruction loss with an additional decoder, but it did not work for us in practice. Instead, we use a contrastive loss: the Noise Contrastive Estimation (NCE) loss (Gutmann and Hyvarinen, 2010). This loss works by maximizing the similarity between a pair of positive samples while minimizing the similarity between the positive samples and various negative samples. However, even though the SSE model from Algayres et al. (2022) has learned to be speaker invariant, there is still a lot of speaker-related information encoded in the acoustic tokens. This is a problem already encountered in Algayres et al. (2022); van den Oord et al. (2018), which is dealt with by sampling the negative tokens from the same speaker as the positive tokens. Second, in a standard LM, the output head typically predicts the next word. However, in the case of speech, the boundary between individual phonemes is blurred by coarticulation. It is therefore easy to predict the next word by just attending to very local acoustic information at the end of the last word (something impossible to do with characters, which are sequentially disentangled).
We, therefore, introduce three prediction heads (three linear fully connected layers: \(h_{1}\),\(h_{2}\),\(h_{3}\)) which do not only predict the first next token, but also the second and third as they cannot be co-articulated with the last token encoded by the LM. These prediction layers are trained jointly with the LM. We justify the choice of three prediction heads with a grid-search available in Appendix Table 5. ### tGSLM: generation Once tGSLM training is done, we use it to generate spoken sentences. We do that in two steps: we generate a sequence of acoustic tokens (Section 3.2.1) and then decode this sequence into speech (Section 3.2.2). #### 3.2.1 Sampling To generate a spoken sentence, we take inspiration of the popular top-k sampling method used in NLP to generate text sentences. This method requires sampling series of word tokens by sampling among the most probable word types. In our case, we do not have access to types so we are going to sample among the most probable lexical tokens. Our sampling method is summarized in Figure 2. We start by collecting a few dozen hours of speech that have not been seen during tGSLM training. The utterances are segmented and encoded into \(N\) speech segments and stored in their acoustic and lexical forms: \((a_{i},l_{i})_{i\leq N}\). Using the FAISS library Johnson et al. (2017), we index \((l_{i})_{i\leq N}\) into a k-NN graph called the lexical space. Given a prompt of \(t\) acoustic tokens \((a_{0},...,a_{t})\), we do a forward pass into tGSLM. Then, we compute the cosine similarity of \(h_{1}\) output and its \(k\) closest neighbours in the lexical space. We apply a softmax on the vector of cosine similarities and treat it as a multinomial distribution to sample one element: \(l_{t+1}\). The softmax function contains a temperature parameter that controls the range of the sampling area. The acoustic tokens \(a_{t+1}\) that correspond \(l_{t+1}\) is retrieved from the stored database and appended to \((a_{0},...,a_{t})\). Once the desired length is reached, the sequence of acoustic tokens is decoded into a spoken sentence as explained in the next section. #### 3.2.2 Speech generation Lakhotia et al. (2021); Kharitonov et al. (2022) trained a Tacotron2.0 decoder Shen et al. (2018) to map deduplicated HuBERT units into mel filterbanks. Then, speech is generated from the mel filterbanks by a _WaveGlow_ vocoder Prenger et al. (2018). In order to make use of this pre-trained Tacotron2.0 decoder, we trained an encoder-decoder transformer model to map series of acoustic tokens to series of HuBERT units. During training, the encoder computes an attention over a series of acoustic tokens while the decoder predicts HuBERT units auto-regressively. At inference, given a series of acoustic tokens, a corresponding sequence of HuBERT units is obtained by taking the argmax of the decoder softmax function. Finally, the HuBERT units are given as input to the pre-trained Tacotron2.0 to be decoded into spoken utterances. ## 4 Evaluation and datasets ### Datasets and settings LJ Speech (LJ), LibriSpeech (LS), Libri-light 6k clean (LL6k-clean) are three corpora of studio recordings of read English of respectively 24, 1k and 6k hours Ito and Johnson (2017); Panayotov et al. (2015); Riviere and Dupoux (2021). These corpora are used to train the different parts of the pipeline. The training details and specific model architectures can be found in Appendix Section A.3. ### Generation metrics **Perplexity (**ppx**) is a text-based metrics used by Lakhotia et al. 
(2021) to evaluate the overall quality of generated spoken sentences. The authors propose to transcribe the spoken generations with an external ASR system and to compute the mean perplexity score over batches of transcribed speech with an external transformer LM3. The spoken generation process is guided by a temperature parameter that controls how diverse the generated sentences are. The diversity of a batch of sentences can be computed, as in Lakhotia et al. (2021), with the VERT score, which is an average of self-BLEU Zhu et al. (2018) and auto-BLEU Lakhotia et al. (2021) scores. Typically, low temperatures produce high diversity and low perplexity, whereas high temperatures produce low diversity and high perplexity.

Figure 2: Our sampling procedure. Given a list of audio files unseen during training, \(N\) random speech segments are stored in their acoustic and lexical forms: \((a_{i},l_{i})_{i\leq N}\). In addition, a _lexical space_ is created by indexing \((l_{i})_{i\leq N}\) into a k-NN graph. Given a speech prompt, segmented and encoded into \((a_{0},...,a_{t})\), we do a forward pass in tGSLM and search for the nearest neighbors of the \(h_{1}\) output in the lexical space. \(l_{t+1}\) is sampled and its corresponding \(a_{t+1}\) is appended to \((a_{0},...,a_{t})\). When a final \(a_{T}\) token is sampled, \((a_{0},...,a_{T})\) is decoded into HuBERT units and speech is generated with Tacotron2.

Footnote 3: [https://github.com/facebookresearch/fairseq/tree/main/examples/language_model](https://github.com/facebookresearch/fairseq/tree/main/examples/language_model)

Finally, the perplexity of spoken generation is a metric that presents a high variance; therefore, as a compromise between acceptable generation time and low variance, we compute perplexity over batches of 100 generated utterances whose transcriptions are each exactly 30 words (around 10 seconds of audio).

**Subjective judgements** are computed with the meaningful Mean Opinion Score (MMOS), in which human raters were asked to evaluate how natural (considering both grammar and meaning) a given spoken generation is. For both subjective tests, raters evaluate the samples on a scale of 1-5 with an increment of 1. We follow the method from Lakhotia et al. (2021), where they evaluated 100 samples from each of the evaluated methods while enforcing at least 15 raters for each sample. The CrowdMOS package Ribeiro et al. (2011) was used with the recommended recipes for detecting and discarding inaccurate scores. As for the perplexity measures, the sentences are generated without conditioning on a prompt.

### Zero-shot metrics

**\(sWUGGY\) and \(sBLIMP\)** are zero-shot tasks to evaluate spoken language models, introduced in the Zerospeech Challenge 2021 Nguyen et al. (2020). These metrics are inspired by psycholinguistics and are used for interpreting what a spoken LM learns. \(sWUGGY\) is a list of pairs of word/non-word synthesized with the Google TTS API and filtered for the words that are in the LibriSpeech training set. \(sBLIMP\) is a list of pairs of syntactically correct/incorrect synthesized sentences. Both \(sWUGGY\) and \(sBLIMP\) require the spoken LM to attribute a higher probability to the correct element in each pair. Probabilities are computed by applying the spoken LM training loss directly on the test items.

**\(ABX_{sem}\) and \(ABX_{POS}\)** are additional zero-shot tasks introduced in Algayres et al.
(2022) to evaluate semantic encoding and Part-Of-Speech (POS) tagging, this time based not on probabilities but on distances between embeddings. An ABX task is a list of triplets \(A\), \(B\) and \(X\), where \(A\) and \(B\) belong to the same category and \(X\) is a distractor. The task is to encode the triplet with a distance \(d\) and show that \(d(A,B)<d(A,X)\). In this case, \(A\), \(B\), and \(X\) are spoken words given in the context of a sentence. For \(ABX_{sem}\), A and B are close semantically, and X is random. For \(ABX_{POS}\), A and B share the same POS tag, and X has a different POS tag.

**Normalised Edit Distance** (NED), introduced in Versteegh et al. (2016), is a term discovery task that consists of finding clusters or pairs of speech segments from unsegmented audio that have the same phonetic transcription. For each discovered pair, the NED is computed as the edit distance normalized by the length of the longest item in the pair. As for the ABX tasks, the NED is also based on the distance between embeddings. To compute a NED score, we take inspiration from the procedure introduced in Thual et al. (2018). Given a segmentation of the LibriSpeech dev-clean subset, all speech segments are embedded into fixed-size vectors. With a k-NN, we search for the pairs of closest embeddings and sort them by cosine similarity. Starting from the highest similarities, we retrieve as many pairs as necessary to cover the whole dev-clean set. With the phoneme-level transcription of the dev-clean set, all pairs can be transcribed into series of phonemes. The final NED score is obtained by averaging the NED over all pairs of transcriptions. NED and ABX tasks both rely on embeddings that can be extracted at any level of a multi-layer neural model.

## 5 Results

### Generation performances

#### 5.1.1 Perplexity and diversity

Figure 3 provides a comparison of the original discrete unit-based GSLM with two versions of our continuous unit model: 200ms-tGSLM, trained on speech segmented every 200ms, and gold-tGSLM, trained on speech segmented on the true word boundaries. GSLM and 200ms-tGSLM are trained on LL6k-clean4, while the topline, gold-tGSLM, is trained only on the LibriSpeech corpus5. The dots in Figure 3 represent batches of generated sentences conditioned on different temperatures. The colored curves are the 3rd-degree polynomial interpolations of the dots. In green dashed lines appear two VERT anchor points, LJ-VERT (=0.113) and LS-VERT (=0.189). These points are the mean VERT scores obtained on batches of sentences from, respectively, the LJ and LibriSpeech datasets. The intersection of the dashed lines and the curves gives the scores PPX@LS-VERT and PPX@LJ-VERT that are reported in Table 16.

Footnote 5: word boundaries cannot be computed for LL6k-clean because sentence-level speech and text alignments are missing

Footnote 6: For a given spoken LM, its PPX@LS-VERT score is the perplexity score obtained by that spoken LM when conditioned on a temperature that makes it generate spoken sentences with a VERT equal to the VERT of the LibriSpeech.

Regarding the perplexity scores from Table 1, compared to GSLM, 200ms-tGSLM is slightly better at LJ-VERT and slightly worse at LS-VERT. The measure of perplexities being very noisy, these scores show that both models have similar performances. Some examples of transcribed spoken generations are available in Appendix Tables 8, 9 and 10. The topline gold-tGSLM produces much lower perplexities than GSLM and 200ms-tGSLM.
Yet, we have experienced a problem with the speech decoder (described in Section 3.2.2) of gold-tGSLM. The scores of our topline are obtained by retrieving the exact transcriptions of the sampled SSEs instead of decoding them with the speech decoder. We had to do this because our speech decoder makes a lot of decoding mistakes when it tries to decode SSEs of variable-size speech fragments. It seems to generate fully intelligible speech only when it is trained to decode SSEs of same-size speech chunks, as is the case for 200ms-tGSLM. We think this happened because, for a lack of time and resources, we chose a poor decoding strategy (decoder from SSEs to HuBERT frames and HuBERT frames to speech). In our future works, we will focus on training a model to decode the SSEs directly into speech, using, for instance, recent diffusion models or a Hi-Fi Gan (Polyak et al., 2021; Huang et al., 2022). As a consequence of the poor performances of our speech decoder, we have not been able to leverage recent progress in speech segmentation into words (Alagyres et al., 2022; Kamper, 2022; Peng and Harwath, 2023) that provide word boundaries more aligned with real words than our 200ms chunks. In Appendix A.1 are the results of our attempts at using speech segmentation systems. #### 5.1.2 Subjective judgements As for perplexity, we report in Table 1, the MMOS for batches of spoken generations that have a diversity score equal to the VERT of either LibriSpeech (MMOS@LS-VERT) or LJ (MMOS@LJ \begin{table} \begin{tabular}{l c c c c c c c} \hline \multicolumn{3}{c}{} & \multicolumn{2}{c}{Zero-shot metrics\(\uparrow\)} & \multicolumn{2}{c}{Generation PPX\(\downarrow\)} & \multicolumn{2}{c}{Generation MMOS\(\uparrow\)} \\ \cline{2-9} & sWUGY & SBLIMP & \(ABX_{sem}\) & \(ABX_{POS}\) & LS-VERT & LJ-VERT & LS-VERT & LJ-VERT \\ \hline GSLM & **70.36** & **56.31** & 55.85 & 59.03 & **503.25+-12.3** & 387.45+-11.2 & 3.76 + 0.035 & 3.78 + 0.023 \\ 200ms-tGSLM & 68.53 & 55.31 & **55.89** & **60.3** & 532.87+-10.1 & **356.24+-15.7** & **4.09 + 0.016** & **4.04 + 0.020** \\ _gold-tGSLM_ & _86.37_ & _-\(\uparrow\)_ & _65.6_ & _75.59_ & _361.84+-20.1* & _255.32+-14.2*_ & _n/a_ & _n/a_ \\ _character-gold_ & _n/a_ & _n/a_ & _n/a_ & _n/a_ & _180.2_ & _142.6_ & _4.12 + 0.016_ & _4.11 + 0.023_ \\ \hline \end{tabular} \end{table} Table 1: Results on zero-shots and generation tasks for 200ms-tGSLM and GSLM, trained on L16k-clean, and gold-tGSLM, trained on LibriSpeech. ABX is computed on tGSLM lexical tokens and on GSLM 9th layer. The last line is a topline that is composed of true sentences from both LibriSpeech and LJ. *: those scores are obtained without using a speech decoder. \(\dagger\) time-aligned word boundaries for sBLIMP are not available Figure 3: PPX and VERT scores for GSLM, 200ms-tGSLM and gold-tGSLM. Each dot is obtained by generating sentences with a fixed temperature parameter. The curves are 3rd-degree polynomial interpolations of the dots. The green dashed lines are the oracle PPX/VERT obtained on the LibriSpeech and LJ corpus. VERT). In addition to 200ms-tGSLM and GSLM we evaluate a topline called _character-gold_ that are speech utterances obtained with Text-To-Speech (Tacotron2.0 from Shen et al. (2017)) taking in input the transcriptions of LJ and LibriSpeech utterances. From Table 1, for the high-temperature regime that leads to diversity scores in the range of LJ and Librispeech, 200ms-tGSLM is slightly better than GSLM and gets close scores with the topline. 
MMOS scores are not available for gold-tGSLM, as the speech decoder did not work properly. Nonetheless, our table of results does not show the performances of tGSLM in a much lower temperature regime. When conditioned on very low temperature, GSLM can generate very simple and intelligible sentences, whereas 200ms-tGSLM starts to produce gibberish. Therefore, both models have their strengths and weaknesses.

### Zero-shot performances

To complete our analysis, we provide in Table 1 the zero-shot task scores, which are comparable for GSLM and 200ms-tGSLM. GSLM has a slight advantage on \(sWUGGY\) and \(sBLIMP\), and 200ms-tGSLM a slight advantage on \(ABX_{sem}\) and \(ABX_{POS}\). The topline gold-tGSLM once again gets much stronger results. ABX scores are obtained for GSLM at the 9th layer of the transformer, and for tGSLM on the lexical tokens.

### Interpretability

In order to analyze what is learned by \(LexEmb\), we measure the ABX and NED of lexical tokens and acoustic tokens. In Table 2, the ABX scores show that the acoustic tokens are at chance level on semantic and syntactic encoding. After the \(LexEmb\) function, the lexical tokens lose a bit of their phonetic encoding (NED increases) but gain the ability to represent semantics and syntax. However, the NED is not at chance level, meaning that a bit of acoustic information has leaked into the lexical tokens. To visualize the difference between acoustic and lexical spaces, we provide t-SNE maps in Appendix Section A.4.

### Memory consumption

The GSLM model Lakhotia et al. (2021) and 200ms-tGSLM use the same transformer LM but with different types of inputs. Compared to the 200ms-long units of our model, GSLM is trained on discrete units that are 40ms long on average (when contiguous duplicates are removed). Therefore, we expected our model to be more memory efficient than GSLM7, which can be observed from the maximal batch size that both models can handle. Indeed, on the one hand, we managed to train GSLM with 34 60-seconds-long sentences on a 32G V100 GPU without OOM error. On the other hand, 200ms-tGSLM can fit as many as 162 sentences, an almost five-fold reduction (\(\approx 4.76\)) in memory use.

Footnote 7: The acoustic tokens that are the input of 200ms-tGSLM are extracted in a preprocessing step. They do not impact memory usage at training time.

Training spoken LMs on long sequences of audio will become necessary in order to learn long-term semantic relations. The usage of very short acoustic units can become a bottleneck, which our method helps to alleviate. To complete our analysis, we provide in Appendix A.5 a theoretical analysis of memory reduction.

## 6 Conclusion

We introduced a generative spoken LM based on continuous word-sized acoustic tokens. The source code will be made available upon acceptance. Our model is able to generate speech with the same level of diversity and accuracy as a model based on discrete units. This shows that building a lexicon of types is not necessary for spoken language modelling, which is encouraging considering the difficulty of clustering large segments of speech without degrading the representation (see Appendix B). In addition, this performance was obtained with segments that were not very well aligned with word boundaries (200ms segments). The good result obtained with gold word boundaries indicates that there is room for improvement by using segments better aligned with word boundaries and, of course, a better speech decoder.
Further work is also needed to better limit the leakage of low-level acoustic information into the LM through continuous units, which our analysis has shown is detrimental to the performance of the generative model (see also \begin{table} \begin{tabular}{l l l l l} \hline \hline models & tokens & **NED\(\downarrow\)** & \(ABX_{sem}\uparrow\) & \(ABX_{POS}\uparrow\) \\ \hline 200ms-tGSLM & acoustic & **34.51** & 50.14 & 49.87 \\ & lexical & 47.98 & **55.08** & **60.24** \\ \hline gold+tGSLM & acoustic & **16.15** & 50.20 & 50.12 \\ & lexical & 22.70 & **65.60** & **75.59** \\ \hline \hline \end{tabular} \end{table} Table 2: NED and ABX scores on acoustic and lexical tokens for 200ms-tGSLM and gold-tGSLM both trained on LibriSpeech. ABX and NED are computed on tGSLM lexical tokens Nguyen et al. (2022c)). Finally, the fact that the units are about 5 times larger than standard GSLM units aligns with the NLP literature that is in favour of word-based LMs. It opens the possibility to fit larger spans of audio in GPUs and capture long-distance relationships. ## 7 Limitations Our method has some limitations that range from GPU consumption, potential overfitting on the English language and sub-optimal decoding method. First, tGSLM is trained on 32 Nvidia V100-32Go GPUs for 30 hours. Due to the several modules at work in tGSLM (SSE model, LexEmb function, transformer decoder and seq2seq decoder), a large grid-search on hyper-parameters has been necessary which makes this work quite resource-consuming. Secondly, during the grid-search we chose hyper-parameters to optimize the semantic and syntactic ABX scores on English. By doing so, we might have overfitted the English language and made tGSLM specifically good at generating English speech. Further analysis is required to see if our method generalizes well to syntactically and morphologically different languages, like French or Mandarin. Finally, our decoding method is based on a seq2seq transformer that produces HuBERT frames which are decoded into speech with a combination of Tacotron2.0 and WaveGlow. We chose that method as this later speech synthesiser comes pre-trained in the _textlesslib_ Python library Kharitonov et al. (2022). Yet, recent work on _textless_ speech synthesis Kreuk et al. (2021); Kharitonov et al. (2021a) skip the spectrogram prediction of Tacotron2.0 and directly train a Hifi-Gan model to generate speech from HuBERT units. This latter model shows close to human-level performances. We leave the use of Hifi-Gan instead of Tacotron2.0 for future works on tGSLM. ## 8 Ethical statement tGSLM is a LM that learns to generate speech sentences by predicting its training data. Therefore, tGSLM inherits from ethical concerns associated with text-based LM, speech encoders and speech synthesizers. It is of paramount importance to safeguard against these issues. First, generative text-based LMs are known to repeat stereotypes and biases that belong to the training corpus which can cause a series of harms Chowdhery et al. (2022); Bender et al. (2021). One way to mitigate this is to apply post-processing on generated sentences to detect harmful content. Yet, from what we have heard, tGSLM still struggles to generate sentences that fully make sense, so we do not think that post-processing is required at the moment. Second, if tGSLM is used to continue a speech prompt, the continuation might be inconsistent for accents of underrepresented groups in the training data. 
Indeed, speech systems are known to poorly encode accents and dialects that fall outside the training distribution Riviere et al. (2021). Finally, tGSLM continuations will not preserve any regional accentuation from the prompt, as our model only generates speech in the voice of the single speaker of the LJ dataset.
2305.17496
Toward Cost-effective Adaptive Random Testing: An Approximate Nearest Neighbor Approach
Adaptive Random Testing (ART) enhances the testing effectiveness (including fault-detection capability) of Random Testing (RT) by increasing the diversity of the random test cases throughout the input domain. Many ART algorithms have been investigated such as Fixed-Size-Candidate-Set ART (FSCS) and Restricted Random Testing (RRT), and have been widely used in many practical applications. Despite its popularity, ART suffers from the problem of high computational costs during test-case generation, especially as the number of test cases increases. Although several strategies have been proposed to enhance the ART testing efficiency, such as the forgetting strategy and the k-dimensional tree strategy, these algorithms still face some challenges, including: (1) Although these algorithms can reduce the computation time, their execution costs are still very high, especially when the number of test cases is large; and (2) To achieve low computational costs, they may sacrifice some fault-detection capability. In this paper, we propose an approach based on Approximate Nearest Neighbors (ANNs), called Locality-Sensitive Hashing ART (LSH-ART). When calculating distances among different test inputs, LSH-ART identifies the approximate (not necessarily exact) nearest neighbors for candidates in an efficient way. LSH-ART attempts to balance ART testing effectiveness and efficiency.
Rubing Huang, Chenhui Cui, Junlong Lian, Dave Towey, Weifeng Sun, Haibo Chen
2023-05-27T15:37:13Z
http://arxiv.org/abs/2305.17496v2
# Toward Cost-effective Adaptive Random Testing: An Approximate Nearest Neighbor Approach

###### Abstract

_Adaptive Random Testing_ (ART) enhances the testing effectiveness (including fault-detection capability) of _Random Testing_ (RT) by increasing the diversity of the random test cases throughout the input domain. Many ART algorithms have been investigated according to different criteria, such as _Fixed-Size-Candidate-Set ART_ (FSCS) and _Restricted Random Testing_ (RRT), and have been widely used in many practical applications. Despite its popularity, ART suffers from the problem of high computational costs during test case generation, especially as the number of test cases increases. Although a number of strategies have been proposed to enhance the ART testing efficiency, such as the _forgetting strategy_ and the _k-dimensional tree strategy_, these algorithms still face some challenges, including: (1) Although these algorithms can reduce the computation time, their execution costs are still very high, especially when the number of test cases is large; and (2) To achieve low computational costs, they may sacrifice some fault-detection capability. In this paper, we propose an approach based on _Approximate Nearest Neighbors_ (ANNs), called _Locality Sensitive Hashing ART_ (LSH-ART). When calculating distances among different test inputs, LSH-ART identifies the approximate (not necessarily exact) nearest neighbors for candidates in an efficient way. LSH-ART attempts to balance ART testing effectiveness and efficiency.

Software testing, random testing (RT), adaptive random testing (ART), approximate nearest neighbor (ANN), locality sensitive hashing (LSH), cost-effectiveness.

## I Introduction

Software testing is an essential software quality assurance activity in the software development life-cycle [1, 2]. Among the many software testing techniques, one fundamental approach is _Random Testing_ (RT) [3], which simply selects test cases in a random manner from the _input domain_ (the set of all possible test inputs). RT is considered popular due to the usual ease of implementation, its efficient generation of random test cases, and its ability to provide a quantitative estimation of the reliability of software [4]. RT has been widely applied to different software applications, including: SQL database systems [5, 6]; embedded software systems [7]; Java Just-In-Time (JIT) compilers [8]; security assessment [9]; and .NET error detection [10]. As Arcuri et al. [11] have pointed out, RT should be recommended as the first choice for testing many practical software scenarios. In spite of its popularity, though, RT has been criticized for generally using little, or none, of the available information to support its test case generation. Myers et al. [12], for example, described RT as perhaps the "least effective" testing approach. Many approaches have been proposed to enhance RT fault detection, including one of the most popular, _Adaptive Random Testing_ (ART) [13, 14]. ART is a family of enhanced RT methods that are motivated by observations of the clustering of failure regions (regions of test inputs that can identify software failures) [15, 16, 17, 18, 19]. ART aims to achieve an even spread of random test cases over the input domain, which may deliver better test case diversity than RT [20].
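To make the RT baseline, and the F-measure evaluation criterion discussed next, concrete, the following is a minimal sketch and not code from the paper: the `is_failure` oracle, the domain bounds, and the testing budget are illustrative assumptions.

```python
import random

def random_testing_f_measure(is_failure, low, high, dim=2, max_tests=100_000, seed=0):
    """Plain Random Testing over a numeric input domain [low, high]^dim.
    Returns the F-measure: the number of test cases executed until the first failure is found."""
    rng = random.Random(seed)
    for count in range(1, max_tests + 1):
        test_case = [rng.uniform(low, high) for _ in range(dim)]
        if is_failure(test_case):      # the test oracle decides pass/fail
            return count
    return max_tests                   # no failure found within the budget
```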
Previous studies [14] have shown ART to be more effective than RT, in terms of various evaluation criteria, including: the number of test case executions required to detect the first failure (_F-measure_) [21]; code coverage [22]; and test case distribution [23]. Many different strategies have been adopted for ART, resulting in multiple ART implementations [14], the most well-known and widely used of which is the _Select-Test-From-Candidates Strategy_ (STFCS). STFCS chooses the next test case from a set of random candidates based on some criteria or evaluation of the previously-executed test cases [14]: Each random candidate is compared (in some way) with each previously-executed test case, with the "_best_" candidate being selected as the next test case. Two basic (and popular) STFCS implementations are _Fixed-Size-Candidate-Set ART_ (FSCS) [13] and _Restricted Random Testing_ (RRT) [24]. FSCS-ART chooses one of the candidates as the next test case such that it is farthest away from the already-executed test cases; RRT selects the next test case such that its distance from all previously-executed test cases is greater than a given threshold value. Generating each new test case requires calculation of the distance between each candidate and each executed test case, which represents a high computational overhead, especially when the number of executed test cases becomes large. STFCS involves identifying the nearest neighboring executed test case for each candidate, as part of the decision process to choose the "_best_" one as the next test case. A key part of STFCS, therefore, can be viewed as an instance of finding a _Nearest Neighbor_ (NN) [25]. Various enhanced ART implementations have been proposed to address the STFCS overheads problem, including _forgetting_[26], and a _precise NN_ approach [27]. The forgetting approach involves discarding ("forgetting") some of the previously-executed test cases to reduce the number of distance computations for each candidate. The precise NN approach, in contrast, often uses a tree-index structure (such as a \(k\)_-dimensional tree_, KD-tree [28]) to speed up the search process. Because forgetting does not use all the information of the already-executed test cases, it can be considered to sacrifice some fault-detection capability. The precise NN approach still needs to precisely identify the nearest neighbor for each candidate, which can still be computationally expensive, especially when the input domain dimensionality is high, or the number of executed test cases is large [25]. In this paper, we propose an _Approximate Nearest Neighbor_ (ANN) ART approach using the well-known ANN algorithm _Locality Sensitive Hashing_ (LSH)1[33]: We call this _LSH-ART_. LSH-ART attempts to balance testing effectiveness and efficiency: It makes use of all previously-executed test cases, to maintain fault-detection performance, but retrieves the approximate (not exact) nearest neighbor of each candidate, thus reducing the search cost. Footnote 1: Because the set of executed test cases dynamically increases with ART, either the _dynamic LSH_[29, 30] or _Scalable LSH_ (SLH) [31, 32] approaches need to be used in place of the static LSH: We adopted SLSH. To evaluate our proposed approach, we conducted a series of simulations and empirical studies with 34 real-life programs, comparing with the original ART algorithms and their corresponding enhanced variants. 
The main contributions and findings of this paper are:

* To the best of our knowledge, this is the first paper that proposes an LSH-based approach to enhance ART performance.
* We propose a framework to support LSH-ART, and present two implementations: LSH-FSCS and LSH-RRT.
* LSH-ART maintains comparable testing effectiveness (including fault-detection effectiveness) to the original ART and its variants, sometimes obtaining better results, especially for high dimensions.
* LSH-ART has better testing efficiency, generally incurring lower computational costs (including for test case generation) than the original ART and its variants, especially when the input domain dimensionality is high.
* LSH-ART is much more cost-effective than the original ART, in all scenarios, and outperforms the variants in most scenarios.

The rest of this paper is organized as follows: Section II presents some background information. Section III introduces the proposed approach, including a motivating example, the framework, algorithms, and complexity analyses. Section IV explains the experimental set-up to examine the performance of our proposed approach. Section V presents and analyzes the experimental results, and provides some potential threats to validity. Section VI provides some discussion regarding software failures and faults. Section VII discusses some related work about ART. Section VIII concludes this paper, and identifies some potential future work.

## II Background

In this section, we present some of the preliminary concepts, including an introduction to both ART and ANN.

### _Preliminaries_

Given a faulty software-system under test (SUT), a test case \(t\) is called a _failure-causing input_ if the output or behavior of the SUT, when executed with \(t\), is not as expected, as determined by the _test oracle_ [34, 35, 36]. Generally speaking, two fundamental features can be used to describe the properties of the faulty SUT: the _failure rate_; and the _failure pattern_ [37]. The failure rate is the number of failure-causing inputs as a proportion of all possible inputs; and the failure pattern is the distribution of failure-causing inputs across the input domain, including both the locations and geometric shapes. Before testing, these two features are fixed, but unknown.

Fig. 1: Examples of three failure patterns and sample faulty programs in two-dimensional input domains.

Figure 1 shows examples of the three broad categories of _point_, _strip_, and _block_ failure patterns, as originally identified by Chan et al. [37]: The squares in the upper part of each subfigure represent the two-dimensional input domain boundary; and the black dots, strip, or block represent the failure-causing inputs (the failure pattern). The code snippets in each dashed box in the subfigures represent sample faulty programs that can produce the corresponding failure pattern. Previous studies have noted that block and strip patterns are more commonly encountered than point patterns [15, 16, 17, 18, 19]. Although many different geometric shapes of failure patterns exist, the three broad categories (of point, strip, and block) can generally represent them all: For example, a two-dimensional (2D) failure region may be a circle-like shape, which can be categorized as a block pattern, because a circle-like failure region is also a block; similarly, a narrow (2D) ellipse can be classified as a strip pattern; and so on.
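The code snippets of Fig. 1 are not reproduced in this extraction. As a purely illustrative stand-in (not the paper's actual snippets), the toy program below shows how a single faulty boundary condition yields a contiguous block of failure-causing inputs in a 2-D numeric domain; the domain, the specification, and the seeded fault are all assumptions.

```python
def correct_area(x, y):
    """Specification: inputs in the square [40, 60) x [40, 60) are 'inside'."""
    return "inside" if 40 <= x < 60 and 40 <= y < 60 else "outside"

def faulty_area(x, y):
    """Faulty implementation: a mistaken upper bound on x means that inputs with
    60 <= x < 70 and 40 <= y < 60 are misclassified, so the failure-causing
    inputs form a contiguous rectangle (a block failure pattern)."""
    return "inside" if 40 <= x < 70 and 40 <= y < 60 else "outside"

def is_failure(test_case):
    """Test oracle: a test case fails when the faulty output disagrees with the specification."""
    x, y = test_case
    return faulty_area(x, y) != correct_area(x, y)
```

An oracle of this kind is what the hypothetical `is_failure` parameter in the earlier RT sketch stands for.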
### _Adaptive Random Testing (ART)_

An important observation, made independently by multiple researchers in different areas, is that failure-causing inputs tend to cluster into connected regions: _failure regions_ [15, 16, 17, 18, 19]. Accordingly, if a test case \(t_{1}\) is a failure-causing input, then it is highly probable that its neighbors are also failure-causing inputs; similarly, if \(t_{2}\) is not a failure-causing input, then its neighbors are also likely to not be failure-causing. This is shown in Figure 2. Consider the two candidate test cases \(c_{1}\) and \(c_{2}\) in Figure 2(b). Because \(c_{1}\) is close to the failure-causing input \(t_{1}\), it is considered to have a higher probability of being a failure-causing input. Similarly, because \(c_{2}\) is close to the non-failure-causing input \(t_{2}\), it has a higher probability of being a non-failure-causing input. This leads to the heuristic that a program input far away from non-failure-causing inputs may have a higher probability of causing failure than neighboring test inputs. This has inspired _Adaptive Random Testing_ (ART), which aims to achieve a more diverse, or even spread of test cases across the input domain [13, 20].

ART refers to a family of testing approaches that randomly generate test cases that are evenly spread over the entire input domain [2, 14]. Because there are many approaches to generating these test cases, there are many different ART implementations [14]. In this paper, we focus on the _Select-Test-From-Candidates Strategy_ (STFCS) ART category [14], which has been the most popular and extensively-studied category. STFCS selects the next test case from a set of randomly-generated candidates based on some criteria or evaluation involving the previously-executed test cases. Figure 3 shows the STFCS framework [14], which contains two components: _random-candidate-set-construction_ and _test-case-selection_. STFCS makes use of two sets: one to store the random candidates (the _candidate set_, \(C\)); and one to store the previously-executed test cases (the _executed set_, \(E\)). Two of the most popular ART STFCS algorithms are _Fixed-Size-Candidate-Set ART_ (FSCS) [13] and _Restricted Random Testing_ (RRT) [24], both of which are examined in this paper.

#### Ii-B1 Fixed-Size-Candidate-Set ART (FSCS)

When generating the next test case, FSCS randomly generates \(k\) candidates to form the candidate set \(C\). Each candidate is then evaluated against all the previously-executed test cases -- all elements in \(E\). An element \(c\) from \(C\) will be the "_best_" choice as the next test case if it satisfies the following criterion: \[\forall c^{\prime}\in C,\ \min_{e\in E}dist(c,e)\geq\min_{e\in E}dist(c^{\prime},e), \tag{1}\] where \(dist(c,e)\) is a function that measures the distance between two test inputs \(c\) and \(e\). In other words, assessment of each random candidate \(c^{\prime}\) in \(C\) involves identification of the closest element in \(E\) -- the _Nearest Neighbor_ (NN) of \(c^{\prime}\) in \(E\). Because \(k\) is usually a fixed constant (\(10\) in most cases [38]), each next test case generation only requires the identification of this fixed number of nearest neighbors. However, as the size of \(E\) increases, identification of each nearest neighbor incurs an increasingly large amount of overhead. Therefore, the FSCS time overheads mainly depend on the NN search process.

Fig. 3: Framework pseudocode for STFCS ART [14].

Fig. 2: Motivation of ART.
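A minimal linear-scan sketch of the FSCS selection rule in Eq. (1) is shown below. It is illustrative only: the domain bounds are assumptions, \(k=10\) follows the convention cited above, at least one executed test case is assumed to exist, and the inner scan over every element of \(E\) is exactly the overhead that the approaches discussed later aim to reduce.

```python
import math
import random

def fscs_next_test_case(executed, low, high, dim=2, k=10, rng=random):
    """FSCS (Eq. (1)): among k random candidates, select the one whose nearest
    executed test case is farthest away. The inner min() scans every executed
    test case, which is the source of FSCS's growing per-candidate cost."""
    candidates = [[rng.uniform(low, high) for _ in range(dim)] for _ in range(k)]
    return max(candidates,
               key=lambda c: min(math.dist(c, e) for e in executed))
```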
#### Ii-B2 Restricted Random Testing (RRT)

In contrast to FSCS, RRT continues to generate and check random candidates, one by one, until the first suitable one is identified as the next test case. This means that the number of random candidates is flexible, not fixed, during test case generation. The RRT criterion to determine whether or not a random candidate \(c\) can be selected as the next test case, against \(E\), is defined as: \[\min_{e\in E}dist(c,e)>\sqrt[d]{\frac{\sigma_{d}\times A\times R}{\pi^{\lfloor d/2\rfloor}\times|E|}}, \tag{2}\] where \(d\) is the dimension of the input domain \(\mathcal{D}\); \(A\) is the size of \(\mathcal{D}\); \(R\) is a constant parameter (generally called the _exclusion ratio_ [39]); and \(\sigma_{d}\) is the formula coefficient, which can be written as: \[\sigma_{d}=\left\{\begin{array}{rl}(\sigma_{d-2}\times d)/2,&d>2,\\ 1,&d=2,\\ 1/2,&d=1.\end{array}\right. \tag{3}\] Similar to FSCS, the RRT computational overheads are strongly connected to the speed of identification of each NN, and can increase as the size of \(E\) increases.

### _Approximate Nearest Neighbor (ANN)_

In this section, we first introduce the _Nearest Neighbor_ (NN) problem, and then focus on the _Approximate Nearest Neighbor_ (ANN). A definition of NN is [40, 41]:

**Definition II.1** (_Nearest Neighbor, NN_).: Given a set \(V\) of \(m\) data points in a \(d\)-dimensional space \(\mathbb{R}^{d}\) (i.e., \(V\subset\mathbb{R}^{d}\)), and given a query point \(q\in\mathbb{R}^{d}\), the point \(v\) of \(V\) is the nearest neighbor to \(q\) such that:_ \[\forall v^{\prime}\in V,\ dist(v,q)\leq dist(v^{\prime},q), \tag{4}\] _where \(dist(v,q)\) gives the distance between points \(v\) and \(q\)._

Although the NN problem can find the exact nearest neighbors, its query efficiency (for both space and query time) can be low, especially when the dimensionality is high. An alternative to NN is to query _approximate_ (not exact) nearest neighbors, as in the Approximate NN (ANN) problem: ANN may enable more efficient performance. The concept of ANN can be defined as [40, 41]:

**Definition II.2** (_Approximate Nearest Neighbor, ANN_).: Given a set \(V\) of data points in a metric space \(\mathbb{R}^{d}\) (i.e., \(V\subset\mathbb{R}^{d}\)), and given a query point \(q\in\mathbb{R}^{d}\), a point \(v\) from \(V\) is an approximate nearest neighbor satisfying the following condition:_ \[dist(v,q)\leq(1+\epsilon)dist(v^{*},q), \tag{5}\] _where \(v^{*}\) is the exact or real nearest neighbor to \(q\), and \(\epsilon\) is an approximation factor parameter (\(\epsilon>0\))._

ANN algorithms aim to find a point whose distance from the query point is at most \((1+\epsilon)\) times the distance from the query to the actual nearest neighbor, where \((1+\epsilon)\) is generally called the _approximation factor_ [42].

#### Ii-C1 Locality Sensitive Hashing (LSH)

There are many techniques to solve the ANN problem, one of which is _Locality Sensitive Hashing_ (LSH) [33]. The basic principle of LSH is to make use of a hash function from the same hash family for placing each data point into a bucket, such that the probability of collision is much higher for points that are close to each other than for those that are far apart. An ANN search can be carried out as follows: (1) Find the bucket to which a query point is hashed and choose the data points in these buckets as candidates; and (2) Rank candidates according to their actual distance to the query point to find the nearest neighbor.
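The two-step query just described can be illustrated with the toy, single-table index below. It is only a sketch: it uses one hash of the form \(h(\mathbf{v})=\lfloor(\mathbf{a}\cdot\mathbf{v}+b)/w\rfloor\), anticipating Eq. (7), whereas a practical E\({}^{2}\)LSH index concatenates \(M\) such hashes and typically maintains several tables; the quantization width \(w\) and the Gaussian projection are illustrative choices.

```python
import math
import random
from collections import defaultdict

class SimpleLSHIndex:
    """Toy single-table LSH index illustrating the bucket-then-rank ANN query."""

    def __init__(self, dim, w=4.0, seed=0):
        rng = random.Random(seed)
        self.a = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # 2-stable (Gaussian) projection
        self.b = rng.uniform(0.0, w)
        self.w = w
        self.buckets = defaultdict(list)

    def _key(self, v):
        return math.floor((sum(ai * vi for ai, vi in zip(self.a, v)) + self.b) / self.w)

    def insert(self, v):
        self.buckets[self._key(v)].append(v)

    def query(self, q):
        # Step (1): candidates are the points hashed to the query's bucket.
        candidates = self.buckets.get(self._key(q), [])
        if not candidates:
            return None
        # Step (2): rank candidates by their actual distance to the query.
        return min(candidates, key=lambda v: math.dist(v, q))
```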
LSH relies on the existence of LSH functions. Given \(\mathcal{H}\), a family of hash functions mapping \(\mathbb{R}^{d}\) to the integer set \(\mathbb{N}\) (i.e., \(\mathcal{H}=\{h:\mathbb{R}^{d}\rightarrow\mathbb{N}\}\)), then a definition of LSH [33] is described as follows: **Definition II.3** (_Locality Sensitive Hashing, LSH)_.: A family \(\mathcal{H}\) is called \((r_{1},r_{2},p_{1},p_{2},)\)-sensitive if, for any two points \(v,q\in\mathbb{R}^{d}\), \[\left\{\begin{array}{rl}\text{if }dist(v,q)\leq r_{1},\text{ then }\text{Pr}_{\mathcal{H}}[h(q)=h(v)]\geq p_{1},\\ \text{if }dist(v,q)\geq r_{2},\text{ then }\text{Pr}_{\mathcal{H}}[h(q)=h(v)]\leq p_{2}, \end{array}\right. \tag{6}\] where \(h\) is a function selected randomly (with uniform distribution) from \(\mathcal{H}\); \(\text{Pr}_{\mathcal{H}}[h(q)=h(v)]\) represents the probability that \(h(q)=h(v)\); and \(r_{1}\), \(r_{2}\), \(p_{1}\), and \(p_{2}\) are parameters satisfying \(p_{1}>p_{2}\) and \(r_{1}<r_{2}\). In this study, we adopted an LSH family based on \(p\)-stable distributions2 (also called E\({}^{2}\)LSH) [43], that works for all \(p\in(0,2]\). Formally, each hash function \(h_{\mathbf{a},b}(\mathbf{v})\): \(\mathbb{R}^{d}\rightarrow\mathbb{N}\) maps a \(d\)-dimensional vector \(\mathbf{v}\) onto the set of integers, which is defined as: Footnote 2: For example, a _Cauchy distribution_, defined by the density function \(g(x)=\frac{1}{\sqrt{2\pi}}\frac{1}{1+x^{2}}\), is 1-stable; however, a _Gaussian (normal) distribution_, defined by the density function \(g(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\), is 2-stable [43]. \[h_{\mathbf{a},b}(\mathbf{v})=\left\lfloor\frac{\mathbf{a}\cdot\mathbf{v}+b}{w}\right\rfloor, \tag{7}\] where \(\mathbf{a}\) is a random \(d\)-dimensional vector each entry in which is selected independently from a \(p\)-stable distribution; \(w\) is a fixed parameter for the entire family (\(w\) acts as a quantization width); and \(b\) is a real number selected uniformly from the range \([0,w)\). The \(h_{\mathbf{a},b}(\mathbf{v})\) corresponds to the composition of a projection to a random direction with a random offset and a quantization by a constant. According to Eq. (7), the collision probability between two points \(\mathbf{q}\) and \(\mathbf{v}\) can be calculated as: \[\text{Pr}_{\mathcal{H}}[h(\mathbf{q})=h(\mathbf{v})]=\int_{0}^{w}\frac{1}{\alpha}\cdot g \bigg{(}\frac{x}{\alpha}\bigg{)}\cdot\bigg{(}1-\frac{x}{w}\bigg{)}dx, \tag{8}\] where \(\alpha=dist(\mathbf{q},\mathbf{v})\), and \(g(x)\) is the density function. It can be seen that with the increase of \(\alpha\), the collision probability decreases monotonically. To make locality-sensitiveness actually useful, LSH employs a function family \(\mathcal{F}\) (a hash table) by concatenating \(M\) hash functions from \(\mathcal{H}\), i.e., \(\mathcal{F}=\{f(\mathbf{v}):\mathbb{R}^{d}\rightarrow\mathbb{N}^{M}\}\), such that each \(f\in\mathcal{F}\) can be represented as: \(f(\mathbf{v})=(h_{1}(\mathbf{v}),h_{2}(\mathbf{v}),\cdots,h_{M}(\mathbf{v}))\), where \(h_{i}\in\mathcal{H}\)\((1\leq i\leq M)\). #### Ii-C2 Scalable Locality Sensitive Hashing Because traditional LSH (including E\({}^{2}\)LSH) deals with ANN for static data sets, _Scalable Locality Sensitive Hashing_ (SLSH) [31, 32] has been proposed for dynamic data sets. 
In this study, we adopted the SLSH from Hu & Jiang [31], which adjusts the original E\({}^{2}\)LSH in the following two ways: * Scalable hash family: SLSH extends the hash family \(\mathcal{H}\) of E\({}^{2}\)LSH by dynamically adjusting the value of \(w\): If \(\mathcal{H}(w)\) is a scalable hash family, and \(\forall\mathbf{v}\in\mathbb{R}^{d}\), \(\mathcal{H}(w)=\{h(\mathbf{v},w):\mathbb{R}^{d}\rightarrow\mathbb{N}\}\), then \(\mathcal{H}(w)=\{h_{1}(\mathbf{v},w_{1}),h_{2}(\mathbf{v},w_{2}),\cdots,h_{M}(\mathbf{v},w_ {M})\}\). * Scalable indexing structure: According to the scalable hash family, a scalable hash table is designed as \(\mathcal{T}(\mathbf{w})=\{t(\mathbf{v},\mathbf{w}):\mathbb{R}^{d}\rightarrow\mathbb{N}^{M}\}\), from which an individual hash table \(t(\mathbf{v},\mathbf{w})\) can be described as \(t(\mathbf{v},\mathbf{w})=(h_{1}(\mathbf{v},w_{1}),h_{2}(\mathbf{v},w_{2}),\cdots,h_{M}(\mathbf{v}, w_{M}))\), where \(w_{1}>w_{2}>\cdots>w_{M}\). Unlike LSH, SLSH adopts a hierarchical indexing structure to organize \(M\) hash functions. SLSH introduces a new parameter \(m\) to configure the upper bound of the hash bucket capacity. If the number of collision elements in a hash bucket exceeds the limit \(m\) (this _ordinary hash bucket_ becomes a _super hash bucket_), then these elements will be re-hashed to the next level of hash buckets. In other words, each ordinary hash bucket stores data points, and each super hash bucket stores some pointers to other hash buckets. Figure 4 presents an example of the scalable indexing structure for SLSH, where it can be seen that different hash tables may have different numbers of hash functions. ART generates the new test cases based on a dynamic set of previously executed test cases [13, 14]. Because SLSH is a scalable version of LSH, for dynamic data sets, it is more suitable for ART. For ease of description, unless explicitly stated otherwise, LSH in the following sections refers to SLSH. ## III Proposed LSH-ART Approach In this section, we propose a novel approach for ART STFCS based on the LSH version of ANN: _LSH based ART_ (LSH-ART). We first introduce the motivation of this work, and then provide a framework to support LSH-ART. We also present two detailed algorithms to implement the framework, and present their complexity analyses. ### _A Motivating Example_ As discussed in Sections II-B1 and II-B2, the computational overheads of both FSCS and RRT depend strongly on the NN search process, especially when the number of already executed test cases is large. This is because it is necessary to consider all executed test cases in the NN search process. To reduce the computational overheads, therefore, it is necessary to reduce the number of executed test cases that need to be processed. One way to do this is to sacrifice an amount of accuracy in the NN search, by adopting the ANN search instead. We next present a motivating example to illustrate this. Figure 5 shows a situation where eight test cases have already been executed, \(t_{1},t_{2},\cdots,t_{8}\). In FSCS (and somewhat similarly for RRT), nearest (executed test case) neighbors for each of the four candidate test cases (\(c_{1},c_{2},c_{3}\), and \(c_{4}\)) must be identified. To calculate the NN of \(c_{1}\), for example, the distance from \(c_{1}\) to each executed test case \(t_{i}\) (\(i=1,2,\cdots,8\)) is calculated, producing eight distances. With an ANN (instead of exact NN) search, both \(t_{1}\) and \(t_{2}\) may be selected as the (approximate) NN of \(c_{1}\). 
(If \(t_{1}\) is selected as the NN for \(c_{1}\), then the ANN search process is equivalent to the (exact) NN search process.) Although \(t_{2}\) is slightly farther away from \(c_{1}\) than \(t_{1}\), it is still close. The \(c_{2}\) candidate has a similar situation with \(t_{3}\) and \(t_{4}\), but the \(c_{3}\) and \(c_{4}\) candidates do not face this: \(c_{3}\) and \(c_{4}\) may have only one element for both NN and ANN search processes (\(t_{5}\) for \(c_{3}\); and \(t_{6}\) for \(c_{4}\)), resulting in that element being selected as the NN/ANN. ### _Framework_ Figure 6 shows the proposed LSH-ART framework, which consists of five components: (1) Constructing random candidates; (2) Searching the hash bucket; (3) Identifying the nearest neighbor (NN); (4) Determining a new test case; and (5) Inserting the new test case into the LSH tree. We next explain each component. Component (1) involves constructing/selecting a random candidate (or some random candidates) from the input domain. Component (2) consists of hashing each random candidate in the corresponding ordinary hash bucket. The hashing process Fig. 4: An example of an SLSH indexing structure. Fig. 5: A motivating example. of each candidate \(c\) starts with the root of the LSH indexing structure, and then checks its hash bucket type: If the current hash bucket \(\mathcal{B}\) is an ordinary hash bucket, \(\mathcal{B}\) will be returned for \(c\) (i.e., \(c\) should be included in \(\mathcal{B}\)); however, if \(\mathcal{B}\) is a super hash bucket, then the hashing process is repeated to retrieve the next levels, according to its hash function, until an ordinary hash bucket is found for \(c\). For each candidate \(c\), when an ordinary hash bucket \(\mathcal{B}\) is searched, it is believed that all elements in \(\mathcal{B}\) are considered as potential ANNs for \(c\). Then, Component (3) identifies the NN within this hash bucket \(\mathcal{B}\) (compared with all executed test cases, however, this NN is approximate). Component (4) then determines which candidate should be selected as the next test case (i.e., the new test case), according to some criterion. Once a new test case is determined, Component (5) inserts this test case in the LSH indexing tree: If the corresponding ordinary hash bucket has not exceeded its limit, then this new test case is directly inserted into the hash bucket; otherwise, this ordinary hash bucket is promoted into a super hash bucket (involving an _updating_ process), and its elements are rehashed in the ordinary hash buckets in the next level. ### _Algorithms_ In this study, we focus on the STFCS category of ART, proposing an STFCS version of LSH-ART, _LSH-based STFCS_ (LSH-STFCS). We first present the LSH-STFCS algorithm, and then discuss two well-known implementations of STFCS ART (FSCS [13] and RRT [24]). We then present the LSH-STFCS versions of FSCS and RRT: _LSH-based FSCS_ (LSH-FSCS); and _LSH-based RRT_ (LSH-RRT). #### Iii-C1 Lsh-Stfcs Algorithm 1 shows a pseudocode description of LSH-STFCS. In the initialization stage (Lines 1 to 3), \(C\) is used to store random candidates, \(E\) is used to store the already-executed test cases, and \(\mathcal{T}\) is an LSH table (or an LSH indexing tree): \(C\) and \(E\) are initialized as empty sets; and \(\mathcal{T}\) is initialized as a null tree. The algorithm randomly constructs a candidate \(t\) from the input domain \(\mathcal{D}\) as the next test case, executes it, and then adds it into \(E\) (Lines 4 and 5). 
The algorithm repeats the generation process until the stopping condition is satisfied (Lines 6 to 18). When a random candidate \(t\) is selected as the next test case, it is inserted into the LSH indexing tree \(\mathcal{T}\) using the function **InsertTest**(\(\mathcal{T},t\)) (Component (5) of the framework) (Line 7). A set of random candidates is constructed from the input domain \(\mathcal{D}\) using the function **ConstructCandidate**(\(\mathcal{D}\)) (Component (1) of the framework). The number of random candidates can be either fixed or flexible. For each candidate \(c\), the algorithm tries to find a potential ordinary hash bucket \(\mathcal{B}\) to store \(c\), by searching the hash table \(\mathcal{T}\) (function **SearchHashBucket**(\(\mathcal{T},c\)), for Component (2)). Then, the function **IdentifyNN**(\(\mathcal{B},c\)) identifies the NN of \(c\) in \(\mathcal{B}\) (Component (3)). A new test case is determined, according to the NN \(s\) in \(S\) of each candidate \(c\) in \(C\), using the function **DetermineNewTest**(\(C,S\)) (Component (4)). Because Components (1) and (4) can be implemented using different strategies, we discuss them separately in the following sections. Here, we discuss the common components, Components (2), (3), and (5) (the functions **SearchHashBucket**(\(\mathcal{T},c\)), **IdentifyNN**(\(\mathcal{B},c\)), and **InsertTest**(\(\mathcal{T},t\))).

```
Input : Input domain \(\mathcal{D}\).
Output : Executed test cases \(E\).
1   \(C\leftarrow\) EmptySet()
2   \(E\leftarrow\) EmptySet()
3   \(\mathcal{T}\leftarrow\) CreateHashBucket()
4   \(t\leftarrow\) ConstructCandidate(\(\mathcal{D}\))
5   Execute the test case \(t\)
6   \(E\leftarrow\) Append(\(E,t\))
7   while the stopping condition is not satisfied do
8       \(\mathcal{T}\leftarrow\) InsertTest(\(\mathcal{T},t\))                \(\triangleright\) Component (5)
9       \(C\leftarrow\bigcup\{\) ConstructCandidate(\(\mathcal{D}\)) \(\}\)     \(\triangleright\) Component (1)
10      \(S\leftarrow\) EmptySet()
11      foreach \(c\in C\) do
12          \(\mathcal{B}\leftarrow\) SearchHashBucket(\(\mathcal{T},c\))      \(\triangleright\) Component (2)
13          \(s\leftarrow\) IdentifyNN(\(\mathcal{B},c\))                  \(\triangleright\) Component (3)
14          \(S\leftarrow S\bigcup\{s\}\)
15
16      end foreach
17      \(t\leftarrow\) DetermineNewTest(\(C,S\))                 \(\triangleright\) Component (4)
18      Execute the test case \(t\)
19      \(E\leftarrow\) Append(\(E,t\))
20  end while
    return \(E\)
```
**Algorithm 1** LSH-STFCS Pseudocode

Fig. 6: LSH-ART Framework.

Algorithm 2 contains the pseudocode for the function **SearchHashBucket**(\(\mathcal{T},c\)): the algorithm repeatedly checks the hash bucket by hashing the candidate test case \(c\) until an ordinary hash bucket is found, in which \(c\) will be stored. Algorithm 3 has the implementation of the function **IdentifyNN**(\(\mathcal{B},c\)), which is used to identify the nearest neighbor in an ordinary hash bucket \(\mathcal{B}\) of a candidate \(c\), by calculating the distances between \(c\) and each element \(e\in\mathcal{B}\).
```
Input : LSH indexing tree \(\mathcal{T}\), candidate test case \(c\).
Output : Hash bucket \(\mathcal{B}\).
1   \(\mathcal{B}\leftarrow\mathcal{T}\)
2   \(flag\leftarrow\) true
3   while \(flag\) do
4       \(flag\leftarrow\) IsSuperBucket(\(\mathcal{B}\))
5       if \(flag\) then
6           \(\mathcal{B}\leftarrow\) GetHashBucket(\(\mathcal{B},c\))
7       else
8           \(flag\leftarrow\) false
9
10      end if
11
12  end while
13  return \(\mathcal{B}\)
```
**Algorithm 2** SearchHashBucket(\(\mathcal{T},c\))

Algorithm 4 lists the pseudocode for the function **InsertTest**(\(\mathcal{T},t\)), which starts by finding the ordinary hash bucket \(\mathcal{B}\) for \(t\), using the function **SearchHashBucket**(\(\mathcal{T},t\)). The algorithm then checks whether or not \(\mathcal{B}\) needs to be updated, according to its capacity limit threshold \(m\) (defined in advance). If \(\mathcal{B}\) is not already at capacity, then \(t\) is inserted directly; otherwise, \(\mathcal{B}\) is promoted into a super hash bucket, and its elements are re-hashed into ordinary hash buckets. Finally, the original \(\mathcal{B}\) is replaced in \(\mathcal{T}\).

```
Input : LSH indexing tree \(\mathcal{T}\), test case \(t\).
Output : Updated LSH indexing tree \(\mathcal{T}\).
1   \(\mathcal{B}\leftarrow\) SearchHashBucket(\(\mathcal{T},t\))
2   \(\mathcal{B}^{\prime}\leftarrow\mathcal{B}\)
3   if \(|\mathcal{B}^{\prime}|+1<m\) then
4       /* \(m\) is the capacity of the ordinary hash bucket */
5       \(\mathcal{B}^{\prime}\leftarrow\) Append(\(\mathcal{B}^{\prime},t\))
6   else
7       \(\mathcal{B}^{\prime}\leftarrow\) CreateHashTable()
8       \(\mathcal{B}^{\prime}\leftarrow\) HashElements(\(\mathcal{B}^{\prime},\mathcal{B}\))
9   end if
10  \(\mathcal{T}\leftarrow\) ReplaceHashBucket(\(\mathcal{T},\mathcal{B}^{\prime},\mathcal{B}\))
    return \(\mathcal{T}\)
```
**Algorithm 4** InsertTest(\(\mathcal{T},t\))

#### Iii-C2 LSH-FSCS

To maintain the failure-finding benefits of the original FSCS, LSH-FSCS keeps the main FSCS processes, changing only those related to storing previously-executed test cases, and searching for each candidate's NN. Accordingly, the LSH-FSCS Component (1) involves construction of the fixed number (\(k\)) of randomly-generated candidates from the input domain (according to a uniform distribution). The LSH-FSCS Component (4) selects as the next test case that candidate \(c\) whose distance to its NN is greatest, in terms of Eq. (1).

#### Iii-C3 LSH-RRT

As with LSH-FSCS, LSH-RRT also retains the core original processes of RRT, changing only how executed test cases are stored, and how the NNs for each candidate are identified. As discussed in Section II-B2, RRT checks successive randomly-generated candidates to determine whether or not they satisfy the selection criterion, in terms of Eq. (2). In other words, RRT's generation of the next test case requires the construction and checking of a flexibly-sized set of random candidates. More specifically, LSH-RRT uses Component (1) to construct a random candidate and then Component (4) to check whether or not it is valid. This process is repeated until a satisfactory new test case is generated.

### _Complexity Analysis_

This section presents a brief investigation of the LSH-ART time and space complexity, through a formal mathematical analysis. For ease of description, \(n\) denotes the number of \(d\)-dimensional test cases to be generated, and \(\mu_{n}\) is an upper bound on the number of hash functions used to generate the \(n\) test cases. As discussed by Hu & Jiang [31], \(\mu_{n}\) is generally equal to \(\log(n)\).
In addition, \(\psi_{n}\) denotes the depth of the LSH indexing tree when generating the \(n\) test cases, where \(\psi_{n}\leq\mu_{n}\). Finally, an analysis of time and space complexity is given for different ART implementations. #### Iii-D1 Time Complexity The time complexity involved in generating the \(i\)-th (\(1\leq i\leq n\)) test case from \(k\) random candidates \(C=\{c_{1},c_{2},\cdots,c_{k}\}\) using LSH-ART is as follows. As discussed in Section III-C, each candidate \(c\in C\) requires the following two steps that incur computational costs: (1) Searching for the ordinary bucket \(\mathcal{B}\) in the LSH indexing tree (or the LSH table) to (potentially) store \(c\); and (2) Identifying \(c\)'s NN in \(\mathcal{B}\). LSH-ART then needs to complete two further tasks: (3) Determining the next test case \(t\); and (4) Inserting \(t\) into the hash table. The worst case scenario for Step (1) is that the targeted ordinary bucket \(\mathcal{B}_{i}\) is in the \(\psi_{i}\)-th level (the deepest level) of the LSH indexing tree, resulting in an \(O(\psi_{i})\) order of time complexity for each candidate. For Step (2), because each ordinary bucket \(\mathcal{B}\) contains at most \(m\) previously-executed test cases, a maximum time complexity of \(O(d\times m)\) is needed to calculate the distance between each candidate and the \(m\) executed test cases, and a further complexity of \(O(m)\) to identify its NN. Therefore, a time complexity of \(O(k\times(\psi_{i}+d\times m+m))\) is required, for \(k\) candidates, for (1) and (2). Step (3) involves LSH-ART choosing an element from \(k\) candidates as the next test case, resulting in a time complexity of \(O(k)\). For Step (4), LSH-ART requires a maximum time complexity of \(O(\psi_{i})\) to insert the new test case into the ordinary bucket \(\mathcal{B}\). If \(\mathcal{B}_{i}\) exceeds its maximum capacity, then it will be upgraded into a super bucket, and all elements will be re-hashed into different ordinary buckets, resulting in a time complexity of \(O(m)\). In total, the worst time complexity for generating the \(i\)-th (\(1\leq i\leq n\)) test case can be described as \(O(k\times(\psi_{i}+d\times m+m)+k+\psi_{i}+m)\). In conclusion, when generating \(n\) test cases, the worst order of time complexity of LSH-ART can be written as: \(O\)(LSH-ART) \[=O\Bigg{(}\sum_{i=1}^{n}\Big{(}k\times(\psi_{i}+d\times m+m)+k+ \psi_{i}+m\Big{)}\Bigg{)}\] \[\leq O\Bigg{(}\sum_{i=1}^{n}\Big{(}k\times(\mu_{i}+d\times m+m)+k +\mu_{i}+m\Big{)}\Bigg{)}\] \[\leq O\Bigg{(}\sum_{i=1}^{n}\Big{(}k\times(\log i+d\times m+m)+k +\log i+m\Big{)}\Bigg{)}\] \[=O\Big{(}k\times(d\times m+m+1)+m\Big{)}+O\Bigg{(}(k+1)\times \sum_{i=1}^{n}\log i\Bigg{)}\] \[=O(k\times d\times m)+O\Big{(}k\times\log n!\Big{)}\] \[<O(k\times d\times m)+O\Big{(}k\times n\times\log n\Big{)}\] \[=O(k\times d\times m)+O(k\times n\times\log n). \tag{9}\] The input domain dimensionality (\(d\)) and the maximum size of the ordinary bucket (\(m\)) are constants, set before testing begins. The number of random candidates (\(k\)) is also a constant in LSH-FSCS, though not in LSH-RRT. As explained by Mayer & Schneckenburger [44], however, the average number of random candidates for RRT is probably logarithmic to the number of previously executed test cases. Because the maximum size of each ordinary bucket is equal to \(m\), thus the LSH-RRT \(k\) is, at most, approximately \(\log m\), which can be also considered a constant. 
In summary, therefore, the worst order of time complexity of LSH-FSCS is \(O(k\cdot n\cdot\log n)\), and that of LSH-RRT is \(O(\log m\cdot n\cdot\log n)\). #### Iii-B2 Space Complexity LSH-ART requires memory space to store (1) all \(n\) executed test cases, (2) all \(k\) candidates, and (3) all hash buckets (both super and ordinary) of the SLSH indexing structure. Obviously, the executed test cases need an order of space complexity of \(O(n\times d)\); while the candidate test cases need an order of space complexity of \(O(k\times d)\). For the hash buckets, in the worst case scenario -- when each ordinary hash bucket contains only one test case, and all buckets are in the \(\mu_{n}\)-th level -- the order of space complexity is \(O(n\times\psi_{n})\). Therefore, the worst order of space complexity for LSH-ART can be described as: \[O\text{(LSH-ART)} =O(n\times d+k\times d+n\times\psi_{n})\] \[\leq O(n\times d+k\times d+n\times\mu_{n})\] \[=O(n\times d+k\times d+n\times\log n) \tag{10}\] Because the input domain dimension \(d\) is determined before LSH-ART is applied, \(d\) is a constant. In addition, \(k\) is also a constant in both FSCS and RRT versions of LSH-ART. Therefore, the worst order of LSH-ART space complexity is \(O(n\cdot\log n)\). #### Iii-B3 Complexity Comparisons The time complexity of some ART implementations has been examined in previous studies [45, 46, 27, 44]. Chen et al. [45], for example, reported on the order of time complexity of the original FSCS being equal to \(O(n^{2})\), when generating \(n\) test cases. Mayer [44] showed the time complexity of the original RRT to be of the order of \(n^{2}\log n\). Mao et al. [46, 27] gave the order of time complexity for FSCS with distance-aware forgetting as \(O(n)\), and for FSCS with KD-tree as \(O(n\cdot\log n)\). To the best of our knowledge, however, there are no time or space complexity analyses available for the other ART implementations. In this paper, therefore, we provide an analysis of time and space complexity for these ART implementations: Table I presents a detailed comparison of these complexities, for different ART implementations, when generating \(n\) test cases. In the table, \(d\) is the dimension of the input domain; and \(k\), \(\lambda\), \(\tau\), and \(m\) the constant parameters. ## IV Experimental Frameworks In this section, we first present the research questions related to the performance of our proposed approach, and then examine the experiments we conducted to answer them, from the perspectives of testing effectiveness, efficiency, and cost-effectiveness. ### _Research Questions_ A goal of LSH-ART is to reduce the computational overheads of the original ART (FSCS [13] and RRT [24]) methods. Therefore, examination of the test case generation time is necessary. In addition to reducing the computational costs, LSH-ART also aims to maintain comparable fault-detection, thus delivering a high testing cost-effectiveness. The LSH-ART fault-detection performance, thus, also needs to be examined, under different scenarios. Unlike some other ART enhancements and variants, such as forgetting [26], LSH-ART does not discard any test case information, and may thus achieve better fault-detection, while maintaining comparable computational cost reductions. Similarly, because LSH-ART implements ART with an ANN search, it should have less computational overheads than an NN search (such as with a KD-tree ART, in KDFC-ART [27]), while hopefully maintaining comparable fault-detection effectiveness. 
Based on this, we designed the following three research questions: **RQ1:**: [Effectiveness] _How effective is LSH-ART at identifying software failures?_ **RQ1.1:**: _How effective is LSH-ART at detecting the first software failure?_ **RQ1.2:**: _How effective is LSH-ART at triggering software failures when running a fixed number of test cases?_ **RQ2:**: [Efficiency] _How efficient is LSH-ART at generating test cases?_ **RQ3:**: [Cost-effectiveness] _How cost-effective is LSH-ART at revealing software failures?_ For each research question, LSH-ART was compared with the original ART algorithms and some enhanced variants. ### _Independent Variables_ We focused on the test case generation methods and their parameters as the independent variables. #### Iv-B1 Test Case Generation Methods Because LSH-ART aims to enhance two well-known versions of STFCS ART, FSCS and RRT, both LSH-FSCS and LSH-RRT were chosen for this variable; as were the original FSCS and RRT (as baselines for comparisons). We also selected another three types of enhanced variants of both FSCS and RRT [26, 27, 46]. The first enhancement strategy was _forgetting_[26], which attempts to discard some executed test cases during the test case generation process, to reduce computation time. In this study, we considered two versions of forgetting, _Random Forgetting_ (RF) and _Consecutive Retention_ (CR) [26]. When the number of already-executed test cases exceeds a pre-defined parameter \(\lambda\), RF randomly selects \(\lambda\) executed test cases for distance comparisons, whereas CR uses only the most recent \(\lambda\) executed test cases. The second enhancement strategy is a _Distance-aware forgetting strategy_ (DF) [46], which tries to partition the input domain into some disjoint subdomains, and then to discard some distance calculations to reduce the computational overheads. The final enhancement strategy takes advantage of a _KD-tree_ structure to store the executed test cases (which corresponds to the _KD-tree strategy_), thus speeding up the precise NN search. Overall, for each of FSCS and RRT, in addition to the original ART strategy, we also applied five others, resulting in six ART test case generation methods in the experiments: We use _ART_, _RF_, _CR_, _DF_, _KD_, and _LSH_ to represent these six strategies. We also compared our proposed method with RT. #### Iv-B2 Test Case Generation Method Parameters Each ART algorithm has a number of associated parameters, which are considered as constants in the experiments. We used the recommended values for these parameters as identified in their respective studies. For all ART methods, because we used numerical input domains in this study, the Euclidean distance was used to measure the distance between two test cases. FSCS (both the original and its enhanced variants) constructs a fixed number (\(k\)) of random candidates to generate each test case: As recommended by Chen et al. [38], \(k\) was set to 10. Similarly, RRT and its variants require that the exclusion ratio, \(R\) in Eq. (2), be set in advance: Following previous studies [47, 48, 46], \(R\) was set to 0.75. LSH-ART also has parameters that need to be set before testing, including the quantization width \(w\) and the hash bucket capacity \(m\). 
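Before detailing how these parameters were chosen, the two forgetting schemes introduced in Section IV-B1 can be made concrete. The sketch below shows only their selection rules (which executed test cases are retained for distance comparisons); the function names and the list-based representation are assumptions for illustration, not the original implementation.

```python
import random


def rf_subset(executed, lam, rng=random):
    """Random Forgetting: once more than `lam` test cases have been executed,
    compare each candidate against a random sample of `lam` of them."""
    if len(executed) <= lam:
        return list(executed)
    return rng.sample(executed, lam)


def cr_subset(executed, lam):
    """Consecutive Retention: compare each candidate against only the
    `lam` most recently executed test cases."""
    return list(executed[-lam:])
```

In both cases, at most \(\lambda\) executed test cases are compared against each candidate, which is what bounds the per-candidate distance calculations for these variants.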
The quantization width \(w\) should be neither very large nor very small -- if \(w\) is very large, all data points could be assigned to the same hash bucket; however, if it is very small, a large number of hash buckets could be created in the hash table, potentially resulting in each bucket having only one data point. In this study, we set the collision probability as \(p=0.9\), and then made use of the inverse of Eq. (8) to get the value of \(\alpha\). If \(dist(\mathbf{q},\mathbf{v})\) is given, then the lower bound of \(w\) can be calculated. For the lower bound of \(w\) to be as large as possible, \(dist(\mathbf{q},\mathbf{v})\) should also be as large as possible, which means that \(\mathbf{q}\) and \(\mathbf{v}\) are two data points that are as far apart as possible in the input domain \(\mathcal{D}\). In addition, when an ordinary hash bucket is promoted into a super hash bucket, all elements need to be re-hashed with a different \(w\). To make the next level of the hash function more powerful, \(w\) should be decreased: \(w_{i}>w_{j}\) for \(i<j\). Following previous studies [31], \(w_{i+1}=w_{i}/2\) for any \(i\geq 1\). If the capacity of the ordinary hash bucket, \(m\), is very large, then LSH-ART degrades to the original ART; however, if it is very small, then data points with a high collision probability may be assigned into different hash buckets. Because of the lack of guidelines for how to choose an appropriate value of \(m\)[31], we conducted a simulation study to examine its impact on LSH-ART performance, varying \(m\) from 10 to 100 in increments of 10. We observed that larger values of \(m\) resulted in fewer test cases being required to reveal the first failure (_the F-measure_), but with little difference for \(m\geq 60\). We therefore set \(m=60\) in the experiments. The forgetting strategy [26] also has few guidelines regarding the choice of value for \(\lambda\). However, because \(\lambda\) restricts the number of executed test cases used in distance calculations, it is similar to the LSH-ART parameter \(m\). To ensure fair comparisons, therefore, we also set \(\lambda=60\). Two parameters also need to be set for the distance-aware forgetting strategy [46]: \(p_{0}\) is the initial partition number in each dimension; and \(\tau\) is the pre-set threshold for the average number of test cases in a cell. As suggested by Mao et al. [46], we set \(p_{0}\) to 3, and \(\tau\) to 5. Because the DF strategy may not always provide an exact nearest neighbor, due to the parameter configurations, it can be considered an ANN approach. We choose the most efficient KD-tree strategy version [27], which, through parameter design and configuration, can also implement an ANN search. ### _Dependent Variables_ According to the research questions, there are three dependent variables, relating to testing effectiveness; efficiency; and cost-effectiveness. #### Iv-C1 Testing Effectiveness The dependent variable for **RQ1** (for both **RQ1.1** and **RQ1.2**) is the metric to evaluate the fault-detection effectiveness. (1) For **RQ1.1**, we adopted the _F-measure_[38], which is defined as the expected number of test case executions required before detecting the first software failure. When a software failure is identified, as explained by Chen et al. [38], it is common to stop testing and commence the debugging process. Therefore, F-measure is realistic from a practical testing perspective. 
In addition, most ART approaches generally generate test cases one at a time [14], which means that F-measure is an appropriate measure for evaluating ART. Lower F-measure values indicate better testing performances. Given a failure rate \(\theta\), the F-measure of RT (according to uniform distribution), is theoretically equal to \(\theta^{-1}\)[38]. In this study, we also use the _F-ratio_ to denote the ratio of the F-measure for a method \(\mathcal{M}\) to RT's theoretical F-measure, thus showing the F-measure improvement of \(\mathcal{M}\) over RT: An F-ratio of less than 1.0 indicates that \(\mathcal{M}\) requires fewer test cases than RT to detect the first failure (i.e., has better testing effectiveness); in contrast, an F-ratio greater than 1.0 indicates that RT performs better than \(\mathcal{M}\). (2) For **RQ1.2**, we adopted the _P-measure_[49], which is defined as the probability of a given test set detecting at least one failure. Although the P-measure may appear less practical than the F-measure, it has been widely used in many testing scenarios, especially in automated software testing [50]. Another evaluation metric is the _E-measure_[49], which is defined as the expected number of failures to be detected by a given test set. Similar to the P-measure, the E-measure has also been applied to many automated software testing scenarios [50]. However, because failure regions may tend to cluster [15, 16, 17, 18, 19], higher E-measure values do not imply more faults or more distinct failures [50]. Therefore, the P-measure may be a more appropriate measure than the E-measure. Higher P-measure values indicate more effective software-failure detection. If the total number of test sets is \(N_{t}\) and the number of test sets that identify at least one failure is \(N_{f}\), then the P-measure can be estimated as \(N_{f}/N_{t}\)[50]. For RT, the expected P-measure is equal to \(1-(1-\theta)^{n}\)[37], where \(n\) is the size of the test set. #### Iv-C2 Testing Efficiency The **RQ2** dependent variable relates to the test case generation time, and includes the time taken for both _generation_ and _execution_ of a certain number of test cases. #### Iv-C3 Testing Cost-effectiveness The **RQ3** dependent variable needs to measure both the testing _effectiveness_ and the _efficiency_: As discussed in Huang et al. [14], the _F-time_ -- the time required to detect the first failure -- is a good measure of the ART cost-effectiveness. ### _Simulation Framework_ We created a simulation framework to enable evaluation of our proposed approach. This framework contains a number of configurable features, including: the failure pattern; failure rate related to each failure pattern; and the number of test cases in each test set (used for P-measure evaluations). #### Iv-D1 Failure Pattern and Failure Rate Within the input domain \(\mathcal{D}\), a faulty program was simulated by designating either a continuous region, or some disjoint regions. When a test case was selected from inside such a region, a software failure was considered to be triggered. The simulation framework, therefore, involves the setting up of the failure pattern (the shape of the failure region(s)) and failure rate \(\theta\) (the size of the failure region(s)). The input domain dimensionality \(d\) must also be set in advance. As discussed in Section II-A, Chan et al. [37] identified three common failure patterns types: block; strip; and point (Figure 1). 
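As an illustration of how such a simulation runs end to end, the sketch below designates a random block-shaped failure region in the unit input domain, runs plain RT against it, and estimates the F-measure and P-measure alongside their theoretical RT values (\(\theta^{-1}\) and \(1-(1-\theta)^{n}\)). This is a hedged example of the framework described above, not the authors' harness; the function names and the Monte-Carlo loop sizes are assumptions.

```python
import random


def make_block_region(d, theta, rng):
    """A single hypercube failure region of volume `theta` inside [0, 1)^d."""
    edge = theta ** (1.0 / d)
    lower = [rng.uniform(0.0, 1.0 - edge) for _ in range(d)]
    return lower, edge


def fails(test, lower, edge):
    return all(lo <= x < lo + edge for x, lo in zip(test, lower))


def rt_f_measure(d, theta, runs=3000, rng=random):
    """Average number of RT test cases executed before the first failure is found."""
    total = 0
    for _ in range(runs):
        lower, edge = make_block_region(d, theta, rng)
        count = 0
        while True:
            count += 1
            if fails([rng.random() for _ in range(d)], lower, edge):
                break
        total += count
    return total / runs                      # expected to be close to 1 / theta


def rt_p_measure(d, theta, n, num_sets=100, rng=random):
    """Fraction of size-n RT test sets that reveal at least one failure."""
    lower, edge = make_block_region(d, theta, rng)
    detected = sum(
        any(fails([rng.random() for _ in range(d)], lower, edge) for _ in range(n))
        for _ in range(num_sets))
    return detected / num_sets               # expected to be close to 1 - (1 - theta)^n


if __name__ == "__main__":
    print(rt_f_measure(d=2, theta=1e-2, runs=200))   # roughly 100
    print(rt_p_measure(d=2, theta=1e-2, n=69))       # roughly 0.5
```

Replacing the inner RT loop with an ART generator (for example, LSH-FSCS) yields the corresponding F-measure and P-measure estimates for the other methods.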
Following previous ART studies [27, 46, 51, 52, 53, 54], we also used these patterns in our simulation framework. Using a unit input domain (\(\mathcal{D}\) was \([0,1.0]^{d}\)), the block pattern was simulated as a single hypercube randomly constructed and located within \(\mathcal{D}\). This was achieved by selecting a random point and then extending the same length for each dimension (with respect to \(\theta\)), producing, for example, a square in two dimensions, or a cube in three dimensions. The strip pattern was simulated using a single strip at any angle: As previously noted [27], because strips that are close to the corners of the input domain result in "fat" but unrealistic strips, these strips were excluded from the experiments. For the point pattern, 25 disjoint equally-sized hypercubes were randomly generated within \(\mathcal{D}\). Representative values for \(d\) and \(\theta\) were selected based on previous ART studies [27, 51], as follows: * Dimension (\(d\)): 1, 2, 3, 4, 5, and 10. * Failure rate (\(\theta\)): \(1.0\times 10^{-2}\), \(5.0\times 10^{-3}\), \(2.0\times 10^{-3}\), \(1.0\times 10^{-3}\), \(5.0\times 10^{-4}\), \(2.0\times 10^{-4}\), and \(1.0\times 10^{-4}\). #### Iv-D2 Number of Test Cases for P-measure Evaluations Calculation of the P-measure requires that the number of test cases (denoted \(n\)) in each test set \(T\) be set in advance (i.e., \(n=|T|\)). As discussed by Shahbazi et al. [50], a good choice of value for \(n\) when using the P-measure to analyze the test case generation approaches is the worst case according to the standard error \(SE\) (i.e., the maximum \(SE\)). This can be estimated as: \[SE=\frac{SD}{\sqrt{N_{t}}}, \tag{11}\] where \(SD\) is the standard deviation. Because \(N_{t}\) is a constant, the maximum \(SD\) leads to the worst case of \(SE\). As reported by Chen et al. [55], the maximum \(SD\) of the P-measure calculation is \(0.5\), and the expected P-measure \(SD\) for RT can be approximated as: \[SD\approx\sqrt{(1-\theta)^{n}-(1-\theta)^{2n}}. \tag{12}\] The value for \(n\) can therefore be estimated using the following equation: \[n=\frac{\log(0.5)}{\log(1-\theta)}. \tag{13}\] Seven values of \(\theta\) were selected for the simulations: \(1.0\times 10^{-2}\); \(5.0\times 10^{-3}\); \(2.0\times 10^{-3}\); \(1.0\times 10^{-3}\); \(5.0\times 10^{-4}\); \(2.0\times 10^{-4}\); and \(1.0\times 10^{-4}\). The corresponding values of \(n\), an integer, were calculated as: 69; 138; 346; 693; 1386; 3465; and 6931. ### _Empirical Study Framework_ To further evaluate the proposed LSH approach, we conducted an empirical study based on mutation testing [56]. This study made use of many independently-produced subject programs. Following previous ART studies [57, 27, 50], we adopted two different evaluation frameworks, using the two different evaluation metrics (F-measure and P-measure). #### Iv-E1 F-measure Evaluation Framework The F-measure evaluation framework involved 23 subject programs, implemented in Java, that have been used in previous studies [27, 38, 46, 47, 48]. Table II presents the detailed information about these 23 programs. The programs ranged in size from 14 to 182 lines of code (LOC), with their input domain dimensions ranging from one to 12. 
As shown in the "Fault Types" column of Table II, each program was seeded with one to nine faults using some common mutation operators [56]: _constant replacement_ (CR); _arithmetic operator replacement_ (AOR); _relational operator replacement_ (ROR); _scalar variable replacement_ (SVR); _statement deletion_ (STD); and _return statement replacement_ (RSR). The table also shows the input domain and failure rate data for each program. As noted by Huang et al. [14], the first 12 programs have been used in more than 15 previous ART studies, and are considered to be the classic numeric-input benchmark programs for ART. The other 11 programs, each of which has a high-dimensional numeric input domain, can be considered new benchmark programs. Although it may not be possible to collect all subject programs from previous studies (due to a lack of available source code or other reasons), our use of these 23 subject programs seems comprehensive and sufficient to support the evaluation process. We directly adopted the program's source code and mutations, without any modification, in order to ensure fair comparisons. #### Iv-E2 P-measure Evaluation Framework The P-measure evaluation framework used 11 Java subject programs that have been used in the ART literature [50, 57]. In our study, we directly used the original program source code, without any modification. Table III lists the details of these programs. The input domain for each program was constructed by assigning each dimension as an integer range of \([0,2^{24/d}-1]\), leading to \(2^{24}\) possible inputs -- for example, when \(d=3\), the input domain is \([0,255]^{3}\). A total of 3727 mutants of the 11 subject programs were created using the well-known mutation testing tool MuJava [58]. Exhaustive testing was conducted for each mutant: Each mutant was executed with all \(2^{24}\) different inputs. After the exhaustive testing, the following mutants were excluded: (1) _Equivalent Mutants_, whose outputs or behavior was identical to the original program for all inputs; (2) _Mutants with Timeout_, whose execution did not terminate within the time limit of 10 minutes; and (3) _Easy Mutants_, whose failure rates were greater than \(0.01\). This left a total of 780 _Appropriate Mutants_ that were used in the study [57]. We obtained all 3727 mutant programs used in the original study [57], and reran the mutant exclusion process in our environment. Due to differences in our experimental environment compared with the original study [57], 40 of the 780 mutants did not terminate within the time limit, and so we only used 740. These 740 Appropriate Mutants were used, without any modification, to calculate the P-measure of our proposed approach, and compare its performance with the other ART approaches. The failure rate of the subject programs was assumed to be unknown before testing. We set the test set sizes in the study to range from 2 to 10, in intervals of 2; and from 10 to 100, in intervals of 5. Each approach was evaluated according to the P-measure for each test set size. ### _Number of Runs and Statistical Analysis_ Because ART involves random test case construction (which obviously involves some randomness), it was necessary to repeat the experiments a number of times to obtain a large enough sample size to ensure reliable statistical estimates. There were different requirements for the F-measure and P-measure, which are discussed in the following. 
#### Iv-F1 F-measure Settings

Arcuri and Briand [59] have recommended that experiments involving randomization should be run at least 1000 times for each subject program: In our study, therefore, there were 3000 runs for each approach under each testing scenario (or experiment), resulting in 3000 sets of data points (F-measure or F-time data) for further statistical analysis. The two-tailed nonparametric _Mann-Whitney U test_ (U-test) [60] has been recommended for detecting statistical differences in interval-scale results [59]. Both the F-measure and F-time generally have interval-scale results. When comparing two ART approaches, therefore, we used the U-test to collect \(p\)-values to identify significant differences in F-measure or F-time results (at a significance level of 1%). Similarly, standardized _effect size_ measures were also used: Vargha and Delaney's \(\hat{\text{A}}_{12}\) statistic [61] compares two approaches, \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), as:

\[\hat{\text{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})=(R_{1}/X-(X+1)/2)/Y, \tag{14}\]

where \(R_{1}\) is the rank sum of the first data group under comparison; \(X\) is the number of observations in the data sample of \(\mathcal{M}_{1}\); and \(Y\) is the number of observations in the data sample of \(\mathcal{M}_{2}\). Vargha and Delaney's \(\hat{\text{A}}_{12}\) statistic shows the magnitude of improvement of one ART approach over another: \(\hat{\text{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})\) indicates the probability that algorithm \(\mathcal{M}_{1}\) outperforms algorithm \(\mathcal{M}_{2}\). For example, \(\hat{\text{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})=0.68\) means that \(\mathcal{M}_{1}\) outperforms \(\mathcal{M}_{2}\) with a probability of 68%. In general, \(\hat{\text{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})=0.5\) means that \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) have equal performance; \(\hat{\text{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})>0.5\) means that \(\mathcal{M}_{1}\) is better than \(\mathcal{M}_{2}\); and \(\hat{\text{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})<0.5\) means that \(\mathcal{M}_{2}\) is better than \(\mathcal{M}_{1}\).

#### Iv-F2 P-measure Settings

A number of test sets, \(N_{t}\), is needed to calculate the P-measure. In our P-measure simulations, following previous studies [50], we generated \(N_{t}=100\) distinct test sets for RT, ART, RF, CR, DF, KD, and LSH. Because each failure pattern was randomly constructed within the input domain, we ran the experiments 3000 times (resulting in 3000 different failure patterns). In the empirical P-measure study, because the number of mutants was fixed (740), more distinct test sets were needed (\(N_{t}=10000\)) than in the simulations. When the result of an experiment is dichotomous -- i.e., either one finds a solution to the problem (_success_) or one does not (_fail_) -- the _Fisher exact test_ [62] has been recommended [59] for measuring whether or not two approaches are significantly different (at a significance level of 1%). When calculating the P-measure, each fixed-size test set is executed on the program, yielding a dichotomous result (the software failure is triggered or not). The Fisher exact test was therefore appropriate for measuring the significant differences in the P-measure experiments. When calculating standardized effect size measures for dichotomous results, the _odds ratio_ \(\psi\) [63] has been recommended [59].
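As a concrete illustration of these settings, the sketch below applies the two-tailed U-test and computes \(\hat{\text{A}}_{12}\) exactly as in Eq. (14) for two samples of per-run F-measure values. It assumes SciPy and NumPy are available, and the synthetic samples are placeholders for real results, not data from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu, rankdata


def a12(sample_1, sample_2):
    """Vargha-Delaney A^_12 as in Eq. (14), based on the rank sum of the first group."""
    x, y = len(sample_1), len(sample_2)
    ranks = rankdata(np.concatenate([sample_1, sample_2]))  # ranks over the pooled data
    r1 = ranks[:x].sum()                                    # rank sum of the first group
    return (r1 / x - (x + 1) / 2) / y


rng = np.random.default_rng(0)
sample_1 = rng.integers(80, 120, size=3000)   # e.g. 3000 F-measure runs of method M1 (placeholder)
sample_2 = rng.integers(90, 140, size=3000)   # e.g. 3000 F-measure runs of method M2 (placeholder)

p_value = mannwhitneyu(sample_1, sample_2, alternative="two-sided").pvalue
print(p_value < 0.01, a12(sample_1, sample_2))  # significance at the 1% level, and effect size
```

For the dichotomous P-measure comparisons, `scipy.stats.fisher_exact` applied to the \(2\times 2\) success/failure counts plays the analogous role, with the odds ratio defined next as the effect size.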
The odds ratio is defined as:

\[\psi(\mathcal{M}_{1},\mathcal{M}_{2})=\frac{a_{1}+\rho}{n+\rho-a_{1}}\Big/\frac{a_{2}+\rho}{n+\rho-a_{2}}, \tag{15}\]

where \(a_{1}\) and \(a_{2}\) are the numbers of successes for the algorithms \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), respectively; \(n\) is the number of observations; and \(\rho\) is an arbitrary positive constant (usually \(\rho=0.5\)) used to avoid problems with zero occurrences. In general, \(\psi(\mathcal{M}_{1},\mathcal{M}_{2})=1.0\) indicates that there is no difference between \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\); \(\psi(\mathcal{M}_{1},\mathcal{M}_{2})>1.0\) means that algorithm \(\mathcal{M}_{1}\) has higher chances of success (of identifying a failure) than algorithm \(\mathcal{M}_{2}\); and \(\psi(\mathcal{M}_{1},\mathcal{M}_{2})<1.0\) means that \(\mathcal{M}_{2}\) outperforms \(\mathcal{M}_{1}\).

## V Experimental Results and Discussions

In this section, we provide the experimental results and statistical analyses to answer the three research questions. In the analyses, when comparing two methods \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), we used the \(\bigcirc\) symbol to indicate that there was no statistical difference between them (their \(p\)-value was greater than 0.01); the \(\bigvee\) symbol to indicate that \(\mathcal{M}_{1}\) was significantly better (the \(p\)-value was less than 0.01, and the effect size was greater than 0.50 for the F-measure or greater than 1.0 for the P-measure); and the \(\bigstar\) symbol to indicate that \(\mathcal{M}_{2}\) was significantly better (the \(p\)-value was less than 0.01, and the effect size was less than 0.50 for the F-measure or less than 1.0 for the P-measure). Each effect size value, i.e., \(\hat{\mathrm{A}}_{12}(\mathcal{M}_{1},\mathcal{M}_{2})\) for the F-measure and \(\psi(\mathcal{M}_{1},\mathcal{M}_{2})\) for the P-measure, is listed in the parenthesis immediately following the comparison symbol.

### _Answer to RQ1.1: Effectiveness: F-measure Simulations_

Tables IV to VI present the FSCS F-measure simulation results for block, strip, and point patterns, respectively. Tables VII to IX show the corresponding RRT F-measure simulation results. Each table presents the mean F-ratio results and statistical pairwise comparisons of LSH against the other methods.

#### Vi-A1 General LSH F-measure Observations

Based on all the F-measure simulation results (Tables IV to IX), we have the following general observations for LSH:
1. Similar to the original ART algorithms (both FSCS and RRT), the F-measure of LSH depends on many factors, including the failure pattern, the dimension \(d\), and the failure rate \(\theta\).
2. For a fixed failure rate \(\theta\), the LSH F-ratio increases as \(d\) increases, irrespective of the failure pattern type. This indicates that LSH has poorer fault-detection performance in higher dimensions.
3. For a fixed dimensionality \(d\), as \(\theta\) decreases, the LSH F-ratio decreases for block and point patterns, but remains very similar for the strip pattern.

#### Vi-A2 F-measure Simulation Observations: Block Pattern

Based on the block pattern F-measure simulation results (Tables IV and VII), we have the following observations:
1. _LSH vs. RT:_ The FSCS version of LSH has much better performance than RT when \(d\) is low (\(d\leq 3\)). As \(d\) increases, however, LSH becomes inferior to RT, regardless of failure rates. The RRT version of LSH also significantly outperforms RT when \(d\leq 3\), but becomes similar to RT when \(d\geq 4\), even in the case of \(d=10\).
The statistical pairwise comparisons between LSH and RT generally support these observations. 2. _LSH vs.__ART:_ The FSCS version of LSH is similar, or slightly worse, than ART for \(d\leq 5\), but significantly better for \(d=10\). The RRT version of LSH has very similar performance to ART when \(d\leq 5\), and, when \(d=10\), is similar to, or slightly better than, ART. The \(p\)-values and effect size values indicate that most comparisons between LSH and ART have no significant differences. 3. _LSH vs. RF:_ The F-ratio of the FSCS version of LSH is much less than that of RF, in all cases, irrespective of failure rates and dimensions. The RRT version of LSH, when \(d=1\), has very similar performance to RF, and performs significantly better than RF when \(d\geq 2\). Compared with the FSCS version, however, the F-ratio difference is relatively small. 4. _LSH vs. CR:_ The comparisons between LSH (both FSCS and RRT) and CR are very similar to LSH vs. RF: LSH shows significantly better fault-detection capability than CR for FSCS, and similar, or better, for RRT. 5. _LSH vs. DF:_ The FSCS version of LSH has similar or slightly worse performance than DF when \(d\leq 5\). However, for \(d=10\), apart from one case with \(\theta=1.0\times 10^{-2}\), LSH has better performance than DF. For the RRT version, when \(d=1\) or \(d=10\), the mean F-ratio differences between LSH and DF are large, with LSH generally performing better. For other dimensions, however, the mean F-ratio differences are relatively small, which means that LSH and DF have very similar performances. 6. _LSH vs. KD:_ The mean F-ratio values for the FSCS version of LSH are similar or slightly greater than those of KD, in most cases. However, the differences are relatively small. Similarly, the F-ratio differences between LSH-RRT and KD are very small, when \(d\leq 5\). When \(d=10\), LSH is similar to, or slightly better than, KD. Overall, the statistical comparisons show no significant differences between LSH and KD in most cases. #### V-B3 F-measure Simulation Observations: Strip Pattern Based on the strip pattern F-measure simulation results (Tables V and VIII), we have the following observations: 1. _LSH vs. RT:_ When \(d=1\), LSH (both FSCS and RRT) has much better performance than RT for the strip pattern, regardless of failure rates -- the strip pattern is equivalent to the block pattern in one-dimensional space. When \(d\geq 2\), LSH has similar, or slightly better, F-measure performance, for all dimensions and failure rates. 2. _LSH vs. ART:_ The LSH and original ART have very similar F-ratio performance, regardless of failure rates and dimensions. All \(p\)-values (other than two LSH-FSCS cases) indicate a lack of significance; and all \(\hat{\mathrm{A}}_{12}\) are around 0.50, indicating very little difference between LSH and ART performance. 3. _LSH vs. RF:_ The FSCS version of LSH is much better than RF when \(d=1\). For all other cases (FSCS when \(d>1\), all RRT), however, the LSH and RF performance is very similar. 4. _LSH vs. CR:_ The comparison of LSH and CR is very similar to LSH vs. RF. When \(d=1\), LSH-FSCS is significantly better than CR, but LSH and CR have very similar performance in all other cases. 5. _LSH vs. DF:_ The FSCS version of LSH has very similar F-ratio results to DF, for all failure rates and dimensions. The situation is similar for the RRT version, except for when \(d=1\) (for which LSH has much better performance than DF). 6. _LSH vs. 
KD:_ The performance of LSH (both FSCS and RRT) and KD are very similar, regardless of failure rates and dimensions. The \(p\)-values for all the related statistical pairwise comparisons are greater than 0.01, which indicates no significant differences. #### V-B4 F-measure Simulation Observations: Point Pattern Based on the point pattern F-measure simulation results (Tab bles VI and IX), we have the following observations: 1. _LSH vs. RT:_ The FSCS version of LSH has different performance to the RRT version for the point pattern simulations: When \(d\leq 3\), LSH-FSCS is similar to, or slightly better than, RT; but RT significantly outperforms LSH-FSCS overall when \(d\geq 4\). LSH-RRT, however, has similar F-measure results to RT, regardless of dimension and failure rates. 2. _LSH vs. ART:_ When \(d\leq 5\), LSH is very similar to the original ART (both FSCS and RRT), regardless of failure rates. When \(d=10\), however, LSH performs slightly better than ART. The \(p\)-values and effect size values, when \(d\leq 5\), nearly all indicate no significant difference. When \(d=10\), however, some \(p\)-values do indicate significant differences, for both FSCS and RRT, especially when the failure rate is low. 3. _LSH vs. RF:_ LSH-FSCS has similar, or slightly better, F-ratio results than RF when \(d\leq 3\); and performs significantly better than RF when \(d\geq 4\). LSH-RRT is different to LSH-FSCS, overall achieving a similar or slightly better performance than RF, which is supported by the statistical comparisons. 4. _LSH vs. CR:_ The comparisons between LSH and CR are very similar to those between LSH and RF: LSH-FSCS has similar performance to CR when \(d\leq 3\), but is much better than CR when \(d\geq 4\), regardless of the failure rates. LSH-RRT, however, performs similarly to, or slightly better than, CR for all values of \(d\). 5. _LSH vs. DF:_ LSH-FSCS performs similarly to DF when \(d\leq 5\); significantly better than DF for low failure rates when \(d=10\); but slightly worse than DF when \(d=10\) for high failure rates (such as \(\theta=1.0\times 10^{-2}\) and \(\theta=5.0\times 10^{-3}\)). LSH-RRT overall achieves similar, or slightly better, F-ratio results than DF, regardless of failure rates and dimensions. For high dimensions (\(d=10\)), LSH has significantly better performance than DF, especially when the failure rate is low. 6. _LSH vs. KD:_ When \(d\leq 5\), LSH has very similar performance to KD, for both FSCS and RRT. When \(d=10\), however, LSH performs significantly better, especially for lower failure rates. #### Iv-B5 Analysis and Summary Table X presents the numbers of simulation scenarios, for each failure pattern, where LSH is significantly superior (), indistinguishable (\(\mathcal{O}\)), or significantly inferior () to each compared technique, in terms of the F-measure. Each failure pattern has 42 scenarios for pairwise comparison -- 6 dimensions \(\times\) 7 failure rates. Table XI presents the F-measure comparisons for each dimension: Each dimension has 21 scenarios for each pairwise comparison (3 failure patterns \(\times\) 7 failure rates). Previous studies have shown that block and strip patterns are more favourable for ART than for RT [21]. However, as the dimensionality increases, ART may perform worse than RT, due to the _curse of dimensionality_[64]. Tables X and XI show that the low-dimension block pattern is most favourable for LSH, followed by the strip pattern, and then the low-dimension point pattern. 
Intuitively speaking, LSH may be expected to have poorer fault-detection effectiveness than the original ART (and its variants that do not discard information during test case generation): LSH uses an approximate, rather than precise, NN search for each candidate, losing some distance calculation information. In this study, the original ART (_ART_ in the tables) is the only included technique that does not lose information during test case construction. According to Tables X and XI, compared with _ART_, LSH performs similarly overall, sometimes achieving slightly worse performance, which is unsurprising and expected. Surprisingly, however, LSH sometimes performs better than _ART_ in high dimensions. As discussed, while other ART techniques may suffer from the _curse of dimensionality_ [64], LSH may (to some extent) alleviate this problem. It is expected that LSH would be comparable to the ART variants that discard information when generating test cases, such as _RF_, _CR_, _DF_, and _KD_. Tables X and XI show that LSH achieves comparable performances to _RF_, _CR_, _DF_, and _KD_, on the whole. Compared with _RF_ and _CR_, LSH may have better F-measure performances, especially for the block pattern. Furthermore, LSH may perform slightly better than _DF_ and _KD_, especially for the RRT version, and in high dimensions.

_Summary of Answers to RQ1.1 for Simulations:_

* _Similar to other ART techniques, compared with RT, LSH also has favorable conditions for the F-measure: block and strip patterns with low dimensions._
* _Compared with the original ART (both FSCS and RRT versions), LSH has similar, or slightly worse, F-measure performance. Nevertheless, surprisingly, LSH is better than ART in high dimensions._
* _Compared with the ART variants that may discard some information for test case generation, LSH has comparable F-measure results overall, and sometimes slightly better performances, especially in high dimensions._

### _Answer to RQ1.1: Effectiveness: F-measure Empirical Studies_

Tables XII and XIII summarize the F-measure results for the empirical studies using the 23 subject programs. Because the failure rates for the real-life programs were unknown, it was not possible to calculate the theoretical RT F-measure: Instead, the F-measure data are reported in the tables.

#### V-B1 F-measure Empirical Study Observations

Based on the results reported in Tables XII and XIII, we have the following observations:

**(1)**: _LSH vs. RT:_ The LSH-FSCS F-measure performance is very similar to RT for five programs (P8, P9, P14, P16, P18, and P20); better (lower) for 13 programs; and worse (higher) for three programs (P15, P17, and P19). The statistical analysis fully supports these observations. In addition, LSH-RRT performs better than RT for the first seven programs, and for P10 and P12 (the dimensionality, \(d\), for all of which is less than 4). For the other 13 programs, LSH-RRT and RT have very similar F-measures. Overall, LSH-RRT performs similarly to, or better than, RT for all programs. In summary, LSH has similar or better performance than RT for most programs, especially when the dimensionality is low.

**(2)**: _LSH vs. ART:_ LSH-FSCS has slightly better performance than ART for two programs (P18 and P19); worse performance for three (P12, P21, and P22); and very similar performance for the remaining 17 programs.
LSH-RRT also outperforms ART for P18 and P19; is worse for two programs (P11 and P12); and has similar performance for the remaining 18. However, the statistical analyses show that, overall, the F-measure differences between LSH and ART are not significant, for both FSCS and RRT.

**(3)**: _LSH vs. RF:_ Overall, LSH has comparable, or significantly better, F-measure performance than RF, for all programs apart from P10 and P21 for LSH-FSCS.

**(4)**: _LSH vs. CR:_ The comparison between LSH and CR is very similar to that between LSH and RF: LSH is much better than CR for 8 or 9 subject programs, and similar for the rest (except for P7, P10, and P21 for LSH-FSCS).

**(5)**: _LSH vs. DF:_ For both FSCS and RRT versions, LSH and DF have very similar F-measure performance for more than half of the programs. LSH-FSCS significantly outperforms DF for four programs (P6, P11, P13, and P19); and is significantly outperformed by DF for two (P7 and P10). LSH-RRT significantly outperforms DF for eight programs (P1-P6, P10 and P19); and is outperformed for one (P12).

**(6)**: _LSH vs. KD:_ LSH has very similar performances to KD for all programs except P21 for LSH-FSCS.

#### V-B2 Analysis and Summary

Table XIV presents the numbers of subject programs in the empirical studies for which LSH is significantly superior, indistinguishable, or significantly inferior to each compared technique, according to the F-measure. As expected, LSH has similar or better fault-detection performance compared with RT (in \(42/46=91.30\%\) of the cases): This is because LSH-generated test cases may be more diverse. LSH is also comparable to ART in most scenarios (\(42/46=91.30\%\)), in spite of the fact that LSH discards some information when generating test cases, whereas ART does not. Similar to RF, CR, and DF, LSH discards some information during test case generation, resulting in similar fault-detection performance. However, LSH achieves similar, or significantly better, results compared with RF, CR, and DF, for most programs (\(43/46=93.48\%\), \(42/46=91.30\%\), and \(43/46=93.48\%\), respectively). LSH also has very similar performance to KD (except in one case, the FSCS version for program P21).

### _Answer to RQ1.2: Effectiveness: P-measure Simulations_

Tables XV to XVII present the FSCS P-measure simulation results for block, strip, and point patterns, respectively; Tables XVIII to XX show the corresponding RRT P-measure simulation results. Each table presents the mean P-measure results and statistical pairwise comparisons of LSH against the other methods.

#### V-A1 General LSH P-measure Simulation Observations

Based on all the P-measure simulation results (Tables XV to XX), we have the following general simulation observations for LSH:
1. Similar to the original ART algorithms (both FSCS and RRT), the P-measure of LSH depends on many factors, including the failure pattern and the dimension \(d\).
2. For a fixed failure rate \(\theta\), the P-measure of LSH generally decreases as \(d\) increases, especially for the block and point patterns. In other words, LSH generally has poorer P-measure performance in higher dimensions, similar to the F-measure performance.

#### V-A2 P-measure Simulation Observations: Block Pattern

Based on the block pattern P-measure simulation results (Tables XV and XVIII), we have the following observations:
1. _LSH vs. RT:_ When \(d\) is small, LSH generally achieves better P-measure results than RT, for both FSCS and RRT. As \(d\) increases, however, LSH generally performs worse than RT. Nevertheless, in the high dimension (\(d=10\)), the LSH-RRT (but not LSH-FSCS) P-measure values approach those of RT.
2. _LSH vs.
ART:_ When \(d=1\), LSH and ART have very similar P-measure results; when \(d=2,3,4\), LSH (both FSCS and RRT) performs significantly worse than ART, especially for low failure rates. However, when \(d=5\), the performances of LSH-FSCS and LSH-RRT are different: LSH-FSCS has similar or significantly worse P-measure performance than ART; but LSH-RRT achieves similar or significantly _better_ performance. When \(d=10\), both LSH-FSCS and LSH-RRT overall have significantly better P-measure performance than ART. 3. _LSH vs. RF:_ LSH nearly always performs significantly better than RF (except from a few with \(\theta=1.0\times\) \(10^{-2}\)). 4. _LSH vs. CR:_ The case of CR is similar to that of RF: Overall, LSH has significantly better P-measure performance than CR (for both FSCS and RRT), irrespective of the failure rate \(\theta\) and dimension \(d\). 5. _LSH vs. DF:_ For the FSCS version, LSH overall performs worse than DF in most cases (except for some low failure rates when \(d=10\)). For the RRT version, however, LSH generally has _better_ P-measure results than DF when \(d=1,2,5,10\); while for \(d=3,4\), DF is better than LSH, overall. 6. _LSH vs. KD:_ The comparisons of LSH against KD are similar to against DF: When \(d=2,3,4,5\), LSH overall has similar or significantly worse P-measure performance than KD; but when \(d=1,10\), LSH generally achieves similar or significantly better performance. #### V-B3 P-measure Simulation Observations: Strip Pattern Based on the strip pattern P-measure simulation results (Tables XVI and XIX), we have the following observations: 1. _LSH vs. RT:_ LSH (both FSCS and RRT) has similar or significantly better P-measure performance than RT, regardless of the dimension and failure rates. 2. _LSH vs. ART:_ The LSH P-measure results are very similar to those for ART (both FSCS and RRT), irrespective of the failure rate and dimension. The statistical analysis also shows that there is no significant difference between LSH and ART. 3. _LSH vs. RF:_ When \(d=1\), LSH performs significantly better than RF for all failure rates. For other dimensions, however, LSH is similar to RF, for both FSCS and RRT. This is confirmed by the statistical analysis (because most effect size values are around \(0.50\)). 4. _LSH vs. CR:_ Similar to the case of RF, LSH achieves significantly better P-measure results than CR, when \(d=1\). LSH and CR have similar P-measure performance in the remaining dimensions. 5. _LSH vs. DF:_ Overall, LSH-FSCS performs similarly to DF, regardless of the failure rate and dimension. LSH-RRT is also similar to DF when \(d>1\), but is significantly better than DF, for all failure rates, when \(d=1\). 6. _LSH vs. KD:_ Both FSCS and RRT versions of LSH have comparable P-measure results to KD overall, because their P-measure differences are not highly significant. #### Vi-B4 P-measure Simulation Observations: Point Pattern Based on the point pattern P-measure simulation results (Tables XVII and XX), we have following observations: 1. _LSH vs. RT:_ When \(d=1,2,\) LSH has similar, or slightly better performance than RT, for both FSCS and RRT versions. When \(d\geq 3\), LSH is, overall, worse than RT, but the differences between LSH-RRT and RT are much less than those between LSH-FSCS and RT, especially when \(d\) is large. In other words, the LSH-RRT is closer to RT performance than LSH-FSCS is. For example, when \(d=10\), the LSH-RRT P-measure values range from 0.46 to 0.49, but the LSH-FSCS P-measures range from 0.18 to 0.30. 2. _LSH vs. 
ART:_ When \(d=1,2,3\), LSH has comparable P-measure results to ART. When \(d=4,5,10\), however, overall, LSH has significantly better P-measure results than ART (for both FSCS and RRT). 3. _LSH vs. RF:_ Overall, LSH has much better P-measure performance than RF, especially when the failure rate is low and the dimension is high. 4. _LSH vs. CR:_ The comparison between LSH and CR is very similar to that between LSH and CR: LSH generally outperforms CR, especially for low failure rates and high dimensions. 5. _LSH vs. DF:_ When \(1\leq d\leq 3\), LSH has very similar or slightly better performance than DF. When \(d\geq 4\), LSH has similar or worse P-measure performance than DF for high failure rates; however, as the failure rate decreases, LSH performs better than DF. 6. _LSH vs. KD:_ When the dimension \(d\) is low (\(d=1,2,3\)), LSH is very comparable to KD. However, when \(d\) is high (\(d\geq 4\)), overall, LSH outperforms KD, especially for lower failure rates. #### V-B5 Analysis and Summary Table XXI presents the numbers of simulation scenarios, for each failure pattern, where LSH is significantly superior (\(\bigstar\)), indistinguishable (\(\O\)), or significantly inferior (\(\bigstar\)) to each compared technique, in terms of P-measure. Each failure pattern has 42 scenarios for pairwise comparison -- 6 dimensions \(\times\) 7 failure rates. Table XXII presents the P-measure comparisons for each dimension: Each dimension has 21 scenarios for each pairwise comparison (3 failure patterns \(\times\) 7 failure rates). Similar to previous studies [21], overall, with respect to the P-measure, LSH outperforms RT for block and strip patterns, but not for the point pattern. Additionally, similar to the F-measure results, the LSH P-measure also shows evidence of the _curse of dimensionality_[64]: In higher dimensions, LSH generally has worse P-measure performance than RT (especially for the block and point patterns). Overall, the P-measure observations for LSH compared with RT align with the F-measure observations. DF and KD perform slightly better than LSH overall; while in high dimensions, LSH generally performs slightly better than DF and KD. In conclusion, in spite of some differences, overall, the P-measure simulation observations are broadly in line with the F-measure simulation observations (Section V-A). **Summary of Answers to RQ1.2 for Simulations:** * _Compared with RT, LSH has comparable or better P-measure performance for the strip pattern; and for the block and point patterns in low dimensions._ * _Compared with the original ART, overall, LSH has similar or better P-measure results for the strip and point patterns, but similar or worse results for the block pattern. In high dimensions, however, LSH is similar or better than ART for all three failure patterns._ * _Overall, LSH has better P-measure results than RF and CR; and comparable results to DF and KD._ ### _Answer to RQ1.2: Effectiveness: P-measure Empirical Studies_ Tables XXIII and XXIV summarize the P-measure results for the empirical study. #### Iv-D1 P-measure Empirical Study Observations Based on the P-measure empirical study results (Tables XXIII and XXIV), we have the following observations: **(1)** _LSH vs. RT:_ Overall, LSH has significantly better P-measure results than RT, for both FSCS and RRT versions, irrespective of the test set size. **(2)** _LSH vs. 
ART:_ When the test set size is relatively small (for example, \(|T|\leq 30\)), LSH-FSCS performs significantly worse than ART; when the test set size becomes large, however, there is no significant difference compared with ART. LSH-RRT has, overall, better P-measure results than ART, especially when the test set size is relatively large. 3. _LSH vs. RF:_ The comparison between LSH and RF is similar to the comparison between LSH and ART. When executing a small test set, LSH-FSCS is worse than RF; and when running more test cases, the performance becomes similar to RF. LSH-RRT is better than RF overall, especially when a large number of test cases are executed. 4. _LSH vs. CR:_ The P-measure comparisons between LSH and CR are very similar to the comparisons against ART and RF, for both FSCS and RRT versions. 5. _LSH vs. DF:_ When the test set size is relatively small, LSH has similar or slightly worse P-measure results than DF for both FSCS and RRT versions. However, when the test set size is large, the LSH performance is different for the different versions: when executing a larger number of test cases, LSH-FSCS performs similarly to DF, but LSH-RRT outperforms DF. 6. _LSH vs. KD:_ The P-measure comparisons between LSH and KD are, overall, very similar to the comparisons with DF: When the test set is relatively small, KD outperforms LSH; but when the test set is relatively large, LSH overall outperforms outperforms KD. #### V-B2 Analysis and Summary Table XXV presents the numbers of testing scenarios in the empirical studies for which LSH, according to the P-measure, is significantly superior (), indistinguishable (), or significantly inferior () to each compared technique. According to the P-measure observations, overall, LSH has significantly better performances than RT (\(43/46=93.48\%\) of the cases), which is consistent with the observations based on the empirical F-measure study. LSH has comparable or better P-measure performances than ART in most scenarios (\(37/46=80.43\%\)), even though LSH may discard some information when generating test cases -- because LSH adopts the ANN search rather than the NN search. When the test set size is less than \(\lambda\) (a pre-defined parameter for RF and CR), both RF and CR are equivalent to ART: They both only _forget_ previous test cases after already executing \(\lambda\) test cases -- \(\lambda\) was set to \(60\) in this study. Even when generating slightly more than \(\lambda\) test cases, RF and CR may only be slightly worse than ART. As shown in Table XXV, LSH has comparable or significantly better results than RF and CR in most cases (\(36/46=78.26\%\) and \(39/46=84.78\%\), respectively). Compared with DF and KD, LSH has different P-measure results for different versions of LSH. More specifically, LSH-FSCS is similar or worse than DF and KD; while LSH-RRT performs similarly or better than DF and KD. Overall, LSH achieves comparable P-measure results to DF and KD. 
_Summary of Answers to RQ1.2 for Empirical Studies:_ * _Overall, LSH has significantly better P-measure results than RT in most scenarios._ * _LSH has very similar or worse results than the original ART, for the FSCS version; and significantly better results for the RRT version._ * _LSH generally has better P-measure results than RF and CR, in most cases, and comparable results to DF and KD._ ### _Answer to RQ2: Efficiency: Test Generation Time_ We investigated the execution time required by the different techniques to generate \(n\) test cases for each \(d\)-dimensional input domain \([0,1.0)^{d}\). The values of \(d\) were selected as 1, 2, 3, 4, 5, and 10. The values of \(n\) were chosen from 500 to 10000, with an increment of 500: \(n=500,1000,\cdots,10000\). Figures 7 and 8 present the test case generation time for the various methods under study. Each subfigure presents data for a specific dimension \(d\), with the \(x\)-axis representing the number of generated test cases \(n\), and the \(y\)-axis showing the time required to generate those \(n\) test cases. Table XXVI presents the test case generation time for each dimension, when \(n\) was fixed at four representative values: 500; 1000; 5000; and 10000. #### Vi-E1 Observations for Fixed Dimensions (\(d\)) Based on the data in Figures 7 and 8, we have the following observations: _(1) LSH vs. RT:_ Compared with RT, LSH always requires more computational time to generate the same number of test cases. This is because RT generally uses very little or no information to guide the test-case generation. _(2) LSH vs. ART:_ For all dimensions (\(d\)) and all numbers of test cases (\(n\)), LSH has a much lower execution time than the original ART technique when generating the same number of test cases. As \(n\) increases, the difference in time taken becomes significantly larger. LSH is much more efficient than ART when generating the same number of test cases. This observation applies for both FSCS and RRT. _(3) LSH vs. RF:_ When \(d=1\), LSH takes longer than RF, but when \(d>1\), the LSH test case generation time is always less than that of RF. As \(n\) increases, the difference between LSH and RF becomes slightly larger. These observations are valid for both FSCS and RRT. _(4) LSH vs. CR:_ Because their test case generation processes are very similar, CR and RF require almost the same amount of time to generate the same number of test cases. Accordingly, the observations from comparing LSH with CR are the same as for comparing LSH with RF. _(5) LSH vs. DF:_ For each fixed dimension (\(d\)), LSH requires much less time than DF to generate the same number of test cases, for all values of \(n\). As \(n\) increases, the difference between LSH and DF also increases. These observations apply for both FSCS and RRT. _(6) LSH vs. KD:_ When \(d=1\), for \(n\leq 5000\), LSH-FSCS needs similar, or slightly more time than KD; however, when \(n\) increases, LSH-FSCS begins to need slightly less time than KD (to generate the same number of test cases). When \(d=2\), LSH-FSCS takes more time than KD, regardless of \(n\). When \(d=3\), the LSH-FSCS time is similar, or slightly greater, overall. For the remaining dimensions, LSH-FSCS is much more efficient than KD, regardless of \(n\). In addition, the observations for LSH RRT are, overall, the same as for LSH-FSCS, when \(d=2,4,5,10\). However, LSH-RRT is slightly different from LSH-FSCS when \(d=1\) or \(d=3\): LSH-RRT is faster than KD in these dimensions. 
In summary, LSH is more efficient than KD for all values of \(n\), except when \(d=2\).

Fig. 7: **FSCS version**: Test case generation time for various test set sizes.

Fig. 8: **RRT version**: Test case generation time for various test set sizes.

#### V-B2 Observations for Fixed Number of Test Cases (\(n\))

It can be observed from Table XXVI that, when generating a fixed number \(n\) of test cases, as the dimensionality increases, the FSCS techniques generally require an increasing amount of time. However, this is not the case for the RRT techniques: For higher dimensions, DF and KD take more time for the RRT version. However, the time taken by (RRT) ART, RF, CR, and LSH, when \(d=2\), is the highest among all dimensions, for all values of \(n\). This is mainly due to the characteristics of RRT. Comparing LSH with the other techniques, we have the following observations:

**(1)**: _LSH vs. RT:_ LSH is more time-consuming than RT when generating \(n\) test cases, regardless of \(d\).

**(2)**: _LSH vs. ART:_ When \(n\) is fixed, LSH is much more efficient than ART, in all dimensions, for all values of \(n\). As \(n\) increases, the differences in time taken also increase, for all dimensions.

**(3)**: _LSH vs. RF:_ For all values of \(n\), when \(d=1\), LSH is less efficient than RF; however, when \(d\geq 2\), LSH is more efficient. Nevertheless, the differences in time taken are relatively small. These observations apply for both FSCS and RRT.

**(4)**: _LSH vs. CR:_ Since CR has a very similar test case generation time to RF, the comparisons between LSH and CR are very similar to those between LSH and RF: Apart from \(d=1\), LSH is more efficient than CR.

**(5)**: _LSH vs. DF:_ LSH is much more efficient than DF for all values of \(n\) and \(d\). In addition, as \(d\) increases, the difference between LSH and DF also increases.

**(6)**: _LSH vs. KD:_ When \(n=500\), LSH-FSCS is slightly faster than KD for \(d\geq 5\). When \(n=1000\), KD outperforms LSH-FSCS for \(d\leq 3\), but LSH-FSCS is faster when \(d>3\). When \(n=5000\), LSH-FSCS outperforms KD for \(d>3\). When \(n=10000\), LSH-FSCS is faster than KD for all dimensions, except \(d=2\). LSH-RRT is faster than KD for all values of \(n\), and all values of \(d\) except \(2\). Besides, as the dimensionality increases, for all values of \(n\), the differences between LSH and KD also, overall, increase. The main reason is that, as the dimension \(d\) increases, the increment in computation time for KD is much greater than for LSH. In other words, when the dimension is higher, LSH is much more efficient than KD.

#### V-B3 Analysis and Summary

It has been noted [45, 65] that ART generation of the "next" (\((n+1)\)-th) test case -- for both FSCS and RRT -- incurs \(n\times k\) distance calculations between the \(k\) random candidates and the already-executed \(n\) test cases. In FSCS, \(k\) is generally assigned a constant value, typically \(k=10\) [45]; but in RRT, the average value of \(k\) is probably logarithmic in \(n\): \(k=\beta_{1}\log(n)+\beta_{2}\), where \(\beta_{1}\) and \(\beta_{2}\) are two constants [65]. Application of a _forgetting_ strategy involves setting a predefined constant \(\lambda\), and only retaining the information for \(\lambda\) executed tests: The other \(n-\lambda\) (where \(\lambda<n\)) executed test cases are discarded, and distance calculations are only required for \(\lambda\) executed test cases, for each candidate.
This means that forgetting strategies need only a constant number of calculations (\(\lambda\times k\)) to generate a new test case, resulting in a linear time-complexity order, \(O(n)\). As shown in Figures 7 and 8, the LSH trends are very very similar to those of the forgetting strategies, suggesting that the LSH time complexity is also very close to a linear order. Figure 9 presents the LSH test case generation times, for various \(n\). The average times for each dimension \(d\) are shown with connected lines, with the associated fitted curves shown as dashed lines. Figure 9(a) presents the LSH-FSCS data, and Figure 9(b) shows the LSH-RRT data. An analysis of the cumulative generation time shows that a line function with high determination coefficient (\(R^{2}\geq 0.9941\)) can be identified for each fitted curve, for all dimensions, for both LSH versions: The average LSH test-case generation time has a significant linear relationship with the number of test cases, for each dimension. In conclusion, although the worst-case LSH time complexity order is \(O(n\log n)\), as discussed in Section III-D, the simulation results show the time complexity to be approximately linear. Fig. 9: Curve-fitting for LSH test-case generation times, for various test set sizes and dimensions. **Summary of Answers to RQ2:**_ * _LSH is always less efficient than RT when generating the same number of test cases, for all dimensions._ * _LSH is much more efficient than the original ART (for both FSCS and RRT) when generating the same number of test cases, in all dimensions._ * _LSH is also much more efficient than the DF, RF, and CR variations of ART, when generating the same number of test cases, for all dimensions except_ \(d=1\)_._ * _LSH has similar (sometimes slightly better or worse) efficiency to KD for low dimensions (_\(d\leq 3\)_), but is much more efficient for high dimensions, especially for larger numbers of test cases._ ### _Answer to RQ3: Cost-effectiveness: F-time_ Tables XXVII and XXVIII present the F-time results for the 23 subject programs. #### V-F1 F-time Observations Based on the results shown in Tables XXVII and XXVIII, we have the following observations: **(1)** _LSH vs. RT:_ LSH (both FSCS and RRT) has better (lower) F-time than RT for the P4 program, indicating that LSH is more cost-effective for this program. For the remaining 22 programs, however, LSH is much less cost-effective than RT. **(2)** _LSH vs. ART:_ LSH requires much less time than ART to detect the first failure for all 23 programs, for both FSCS and RRT: LSH is much more cost-effective than the original ART. **(3)** _LSH vs. RF:_ The F-time differences between LSH-FSCS and RF are very small (around \(1\) ms) for the four programs P1, P2, P3, and P5, with the statistical analyses showing no very/significantly different performances. LSH-FSCS performs significantly better than RF for the remaining programs, apart from P10. In addition, LSH-RRT needs less time than RF to identify the first failure for all the programs. Overall, the statistical analyses show LSH-RRT to be significantly more cost-effective than RF for all except the first three programs. **(4)** _LSH vs. CR:_ The comparisons between LSH and CR are very similar to those for RF: LSH is more cost-effective than CR for most programs. **(5)** _LSH vs. DF:_ LSH is much faster than DF at finding the first failure, especially for the subject programs with high dimensions. 
The LSH-FSCS F-time for P22, for example, is only about 111 ms, compared with 423,722 ms for DF (almost four thousand times as long). The statistical analyses show LSH to be significantly more cost-effective than DF. **(6)** _LSH vs. KD:_ LSH is faster than KD for the 1-dimensional programs (P1 to P5), but KD is more cost-effective when \(d=2\) (P6 to P8). When \(d=3\) or \(4\) (P9 to P13), LSH is comparable to KD, with LSH performing better than KD for some programs, but also similarly, or worse, for others. For the programs with \(d\geq 5\) except P23 (i.e., P14 to P22), LSH is much faster than KD, which is supported by the statistical analyses. These observations are valid for both FSCS and RRT.

#### V-F2 Analysis and Summary

Table XXIX summarizes the number of real-life programs in the empirical studies, for which, according to F-time, the LSH performance is significantly superior, statistically indistinguishable, or significantly inferior to each compared technique. Based on the table, we have the following observations: Compared with RT, LSH is less cost-effective in nearly all scenarios. Compared with ART and DF, LSH is significantly more cost-effective in all scenarios. LSH has better cost-effectiveness than RF and CR in most cases: LSH performs similarly to, or better than, RF and CR in \(43/46=93.48\%\) of the scenarios. In addition, LSH is also usually more cost-effective than KD (\(34/46=73.91\%\) of the scenarios). During the testing, when the first failure was triggered for any program \(P\), for any test-case generation approach, the F-measure (denoted \(n\)) and F-time were both recorded. The F-time includes two parts: the time taken to generate the \(n\) test cases (denoted \(TG\)); and the time taken to execute them to test the program \(P\) (denoted \(TE\)), i.e., F-time \(=TE+TG\). As illustrated in Section V-B, LSH overall requires a smaller number of test cases than RT to find the first failure, which indicates that the \(TE\) of LSH is generally less than that of RT, i.e., \(TE_{\text{LSH}}<TE_{\text{RT}}\). However, the test case generation time of LSH is still much higher than that of RT (as shown in Section V-E), i.e., \(TG_{\text{LSH}}\gg TG_{\text{RT}}\). Therefore, in general, \(TE_{\text{LSH}}+TG_{\text{LSH}}\gg TE_{\text{RT}}+TG_{\text{RT}}\). Compared with ART and DF, overall, LSH has similar F-measure performance -- LSH, ART, and DF need similar numbers of test case executions to identify the first software failure, which means that \(TE_{\text{LSH}}\approx TE_{\text{ART}}\approx TE_{\text{DF}}\). As discussed in Section V-E, however, the ART and DF execution times increase rapidly as the number of test cases or the dimensionality increase: To generate the same number of test cases, ART and DF generally require much more computation time than LSH, i.e., \(TG_{\text{LSH}}\ll TG_{\text{ART}}\) and \(TG_{\text{LSH}}\ll TG_{\text{DF}}\). LSH, therefore, requires less F-time than ART and DF, and is thus more cost-effective. Although LSH has better F-measure performance than RF and CR for the 1-dimensional programs (Section V-B), it also has a higher computation time (Section V-E), resulting in different F-time performances for different programs.
For the programs with \(d\geq 2\), overall, LSH has similar F-measure performances to RF and CR, but requires less execution time to generate the same number of test cases, resulting in better F-time performance. According to the F-measure observations in Section V-B, overall, LSH has similar performance to KD. According to the computation time observations (Section V-E), LSH takes longer than KD to generate the same number of test cases when \(d=2\), but a similar or much lower amount in other dimensions. Consequently, LSH is less cost-effective than KD when \(d=2\), but more cost-effective overall, especially when the dimensionality is high. _Summary of Answers to RQ3:_ * _LSH is less cost-effective than RT for nearly all programs._ * _LSH is much more cost-effective than the original ART (both FSCS and RRT) with all programs._ * _LSH is more cost-effective than DF in all the scenarios; and has better cost-effectiveness than RF, CR, and KD in most scenarios._ ### _Threats to Validity_ This section addresses some potential threats to the validity of our study. #### V-G1 Threats to Experimental Design Some potential threats to the study validity relate to the experimental design, and can be examined as follows: * Although the simulations only used a limited number of failure patterns, failure rates, and dimensions, this configuration has also been widely used in ART research [14, 27, 50]. Nevertheless, our future work will involve additional simulations with more failure patterns, failure rates, and dimensions: This will enable further and deeper evaluation of the proposed approach. * In spite of the number and diversity of real-life programs used in the empirical studies, their sizes were all relatively small. Nevertheless, these subject programs are very representative of those used in the literature for ART with numeric inputs. Our future work, however, will include exploration of more subject programs, which we anticipate will strengthen the generalizability of the experimental results. * Finally, mutants, rather than real faults, were used in the empirical studies. A set of real faults is typically restricted [66] by the _competent programmer hypothesis_[67] and the _coupling effect hypothesis_[68]. The competent programmer hypothesis states that competent programmers, when producing faulty programs, tend to write programs that are nearly correct: Although a program written by a competent programmer may have some faults, it is likely that these will be relatively small faults, and the difference compared with a correct version of the program will be minor [67]. The coupling effect hypothesis states that complex faults are coupled to simple faults, and that a test set that identifies all simple faults in a program will also identify a high percentage of the complex faults [68]. Generally speaking, mutation testing introduces simple faults into a program. Therefore, if test cases detect more mutants (considered simple faults), they should also be capable of detecting more complex faults [69]. Furthermore, previous studies have shown some relatively strong correlations between mutant detection and real fault detection: Mutants can thus be used as a substitute for real faults when comparing different test suites [70, 71, 72]. Nevertheless, further experiments with real faults will be conducted in the future. #### V-G2 Threats to Evaluation Metric Selection A second potential threat to validity relates to the selection of evaluation metrics. 
Four evaluation metrics (F-measure, P-measure, computation time, and F-time) were adopted in our study. These metrics were used to measure testing effectiveness, efficiency, and cost-effectiveness; and have been widely used in other ART studies [14, 27, 46]. Nevertheless, more evaluation metrics, related, for example, to test case distribution [23] or code coverage [22], would be useful to further assess our proposed approach. We look forward to exploring these perspectives in our future work.

#### V-G3 Threats to Parameter Settings

The third potential threat to validity relates to the parameter settings. Different parameter settings may produce different results and observations. However, testing all possible parameter settings was not practical for this study, and so we set the various parameters according to the following principles: (1) the parameters for some ART approaches were set according to previous ART studies [27, 38, 46]; (2) some parameters were specifically set to ensure fair comparisons across techniques; and (3) some parameter settings were determined through preliminary experiments, as discussed in Section IV-B2.

## VI Differences between Failures and Faults

In this section, we provide some discussions regarding the differences between software failures and faults.

### _Software Failures versus Software Faults_

According to the IEEE [73], a software developer may introduce a _fault_ (_defect_ or _bug_) in the software, due to his or her mistake. When executing a test case \(t\) with this SUT, a software _failure_ may be produced -- for example, the SUT may behave unexpectedly: The output/behavior with \(t\) is different from that of the test oracle. Generally speaking, the presence of a failure means that the SUT has fault(s). However, the presence of faults does not mean that the software failures can be revealed: The triggering conditions may not be satisfied [73]. As discussed in Section II, test cases that can trigger software failures are called _failure-causing inputs_. The set of all failure-causing inputs is called the _failure region_(s). As noted by Chan et al. [37], a software fault may result in a single failure region (such as the block pattern or a strip pattern); or many failure regions (for example, the point failure pattern). Similarly, multiple software faults may result in a single failure region (due to the interactions among these faults) or multiple failure regions (if the faults are independent). In other words, there is no obvious relationship between the number of faults and the number of failure regions. We adopted three representative failure patterns (block, strip, and point) in the framework for our simulation-based study. Each failure pattern may be caused by a single software fault or multiple faults in actual programs: Each simulated failure pattern may represent diverse software faults. We also considered two evaluation frameworks in the empirical study, one each for the F-measure and the P-measure. The F-measure framework seeded one to nine faults into the program source code, constructing one or more failure regions with relatively small failure rates for each program. These empirical study failure patterns were not limited to the block, strip, and point failure patterns. The P-measure framework used a mutation tool to construct many mutants, producing many different failure patterns and failure rates. In conclusion, both simulation and empirical study frameworks provided diverse faults (or simulated faults) with diverse failure rates.
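As a concrete illustration of the simulation framework described above, the following minimal sketch (our own, in Python/NumPy, not the experimental harness used in the study) constructs a randomly placed block-pattern failure region with a chosen failure rate inside the unit input domain and estimates the F-measure of random testing against it.

```python
import numpy as np

def make_block_region(theta, d=2, rng=None):
    """Place a randomly located hyper-cube of volume theta inside [0, 1)^d."""
    rng = np.random.default_rng() if rng is None else rng
    edge = theta ** (1.0 / d)                 # side length giving failure rate theta
    lower = rng.random(d) * (1.0 - edge)      # keep the whole block inside the domain
    return lower, lower + edge

def is_failure(x, region):
    lower, upper = region
    return bool(np.all(x >= lower)) and bool(np.all(x < upper))

def rt_f_measure(theta, d=2, trials=1000, rng=None):
    """Average number of random test cases needed to hit the failure region."""
    rng = np.random.default_rng() if rng is None else rng
    counts = []
    for _ in range(trials):
        region = make_block_region(theta, d, rng)
        n = 1
        while not is_failure(rng.random(d), region):
            n += 1
        counts.append(n)
    return np.mean(counts)

print(rt_f_measure(theta=0.005))   # expected to be close to 1 / 0.005 = 200 for RT
```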
### _Failure Detection versus Fault Detection_

Failure detection is said to take place when the actual output/behavior of the SUT, when executed with test case \(t\), is different to expectation -- the output/behavior is different to that of the oracle. In this case, \(t\) is also called a failure-causing input. In contrast, _fault_ detection refers to detection/localization of the specific fault in the SUT: This focuses on identifying the reason for the observed failure(s), and thus relates to software debugging, and _software fault localization_ [74]. In our simulation framework, when a test case is generated inside a constructed failure region, a failure is said to be detected. As discussed above, a failure region may represent multiple faults, but the concept of fault detection plays no role in the simulations. In our empirical study framework, when the SUT output for a test case is different from its expected output, a failure is considered to have been found. At this point, the SUT is shown to have some faults; however, the reason(s) or location(s) of the fault is not yet known -- processes such as fault localization or debugging will need to be carried out before the answers to this can be determined. _Adaptive random testing_ (ART) was motivated by a goal of more quickly finding evidence of a problem in the SUT. This means that the ART algorithms focus on finding the first failure, often with an expectation that testing will stop once this has been found [13, 14]. This also explains the popularity of the F-measure -- the expected number of test case executions to detect the first failure -- in ART studies. Expanding the current study to examine ART performance for more failures or faults, beyond the first found, was therefore not considered: Once ART finds the first failure of an SUT, a different strategy should then be used to find other failures or faults in that SUT. Sections V-C and V-D provided the results of our simulations and empirical studies involving multiple failures. We also conducted a preliminary, simulation-based investigation into the number of faults (in fact, failures) that our proposed approach could identify. The simulation used a two-dimensional input domain, in which we randomly constructed 100 equally-sized block-pattern failure regions, each with failure rate \(\theta_{i}=1.0\times 10^{-3}\) \((1\leq i\leq 100)\). Each block-pattern failure region corresponded to a single fault -- once the failure region was detected, the fault was considered to be identified. All failure regions were independent of each other, thus overlapping regions were possible. To overcome randomness, we repeated the process of random failure region construction 100 times, and independently ran 1000 sets of test cases for each technique, resulting in 100000 data points. We adopted two evaluation metrics related to the number of faults: (1) the number of faults identified by a fixed number of test cases; and (2) the number of test cases required to detect all faults. Figure 10 shows the percentage of faults detected by the different test-case generation approaches, for various test set sizes.

Fig. 10: Percentage of faults detected by different test case generation approaches, for various test set sizes.

Fig. 11: The number of test cases required to find all faults, for different test case generation approaches.
According to the results, it can be observed that: * When the number of executed test cases was small (for example, \(n\leq 200\)) or when the number of test cases is large (for example, \(n\geq 4000\)), LSH had very similar performance to all other approaches (RT, ART, CR, RF, DF, and KD), for both FSCS and RRT versions. * For medium amounts of test cases (\(200<n<4000\)), LSH was better than RT, RF, and CR; and had very similar performance to ART and KD, overall, for both FSCS and RRT versions. * LSH-FSCS had similar performance to DF; but LSH-RRT had better performance than DF. Figure 11 presents the number of test cases required to detect all faults for different approaches of test case generation. The figure shows the distribution of the 100000 data points (100 rounds of failure-region construction, each with 1000 generated test cases): Each box plot shows the mean (square in the box); median (line in the box); upper and lower quartiles (lines above and below the box); and minimum and maximum values, for each approach. From the data, it can be seen that: * LSH-FSCS performs similarly to ART, DF, and KD; but requires fewer test cases to identify all faults than RT, RF, and CR. * LSH-RRT has similar performance to LSH-FSCS, but also outperforms DF. Overall, the observations from Figure 11 are consistent with those from Figure 10. ### _Failure Pattern versus Fault Pattern_ As discussed in Section II, a failure pattern refers to the distributions of failure-causing inputs over the input domain, including both their locations and geometric shapes. Failure patterns have been categorized into three types: block; strip; and point [37]. Although not labelled patterns, different types of faults have been identified and categorized [75, 76, 77, 78, 79]. Among the various taxonomies, Hayes et al. [79] provide a detailed hierarchy of faults based on their location and usage. As discussed, an aim of ART is to (quickly) find the first failure. The focus of our investigation has been to balance the tradeoff between ART effectiveness and efficiency by replacing the NN search with an ANN search. Although investigation of the relationships between failure patterns and fault patterns is beyond the scope of this study, it would be very interesting to examine such things, which we look forward to pursuing in our future work. ## VII Related Work This section briefly summarizes some related work about ART. As explained by Huang et al. [14], in addition to the _Select-Test-From-Candidates Strategy_ (STFCS), there are five other ART implementation strategies: _Partitioning-Based Strategy_ (PBS); _Test-Profile-Based Strategy_ (TPBS); _Question-Random Strategy_ (QRS); _Search-Based Strategy_ (SBS); and _Hybrid Strategies_ (HSs). ### _Partitioning-Based Strategy (PBS)_ A _Partitioning-Based Strategy_ (PBS) has two components: The _partitioning schema_; and the _subdomain selection_. The partitioning schema defines how to partition the input domain \(\mathcal{D}\) into \(m\) disjoint subdomains \(\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D}_{m}\)\((m>1)\), such that: \(\mathcal{D}_{i}\bigcap\mathcal{D}_{j}=\emptyset\)\((1\leq i\neq j\leq m)\), and \(\mathcal{D}=\mathcal{D}_{1}\bigcup\mathcal{D}_{2}\bigcup\cdots\bigcup\mathcal{D}_ {m}\). The subdomain-selection component defines how to choose the subdomain in which the next test case will be generated. The partitioning schema can be _static_[45, 80], or _dynamic_[81, 82, 83]. 
A static schema means that the input domain is divided before test case generation begins, and then no further partitioning takes place once testing begins. A dynamic schema, in contrast, divides the input domain dynamically, often as each new test case is generated. Many criteria exist to support the subdomain selection, including choosing the largest [45], or the one with the least number of already-generated test cases [83]. ### _Test-Profile-Based Strategy (TPBS)_ A _Test-Profile-Based Strategy_ (TPBS) [47] generates test cases based on a well-designed test profile, not uniform distribution, dynamically updating the profile after each test-case generation. A test profile can be considered the distribution of selection probability for all test inputs in the input domain \(\mathcal{D}\), with inputs in different locations (potentially) having different selection probabilities. When a test case is executed without revealing failure, its selection probability is then assigned a value of \(0\)[47]. According to the principles of ART, a test profile should be adjusted to satisfy the following criteria [47]: (1) The farther away a test input is from previously-executed test cases, the _higher_ the selection probability that it is assigned should be; (2) The closer a test input is to the previously-executed test cases, the _lower_ the selection probability that it is assigned should be; and (3) The probability distribution should be dynamically adjusted to maintain these two features. ### _Quasi-Random Strategy (QRS)_ The _Quasi-Random Strategy_ (QRS) [84] makes use of _quasi-random sequences_ -- point sequences with low discrepancy and low dispersion -- to implement ART. QRS generally has two main components: The _quasi-random-sequence-selection_ component, which constructs each point; and the _randomization_ component, which randomizes each constructed point, making the next test case. There are many different quasi-random sequences, including: _Halton_[85]; _Sobol_[86]; and _Niederreiter_[87]. There are also a number of different randomization methods, such as: _Crunley-Patterson Rotation_[88, 89]; _Owen Scrambling_[90]; and _Random Shaking and Rotation_[91]. ### _Search-Based Strategy (SBS)_ _Search-Based Strategies_ (SBSs), which come from _Search-Based Software Testing_[92, 93], make use of search-based algorithms to evenly spread the test cases over the input domain. Because ART needs to retain some randomness in the generated test cases, SBS creates an initial test set population \(PT\) of randomly generated test sets. A search-based algorithm is then adopted to iteratively evolve \(PT\) into its next generation, according to the pre-defined criterion. Once a stopping condition has been satisfied, the best solution from \(PT\) is selected as the final test set. Two core elements of SBS, therefore, are the choice of search-based algorithm for evolving \(PT\), and the evaluation (_fitness_) function for each solution. A number of search-based algorithms have been used to evolve ART test sets, including: _Hill Climbing_[94]; _Simulated Annealing_[95]; _Genetic Algorithm_[95]; _Simulated Repulsion_[96]; _Local Spreading_[97]; and _Random Border Centroidal Voronoi Tessellations_[50]. ### _Hybrid Strategies (HSs)_ _Hybrid Strategies_ (HSs) are combinations of multiple ART strategies, usually designed to enhance the testing effectiveness (e.g., fault-detection effectiveness) or efficiency (e.g., test-generation cost). Chow et al. 
[83], for example, combined STFCS and PBS, producing an efficient and effective method called _ART with divide-and-conquer_ that independently applies STFCS to each bisection-partitioned subdomain. Mayer [65] used bisection partitioning to control the STFCS test-case-identification component, only calculating distances between the candidate \(c\) and executed test cases in its neighboring regions, not those in other regions. Liu et al. [47] augmented TPBS with PBS and STFCS: When the PBS chooses a subdomain within which to generate test cases [83], all inputs in this subdomain are given a probability of being selected, and those outside are given none. ## VIII Conclusions and Future Work _Adaptive Random Testing_ (ART) [14, 38] is a family of testing approaches that enhance the fault-detection effectiveness of _Random Testing_ (RT). Previous studies have demonstrated that, compared with RT, ART generates more diverse test cases, spread more evenly over the input domain, and that this can deliver better testing effectiveness [14]. There are many ART strategies and implementations, among which the most well known is the _Select-Test-From-Candidates Strategy_ (STFCS) [14]. STFCS makes use of the concept of dissimilarity among test cases, and selects an element from random candidates as the next test case such that it has the largest dissimilarity to the previously-generated test cases. Although popular, STFCS suffers from a problem of high computational costs. Many enhanced STFCS algorithms exist that aim to reduce computation time, but they also face challenges balancing the testing effectiveness and efficiency. In this paper, based on the concept of _Approximate Nearest Neighbor_ (ANN), we have proposed a new ART approach that uses _Locality Sensitive Hashing_ (LSH) to support and implement STFCS: _LSH-based ART_ (LSH-ART). LSH-ART makes use of all previously generated test cases to maintain the fault-detection ability, but uses ANNs for each candidate to reduce computational overheads. The results of our simulations and empirical studies show that, overall, LSH-ART achieves comparable and even better fault detection than the original ART and its variants. LSH-ART incurs lower computational costs when generating the same number of test cases, especially when the input domain dimensionality is high, resulting in better cost-effectiveness. LSH-ART uses an SLSH table that can re-hash elements in a hash bucket, to store previously-generated test cases. The traditional E\({}^{2}\)LSH makes use of multiple hash tables to improve the search accuracy of ANNs [43]. Therefore, an important research direction for future work will be about how multiple SLSH tables could be used to improve LSH-ART. Another important research direction will be the application of other ANN approaches to ART in the future. In addition to LSH, there are many other approaches to support the ANN process, including: _vector quantization_[98]; _semantic hashing_[99]; and _production quantization_[100]. It will be interesting to investigate how these other ANN approaches can guide the ART process, and we look forward to comparing and analyzing their favorable (and unfavorable) conditions. Finally, it will be interesting to explore the extension of LSH-ART from numeric to non-numeric input domains. The input domains in this study were all numeric, which meant that LSH-ART could be directly applied to generate the test cases. 
Adapting LSH-ART to non-numeric input domains will be challenging, involving, amongst other things, an encoding process to transform non-numeric inputs into the corresponding numeric vectors. This encoding problem, and the associated difficulty in defining (dis)similarity, will be interesting research challenges for our future work. Once these are addressed, it will be possible to directly apply LSH-ART to any non-numeric input domains. ## Acknowledgment We would like to thank the anonymous reviewers for their many constructive comments. We would also like to thank Andrea Arcuri and Chengying Mao for providing us the source code and fault data of the subject programs used in their papers [27, 57]. This work is supported by the Macau Science and Technology Development Fund under Grant 0046/2021/A, and the Faculty Research Grants (FRG) of Macau University of Science and Technology under Grant FRG-22-103-FIE. This work is partly supported by the National Natural Science Foundation of China under Grant 61872167 and Grant 61502205.
2310.13446
Simple binning algorithm and SimDec visualization for comprehensive sensitivity analysis of complex computational models
Models of complex technological systems inherently contain interactions and dependencies among their input variables that affect their joint influence on the output. Such models are often computationally expensive and few sensitivity analysis methods can effectively process such complexities. Moreover, the sensitivity analysis field as a whole pays limited attention to the nature of interaction effects, whose understanding can prove to be critical for the design of safe and reliable systems. In this paper, we introduce and extensively test a simple binning approach for computing sensitivity indices and demonstrate how complementing it with the smart visualization method, simulation decomposition (SimDec), can permit important insights into the behavior of complex engineering models. The simple binning approach computes first-, second-order effects, and a combined sensitivity index, and is considerably more computationally efficient than the mainstream measure for Sobol indices introduced by Saltelli et al. The totality of the sensitivity analysis framework provides an efficient and intuitive way to analyze the behavior of complex systems containing interactions and dependencies.
Mariia Kozlova, Antti Ahola, Pamphile T. Roy, Julian Scott Yeomans
2023-10-20T12:19:36Z
http://arxiv.org/abs/2310.13446v2
Simple binning algorithm and SimDec visualization for comprehensive sensitivity analysis of complex computational models ###### Abstract Models of complex technological systems inherently contain interactions and dependencies among their input variables that affect their joint influence on the output. Such models are often computationally expensive and few sensitivity analysis methods can effectively process such complexities. Moreover, the sensitivity analysis field as a whole pays limited attention to the nature of interaction effects, whose understanding can prove to be critical for the design of safe and reliable systems. In this paper, we introduce and extensively test a simple binning approach for computing sensitivity indices and demonstrate how complementing it with the smart visualization method, simulation decomposition (SimDec), can permit important insights into the behavior of complex engineering models. The simple binning approach computes first-, second-order effects, and a combined sensitivity index, and is considerably more computationally efficient than Sobol' indices. The totality of the sensitivity analysis framework provides an efficient and intuitive way to analyze the behavior of complex systems containing interactions and dependencies. ## 1 Introduction Engineering models of complex technological systems are characterized by a high degree of complexity and specificity. While the researchers developing them are experts in such specific mathematical approaches, as finite element methods, structural reliability and integrity modeling, and fitness-for-service (Mahadevan and Ni, 2003; prEN 1993-1-14, 2023; Shen et al., 2023; Shittu, Mehmanparast, and Hart, 2021), their professional expertise is frequently deficient in the mathematically comprehensive field of sensitivity analysis (SA) (Saltelli et al., 2019). In many structural elements, potential errors in the underlying structural analysis models can lead to failures that cause catastrophic consequences in service. To prevent such outcomes, it is essential to conduct an effective SA on those models. Furthermore, SA can also be used in model development to identify the most important parameters and their interactions and guide the process. SA can be used to explicitly test the models and exclude those parameters that do not significantly affect the outputs to build simplified approaches for engineering. The development of simple and intuitive SA approaches is paramount for incorporating SA into standard practice to fully complement the processes of building and analyzing a computational model (Iooss, Sudret, Piano, and Prieur, 2022; Saltelli et al., 2019). Consequently, this paper provides several contributions to SA for assessing the behavior of complex engineering models. Firstly, we introduce a novel intuitive procedure that efficiently computes global variance-based sensitivity indices for a given dataset (Section 2), together with its motivation presented (Section 1.1). Secondly, in contrast to recent mathematical developments (Barr and Rabitz, 2023; Borgonovo, Plischke, and Prieur, 2023; Da Veiga, 2015; Lamboni and Kucherenko, 2021; Mara, Tarantola, and Annoni, 2015), the new approach is shown to capture dependent inputs automatically while simultaneously revealing several interesting properties (Section 4). 
Thirdly, the entirety of modern sensitivity analysis literature is focused solely on the computation of sensitivity indices (Ballester-Ripoll and Leonelli, 2022; Barr and Rabitz, 2023; Jung and Taflanidis, 2023; Shang, Su, Fang, and Zhang, 2023; Shi, Zhou, and Zhou, 2023; Thapa and Missoum, 2022; Vuillod, Montemurro, Panettieri, & Hallo, 2023; Wang and Jia, 2023; Xiong et al., 2022). In contrast, we advocate for the use of visualizations to investigate the shape of the effects in addition to their strength (Section 1.2), and illustrate this with a structural reliability model that possesses nested heterogeneous interaction effects that could not be uncovered without such visual representation (Section 5). Overall, this paper introduces a simple but powerful framework for the SA of engineering models, that computes sensitivity indices to prioritize the input factors followed by a smart visualization for communicating the nature of the discovered effects. There are open-source codes in several programming languages that implement the entire framework to accompany the article. ### Sensitivity indices computation Understanding the behavior of computational models forms the basis for design and decision-making. The decision-maker obtains information on: what sources of uncertainties are most crucial and might require protection against; which actionable parameters make the most difference and thus represent the perfect levers for managing the system; and, which model parameters create the most noise and require clarification to obtain a clearer representation of the system being modeled. The sole purpose of the scientific field of SA is to identify which input variables affect the output(s) the most in the computational models (Saltelli et al., 2008). For educational purposes, the conceptual idea behind the sensitivity indices is often explained using scatter plots (Caers, 2018; Saltelli, 2023) as depicted in Figure 1, borrowed from one of the talks devoted to SA (Saltelli, 2023). Figure 1 displays the intuition behind the variance-based sensitivity indices: namely, bin the \(X\), compute the average of \(Y\) in every bin, compute the variance of these average values of \(Y\), and divide the resulting conditional variance to the total variance of \(Y\) to obtain the sensitivity index value. Quite unexpectedly, however, no common methods have previously adopted this logic in computing sensitivity indices. Among the widely used methods for computing sensitivity indices: some involve game theory (Shapley et al., 1953) and some Fourier analysis (FAST) (Cukier, Fortuin, Shuler, Petschek, & Schaibly, 1973); the classic Sobol' indices (Homma and Saltelli, 1996; Sobol, 1993) require juggling multiple simulation matrices; some methods employ polynomial chaos expansion (Crestaux, Le Marte, & Martinez, 2009; Sudret, 2008); there is a so-called moment-independent measure which is based on quantifying the shift in the distribution of the output caused by an input (Borgonovo, 2007); some are based on derivatives (Kucherenko and Song, 2016; Sobol and Kucherenko, 2010); there is a variogram-based method (VARS) (Razavi and Gupta, 2016); while the most recent approach utilizes discrepancy measures (Puy, Roy, and Saltelli, 2022) to quantify the difference between the cumulative probability distributions of simulated data and its running averages. 
Modern research on sensitivity analysis is actively developing and predominantly targets an increase in the computational efficiency of these various methods (Iooss et al., 2022; Saltelli, Jakeman, Razavi, & Wu, 2021). Marzban and Lahmer (2016) introduced a method that follows the logic of Figure 1, though this approach has been largely overlooked by the modeling community (their article has only nine citations in Scopus). The authors call it _conceptual implementation of the variance-based sensitivity analysis_ and demonstrate how the approximation of first-order sensitivity indices can be computed with this approach Figure 1: Conceptual representation of the sensitivity indices, adapted from (Saltelli, 2023). (see Section 2.1 for details). Our focus is drawn to this method because: * it is intuitive and works precisely as sensitivity indices are conceptually introduced (Figure 1); * it is computationally efficient; * it works with a given dataset. As Plischke (2012) notes, for such methods: (i) no access to the model is required; (ii) they can work on measured data; and as a result, (iii) the method is indifferent to distribution types and sampling methods. Furthermore, it enables smoother passing of information between the model and sensitivity analysis, if they are performed in different software or by different people. This paper extends the concept of Marzban and Lahmer (2016) to capture second-order effects. This modified new approach, referred to as the _Simple binning approach_, introduces additional benefits: * the choice of the number of bins is no longer the user's responsibility, it is automated based on experiments by Marzban and Lahmer (2016); * it works with dependent inputs (although developments that allow sensitivity analysis of models with dependent input variables exist (Borgonovo et al., 2023; Da Veiga, 2015; Lamboni and Kucherenko, 2021; Mara et al., 2015), they all require certain additional transformations, whereas the simple binning approach captures dependent inputs "as is" and preserves the conservation property); * it works with categorical/discrete variables; * it is resistant to different numbers of observations in bins due to the usage of weighted variance; * as a result, it can work with empirical datasets, or * it can be used to quantify the sensitivity of the output to intermediate outputs in simulation models. ### Importance of visualization The modern practice of sensitivity analysis systematically stops at computing sensitivity indices (Ballester-Ripoll and Leonelli, 2022; Barr and Rabitz, 2023; Jung and Taflanidis, 2023; Shang et al., 2023; Shi et al., 2023; Thapa and Missoum, 2022; Vuillod et al., 2023; Wang and Jia, 2023; Xiong et al., 2022) and does not take an additional step of investigating the shape of the effects in a model. Model interactions may arise in many different forms (Kozlova, Moss, Yeomans, and Caers, in press). In one model, they might be the result of multiplication in which the effect of one input variable linearly increases in the higher values of another input variable. In another model, the effect of the variable might be strong in a certain region of another input variable, but negligible in the others. The direction of the influence of one input factor on the output might be flipped in different regions of another input variable. In all of these cases, the sensitivity index will show a positive second-order effect of a certain degree, but no more. 
Namely, no indication of the _type_ of interaction effect can be obtained through sensitivity indices alone. However, explicit knowledge of the nature of interaction effects is crucial for effective decision-making and engineering design and, thus, visualization of the effects is a "must-have" practice for computational model analysis. Heat maps (M. P. Owen, Panken, Moss, Alvarez, and Leeper, 2019; Pleil, Stiegel, Madden, and Sobus, 2011) and response surfaces (Myers, Montgomery, and Anderson-Cook, 2016) are inherently three-dimensional visualizations that can portray the joint effect of a pair of input variables on the model output. However, these visualization types cannot handle simultaneous variations of multiple inputs and, thus, fail to depict higher-order interactions. These visualization limitations are overcome by the _simulation decomposition_ (SimDec) approach (Kozlova and Yeomans, 2022). SimDec partitions the output probability or frequency distribution into sub-distributions comprised of combinations of specific regions of influential input variables, thereby exposing the nature of interaction effects. The SimDec approach has demonstrated value in multiple applications (Deviatkin, Kozlova, and Yeomans, 2021; Kozlova and Yeomans, 2019; Liu, Leifsson, Pietrenko-Dabrowska, and Koziel, 2022). In this paper, it will be shown that the concurrent employment of the simple binning approach to identify the most influential input variables with SimDec to display the nature of those effects results in a simple-to-implement, intuitive, and sophisticated framework for conducting SA.

## 2 Simple binning approach to sensitivity indices

### Existing conceptual implementation for first-order effects

Marzban and Lahmer (2016) introduced a conceptual implementation of Sobol' first-order indices. Unlike the multiple matrices normally needed for calculating Sobol' indices, their approach requires only a single \(N_{observations}\times K_{inputs}\) matrix generated from one Monte Carlo simulation. The approach involves binning along the \(X\)'s range, computing averages of \(Y\)s within every bin, and then taking the variance of those averages (as shown in Figure 1). The first-order sensitivity index is then computed as the conditional variance divided by the variance of \(Y\): \[S_{x_{i}}=\frac{\text{Var}(\mathbb{E}(Y\mid X_{i}))}{\text{Var}(Y)} \tag{1}\] Furthermore, Marzban and Lahmer (2016) define the optimal number of bins for different \(N_{observations}\) and \(K_{inputs}\) as shown in Table 1, and demonstrate this approach on several benchmark problems with known analytical solutions for their sensitivity indices.

### Extension to second-order effects

Following on from the first-order index calculation approach, we extend the binning idea into the computation of second-order effects. This implies that instead of binning a line defined by a single input, we must now bin a two-dimensional area defined by two inputs, as illustrated in Figure 2.
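As a minimal illustration of equation (1), the first-order index of a single input can be approximated as follows. This is our own NumPy sketch rather than the open-source code accompanying the article, and it assumes equal-width bins over the observed range of \(X_i\).

```python
import numpy as np

def first_order_index(x, y, n_bins=10):
    """Approximate S_i = Var(E[Y | X_i]) / Var(Y) by binning x, as in equation (1)."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)   # bin label per observation
    bin_means = [y[bins == b].mean() for b in range(n_bins) if np.any(bins == b)]
    return np.var(bin_means) / np.var(y)
```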
The calculation steps follow the same procedure for computing averages in the bins and their variance, except we employ a weighted variance to account for possibly unequal numbers of observations in bins (relevant to categorical variables with different frequencies for different categories, and to second-order effects): \[\text{Var}_{w}(Y_{X_{i}})=\frac{\sum_{b=1}^{N_{\text{bins}}}N_{\text{obs}_{b}}(\overline{Y_{b}}-\overline{Y})^{2}}{\sum_{b=1}^{N_{\text{bins}}}N_{\text{obs}_{b}}} \tag{2}\] where the binning happens over \(X_{i}\) and the averages are taken of \(Y\) values as illustrated in Figure 2. The second-order effect is defined in the classic way, \[S_{X_{i}X_{j}}=\frac{V(E(Y|X_{i}X_{j}))}{V(Y)}-S_{X_{i}}-S_{X_{j}} \tag{3}\] or \[S_{X_{i}X_{j}}=\frac{V(E(Y|X_{i}X_{j}))}{V(Y)}-\frac{V(E(Y|X_{i}))}{V(Y)}-\frac{V(E(Y|X_{j}))}{V(Y)} \tag{4}\]

The number of bins is defined based on the optimal number of observations per bin. For the first-order effects, the number of bins is a linear approximation of the optimal experimental number of bins as a function of the number of simulation runs and the number of inputs, Table 1, floored at 10. For the calculation of second-order effects, to preserve approximately the same number of observations in two-dimensional bins, the range of each input variable is broken down into a number of intervals equal to the square root (rounded to the nearest integer) of the previously computed number of bins for the first-order effects. The same number of bins should be applied to all terms in Equation 4. For example, a stylized scheme in Figure 2 depicts four bins for \(X_{i}\) for the first-order effect, which translates into two bins for \(X_{i}\) and two for \(X_{j}\) for second-order effects; thus, the bin size remains the same.

\begin{table}
\begin{tabular}{c c c c} \hline \hline
 & \multicolumn{3}{c}{Number of input variables, \(K_{inputs}\)} \\ \cline{2-4}
Number of observations, \(N_{obs}\) & 3 & 6 & 12 \\ \hline
1000 & 10 & 10 & 10 \\
2500 & 25 & 10 & 10 \\
5000 & 50 & 10 & 10 \\
7500 & 50 & 25 & 10 \\
10000 & 50 & 50 & 10 \\
25000 & 100 & 50 & 25 \\
50000 & 100 & 50 & 50 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Optimal number of bins for different combinations of the number of input factors and simulation runs according to Marzban and Lahmer (2016).

Figure 2: Simple binning approach for first-order indices (left) and second-order indices (right).

An aggregate or combined sensitivity index can now be calculated which, for each input variable, sums up its first-order effect and one half of its second-order indices with all other input variables: \[S_{\text{combined}_{X_{i}}}=S_{X_{i}}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{K_{\text{inputs}}}\frac{S_{X_{i}X_{j}}}{2} \tag{5}\] This combined sensitivity index aggregates the individual and the interaction effects of each input variable. The sum of combined indices for all model input variables would equal 1 if there are no effects higher than second-order in the model and no unaccounted variation. However, due to the approximation nature of the method, potentially noisy results due to binning (Marzban & Lahmer, 2016), and the imperfections of sampling, the sum of combined sensitivity indices can slightly deviate from 1 even in the absence of higher-order effects. Irrespectively, we advocate for the usage of this approach in combination with SimDec to determine the selection of the prime input variables for decomposition.
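Extending the earlier sketch to equations (2)-(5) requires only the weighted conditional variance and a two-dimensional binning for input pairs. The following is again an illustrative reading of the procedure, with our own function names (e.g., `binning_indices`), not the released implementation.

```python
import numpy as np

def weighted_conditional_variance(y, bins):
    """Equation (2): count-weighted variance of the per-bin means of y."""
    labels, counts = np.unique(bins, return_counts=True)
    means = np.array([y[bins == b].mean() for b in labels])
    return np.sum(counts * (means - y.mean()) ** 2) / np.sum(counts)

def bin_labels(x, n_bins):
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)

def binning_indices(X, y, n_bins=10):
    """First-order, second-order, and combined indices (equations 1-5)."""
    n, k = X.shape
    var_y = np.var(y)
    n_bins2 = int(round(np.sqrt(n_bins)))        # bins per axis for input pairs
    first = np.array([weighted_conditional_variance(y, bin_labels(X[:, i], n_bins))
                      for i in range(k)]) / var_y
    second = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            pair = bin_labels(X[:, i], n_bins2) * n_bins2 + bin_labels(X[:, j], n_bins2)
            vij = weighted_conditional_variance(y, pair) / var_y
            # same (coarser) bin count for every term of equation (4)
            si = weighted_conditional_variance(y, bin_labels(X[:, i], n_bins2)) / var_y
            sj = weighted_conditional_variance(y, bin_labels(X[:, j], n_bins2)) / var_y
            second[i, j] = second[j, i] = vij - si - sj
    combined = first + 0.5 * second.sum(axis=1)   # equation (5)
    return first, second, combined
```

With 1,000 observations, the Table 1 heuristic gives 10 first-order bins and about 3 bins per axis for the pairwise terms, matching the settings reported for the toy model below.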
As long as the ranking of inputs' importance is preserved, noise does not prove to be an impediment. ## 3 Testing the simple binning approach ### Capturing interactions - The toy model To demonstrate the efficacy of the simple binning approach in capturing second-order effects, we examine the simple toy model presented in the seminal sensitivity analysis textbook by Saltelli, Tarantola, Campolongo, Ratto, et al. (2004). At the heart of the toy model are summations and multiplications which are basic operations in any computational model. The model equation and the input distributions are as follows: \[Y=C_{s}P_{s}+C_{t}P_{t}+C_{j}P_{j} \tag{6}\] where \(P_{s}\sim\ N(0;4)\), \(P_{t}\sim\ N(0;2)\), \(P_{j}\sim\ N(0;1)\), \(C_{s}\sim\ N(250;200)\), \(C_{t}\sim\ N(400;300)\), and \(C_{j}\sim\ N(500;400)\). A side-by-side comparison of sensitivity index results obtained by Sobol' approach as reported in (Saltelli et al., 2004) and the new simple binning method are presented in Table 2. Table 2 demonstrates that the simple binning method produces meaningful sensitivity indices close to Sobol' indices. However, the simple binning method used only 1000 model evaluations, while 21000 data points were required to compute the Sobol' indices. The number of bins used was ten for first-order effects and three for second-order effects. The indices produced by both approaches in Table 2 are based on limited model evaluations with simple random sampling. Because different simulation runs can generate to some degree different datasets, we repeated the exercise 1000 times to ascertain how stable the index estimations were. Figure 3 provides a comparison of the results obtained with 10K and 1K model evaluations with the simple binning approach, respectively, contrasted by Sobol' indices computed on an efficient quasi-random (Sobol' sequence) sampling of the size 1792 (14 different simulations required for 6 inputs variables, 128 points each). The simple binning approach even based on a smaller 1K sample produces more accurate estimates than Sobol' indices. A larger sample narrows the deviations in sensitivity indices estimation even further. \begin{table} \begin{tabular}{l c c c} \hline Effect & Sobol’ indices (Saltelli et al., 2004) & Simple binning method & Delta \\ \hline First-order effects & & & \\ \(P_{s}\) & 36 \% & 35 \% & \(-1\) \% \\ \(C_{s}\) & 0 \% & 1 \% & 1 \% \\ \(P_{t}\) & 22 \% & 20 \% & \(-2\) \% \\ \(C_{t}\) & 0 \% & 1 \% & 1 \% \\ \(P_{j}\) & 8 \% & 8 \% & 0 \% \\ \(C_{j}\) & 0 \% & 2 \% & 2 \% \\ Sum of first-order effects & 66 \% & 67 \% & 1 \% \\ \hline Second-order effects (of selected pairs of variables) & & & \\ \(P_{s}C_{s}\) & 18 \% & 16 \% & \(-2\) \% \\ \(P_{t}C_{t}\) & 11 \% & 10 \% & \(-1\) \% \\ \(P_{j}C_{j}\) & 5 \% & 6 \% & 1 \% \\ Sum of second-order effects & 34 \% & 32 \% & \(-2\) \% \\ Sum of all effects & 100 \% & 99 \% & \(-1\) \% \\ \hline Model evaluations & 21000 & 1000 & \\ \hline \end{tabular} \end{table} Table 2: Comparison of the Sobol’ indices and simple binning approach on a toy model. In addition, we compared how the simple binning approach performs with (i) simple random sampling (Ollen & Rotem, 1986; Singh & Singh, 2003), which is often used for Monte Carlo simulation, (ii) quasi-random sampling (A. B. 
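For concreteness, the toy model of equation (6) can be pushed through the illustrative `binning_indices` sketch given in Section 2 above. The sampling below uses NumPy and interprets \(N(\mu;\sigma)\) as mean and standard deviation; the printed estimates should land in the neighbourhood of the Table 2 values, although they vary from run to run.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
Ps, Pt, Pj = rng.normal(0, 4, n), rng.normal(0, 2, n), rng.normal(0, 1, n)
Cs, Ct, Cj = rng.normal(250, 200, n), rng.normal(400, 300, n), rng.normal(500, 400, n)
X = np.column_stack([Ps, Cs, Pt, Ct, Pj, Cj])
y = Cs * Ps + Ct * Pt + Cj * Pj                  # equation (6)

first, second, combined = binning_indices(X, y, n_bins=10)
print(np.round(first, 2))         # P_s, P_t, P_j should dominate (roughly 0.35, 0.20, 0.08)
print(np.round(second[0, 1], 2))  # P_s-C_s interaction, expected near the 0.16-0.18 of Table 2
```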
Owen, 2023; Sobol', 1967) with the Sobol' low discrepancy sequence, which is reportedly a more efficient sampling strategy that fills the space more uniformly, and (iii) full factorial design, frequently used in engineering contexts (Alidoosti et al., 2013; Suard, Hostikka, & Baccou, 2013; Tong, 2006), Figure 4. For this experiment, we change the distribution of inputs from normal to uniform, selecting as minimum and maximum values \(\pm\) two standard deviations of the corresponding normal distributions. Figure 3: Sobol’ indices obtained with quasi-random sampling (QMC) of the size 1792 and sensitivity indices obtained with the simple binning approach and simple random sampling (MC) for 1K and 10K model evaluations of the toy model. From Figure 4, one can observe that simple random sampling results in noisier estimates and that this noise increases substantially the smaller the sample. Full factorial design sampling results in deterministic estimates for sensitivity indices, but the simple binning approach performed on such a sample underestimates first-order indices and overestimates second-order effects. Quasi-Monte Carlo (QMC), done with scrambling, gives more accurate and clear estimates than other sampling strategies, even with a smaller sample, and, thus, is recommended for sensitivity indices computation. QMC is easy to implement and its coded functions are available in Python (SciPy, 2023), R (Chalabi et al., 2023), Julia (Robbe, 2018), and Matlab (MathWorks, 2013). However, when using any of these packages, it is important to have the first point in the sequence sampled, otherwise, its performance drops (A. B. Owen, 2020). ### Capturing cyclic behavior - The Ishigami function The Ishigami function, equation 7, is one of the benchmark functions often used in sensitivity analysis studies for the validation of different methods, since analytical solutions for its sensitivity indices can be readily determined. \[Y=\sin(X_{1})+a\sin(X_{2})^{2}+bX_{3}^{4}\sin(X_{1}) \tag{7}\] The function is periodic in nature, Figure 5, which represents a significant challenge for approximate methods (Zichn & Tomlin, 2009). Figure 4: Toy model sensitivity indices estimation by the simple binning approach with simple random sampling Monte Carlo simulation (MC), quasi-random sampling or quasi-Monte Carlo (QMC), and full factorial design (FFD) sampling. The simple binning algorithm is tested on the Ishigami function with parameters \(a=7\) and \(b=0.1\), quasi-random sampling of size \(10^{5}\), \(10^{4}\), and \(10^{3}\), Figure 6. Figure 6 indicates that the simple binning algorithm with QMC sample size \(10^{4}\) or larger captures first-order effects very well, but underestimates the highly-curved (see Figure 5 (center)) interaction effect between \(X_{1}\) and \(X_{3}\). A smaller sample translates into fewer bins for estimating the first-order effects and the square root of that for second-order effects (a sample size of 1000 is analyzed with 10 bins for first-order effects and only 3 bins for the second-order effects), so the high-frequency relationships are downplayed. Thus, if a model is known to possess periodic or cyclic effects, a larger sample size becomes essential in order to produce reliable estimates for sensitivity analysis. 
### Capturing correlation - The mechanical engineering model In this section, we reproduce and then modify the mechanical engineering problem presented in Marzban and Lahmer (2016) by introducing dependent inputs into their model and examining the resulting performance capabilities of our sensitivity analysis algorithm in processing this additional complication. Figure 5: Ishigami function, equation 7, with \(a=7\) and \(b=0.1\) (the third factor is fixed at 0 on each plot). Figure 6: Analytical and approximate sensitivity indices for Ishigami function. #### 3.3.1 Case background Civil engineering structures play an essential role in most modern infrastructures (e.g., in buildings, process industry, and shells). A wide spectrum of steel frames are required within these structures. From an engineering perspective, such frames must be designed to withstand numerous different kinds of load actions due to such things as payloads, self-weight, and environmental impacts. In statically loaded structures (characterized by permanent load actions), it is necessary to analyze stresses and displacements. Stress analyses are usually related to the ultimate limit state (ULS), while analyses on displacement cover the serviceability limit states (SLS) that are affiliated with the operational conditions and functionalities (service) of structures. It is crucial to understand the effects of different parameters on the mechanical system behavior. For some simple cases, the effects can be determined analytically. Frequently, however, nonlinear becomes impractical or impossible to solve analytically in even the most straightforward systems. In such cases, numerical modeling becomes a requisite tool for assessing structural behavior under mechanical loads. From an engineering viewpoint, it is usually easy to determine the most influential geometric and material parameters in the underlying linear analyses (LAs) of mechanical behavior of the system, such as deflections and stresses. However, geometrically nonlinear analyses (GNAs) may uncover different behavior of the system with a different set of influential parameters. Thus, the decision, which analysis to use is often critical for engineering design and structural analysis. To support the decision as to whether a nonlinear approach is required, a simplified model that computes the ratio between the results of two different analysis types (e.g., LA and GNA) can be employed. The sensitivity indices for different parameters can then provide insight for decision-making. A two-dimensional frame structure is employed to demonstrate such an approach (Figure 7), as in (Marzban & Lahmer, 2016). The frame structure is comprised of two vertical columns joined with a horizontal beam. The beam is loaded by a uniformly distributed shear load along the beam and the top left corner of the frame is loaded by a horizontal load. The structural behavior, focusing on the displacements, of the frame structure is numerically solved via LA and GNA. As the original CALFEM finite element (FE) code (Austrell et al., 2004) applied in Marzban and Lahmer (2016) was not accessible, a similar frame structure compiled by three beam elements was recreated (Figure 7). The numerical FE models were parameterized in an API tool using the FEMAP 2022.2 (Siemens PLM) software. The system was studied employing nine different input parameters. 
Namely, the height \(H\) and width \(W\) of the frame; the modulus of elasticity \(E\); the cross-sectional area of columns and beam, \(A_{c}\) and \(A_{b}\), respectively; the second moment of areas of columns and beam, \(I_{c}\) and \(I_{b}\), respectively; the lateral force \(P\); and the uniformly distributed shear load \(q_{0}\). Each parameter was varied by \(\pm 25\%\) of its mean value and Table 3 displays all of the corresponding minimum and maximum values. A total number of 10,000 models were simulated and analyzed using both LAs and GNAs. As an output value, the ratio between lateral displacements in the top left corner of the frame, obtained using GNA and LA, was used.

Figure 7: Shape and dimensions of the studied frame structure.

#### 3.3.2 Estimation of sensitivity indices

Table 4 displays the first- and second-order sensitivity indices computed for the original model used in Marzban and Lahmer (2016). To clarify the visual perception, all index values less than 2% have been grayed out. As can be seen from Table 4, the frame height is the most influential parameter. The relative magnitudes shown for the first-order effects coincide with those obtained by Marzban and Lahmer (2016). The negligible second-order indices signify an absence of any interaction effects. To introduce dependency into the system, the \(H/I_{c}\) ratio was fixed as per the dimensions of the case structure (i.e., \(I_{c}=4.706\times 10^{-6}\cdot H\)). With this change in effect, a subsequent computational experiment with a total of 10,000 simulations was conducted to investigate the second-order effects in the presence of correlated input parameters. The new sensitivity indices calculated for this correlated system are presented in Table 5. Comparing Tables 4 and 5, we can clearly observe the striking difference arising in the second-order effects. While for the original model, all second-order effects were negligible, in the correlated model, we can see a negative effect of 20%. The negative second-order effect reflects the introduced dependency between the inputs. It should be noted that this negative second-order effect essentially zeroes out the first-order effect of \(I_{c}\) when summing up all effects. Thus, the negative second-order effect represents a correction of an overlapping first-order effect.

\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline
 & \(H\) & \(W\) & \(E\) & \(A_{c}\) & \(I_{c}\) & \(A_{b}\) & \(I_{b}\) & \(P\) & \(q_{0}\) \\
 & (m) & (m) & (N/m\({}^{2}\)) & (m\({}^{2}\)) & (m\({}^{4}\)) & (m\({}^{2}\)) & (m\({}^{4}\)) & (N) & (N/m) \\ \hline
First-order indices & 41 \% & 16 \% & 13 \% & 0 \% & 10 \% & 0 \% & 0 \% & 0 \% & 13 \% \\ \hline
Second-order indices & & & & & & & & & \\ \hline
\(H\) (m) & 1 \% & 1 \% & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% & 1 \% \\
\(W\) (m) & & & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% \\
\(E\) (N/m\({}^{2}\)) & & & & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% \\
\(A_{c}\) (m\({}^{2}\)) & & & & & 0 \% & 0 \% & 0 \% & 0 \% & 0 \% \\
\(I_{c}\) (m\({}^{4}\)) & & & & & & 0 \% & 0 \% & 0 \% & 0 \% \\
\(A_{b}\) (m\({}^{2}\)) & & & & & & & 0 \% & 0 \% & 0 \% \\
\(I_{b}\) (m\({}^{4}\)) & & & & & & & 0 \% & 0 \% \\
\(P\) (N) & & & & & & & & & 0 \% \\ \hline
\end{tabular}
\end{table}
Table 4: Sensitivity indices for the engineering model.
\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline
 & \(H\) & \(W\) & \(E\) & \(A_{c}\) & \(I_{c}\) & \(A_{b}\) & \(I_{b}\) & \(P\) & \(q_{0}\) \\
 & (m) & (m) & (GPa) & (e-3 m\({}^{2}\)) & (e-5 m\({}^{4}\)) & (e-3 m\({}^{2}\)) & (e-5 m\({}^{4}\)) & (kN) & (kN/m) \\
\hline
Minimum & 2.55 & 3.0 & 150 & 1.5 & 1.2 & 4.5 & 4.05 & 7.5 & 37.5 \\
Maximum & 4.25 & 5.0 & 250 & 2.5 & 2.0 & 7.5 & 6.75 & 12.5 & 62.5 \\
\hline
\end{tabular}
\end{table} Table 3: Model parameter ranges in the frame model.

Table 5: Sensitivity indices for the modified engineering model with correlated inputs.

It can be further observed that the first-order indices have changed their values as well. By effectively eliminating one input variable via correlation, the overall variance of the output has changed, so the corresponding portions of the explained variance have also shifted accordingly. This engineering case demonstrates that the binning approach for calculating sensitivity indices is capable of capturing and identifying the impacts of dependent inputs. Even though such sensitivity analyses can be time-consuming for complex systems (e.g., for a high number of elements), this approach could be used for sub-models to highlight the influential factors and the importance of GNAs. While the correlation in the presented example was introduced artificially, it occurs naturally in more complex systems (e.g., strength properties depend on the material chosen, or, to give an example outside engineering applications, different price ranges lead to different demand levels, etc.). Projecting these findings to a larger scope of applications, an increase in the accuracy of structural analyses for SLS estimates would improve the overall reliability and integrity of the structures. Consequently, costly changes to structural elements that might otherwise be found necessary _post hoc_ during service/operation could be avoided via proper _a priori_ engineering analysis and design.
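For reference, the input sample described above can be generated with a few lines of generic code. The sketch below is a minimal illustration only, not the authors' FEMAP/CALFEM pipeline: the mean values are those implied by the \(\pm 25\%\) ranges of Table 3 (converted to SI units), and `run_femap_model` is a hypothetical placeholder for the external LA/GNA analyses that produce the displacement ratio.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Mean values implied by the +/-25% ranges of Table 3, converted to SI units.
means = {
    "H": 3.4,       # m
    "W": 4.0,       # m
    "E": 200e9,     # N/m^2
    "A_c": 2.0e-3,  # m^2
    "I_c": 1.6e-5,  # m^4
    "A_b": 6.0e-3,  # m^2
    "I_b": 5.4e-5,  # m^4
    "P": 10e3,      # N
    "q0": 50e3,     # N/m
}

def sample_inputs(correlated: bool) -> dict:
    """Draw N input vectors; each parameter is uniform within +/-25% of its mean."""
    X = {name: rng.uniform(0.75 * m, 1.25 * m, N) for name, m in means.items()}
    if correlated:
        # Dependency used in the modified model: the H/I_c ratio is fixed,
        # i.e. I_c = 4.706e-6 * H, so I_c is no longer an independent input.
        X["I_c"] = 4.706e-6 * X["H"]
    return X

X_indep = sample_inputs(correlated=False)  # original model (Table 4)
X_corr = sample_inputs(correlated=True)    # correlated model (Table 5)

# Each sampled row would be passed to the external finite element analyses (LA and
# GNA); the model output is the ratio y = u_GNA / u_LA of the lateral displacements
# at the top left corner of the frame.  The call below is a hypothetical placeholder.
# y = np.array([run_femap_model(row) for row in zip(*X_corr.values())])
```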
## 4 Relationships between second-order indices and correlation

The mechanical engineering model demonstrated that in the presence of positive correlation, the second-order sensitivity index turns negative (Table 5). But how does the index behave when the correlation is negative? Furthermore, is the relationship the same when the inputs that affect the output also interact? To explore these questions, we set up a series of simulation experiments for two simple two-factor models. The first model is additive, \(Y=A+B\), where the variables do not interact. The second model is multiplicative, \(Y=AB\), which possesses a positive second-order effect when the variables are independent, thereby manifesting interaction. We also examine two different types of correlation, (i) a joint multivariate uniform distribution copula and (ii) equating the values of \(B\) to \(A\) over a portion of the sample, which might be considered as a structural change in the system. Both correlation types are modeled for different correlation strengths. For validation purposes, Pearson and Spearman correlation coefficients are reported in each case. Both \(A\) and \(B\) follow a uniform distribution between 0 and 5. The indices are calculated based on a sample size of \(10^{6}\) to ensure a high level of precision. The sensitivity index estimates are presented in Table 6, and the relationship between the second-order indices and the Pearson correlation coefficient is visualized in Figure 8.

Figure 8: Second-order effect as a function of correlation for additive (0% interaction) and multiplicative (14% interaction) functions with two correlation types, (1) - portion of equal values, (2) - copula.

Hart and Gremaud (2018) have indicated that theoretical second-order indices can become negative in the presence of dependent inputs, and such an outcome can be observed for the approximated indices computed by our simple binning algorithm (Table 6). The additive models cross the origin (see Figure 8) and possess zero second-order effects in the no-dependency case; no interaction is anticipated from the additive model. The copula variables show second-order effect values very close to the correlation coefficients and form a linear relationship between the two. Positive correlation results in negative second-order effects, conveying overlapping effects of the variables. Conversely, negative correlation results in a positive second-order effect for additive models, as if the combined effect of the variables is greater than the sum of their individual effects (see equation 4). The additive model in which dependence is modeled through a portion of equal values has largely the same pattern, except for lower positive second-order effects in the presence of negative correlation. The calculation of sensitivity indices for both additive models at \(-100\%\) correlation fails, because all \(Y\) values turn to 0. Instead, the values for the negative 95% correlation are computed and displayed in Figure 8.

In the multiplicative model, the second-order effect index behaves asymmetrically for positive and negative correlation (Figure 8). With increasing positive correlation, the second-order index becomes increasingly negative (as in the additive models), signifying the overlapping effects of the input variables on the output. For negative correlations, the second-order index is initially positive and increasing, peaking at 50% correlation.
This translates into a synergistic effect where the negatively correlated inputs magnify the impact on the output. But when the negative correlation exceeds 50%, the second-order index decreases and approaches \(-1\). One could speculate that with a strong negative correlation for interacting variables, the joint overlapping effect becomes stronger than the synergistic one, driving the second-order index to \(-1\). For all cases, the _conservation_ property (the sum of all indices is equal to 1) holds. The _boundedness_ property for each index extends from \([0,1]\), as for the classic Sobol' indices, to \([-1,1]\). Because the conservation property holds under all cases, this implies that correlation and interaction effects are actually both combined into the single second-order effect, and, simultaneously affecting the first-order index behavior, perfectly fitting into the variance decomposition of the output. The presence of interaction can be observed in Figure 8 as the vertical distance between the additive and multiplicative models' lines on the correlation range between \(-0.25\) and \(0.25\). In other words, the different degrees of dependency move the second-order effect up or down, but the presence of interaction reveals itself in higher second-order effect values. ## 5 Visualizing second-order effects - Beyond sensitivity indices The sensitivity analysis field has focused on computing sensitivity indices to be able to rank the input variables by their relative importance (Da Veiga, Gamboa, Iooss, & Prieur, 2021; Saltelli et al., 2008). And yet it becomes apparent that sensitivity indices are not able to convey the full relationship story. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Correlation type} & \multirow{2}{*}{Model} & \multirow{2}{*}{\(S\)} & \multicolumn{8}{c}{Correlation} \\ \cline{3-11} & & & \multicolumn{3}{c}{negative} & \multicolumn{8}{c}{positive} \\ & & & -100 \% & \multicolumn{3}{c}{-50 \%} & \multicolumn{3}{c}{0 \%} & \multicolumn{3}{c}{\%} & 50 \% & \multicolumn{3}{c}{\%} & 100 \% \\ \hline \multirow{6}{*}{\begin{tabular}{c} \end{tabular} } & \multirow{6}{*}{\(A+B\)} & \(S_{A}\) & - & 0.14 & 0.27 & 0.38 & 0.50 & 0.62 & 0.74 & 0.87 & 1.00 \\ & & \(S_{B}\) & - & 0.14 & 0.27 & 0.38 & 0.50 & 0.62 & 0.74 & 0.87 & 1.00 \\ & & \(S_{AB}\) & - & **0.71** & **0.47** & **0.23** & **0.00** & **-0.24** & **-0.49** & **-0.74** & **-1.00** \\ & & \(\sum\) & - & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \cline{2-11} & & \(S_{A}\) & 1.00 & 0.31 & 0.27 & 0.33 & 0.43 & 0.56 & 0.70 & 0.85 & 1.00 \\ & & \(S_{B}\) & 1.00 & 0.31 & 0.27 & 0.33 & 0.43 & 0.56 & 0.70 & 0.85 & 1.00 \\ & & \(S_{AB}\) & **-1.00** & **0.39** & **0.47** & **0.35** & **0.14** & **-0.11** & **-0.40** & **-0.70** & **-1.00** \\ & & \(\sum\) & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \cline{2-11} & & \(\frac{1}{2}\) & Pearson & -1.00 & -0.73 & -0.48 & -0.24 & 0.00 & 0.24 & 0.48 & 0.73 & 1.00 \\ & & Spearman & -1.00 & -0.73 & -0.48 & -0.24 & 0.00 & 0.24 & 0.48 & 0.73 & 1.00 \\ \hline \multirow{6}{*}{ \begin{tabular}{c} \end{tabular} } & \multirow{6}{*}{\(A+B\)} & \(S_{A}\) & - & 0.61 & 0.36 & 0.45 & 0.50 & 0.65 & 0.79 & 0.93 & 1.00 \\ & & \(S_{B}\) & - & 0.04 & 0.26 & 0.47 & 0.50 & 0.67 & 0.76 & 0.83 & 1.00 \\ & & \(S_{AB}\) & - & **0.35** & **0.39** & **0.09** & **0.00** & **-0.32** & **-0.56** & **-0.76** & **-1.00** \\ & & \(\sum\) & - & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \cline{2-11} & & \(S_{A}\) & 1.00 & 
0.52 & 0.39 & 0.42 & 0.43 & 0.44 & 0.51 & 0.74 & 1.00 \\ \cline{2-11} & & \(S_{B}\) & 1.00 & 0.09 & 0.17 & 0.35 & 0.43 & 0.59 & 0.86 & 0.93 & 1.00 \\ & & \(S_{AB}\) & **-1.00** & **0.39** & **0.44** & **0.24** & **0.14** & **-0.03** & **-0.36** & **-0.67** & **-1.00** \\ & & \(\sum\) & 1.00 & 1.00 & 1.00 & 1.01 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \cline{2-11} & \(\frac{1}{2}\) & Pearson & -1.00 & -0.71 & -0.52 & -0.23 & 0.00 & 0.23 & 0.52 & 0.71 & 1.00 \\ \cline{2-11} & Spearman & -1.00 & -0.71 & -0.53 & -0.24 & 0.00 & 0.24 & 0.53 & 0.71 & 1.00 \\ \hline \hline \end{tabular} \end{table} Table 6: Sensitivity indices and correlation coefficients for a simple \(Y=AB\) model with \(A\) and \(B\) dependent to different degrees. As observed in our earlier analysis with dependent input variables (Table 6), the second-order indices for the same model, but operating under different assumptions, appear to be very similar. However, the scatter plots (Table 6) reveal how different these relationships actually are, and this knowledge can lead to different structural design decisions. A fact that has been largely neglected throughout modern sensitivity analysis studies is that although specific summary values can appear near-identical in magnitude, an examination of the underlying data can reveal very different shapes and perspectives (Kozlova et al., in press; Puy, Beneventano, et al., 2022; Saltelli et al., 2004). Consequently, different kinds of visualizations can be used to augment and improve the understanding of the underlying model. These data visualizations can include such formats as basic scatter plots, response surfaces, heat maps, and parallel coordinate plots. However, all of these representations possess dimensionality limitations. SimDec is a recent approach that is able to project multidimensional systems onto a two-dimensional graph (Kozlova and Yeomans, 2022b) and, in combination with the simple binning sensitivity analysis approach, can be used to convey the nature of interactions in complex engineering models to produce a much more comprehensive sensitivity analysis framework (Kozlova et al., in press). To illustrate the power of this framework, this section showcases the application of the simple binning approach together with SimDec visualization for investigating the behavior of a complex structural reliability model containing both correlations and interactions. ### Case background: structural reliability model for fatigue assessments Many structural applications experience cyclic or fluctuating load conditions during service life (such as ship and offshore structures as well as various vehicles, trucks and trains). In these complex systems, fatigue is amongst the most important design criteria to control and maintain the lifespan of ageing components without concerns of structural failures. Due to the random nature of acting cyclic loads in many applications and the complexity of physical modeling of fatigue phenomenon, the vast majority of existing models for predicting fatigue life are necessarily based on probabilistic or data-driven approaches (Bai, Huang, Li, Lu, & Huang, 2023; Leonetti, Majlaars, & Snijder, 2020; Ruiz Munoz and Sorensen, 2020; Ye, Yang, Zhang, Meng, & Wang, 2023). However, current trends for improving material usage and optimization in mechanical components have led to an ever-increasing need for creating more sophisticated and more accurate deterministic models to assess the fatigue life of components. 
This is particularly the case for modern infrastructures that utilize high-strength steels, due to the fact that an increase in material strength does not contribute to higher fatigue performance in welded components (Lieurade, Huther, & Lefebvre, 2008). To account for the effects of material strength and welding quality, a multi-parametric model - named the 4R method - was introduced for the fatigue assessment of welded components. The 4R model incorporates four parameters - material strength, welding residual stresses, applied stress ratio, and welding quality (Ahola, Muikku, Braun, & Bjork, 2021; Ahola, Skriko, & Bjork, 2020). These four key parameters affect the fatigue strength of welded components depending on the conditions (Hobbacher, 2016). As the 4R model employs more parameters than conventional models, a key question is the sensitivity of the 4R output value to each of these parameters.

The basis of the 4R method is to utilize the effective stress concept, usually known as the effective notch stress (ENS) approach (Sonsino et al., 2012), in association with a mean stress correction using the requisite four parameters. In the 4R model, the material elastic-plastic behavior at the notch root is analytically or numerically computed, after which the local hysteresis loops of cyclic stress are determined. The output reference stress applied in fatigue assessments (in the form of an S-N curve) is computed based on the well-known Smith-Watson-Topper (Smith, Watson, & Topper, 1970) mean stress correction on the linear-elastic notch stress (equation 8). Figure 9 demonstrates the workflow of the 4R method. \[\Delta\sigma_{k,ref}=\frac{\Delta\sigma_{k}}{\sqrt{1-R_{local}}} \tag{8}\]

### Sensitivity indices for the structural reliability model

The 4R model is simulated \(10^{4}\) times with inputs uniformly distributed within the ranges specified in Table 7. The three levels of residual stress and stress ratio for each case material are formed based on actual conditions in structural components. Usually, welding causes high tensile residual stresses, up to the yield strength of the parent material. On the other hand, residual stresses can be reduced by, for example, post-weld heat treatments and/or mechanical post-weld treatments, or by residual stress relaxation via the mechanical load. The applied stress ratio is specific to the load conditions in service, and the three selected cases represent cases available in engineering components. The weld toe radius is converted in the model to the fatigue-effective stress, \(K_{f}\), the sensitivity to which is analyzed; the three other inputs are used in the sensitivity analysis as is.

The sensitivity indices for the model output, \(\Delta\sigma_{k,ref}\), computed with the simple binning method, are presented in Table 8. The sensitivity indices in Table 8 show that the residual stress \(\sigma_{res}\) alone explains half of the variance of the output. Residual stress \(\sigma_{res}\) also has an 11% interaction with the stress ratio \(R\), and a \(-6\%\) second-order effect with the steel grade \(R_{p0.2}\). In addition, the stress ratio \(R\) and steel grade \(R_{p0.2}\) have a 4% interaction. The remaining second-order effects are zero. Thus, there are three input variables in the model that have pair-wise interaction effects with each other: residual stress \(\sigma_{res}\), stress ratio \(R\), and steel grade \(R_{p0.2}\). Together these variables explain 96% of the output variability.
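For readers who wish to reproduce this kind of table from their own simulation data, the sketch below shows one straightforward binning estimator of first- and second-order indices from an \((X,Y)\) sample. It is an illustration only, using equal-frequency bins; the exact binning rules and bin counts of the simple binning algorithm introduced earlier in the paper may differ.

```python
import numpy as np
import pandas as pd

def first_order_index(x, y, bins=20):
    """S_i ~ Var_b( E[Y | X_i in bin b] ) / Var(Y), using equal-frequency bins of X_i."""
    b = pd.qcut(x, q=bins, labels=False, duplicates="drop")
    g = pd.Series(y).groupby(b)
    p = g.size() / len(y)                      # probability mass of each bin
    return float((p * (g.mean() - np.mean(y)) ** 2).sum() / np.var(y))

def second_order_index(xi, xj, y, bins=10):
    """S_ij ~ combined binned effect of (X_i, X_j) minus both first-order effects.
    With dependent inputs this value can turn negative (overlapping effects)."""
    bi = pd.qcut(xi, q=bins, labels=False, duplicates="drop")
    bj = pd.qcut(xj, q=bins, labels=False, duplicates="drop")
    g = pd.Series(y).groupby([pd.Series(bi), pd.Series(bj)])
    p = g.size() / len(y)
    combined = float((p * (g.mean() - np.mean(y)) ** 2).sum() / np.var(y))
    return combined - first_order_index(xi, y, bins) - first_order_index(xj, y, bins)

# Example usage, assuming the simulated inputs are stored in a dict X and y is the output:
# S1 = {name: first_order_index(X[name], y) for name in X}
```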
However, as mentioned above, the sensitivity indices alone cannot adequately explain the nature of those interactions and how they should be accounted for in decision-making processes. \begin{table} \begin{tabular}{l l l l l} \hline \hline Case material & \begin{tabular}{l} Weld toe radius, \\ \(r_{true}\) (mm) \\ \end{tabular} & \begin{tabular}{l} Steel grade, \\ \(R_{p0.2}\) (MPa) \\ \end{tabular} & \begin{tabular}{l} Residual stress, \\ \(\sigma_{res}\) (MPa) \\ \end{tabular} & \begin{tabular}{l} Stress ratio, \\ \(R\) (-) \\ \end{tabular} \\ \hline S355 & 0.5 \(\pm\) 0.5 & 355 \(\pm\) 100 & \begin{tabular}{l} 250 \(\pm\) 100 \\ -100 \(\pm\) 100 \\ -100 \(\pm\) 0.2 \\ \end{tabular} & \begin{tabular}{l} 0.5 \(\pm\) 0.2 \\ -1 \(\pm\) 0.2 \\ \end{tabular} \\ \hline S960 & 0.5 \(\pm\) 0.5 & 960 \(\pm\) 100 & \begin{tabular}{l} 850 \(\pm\) 100 \\ -300 \(\pm\) 100 \\ \end{tabular} & \begin{tabular}{l} 0.5 \(\pm\) 0.2 \\ 0 \(\pm\) 0.2 \\ -1 \(\pm\) 0.2 \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 7: Assumptions on input variation for the structural reliability 4R model. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{Variable} & \multicolumn{2}{l}{First-order} & \multicolumn{4}{c}{Second-order effect} & Combined \\ \cline{3-6} & \multicolumn{2}{l}{Residual} & Stress & Steel & Fatigue- & effect \\ & & \multicolumn{2}{l}{\(\sigma_{res}\)} & \multicolumn{2}{l}{\(R\)} & \multicolumn{2}{l}{\(R_{p0.2}\)} & stress, \(K_{f}\) & \\ \hline Residual stress, \(\sigma_{res}\) & 50\% & & 11\% & \(-6\%\) & 0\% & 51\% \\ Stress ratio, \(R\) & 28\% & & & 4\% & 0\% & 35\% \\ Steel grade, \(R_{p0.2}\) & 11\% & & & & 0\% & 10\% \\ Fatigue-effective stress, \(K_{f}\) & 4\% & & & & & 4\% \\ \(\Sigma\) & 92\% & & & & & 100\% \\ \hline \hline \end{tabular} \end{table} Table 8: Sensitivity indices for the structural reliability 4R model. Figure 9: Description and workflow of the 4R model for fatigue assessment. ### Exploring the nature of second-order effects To investigate how the model output is affected simultaneously by the three input variables identified as possessing pairwise interaction effects, we employ SimDec to provide a multidimensional visualization perspective of their impacts (Kozlova et al., in press; Kozlova and Yeomans, 2022a). Since SimDec uses a frequency distribution, the same simulation data that was used to compute sensitivity indices with the simple binning method, can be used for the visualization. The idea behind SimDec is to decompose the simulation data into regions formed by the combinations of different ranges (or states) of influential input variables. An intelligent color-coding of these regions in the frequency distribution communicates a clear visual representation of the inherent input-output relationships within the model (for the algorithm details, see (Kozlova et al., in press)). The open-source SimDec code is available for Python, R, Julia, and Matlab (Kozlova, Roy, Moss, & Alam, 2023). For the structural reliability model, we choose to break down the residual stress \(\sigma_{res}\) into three states (low, medium, high), the stress ratio \(R\) into two states (reversed, pulsating), and the steel grade \(R_{p0.2}\) into two states (mild, UHSS) (Table 9). Altogether, all possible combinations of these state settings for the input variables (Table 9) form 12 separate scenarios. The probability distribution of the model output, \(\Delta\sigma_{k,ref}\), is partitioned using these scenarios and color-coded. 
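Concretely, the scenario partitioning just described can be reproduced with generic tooling. The sketch below is a minimal illustration and not the open-source SimDec package itself; it assumes the simulated inputs and output are available as NumPy arrays named `sigma_res`, `R`, `Rp02`, and `y`, and it uses the state boundaries of Table 9.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# State boundaries taken from Table 9.
STATES = {
    "sigma_res": {"low": (-400, 100), "medium": (100, 650), "high": (650, 950)},
    "R":         {"reversed": (-1.2, -0.25), "pulsating": (-0.25, 0.7)},
    "Rp02":      {"mild": (255, 657), "UHSS": (657, 1060)},
}

def to_states(values, bounds):
    """Map numeric values to named states (ties on a shared boundary go to the later state)."""
    labels = np.full(len(values), "", dtype=object)
    for name, (lo, hi) in bounds.items():
        labels[(values >= lo) & (values <= hi)] = name
    return labels

def simdec_style_plot(sigma_res, R, Rp02, y, bins=60):
    scenario = [f"{a} / {b} / {c}" for a, b, c in zip(
        to_states(sigma_res, STATES["sigma_res"]),
        to_states(R, STATES["R"]),
        to_states(Rp02, STATES["Rp02"]))]
    df = pd.DataFrame({"scenario": scenario, "y": y})
    edges = np.histogram_bin_edges(df["y"], bins=bins)
    groups = {name: g["y"].to_numpy() for name, g in df.groupby("scenario")}
    # One colour band per scenario, stacked over the output distribution.
    plt.hist(list(groups.values()), bins=edges, stacked=True, label=list(groups.keys()))
    plt.xlabel(r"model output $\Delta\sigma_{k,ref}$")
    plt.ylabel("frequency")
    plt.legend(fontsize=6)
    plt.show()
```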
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
\multicolumn{3}{c}{Residual stress} & \multicolumn{3}{c}{Stress ratio} & \multicolumn{3}{c}{Steel grade} \\
\hline
State & min & max & State & min & max & State & min & max \\
\hline
Low & \(-400\) & 100 & Reversed & \(-1.2\) & \(-0.25\) & Mild & 255 & 657 \\
Medium & 100 & 650 & Pulsating & \(-0.25\) & 0.7 & UHSS & 657 & 1060 \\
High & 650 & 950 & - & - & - & - & - & - \\
\hline \hline
\end{tabular}
\end{table} Table 9: Decomposition set-up for the structural reliability 4R model (UHSS stands for ultra-high-strength steel).

The coloring logic is important for the overall interpretability of the visualization. The states of the most influential input variable assume distinct main colors, and each of these main colors is sub-shaded to further highlight the partitions (Figure 10). Figure 10 demonstrates how non-monotonic the effects of the input variables on the model output are, and that the effect of one input is conditioned on the states of another. First, the residual stress \(\sigma_{res}\) divides the distribution into two narrow scenarios, _medium_ and _high_, whereas the _low_ scenario sub-distribution spreads out through the entire range of the output values. The stress ratio \(R\) has only a minor effect on the output in the _medium_ and _high_ states of residual stress (a small horizontal shift of the respective sub-distributions) but plays an important role in the _low_ residual stress state, almost dividing the output into halves. In _low_ residual stress, the steel grade \(R_{p0.2}\) has a minor effect when the stress ratio is _reversed_ but creates a substantial rightward shift if the stress ratio is _pulsating_. The combinations of _medium_ residual stress & _UHSS_ steel grade and _high_ residual stress & _mild_ steel grade are non-existent, which reflects physical material constraints and thereby explains the negative second-order effect. These relationships would not have been revealed or apparent without the SimDec visualization.

## 6 Conclusions

This paper introduces and extensively tests the simple binning approach for computing first- and second-order sensitivity indices, which provides more accurate results than the classic Sobol' indices of global sensitivity analysis. Further, the paper demonstrates, on a structural reliability model, the importance of visualizing input-output relationships using the SimDec approach. The resulting methodological framework is streamlined (i.e., it works with given data of as few as 1,000 data points), open-sourced, and provides an intuitive way of 'looking' into a computational model and grasping its behavior. The presented framework can be recommended for analyzing disparate complex technological models for engineering design and decision-making.

## Acknowledgements

The authors are grateful to Prof. Art Owen for his comments in the early stages of this work. The work was supported by grant 220178 from the Finnish Foundation for Economic Education, by project CaNeLis,
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Color**} & **Scenario** & **Residual** & **Stress** & **Steel** & \multirow{2}{*}{**m**in} & **m**in} & **m**in} & **m**in} & **m**in} & **m**in} \\ & & stress, \(\sigma_{res}\) & **Stress** & ratio, \(R\) & & & & & \\ \hline \multirow{4}{*}{**Color**} & **sc1** & \multirow{4}{*}{**Low**} & \multirow{4}{*}{**Reversed**} & Mild & 107 & 332 & 475 & 11\% \\ & **sc2** & & & UHSS & 11 & 264 & 515 & 11\% \\ & **sc3** & & & Mild & 320 & 467 & 591 & 22\% \\ & **sc4** & & & UHSS & 68 & 543 & 819 & 22\% \\ \hline \multirow{4}{*}{**Color**} & **sc5** & \multirow{4}{*}{**Medium**} & **Reversed** & Mild & 383 & 456 & 524 & 6\% \\ & **sc6** & & UHSS & NaN & NaN & NaN & NaN \\ & **sc7** & & & Mild & 422 & 499 & 622 & 12\% \\ & **sc8** & & & UHSS & NaN & NaN & NaN & NaN \\ \hline \hline \multirow{4}{*}{**Color**} & **sc9** & \multirow{4}{*}{**High**} & **Reversed** & Mild & NaN & NaN & NaN & NaN \\ & **sc10** & & & UHSS & **630** & **704** & **795** & 6\% \\ \cline{1-1} & **sc11** & & & Mild & NaN & NaN & NaN & NaN \\ \cline{1-1} & **sc12** & & & UHSS & **656** & **746** & **851** & 12\% \\ \hline \hline \end{tabular} \end{table} Table 10: Summary of _simulation decomposition_ of the structural reliability 4R model. Figure 10: _Simulation decomposition_ of the structural reliability 4R model. 3406/31/2022, funded by Business Finland, and by grant OGP0155871 from the Natural Sciences and Engineering Research Council.
2302.13657
Functors on relational structures which admit both left and right adjoints
This paper describes several cases of adjunction in the homomorphism preorder of relational structures. We say that two functors $\Lambda$ and $\Gamma$ between thin categories of relational structures are adjoint if for all structures $\mathbf A$ and $\mathbf B$, we have that $\Lambda(\mathbf A)$ maps homomorphically to $\mathbf B$ if and only if $\mathbf A$ maps homomorphically to $\Gamma(\mathbf B)$. If this is the case, $\Lambda$ is called the left adjoint to $\Gamma$ and $\Gamma$ the right adjoint to $\Lambda$. In 2015, Foniok and Tardif described some functors on the category of digraphs that allow both left and right adjoints. The main contribution of Foniok and Tardif is a construction of right adjoints to some of the functors identified as right adjoints by Pultr in 1970. We generalise results of Foniok and Tardif to arbitrary relational structures, and coincidently, we also provide more right adjoints on digraphs, and since these constructions are connected to finite duality, we also provide a new construction of duals to trees. Our results are inspired by an application in promise constraint satisfaction -- it has been shown that such functors can be used as efficient reductions between these problems.
Víctor Dalmau, Andrei Krokhin, Jakub Opršal
2023-02-27T10:53:33Z
http://arxiv.org/abs/2302.13657v3
# Functors on relational structures which admit both left and right adjoints

###### Abstract

This paper describes several cases of adjunction in the homomorphism order of relational structures. For these purposes, we say that two functors \(\Gamma\) and \(\Delta\) between categories of relational structures are _adjoint_ if for all structures \(\mathbf{A}\) and \(\mathbf{B}\), we have that \(\Gamma(\mathbf{A})\) maps homomorphically to \(\mathbf{B}\) if and only if \(\mathbf{A}\) maps homomorphically to \(\Delta(\mathbf{B})\). If this is the case, \(\Gamma\) is called the left adjoint to \(\Delta\) and \(\Delta\) the right adjoint to \(\Gamma\). In 2015, Foniok and Tardif described some functors on the category of digraphs that allow both left and right adjoints. The main contribution of Foniok and Tardif is a construction of right adjoints to some of the functors identified as right adjoints by Pultr in 1970. We generalise the results of Foniok and Tardif to arbitrary relational structures, and coincidentally, we also provide more right adjoints on digraphs; since these constructions are connected to finite duality, we also provide a new construction of duals to trees. Our results are motivated by an application in promise constraint satisfaction -- it has been shown that such functors can be used as efficient reductions between these problems.

Keywords: relational structure, digraph, homomorphism, homomorphism duality, constraint satisfaction problem

## 1 Introduction

The study of homomorphisms between graphs and general relational structures plays an important role in combinatorics and theoretical computer science [11, 12, 13]. The importance in theoretical computer science stems, in particular, from the fact that the Constraint Satisfaction Problem (CSP) and its relatives can be cast as the problem of existence of a homomorphism from one relational structure to another. The class of all relational structures of a given signature (e.g., all graphs) admits the homomorphism preorder, where \(\mathbf{A}\leq\mathbf{B}\) for two structures \(\mathbf{A}\) and \(\mathbf{B}\) if and only if there exists a homomorphism from \(\mathbf{A}\) to \(\mathbf{B}\). This preorder is interesting in its own right [11], and, moreover, important computational problems such as non-uniform CSPs and Promise CSPs (PCSPs) can be stated in terms of a relative position of an input structure with respect to a fixed structure or a pair of fixed structures. Specifically, the problem \(\mathrm{CSP}(\mathbf{A})\) for a fixed structure \(\mathbf{A}\) asks whether a given input structure admits a homomorphism to \(\mathbf{A}\).

## 2. Preliminaries

We recall some basic definitions and notation.

### Structures and homomorphisms

A directed graph can be defined as a pair \(\mathbf{G}=(G,E^{\mathbf{G}})\) where \(G\) is the set of vertices of \(\mathbf{G}\) and \(E^{\mathbf{G}}\subseteq G\times G\) is a set of edges of \(\mathbf{G}\). This is a special case of a relational structure with a single relation of arity \(2\) as defined below.

**Definition 2.1**.: A _relational signature_ \(\tau\) is a tuple of relational symbols \(R,S,\dots\) where each symbol is assigned a positive integer, called an arity and denoted by \(\operatorname{ar}R\), \(\operatorname{ar}S,\dots\). A _relational \(\tau\)-structure_ is a tuple \(\mathbf{A}=(A;R^{\mathbf{A}},S^{\mathbf{A}},\dots)\), where \(A\) is a set called the _domain of \(\mathbf{A}\)_, and \(R^{\mathbf{A}}\subseteq A^{\operatorname{ar}R}\), \(S^{\mathbf{A}}\subseteq A^{\operatorname{ar}S}\), \(\dots\) are relations on this domain of the corresponding arity.

The relational symbols \(R,S,\dots\) in \(\tau\) are also referred to as _\(\tau\)-symbols_. When no confusion can arise, we say a \(\tau\)-structure (dropping "relational"), or even simply a structure, when \(\tau\) is clear from the context. Two structures of the same signature are said to be _similar_. We will call elements from the domain of some structure _vertices_, and tuples in a relation \(R\) of some structure \(R\)-_edges_, or simply _edges_ if the symbol \(R\) is either irrelevant or clear from the context.
In the rest of the paper, we will not use the symbol \(V\) as a relational symbol--we restrict its use to refer to the domain of the structure (e.g., in Definition 2.5 and Section 3 below). Loosely speaking, a homomorphism between two similar structures is a map that preserves relations, e.g., a graph homomorphism would be a map between the two graphs that preserves edges. Formally, a homomorphism is defined as follows. **Definition 2.2**.: Let \(\mathbf{A}\) and \(\mathbf{B}\) be two structures of the same signature \(\tau\). A _homomorphism_\(f\colon\mathbf{A}\to\mathbf{B}\) is defined to be a mapping \(f\colon A\to B\) such that for each relational symbol \(R\) in \(\tau\) and each \((a_{1},\dots,a_{\operatorname{ar}R})\in R^{\mathbf{A}}\), we have \[(f(a_{1}),\dots,f(a_{\operatorname{ar}R}))\in R^{\mathbf{B}}.\] We will write \(\mathbf{A}\to\mathbf{B}\) if there exists a homomorphism from \(\mathbf{A}\) to \(\mathbf{B}\). The set of all homomorphisms from \(\mathbf{A}\) to \(\mathbf{B}\) is denoted by \(\hom(\mathbf{A},\mathbf{B})\). The above definition can be rephrased by saying that the function \(f\colon A\to B\) has a coordinate-wise action on each of the relations \(R\), i.e., for each relational symbol \(R\) in the signature, the expression \[f^{R}((a_{1},\dots,a_{\operatorname{ar}R}))=(f(a_{1}),\dots,f(a_{\operatorname {ar}R}))\] defines a function \(f^{R}\colon R^{\mathbf{A}}\to R^{\mathbf{B}}\). We use this symbol \(f^{R}\) throughout this paper. Finally, since we will be extensively working with the homomorphism preorder, this in particular means that we will often work with structures up to homomorphic equivalence--we say that two structures \(\mathbf{A}\) and \(\mathbf{B}\) are _homomorphically equivalent_ if we have \(\mathbf{A}\to\mathbf{B}\) and \(\mathbf{B}\to\mathbf{A}\). Such structures would be identified if we followed the standard procedure to turn the homomorphism preorder into a proper partial order. Note that two structures \(\mathbf{A}\) and \(\mathbf{B}\) are homomorphically equivalent if and only if for every structure \(\mathbf{C}\), we have \(\mathbf{C}\to\mathbf{A}\) if and only if \(\mathbf{C}\to\mathbf{B}\), i.e., they allow homomorphism from the same structures. The same is also true for allowing homomorphisms to the same structures, i.e., \(\mathbf{A}\) and \(\mathbf{B}\) are homomorphically equivalent if and only if, for all \(C\), we have \(A\to C\) if and only if \(B\to C\). A structure is called a _core_, if it is not homomorphically equivalent to any of its proper substructures. Certain structures called _trees_ play a special role in this paper. We use a definition of a tree equivalent to the one given in [17, Section 3]. Loosely speaking, a relational structure is a _tree_ if it is connected, contains no cycles and none of its relations has tuples with repeated entries. This is more precisely defined by using the incidence graph of a structure. Definition 2.3.: The _incidence graph_ of a structure \(A=(A;R^{A},\dots)\) is a bipartite multigraph with vertex set being the disjoint union of \(A\) and all the tuples in \(R^{A}\) for each relational symbol \(R\). There is an edge connecting every tuple \((a_{1},\dots,a_{k})\in R^{A}\) with every one of its coordinates \(a_{i}\in A\); in particular, if some element appears multiple times in this tuple, then the edge connecting it to the tuple appears with the same multiplicity. A \(\tau\)-structure is a _(\(\tau\)-)tree_ if its incidence graph is a tree, i.e., an acyclic connected digraph. 
Note that any structure that contains a tuple with repeated entries (e.g., a graph with a loop) is not a tree since its incidence graph then contains multiple edges, i.e., a cycle of length \(2\). Remark 2.4.: An undirected graph is a relational structure with a single relation \(E\) whose universe is the set of vertices \(V\). The relation \(E\subseteq V\times V\) contains for every edge two tuples \((u,v)\) and \((v,u)\). This means that no undirected graph with at least one edge is a tree according to the above definition since if \((u,v)\) is an edge, then \(u\), \((u,v)\), \(v\), \((v,u)\) is a \(4\)-cycle of the incidence graph. Intuitively, relational structures with binary relations are directed graphs; an undirected graph is encoded as directed by including both orientations of each edge which results in a directed cycle of length \(2\). Under the above definition a directed graph is a tree if it is an oriented tree. We fix notation for certain small structures containing either a single vertex or a single \(R\)-edge for some relational symbol \(R\) that we will use later. Definition 2.5.: Fix a relational signature \(\sigma\). We define the following structures: * \(V_{1}\) is the structure with a single vertex, i.e., \(V_{1}=\{1\}\), and empty relations, i.e., \(R^{V_{1}}=\emptyset\) for all \(\sigma\)-symbols \(R\). * let \(S\) be a relational symbol, \(S_{1}\) is a structure with \(\operatorname{ar}S\) vertices related by \(S\) and all other relations empty. More precisely, \(S_{1}=\{1,\dots,k\}\), \(S^{S_{1}}=\{(1,\dots,k)\}\), and \(R^{S_{1}}=\emptyset\) for all \(\sigma\)-symbols \(R\) except \(S\). ### Pultr functors: gadget replacement and pp-constructions Traditionally, in CSP literature, pp-constructions would be described in the language of logic using so called _primitive positive formulae_ (logical formulae that use only \(\exists\), \(\wedge\), and =). We refer to [1, Definition 19] for details. In this paper, we define "pp-constructions" using a language similar to [11, Definitions 2.1-2.3]. Definition 2.6.: Let \(\sigma\) and \(\tau\) be two relational signatures. A _\((\sigma,\tau)\)-Pultr template_ is a tuple of \(\sigma\)-structures consisting of \(\mathbf{P}\), and \(\mathbf{Q}_{R}\), one for each \(\tau\)-symbol \(R\), together with homomorphisms \(\epsilon_{i,R}\colon\mathbf{P}\to\mathbf{Q}_{R}\) for each \(\tau\)-symbol \(R\) and all \(i\in[\operatorname{ar}R]\). Definition 2.7.: Given a \((\sigma,\tau)\)-Pultr template as above, we define two functors \(\Lambda\) and \(\Gamma\) called _Pultr functors_. * Given a \(\tau\)-structure \(\mathbf{A}\), we define a \(\sigma\)-structure \(\Lambda(\mathbf{A})\) in the following way: For each \(a\in A\), introduce to \(\Lambda(\mathbf{A})\) a copy of \(\mathbf{P}\) denoted by \(\mathbf{P}_{a}\), and for each \(\tau\)-symbol \(R\) and each \((a_{1},\dots,a_{k})\in R^{\mathbf{A}}\), introduce to \(\Lambda(\mathbf{A})\) a copy of \(\mathbf{Q}_{R}\) with the image of \(\mathbf{P}\) under \(\epsilon_{i,R}\) identified with \(\mathbf{P}_{a_{i}}\) for all \(i\in[k]\). * Given a \(\sigma\)-structure \(\mathbf{B}\), we define a \(\tau\)-structure \(\Gamma(\mathbf{B})\) whose universe consists of all homomorphisms \(h\colon\mathbf{P}\to\mathbf{B}\). The relation \(R^{\Gamma(\mathbf{B})}\) where \(R\) is a \(\tau\)-symbol is then defined to contain all tuples \((h_{1},\dots,h_{k})\) of such homomorphisms for which there is a homomorphism \(g\colon\mathbf{Q}_{R}\to\mathbf{B}\) such that \(h_{i}=g\circ\epsilon_{i,R}\). 
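Both functors are defined purely combinatorially, so on small finite structures they can be computed by brute force. The following sketch is an illustration only (the `Structure` class and the dictionary encoding of the maps \(\epsilon_{i,R}\) are ad hoc choices, not constructions from the paper); it implements the functor \(\Gamma\), and the usage example instantiates a digraph template for which \(\Gamma(\mathbf{B})\) is, up to the naming of vertices, the arc graph of \(\mathbf{B}\).

```python
from itertools import product

class Structure:
    """A finite relational structure: a domain plus, for each symbol, a set of tuples."""
    def __init__(self, elements, relations):
        self.elements = list(elements)
        self.relations = {sym: set(map(tuple, tups)) for sym, tups in relations.items()}

def homomorphisms(A, B):
    """Enumerate all homomorphisms A -> B by brute force (fine for tiny structures only)."""
    for values in product(B.elements, repeat=len(A.elements)):
        f = dict(zip(A.elements, values))
        if all(tuple(f[a] for a in edge) in B.relations.get(sym, set())
               for sym, edges in A.relations.items() for edge in edges):
            yield f

def gamma(B, P, Q, eps, arities):
    """Central Pultr functor of Definition 2.7: the domain of Gamma(B) is hom(P, B),
    and (h_1, ..., h_k) is an R-edge iff some g: Q[R] -> B satisfies h_i = g o eps[R][i]."""
    homs = [frozenset(h.items()) for h in homomorphisms(P, B)]
    relations = {}
    for R, k in arities.items():
        relations[R] = {
            tuple(frozenset((p, g[eps[R][i][p]]) for p in P.elements) for i in range(k))
            for g in homomorphisms(Q[R], B)
        }
    return Structure(homs, relations)

# Usage (digraph signature, one binary symbol "E"): P is a single E-edge and Q_E is a
# two-edge path; eps["E"][0] and eps["E"][1] play the roles of epsilon_1 and epsilon_2,
# sending P onto the first and second edge of the path.  Then Gamma(B) is the arc graph
# of B.  For the directed 3-cycle below, the arc graph is again a 3-cycle.
P = Structure(["p0", "p1"], {"E": {("p0", "p1")}})
QE = Structure(["a", "b", "c"], {"E": {("a", "b"), ("b", "c")}})
eps = {"E": {0: {"p0": "a", "p1": "b"}, 1: {"p0": "b", "p1": "c"}}}
B = Structure([0, 1, 2], {"E": {(0, 1), (1, 2), (2, 0)}})
G = gamma(B, P, QE, eps, {"E": 2})
print(len(G.elements), "vertices,", len(G.relations["E"]), "edges")
```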
We will refer to the functor \(\Lambda\) as the _gadget replacement_ and to the functor \(\Gamma\) as the _pp-construction_. The latter name is with some abuse of terminology, since formally a structure \(\mathbf{A}\) is pp-constructible from \(\mathbf{B}\) if it is homomorphically equivalent to the structure \(\Gamma(\mathbf{B})\) for some Pultr functor \(\Gamma\). A detailed exposition of Pultr functors between digraphs as pp-constructions is presented in [11, Section 4.1]. Once the definitions are settled, it is not too hard to show that for any Pultr template, the corresponding Pultr functors \(\Lambda\) and \(\Gamma\) are left and right adjoints. This statement is attributed to [10] though it was certainly rediscovered on numerous occasions. The fact that both \(\Lambda\) and \(\Gamma\) are thin functors follows from the adjunction. It can be also easily proved directly, for example, a homomorphism \(f^{\Gamma}\colon\Gamma(\mathbf{A})\to\Gamma(\mathbf{B})\) can be obtained from a homomorphism \(f\colon\mathbf{A}\to\mathbf{B}\) by setting \(f^{\Gamma}(h)=f\circ h\) for each \(h\colon\mathbf{P}\to\mathbf{A}\). The main contribution of the present paper is an investigation of the cases of Pultr templates for which \(\Gamma\) is also a left adjoint, i.e., it admits a right adjoint \(\Omega\). The following necessary condition for this was proved in [11]. **Theorem 2.8** ([11, Theorem 2.5]).: _Let \(\Lambda\) and \(\Gamma\) be a pair of left and central Pultr functors defined by a \((\sigma,\tau)\)-Pultr template. If \(\Gamma\) has a right adjoint, then, for each \(\sigma\)-tree \(\mathbf{T}\), \(\Lambda(\mathbf{T})\) is homomorphically equivalent to a \(\tau\)-tree._ We remark that the proof of this result in [11] is given only for directed graphs, but it goes through verbatim for general structures: The key to the proof is showing that if \(\Gamma\) has both a left adjoint \(\Lambda\) and a right adjoint \(\Omega\), and \((\mathbf{T},\mathbf{D})\) is a duality pair (i.e., for all structures \(\mathbf{A}\) similar to \(\mathbf{T}\), either \(\mathbf{T}\to\mathbf{A}\) or \(\mathbf{A}\to\mathbf{D}\)) then \((\Lambda(\mathbf{T}),\Omega(\mathbf{D}))\) is a duality pair as well. The rest then follows from the characterisation of finite duality in [10]. A simple corollary of the above is that if a central Pultr functor \(\Gamma\) has a right adjoint, then all the structures in its Pultr template are homomorphically equivalent to trees. This is because, as is shown in [11] (and not hard to see), if the template is composed of structures \(\mathbf{P}\) and \(\mathbf{Q}_{R}\) for a \(\tau\)-symbol \(R\), then \(\mathbf{P}\) is the \(\Lambda\) image of the \(\tau\)-structure \(\mathbf{V}_{1}\) with a single vertex and empty relations, and, similarly, for a \(k\)-ary \(\tau\)-symbol \(R\), \(\mathbf{Q}_{R}\) is the \(\Lambda\) image of the structure \(\mathbf{R}_{k}\) with \(k\) vertices related in \(R\). It is open whether the above condition is also sufficient to have an adjoint. In this paper, as well as Foniok-Tardif do in [11], we focus on the cases when \(\mathbf{P}\) and \(\mathbf{Q}_{R}\)'s are actually trees, and we give a concrete construction of the adjoint in two cases: \(\mathbf{P}=\mathbf{V}_{1}\) is just a single vertex with no edges (this is shown in Section 5), and \(\mathbf{P}=\mathbf{S}_{1}\) is the tree with a single \(S\)-edge for some \(\sigma\)-symbol \(S\) as defined in Definition 2.5 (this is shown in Section 6). 
Finally, in the last section we discuss what can be obtained by composition of these functors. ## 3. An inductive construction of trees For our constructions, it will be convenient to describe \(\tau\)-trees by certain formal terms. Our terms will correspond to trees that are rooted, either in a vertex or in an edge (i.e., a tuple in one of the relations). Each term will correspond to a unique inductive construction of a tree, but the same tree can be obtained by several inductive constructions. We recall that we assume that none of the relational signatures uses \(V\) as a relational symbol. Each term will be either a \(V\)-term (\(V\) stands for _vertices_) or an \(R\)-term where \(R\) is a relational symbol in \(\tau\). The rules for the inductive construction of terms and their semantics are as follows: * vertex is a \(V\)-term. * If \(t_{1},\ldots,t_{k}\) are \(V\)-terms and \(R\) is a \(k\)-ary symbol in \(\tau\) then \(\mathsf{edge}_{R}(t_{1},\ldots,t_{k})\) is an \(R\)-term. * If \(t\) is an \(R\)-term and \(i\in\{1,\ldots,\mathrm{ar}\,R\}\), then \(\mathsf{pr}_{i}(t)\) is a \(V\)-term. The rooted tree \((\mathbf{T}(t),r_{t})\), where \(r_{t}\) is the root, corresponding to a term \(t\) is defined in the following way: * \(\mathbf{T}(\mathsf{vertex})\) is the one-vertex tree, rooted at its only vertex, i.e., \(\mathbf{T}(\mathsf{vertex})=\mathbf{V}_{1}\) and \(r_{t}=1\). * If \(t=\mathsf{edge}_{R}(t_{1},\ldots,t_{k})\) is an \(R\)-term then \(\mathbf{T}(t)\) is the tree obtained by taking the disjoint union of the trees \(\mathbf{T}(t_{i})\) (with roots \(r_{t_{k}}\)) and adding the tuple \((r_{t_{1}},\ldots,r_{t_{k}})\) to the relation \(R\). The root \(r_{t}\) is defined to be this new tuple. * If \(t=\mathsf{pr}_{i}(s)\) is a \(V\)-term and \(r_{s}=(v_{1},\ldots,v_{k})\), then we set \(\mathbf{T}(t)=\mathbf{T}(s)\) and \(r_{t}=v_{i}\). We say that \(t\)_represents a tree \(\mathbf{T}\)_ if \(\mathbf{T}(t)\) is isomorphic to \(\mathbf{T}\), and that \(t\)_represents a tree \(\mathbf{T}\) rooted in \(r\)_ if \(\mathbf{T}(t)\) is isomorphic to \(\mathbf{T}\) via an isomorphism mapping \(r_{t}\) to \(r\). The same tree can in general be represented by several terms (see the following example), but each term represents one tree. Example 3.1.: The following are examples of terms and trees, they represent in the relational signature of digraphs. The corresponding roots are highlighted. We drop the brackets around arguments of \(\mathsf{pr}_{i}\) to ease readability. \[\begin{array}{c}\includegraphics[width=14.226378pt]{vertex}\quad\mathsf{ edge}_{E}(\mathsf{vertex},\mathsf{vertex})\quad\mathsf{pr}_{2}\,\mathsf{edge}_{E}( \mathsf{vertex},\mathsf{vertex})\end{array}\] Note that the difference between the second tree and the third tree is the root; the tree on the second row is obtained from the other by applying \(\mathsf{pr}_{2}\), and thus only changing the root. Further, we present a few examples of more complicated trees. 
\[\begin{array}{c}\mathsf{edge}_{E}(\mathsf{pr}_{2}\,\mathsf{edge}_{E}( \mathsf{vertex},\mathsf{vertex}),\mathsf{vertex})\quad\mathsf{edge}_{E}( \mathsf{vertex},\mathsf{pr}_{2}\,\mathsf{edge}_{E}(\mathsf{vertex},\mathsf{ vertex}))\end{array}\] \[\begin{array}{c}\mathsf{edge}_{E}\big{(}\mathsf{pr}_{1}\,\mathsf{edge}_{E} (\mathsf{pr}_{2}\,\mathsf{edge}_{E}(\mathsf{vertex},\mathsf{vertex}), \mathsf{vertex}),\mathsf{pr}_{2}\,\mathsf{edge}_{E}(\mathsf{vertex}, \mathsf{vertex})\big{)}\end{array}\] Remark that a tree can be represented by multiple terms even when we fix the root. For example, the last tree above can be also represented by the term \[\begin{array}{c}\mathsf{edge}_{E}\big{(}\mathsf{pr}_{2}\,\mathsf{edge}_{E} (\mathsf{vertex},\mathsf{pr}_{1}\,\mathsf{edge}_{E}(\mathsf{vertex},\mathsf{ vertex})),\mathsf{pr}_{2}\,\mathsf{edge}_{E}(\mathsf{vertex},\mathsf{ vertex})\big{)}.\end{array}\] **Lemma 3.2**.: _Fix a relational signature \(\tau\). Any finite \(\tau\)-tree can be represented by a term. More precisely,_ * _for every finite tree_ \(\mathbf{T}\) _and_ \(r\in T\) _there is a_ \(V\)_-term_ \(t\) _such that_ \(\mathbf{T}(t)\) _is isomorphic to_ \(\mathbf{T}\) _via an isomorphism that maps_ \(r_{t}\) _to_ \(r\)_, and_ * _for every finite tree_ \(\mathbf{T}\) _and_ \(r\in R^{\mathbf{T}}\) _for some_ \(\tau\)_-symbol_ \(R\)_, there is an_ \(R\)_-term_ \(t\) _such that_ \(\mathbf{T}(t)\) _is isomorphic to_ \(\mathbf{T}\) _via an isomorphism that maps_ \(r_{t}\) _to_ \(r\)_._ The proof is a simple argument by induction on the number of edges of the tree. We include it in detail to provide more intuition about terms. Proof.: We prove this statement by an induction on the number of edges of \(\mathbf{T}\). Each induction step is moreover split in two: we first prove that trees with \(n\)-edges rooted in an edge can be represented, and then, assuming the above, we show that trees with \(n\)-edges rooted in a vertex can be represented. 1. We start with \(n=0\). There is a single tree \(\mathbf{T}\) with no edges, namely the tree with one vertex. It cannot be rooted in an edge, but it can be rooted in its only vertex. It is represented by the term vertex. 2. Assume \(n>0\), and we can represent all trees rooted in a vertex with less than \(n\) edges. Assume that \(\mathbf{T}\) has \(n\)-edges and is rooted in an edge \((r_{1},\ldots,r_{k})\in R^{\mathbf{T}}\) for some \(k\)-ary symbol \(R\). Removing this edge from \(\mathbf{T}\) splits \(\mathbf{T}\) into \(k\) connected components \(\mathbf{T}_{1},\ldots,\mathbf{T}_{k}\) where \(\mathbf{T}_{i}\) contains \(r_{i}\) for each \(i\). Each of this components is a tree. Assuming that \(\mathbf{T}_{i}\) rooted in \(r_{i}\) is represented by the term \(t_{i}\) for each \(i\), \(\mathbf{T}\) rooted rooted in \((r_{1},\ldots,r_{k})\) is represented by \(\mathsf{edge}_{R}(t_{1},\ldots,t_{k})\). 3. Assume that \(n>0\), and we can represent all trees rooted in an edge with at \(n\) edges. Let \(\mathbf{T}\) be a tree with \(n\) edges and \(r\in T\). Since \(\mathbf{T}\) is connected, \(r\) is involved in some edge, say \((v_{1},\ldots,v_{k})\in R^{\mathbf{T}}\) where \(k=\operatorname{ar}R\) and \(r=v_{i}\) for some \(i\). Now, \(\mathbf{T}\) rooted in \((v_{1},\ldots,v_{k})\) is represented by an \(R\)-term \(t\) by the inductive assumption, so \(\mathbf{T}\) rooted in \(r\) is represented by \(\operatorname{pr}_{i}(t)\). Finally, our constructions use the notion of a _subterm_ of a term. 
Intuitively a subterm of a term \(t\) is any proper term that appears as a part of \(t\). The set of all subterms of \(t\) encodes all intermediate byproducts of the inductive construction of \(\mathbf{T}(t)\). Formally, we define subterms as follows. * The only subterm of vertex is itself. * If \(t=\mathsf{edge}_{R}(t_{1},\ldots,t_{k})\) is an \(R\)-term where \(t_{1},\ldots,t_{k}\) are \(V\)-terms and \(R\) is a \(k\)-ary symbol in \(\tau\) then the set of its subterms consists of the term itself and the union of all sets of subterms of \(t_{1},\ldots,t_{k}\). * If \(t=\operatorname{pr}_{i}(s)\) is a \(V\)-term where \(s\) is an \(R\)-term, then the set of its subterms consists of the term itself and the set of all subterms of \(s\). If \(s\) is a subterm of \(t\), then we write \(s\leq t\), moreover is \(s\) is a proper subterm of \(t\), i.e., it is a subterm and \(s\neq t\), we write \(s<t\). A \(V\)-subterm of a term \(t\) is a subterm which is a \(V\)-term, and similarly an \(R\)-subterm for a relational symbol \(R\) is a subterm which is an \(R\)-term. For example, the term \[t=\mathsf{edge}_{E}\big{(}\operatorname{pr}_{1}(\mathsf{edge}_{E}(\operatorname {vertex},\operatorname{vertex})),\operatorname{pr}_{1}(\mathsf{edge}_{E}( \operatorname{vertex},\operatorname{vertex}))\big{)}\] has four distinct subterms: two \(E\)-terms, which are \(t\) and \(\mathsf{edge}_{E}(\operatorname{vertex},\operatorname{vertex})\), and two \(V\)-terms, which are \(\operatorname{pr}_{1}(\mathsf{edge}_{E}(\operatorname{vertex},\operatorname{ vertex}))\) and vertex. We note that statements about terms can be proven by an inductive principle: showing the statement is true for the term vertex, and then showing that if it is true for all proper subterms of a term \(t\), it is also true for \(t\). ## 4. Prelude: duals to trees Before we get to the main construction of adjoint, let us briefly discuss a simpler construction of a dual of a tree. There are a few similarities between the construction of duals and right adjoints to central Pultr functors: as we mentioned before (see Theorem 2.8 and the remark following it), if \(\Gamma\) is a central Pultr functor that has a left adjoint \(\Lambda\) and a right adjoint \(\Omega\), and \((\mathbf{T},\mathbf{D})\) is a duality pair, then \((\Lambda(\mathbf{T}),\Omega(\mathbf{D}))\) is also a duality pair. Moreover, our construction of the dual uses the inductive construction of trees from the previous section in a similar way as the constructions of right adjoints in Sections 5 and 6, but the construction of a dual is conceptually easier, so we present it to create some intuition that will be useful below. Again, we fix a relational signature. A relational structure \(\mathbf{T}\) is said to have a dual \(\mathbf{D}\) if the structures that do not admit a homomorphism _from_\(\mathbf{T}\) are precisely those that admits a homomorphism _to_\(\mathbf{D}\). Nesetril and Tardi [17] showed that duality pairs of finite structures are relatively rare: A finite structure \(\mathbf{T}\) has a finite dual if and only if it is homomorphically equivalent to a tree. See [11, 17] for other characterisations of finite duality. Our construction is loosely inspired by the construction in [17]. Another way to look at duals is that for any structure \(\mathbf{A}\), a homomorphism \(\mathbf{A}\to\mathbf{D}\) should correspond to a '_proof_' that \(\mathbf{T}\not\to\mathbf{A}\). How can one prove that a tree does not map to a structure \(\mathbf{A}\) if that is the case? 
This can be done by an inductive argument. Let us outline this argument in the case \(\mathbf{T}\) and \(\mathbf{A}\) are digraphs. More precisely, we outline a procedure that shows that, for a fixed root \(r\in T\) and some \(a\in A\), there is no homomorphism from \(\mathbf{T}\) to \(\mathbf{A}\) that maps \(r\) to \(a\). Pick a neighbour \(s\) of \(r\) in \(\mathbf{T}\), and assume that \((r,s)\in E^{\mathbf{T}}\); the other orientation is dealt with symmetrically. We can show that there is no homomorphism from \(\mathbf{T}\) to \(\mathbf{A}\) mapping \(r\) to \(a\) by showing that for no neighbour \(b\) of \(a\) in \(\mathbf{A}\) is there a homomorphism \(\mathbf{T}\to\mathbf{A}\) that maps the edge \((r,s)\) to \((a,b)\). In turn, removing the edge \((r,s)\) from \(\mathbf{T}\) splits the tree into two subtrees \(\mathbf{T}_{1}\), containing \(r\), and \(\mathbf{T}_{2}\), containing \(s\). A homomorphism \(\mathbf{T}\to\mathbf{A}\) that maps \((r,s)\) to \((a,b)\) is equivalent to a pair of homomorphisms: one \(\mathbf{T}_{1}\to\mathbf{A}\) that maps \(r\) to \(a\), and one \(\mathbf{T}_{2}\to\mathbf{A}\) that maps \(s\) to \(b\). We have thus reduced the claim to showing that, for at least one of the two smaller trees, the corresponding rooted homomorphism does not exist, and we can therefore recursively repeat our strategy for these two smaller trees.

We further design a structure \(\mathbf{D}\) in which we can encode proofs of the above form. In particular, the image of an element \(a\in A\) under a homomorphism \(\mathbf{A}\to\mathbf{D}\) will contain answers to questions of the form 'Is there a homomorphism \(\mathbf{T}^{\prime}\to\mathbf{A}\) that maps \(r\) to \(a\)?' for all trees \(\mathbf{T}^{\prime}\) and all roots \(r\in T^{\prime}\) that would appear in the above inductive argument.

**Definition 4.1**.: Let \(t_{Q}\) be an \(S\)-term for some relational symbol \(S\). Let \(\mathscr{T}_{V}\) be the set of all \(V\)-subterms of \(t_{Q}\), and \(\mathscr{T}_{R}\) be the set of all \(R\)-subterms of \(t_{Q}\) for each relational symbol \(R\). We define a structure \(\mathbf{D}(t_{Q})\). Vertices of \(\mathbf{D}(t_{Q})\) are tuples \(v\in\{\mathsf{true},\mathsf{false}\}^{\mathscr{T}_{V}}\), indexed by \(V\)-subterms of \(t_{Q}\), such that \(v_{\mathsf{vertex}}=\mathsf{true}\). A tuple \((v^{1},\ldots,v^{k})\) of vertices is related in a relation \(R\) in \(\mathbf{D}(t_{Q})\) if there is a tuple \(e\in\{\mathsf{true},\mathsf{false}\}^{\mathscr{T}_{R}}\) such that

(D1) \(e_{t}=v^{1}_{t_{1}}\wedge\cdots\wedge v^{k}_{t_{k}}\) holds for all \(t\in\mathscr{T}_{R}\), where \(t=\mathsf{edge}_{R}(t_{1},\ldots,t_{k})\),

(D2) \(e_{t}\to v^{i}_{\mathsf{pr}_{i}(t)}\) is true for all \(i\in[k]\) and \(t\in\mathscr{T}_{R}\) such that \(\mathsf{pr}_{i}(t)\in\mathscr{T}_{V}\), and

(D3) \(e_{t_{Q}}=\mathsf{false}\) if \(R=S\) (i.e., if \(t_{Q}\) is an \(R\)-term).

Note that item (D1) defines the values of \(e_{t}\) for all \(R\)-terms \(t\), since any such term is of the form \(\mathsf{edge}_{R}(t_{1},\ldots,t_{k})\) for some \(V\)-terms \(t_{1},\ldots,t_{k}\). Item (D2) is essentially quantified by the \(V\)-subterms of \(t_{Q}\), since all such subterms \(t^{\prime}\), with the exception of \(t^{\prime}=\mathsf{vertex}\), are of the form \(\mathsf{pr}_{i}(t)\) for some \(t\), and \(t\) and \(i\) are uniquely determined by \(t^{\prime}\).
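Definition 4.1 is completely finitary and easy to prototype. The sketch below is an illustration only, restricted to the digraph signature with a single binary symbol \(E\) and an ad hoc tuple encoding of terms; it enumerates the vertices of \(\mathbf{D}(t_{Q})\) and checks conditions (D1)-(D3) directly.

```python
from itertools import product

def subterms(t):
    """All subterms of a term; terms are encoded as nested tuples:
    ("V",) is vertex, ("E", t1, t2) is edge_E(t1, t2), ("pr", i, s) is pr_i(s)."""
    if t == ("V",):
        return {t}
    if t[0] == "E":
        return {t} | subterms(t[1]) | subterms(t[2])
    return {t} | subterms(t[2])

def dual_of(tQ):
    """Build D(t_Q) of Definition 4.1 for an E-term t_Q over the digraph signature."""
    subs = subterms(tQ)
    TV = [s for s in subs if s[0] != "E"]   # V-subterms
    TE = [s for s in subs if s[0] == "E"]   # E-subterms
    vertices = [v for bits in product([True, False], repeat=len(TV))
                for v in [dict(zip(TV, bits))] if v[("V",)]]
    edges = []
    for v1, v2 in product(vertices, repeat=2):
        comp = {1: v1, 2: v2}
        e = {t: v1[t[1]] and v2[t[2]] for t in TE}                 # (D1)
        if all(not e[t] or comp[i][("pr", i, t)]                   # (D2)
               for t in TE for i in (1, 2) if ("pr", i, t) in subs) and not e[tQ]:  # (D3)
            edges.append((v1, v2))
    return vertices, edges

# Example: t2 = edge_E(pr_2(edge_E(vertex, vertex)), vertex) represents the directed path
# with two edges, rooted in its last edge.  The construction yields two vertices and a
# single edge, i.e., (up to names) the two-element strict linear order.
V = ("V",)
t1 = ("E", V, V)
t2 = ("E", ("pr", 2, t1), V)
verts, edges = dual_of(t2)
print(len(verts), "vertices,", len(edges), "edge(s)")
```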
**Theorem 4.2**.: _For any relational structure \(\mathbf{Q}\) that is homomorphically equivalent to \(\mathbf{T}(t_{Q})\) for some \(S\)-term \(t_{Q}\) where \(S\) is a relational symbol, \(\mathbf{D}(t_{Q})\) is a dual of \(\mathbf{Q}\)._ Proof.: Without loss of generality, we may assume that \(\mathbf{Q}=\mathbf{T}(t_{Q})\). We need to show that for all structures \(\mathbf{A}\), \(\mathbf{A}\to\mathbf{D}(t_{Q})\) if and only if \(\mathbf{Q}\not\to\mathbf{A}\). First, assume that \(\mathbf{Q}\not\to\mathbf{A}\). We define \(f\colon A\to D(t_{Q})\) by putting, for each \(u\in A\) and each \(t\in\mathcal{T}_{V}\), \(f(u)_{t}=\text{true}\) if there is a homomorphism \(h\colon\mathbf{T}(t)\to\mathbf{A}\) that maps the root to \(u\), and \(f(u)_{t}=\text{false}\) otherwise. Clearly, \(f(u)_{\text{vertex}}=\text{true}\). Now, assume that \(e=(u_{1},\ldots,u_{k})\in R^{\mathbf{A}}\), and let \(f(e)\in\{\text{true},\text{false}\}^{\mathcal{T}_{R}}\) be defined by (D1), i.e., for each \(t=\text{edge}_{R}(t_{1},\ldots,t_{k})\), \[f(e)_{t}=f(u_{1})_{t_{1}}\wedge\cdots\wedge f(u_{k})_{t_{k}}.\] Observe that \(f(e)_{t}\) is true if and only if there is a homomorphism \(h\colon\mathbf{T}(t)\to\mathbf{A}\) mapping the root edge to \(e\). In other words, there is a homomorphism \(h\colon\mathbf{T}(t)\to\mathbf{A}\) mapping the root to \(e\) if and only if there are homomorphisms \(h_{i}\colon\mathbf{T}(t_{i})\to\mathbf{A}\) with \(h_{i}(r(t_{i}))=u_{i}\) for all \(i\). Assuming such a homomorphism \(h\), the homomorphisms \(h_{i}\) are defined as restrictions of \(h\) to the corresponding subtrees. Assuming homomorphisms \(h_{i}\), taking the union of \(h_{i}\) defines a mapping \(h\) on all vertices of \(\mathbf{T}(t)\). This mapping clearly preserves all edges different from the root since \(h_{i}\) are homomorphisms, and it also preserves the root edge, since it is mapped to \(e\in R^{\mathbf{A}}\). This means that \(h\) is indeed a homomorphism. We need to show that this \(f(e)\) satisfies (D2) and (D3). For (D2), we want \[f(e)_{t}\to f(u_{i})_{\text{pr}_{i}(t)},\] i.e., if there is a homomorphism \(\mathbf{T}(t)\to\mathbf{A}\) mapping the root to \(e\) then there is a homomorphism \(\mathbf{T}(\text{pr}_{i}(t))\to\mathbf{A}\) mapping the root to \(u_{i}\). This is trivial since \(\mathbf{T}(t)=\mathbf{T}(\text{pr}_{i}(t))\) and a homomorphism \(h\colon\mathbf{T}(t)\to\mathbf{A}\) that maps the root \(r_{t}\) to \((u_{1},\ldots,u_{k})\) necessarily maps \(r_{\text{pr}_{i}(t)}\) to \(u_{i}\). Finally, (D3) is clear from the definition since we assumed that \(\mathbf{Q}\not\to\mathbf{A}\). For the other implication, assume that \(f\colon\mathbf{A}\to\mathbf{D}(t_{Q})\) and that, for \(e\in R^{\mathbf{A}}\), \(f(e)\) denotes the witness of the \(R\)-edge \(f^{R}(e)\) in \(\mathbf{D}(t_{Q})\), which is the image of \(e\) under \(f\). We show the following by induction on the term \(t\leq t_{Q}\): **Claim 4.3**.: _If \(t\leq t_{Q}\) and there is \(h\colon\mathbf{T}(t)\to\mathbf{A}\), then \(f(h(r_{t}))_{t}=\text{true}\)._ 1. _The case_ \(t=\text{vertex}\) _is trivial._ 2. _Let_ \(t=\text{edge}_{R}(t_{1},\ldots,t_{\text{ar}_{R}})\)_. We assume that_ \(h\colon\mathbf{T}(t)\to\mathbf{A}\) _is a homomorphism and_ \(h(r_{t})=(v_{1},\ldots,v_{\text{ar}_{R}})\)_. This in particular means that for each_ \(i\)_,_ \(h\) _maps the root of_ \(\mathbf{T}(t_{i})\) _to_ \(v_{i}\)_. 
Hence, we can apply the inductive assumption on the restrictions of_ \(h\) _to_ \(\mathbf{T}(t_{i})\)_'s to get that_ \(f(v_{i})_{t_{i}}=\text{true}\) _for all_ \(i\)_, and consequently_ \(f(h(r_{t}))_{t}=\text{true}\) _follows from (D1)._ 3. _Let_ \(t=\operatorname{pr}_{i}(t^{\prime})\)_, and_ \(h\colon\mathbf{T}(t^{\prime})\to\mathbf{A}\)_. Note that_ \(\mathbf{T}(t)=\mathbf{T}(t^{\prime})\)_, so_ \(h\) _is also a homomorphism from_ \(\mathbf{T}(t)\)_. Let_ \(h(r_{t^{\prime}})=(u_{1},\ldots,u_{k})\)_, and observe that_ \(h(r_{t})=u_{i}\)_. By the inductive assumption, this implies that_ \(f(h(r_{t^{\prime}}))_{t^{\prime}}=\text{true}\)_. Consequently, we get that_ \(f(h(r_{t}))_{t}=f(u_{i})_{t}=\text{true}\) _from (D2)._ This concludes the proof of the claim. Since \(\mathbf{T}(t_{Q})=\mathbf{Q}\), the above claim applied to \(t=t_{Q}\) and a homomorphism \(h\colon\mathbf{Q}\to\mathbf{A}\), would imply that \(f(h(r_{t_{Q}}))_{t_{Q}}=\operatorname{true}\), which would contradict (D3), and therefore there is no homomorphism \(\mathbf{Q}\to\mathbf{A}\). ### Example: Dual to a directed path It is well known (see, e.g., [10]) that a directed graph maps homomorphically to the graph \(\mathbf{L}_{k}=([k];<)\), where \([k]=\{0,1,\ldots,k-1\}\) and the edge relation is given by the strict order on the domain, if and only if it does not allow a homomorphism from a directed path with \(k\) edges (and \(k+1\) vertices) -- we denote this path by \(\mathbf{P}_{k}\). This means that \(\mathbf{L}_{k}\) is the dual of \(\mathbf{P}_{k}\). Let us compare this observation with our construction of the dual. First, let us fix a representation of \(\mathbf{P}_{k}\). We pick the term \[t_{k}=\operatorname{edge}_{E}\big{(}\operatorname{pr}_{2}\big{(}\operatorname{edge}_{E}(\ldots\operatorname{pr}_{2}(\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex})),\ldots,\operatorname{vertex})\big{)},\operatorname{vertex}\big{)}\] where \(\operatorname{edge}_{E}\) appears \(k\) times. Note that this term represents the path \(\mathbf{P}_{k}\) rooted in its last edge. Naturally, we also have terms \(t_{i}\) for \(i<k\) defined the same way with the exception that \(\operatorname{edge}_{E}\) appears \(i\) times. We also denote \(s_{0}=\operatorname{vertex}\), and \(s_{i}=\operatorname{pr}_{2}(t_{i})\). These terms satisfy the recursive equation \[t_{i}=\operatorname{edge}_{E}(s_{i-1},\operatorname{vertex}).\] Note that \(s_{i}\) also represents a path of length \(i\) -- the difference is that this time it is rooted in its last vertex. And finally note that \(s_{0}\),..., \(s_{k-1}\) and \(t_{1}\),..., \(t_{k}\) are the only subterms of \(t_{k}\). By definition, vertices of \(\mathbf{D}(t_{k})\) are tuples \[u\in\{\operatorname{true},\operatorname{false}\}^{\{s_{0},\ldots,s_{k-1}\}}\] such that \(u_{s_{0}}=\operatorname{true}\). This allows us to write them simply as ordered \(k\)-tuples whose \(i\)-th entry is the coordinate corresponding to \(s_{i-1}\). Now, \(u\) is connected by an edge to \(v\) if there is a tuple \(e\in\{\operatorname{true},\operatorname{false}\}^{\{t_{1},\ldots,t_{k}\}}\) such that 1. \(e_{t_{i}}=u_{s_{i-1}}\wedge v_{s_{0}}\) for all \(i\) (since \(t_{i}=\operatorname{edge}_{E}(s_{i-1},s_{0})\)), 2. \(e_{t_{i}}\to v_{s_{i}}\) for all \(i<k\) (since \(s_{i}=\operatorname{pr}_{2}(t_{i})\)), and 3. \(e_{t_{k}}=\operatorname{false}\). Since \(v_{s_{0}}=\operatorname{true}\), the condition (D1) simplifies to \(e_{t_{i}}=u_{s_{i-1}}\) which can be directly substituted into (D2) and (D3). 
Therefore, the conditions for \(u\) and \(v\) to form an edge are: 1. \(u_{s_{i-1}}\to v_{s_{i}}\), for all \(i<k\), and 2. \(u_{s_{k-1}}=\operatorname{false}\). This means that \(u\) has no out-edge if its last entry is true. It is not hard to see that \(\mathbf{L}_{k}\) maps to a dual constructed this way. We can construct a homomorphism \(h\) by mapping \(i\in[k]\) to the tuple starting with \(i+1\) true's and followed by all false's, i.e., \[h(0)=(\operatorname{true},\operatorname{false},\ldots,\operatorname{false})\] \[h(1)=(\operatorname{true},\operatorname{true},\operatorname{false},\ldots,\operatorname{false})\] \[\vdots\] \[h(k-1)=(\operatorname{true},\operatorname{true},\ldots,\operatorname{true})\] Note that this is exactly the same homomorphism that is constructed in the proof of Theorem 4.2 (assuming \(\mathbf{A}=\mathbf{L}_{k}\)). And indeed, it is easy to check that if \(i<j\), then \(h(i)\) and \(h(j)\) satisfy the conditions for an edge given above. Naturally, there is also a homomorphism the other way. One such homomorphism maps a tuple \(u\) that begins with \(i+1\) true's followed by a false to \(i\). Again, it is easy to check that if \((u,v)\) is an edge, then \(v\) has to begin with at least one more true than \(u\). This establishes that our construction is homomorphically equivalent to \(\mathbf{L}_{k}\) (as it should be). _Remark 4.4_.: As a final remark concerning the above example, let us note that any homomorphism constructed according to the proof of Theorem 4.2 uses only the vertices in the image of \(h\) above, i.e., the tuples of the form \[(\operatorname{true},\ldots,\operatorname{true},\operatorname{false},\ldots,\operatorname{false})\] where \(\operatorname{true}\) appears at least once and false does not need to appear at all. This is quite easy to see: \(\mathbf{T}(s_{i})\) maps to \(\mathbf{T}(s_{j})\) for all \(i<j\) via a homomorphism preserving the roots, hence for any \(u\) in the image, we get that if \(u_{s_{j}}=\operatorname{true}\) then \(u_{s_{i}}=\operatorname{true}\) for all \(i<j\). We could force similar implications in the definition of the dual by requiring that \(u_{t}\to u_{s}\) whenever there is a homomorphism \(\mathbf{T}(s)\to\mathbf{T}(t)\) preserving roots. We did not include this condition in the definition because our goal is to get a simple construction, not necessarily one that results in the smallest graph possible. Nevertheless, this raises a question: Would it be possible by enforcing such implications on our general construction to produce a dual that would be a _core_ (i.e., a structure that is not homomorphically equivalent to any of its proper substructures)? Finally, we note without a proof that our construction of the dual of a tree can be naturally extended to any (finite) tree duality, i.e., given a finite set of finite trees \(\mathscr{F}=\{\mathbf{T}_{1},\ldots,\mathbf{T}_{n}\}\), we can construct their dual \(\mathbf{D}\) that will satisfy that, for any structure \(\mathbf{A}\), \(\mathbf{A}\to\mathbf{D}\) if and only if for all \(i=1,\ldots,n\), \(\mathbf{T}_{i}\not\to\mathbf{A}\). This \(\mathbf{D}\) is constructed by following Definition 4.1 with the following changes: Pick a term \(t_{i}\) representing \(\mathbf{T}_{i}\) rooted in an edge for each \(i\) (we are assuming that none of the \(\mathbf{T}_{i}\)'s consists of a single vertex). 
Let \(\mathscr{T}\) be a set of terms containing all \(t_{i}\)'s that is closed under taking subterms, let \(\mathscr{T}_{V}\) be the set of all \(V\)-terms in \(\mathscr{T}\), and let \(\mathscr{T}_{R}\) be the set of all \(R\)-terms in \(\mathscr{T}\) for each symbol \(R\). Finally, replace the condition (D3) with \(e_{t_{i}}=\operatorname{false}\) for all \(i=1,\ldots,n\) (naturally, this only applies if \(t_{i}\) is an \(R\)-term). ## 5. Adjoints to functors not changing the domain In this section, we describe the simpler of the cases of our construction of an adjoint. We consider \((\sigma,\tau)\)-Pultr templates where \(\mathbf{P}=\mathbf{V}_{1}\) is the \(\sigma\)-structure with a single vertex and empty relations, and for each \(\tau\)-symbol \(R\), \(\mathbf{Q}_{R}\) is a \(\sigma\)-tree. The homomorphisms \(\epsilon_{i,P}\colon\mathbf{P}\to\mathbf{Q}_{R}\) are given by picking elements \(x_{1},\ldots,x_{\operatorname{ar}R}\in Q_{R}\) that are the images of the unique vertex of \(\mathbf{P}\) under \(\epsilon_{i,R}\) for the respective \(i\)'s. This means that the structure \(\Gamma(\mathbf{A})\) can be equivalently described in the following way: the universe of \(\Gamma(\mathbf{A})\) coincides with the universe of \(\mathbf{A}\), and for every \(\tau\)-symbol \(R\) of arity \(k\), we have \[R^{\Gamma(\mathbf{A})}=\{(h(x_{1}),\ldots,h(x_{k}))\mid h\colon\mathbf{Q}_{R} \to\mathbf{A}\}.\] Given such a Pultr template, we will now construct a \(\sigma\)-structure \(\Omega(\mathbf{B})\) from a \(\tau\)-structure \(\mathbf{B}\). In this definition, we use the notation \(X\times Y=\{f\cup g\mid f\in X,g\in Y\}\) whenever \(X\) and \(Y\) are sets of functions defined on disjoint sets. We also use \(\mathcal{P}(X)\) to denote the power set of a set \(X\). **Definition 5.1**.: Fix a \((\sigma,\tau)\)-Pultr template with \(\mathbf{P}\) being a singleton structure with empty relations and with \(\mathbf{Q}_{R}\) being a \(\sigma\)-tree for each \(\tau\)-symbol \(R\). First, for each \(R\), we pick a term \(t_{R}\) representing \(\mathbf{Q}_{R}\), and we let \(\mathcal{T}\) be the set of all subterms of any of the \(t_{R}\)'s. We use notation \(\mathcal{T}_{V}\) and \(\mathcal{T}_{R}\) for the \(V\)-terms and \(R\)-terms, respectively, that belong to this set. The universe of \(\Omega(\mathbf{B})\) consists of elements \[U\in\prod_{t\in\mathcal{T}_{V}}\mathcal{P}(\hom(\Gamma(\mathbf{T}(t)), \mathbf{B}))\] such that \(U_{\mathrm{vertex}}\) is a singleton set, i.e., vertices of \(\Omega(\mathbf{B})\) are tuples \(U\) indexed by \(V\)-terms in \(\mathcal{T}\), where the \(t\)-th entry is a subset of homomorphisms from \(\Gamma(\mathbf{T}(t))\) to \(\mathbf{B}\). The vertex-th entry \(U_{\mathrm{vertex}}\) is then a set containing a single homomorphism which maps \(r_{\mathrm{vertex}}\) to some \(u\in B\). Note that the relations in \(\Gamma(\mathbf{T}(t))\) will be empty whenever none of the trees \(\mathbf{Q}_{R}\) maps to \(\mathbf{T}(t)\), in which case \(U_{t}\) is just some set of functions from the vertices of \(\mathbf{T}(t)\) to \(B\). For each \(\sigma\)-symbol \(R\) of arity \(k\), we define the relation \(R^{\Omega(\mathbf{B})}\) as the set of all tuples \((U^{1},\ldots,U^{k})\) of elements of \(\Omega(\mathbf{B})\) for which there is a tuple \[E\in\prod_{t\in\mathcal{T}_{R}}\mathcal{P}(\hom(\Gamma(\mathbf{T}(t)), \mathbf{B}))\] such that 1. 
for all \(t\in\mathcal{T}_{R}\) where \(t=\operatorname{\mathsf{edge}}_{R}(t_{1},\ldots,t_{k})\in\mathcal{T}_{R}\), we have \[E_{t}=U^{1}_{t_{1}}\times\cdots\times U^{k}_{t_{k}},\] 2. for all \(s\in\mathcal{T}_{V}\) of the form \(s=\operatorname{pr}_{i}(t)\) for some \(t\in\mathcal{T}_{R}\), we have \[E_{t}\subseteq U^{i}_{s}.\] We call this tuple \(E\) a _witness_ for the edge \((U_{1},\ldots,U_{k})\). Note that the conditions (A1) and (A2) for an edge make sense; in the case of (A1), observe that in order to define a mapping from the domain of \(\mathbf{T}(t)\) to \(B\), it is enough to specify the values on vertices of \(\mathbf{T}(t_{1}),\ldots,\mathbf{T}(t_{k})\), and in the case of (A2), the actual trees represented by \(t\) and \(s\) are identical, so any map from the domain of \(\mathbf{T}(t)\) is a map from the domain of \(\mathbf{T}(s)\). Also note that the condition (A1) essentially defines the components of the tuple \(E\) witnessing an edge \((U_{1},\ldots,U_{k})\) in terms of the components of the tuples \(U_{1}\),..., \(U_{k}\). This means that a witness for each edge is unique. We claim that this construction \(\Omega\) yields a right adjoint to the considered cases of central Pultr functors \(\Gamma\). **Theorem 5.2**.: _Assuming a \((\sigma,\tau)\)-Pultr template with \(\mathbf{P}\) being the \(\sigma\)-structure with a single vertex and empty relations, and \(\mathbf{Q}_{R}\) being a \(\sigma\)-tree for all \(\tau\)-symbols \(R\). Further, assume \(\Gamma\) is the central Pultr functor defined by this template, and \(\Omega\) is defined as in Definition 5.1._ _For every \(\sigma\)-structure \(\mathbf{A}\) and \(\tau\)-structure \(\mathbf{B}\), there is a homomorphism \(\Gamma(\mathbf{A})\to\mathbf{B}\) if and only if there is a homomorphism \(\mathbf{A}\to\Omega(\mathbf{B})\)._ We now proceed to prove the above theorem in several steps. The following lemma proves one of the implications and gives further insights to why \(U\)'s and \(E\)'s are defined as above. **Lemma 5.3**.: _If there is a homomorphism \(f\colon\Gamma(\mathbf{A})\to\mathbf{B}\), then there is a homomorphism \(g\colon\mathbf{A}\to\Omega(\mathbf{B})\)._ Proof.: We define a mapping \(g\colon A\to\Omega(B)\) by \[g(u)_{t}=\{f\circ h\mid h\colon\mathbf{T}(t)\to\mathbf{A},h(r_{t})=u\}.\] We claim that this mapping is a homomorphism. First, we show that \(g(u)\) is well-defined, i.e., that \(g(u)_{\mathrm{vertex}}\) is a singleton set, and the elements of \(g(u)_{t}\) are homomorphisms from \(\Gamma(\mathbf{T}(t))\) to \(\mathbf{B}\). For the former, observe that \(g(u)_{\mathrm{vertex}}=\{r_{\mathrm{vertex}}\mapsto f(u)\}\) since there is a single homomorphism \(h\colon\mathbf{T}(\mathrm{vertex})\to\mathbf{A}\) which maps the root (and only vertex) \(r_{\mathrm{vertex}}\) to \(u\). For the latter, assume \(t\) is a \(V\)-term. Since \(\mathbf{P}\) is a singleton, it is easy to check that any homomorphism \(h:\mathbf{T}(t)\to\mathbf{A}\) is also a homomorphism \(\Gamma(\mathbf{T}(t))\to\Gamma(\mathbf{A})\). Then \(f\circ h\colon\Gamma(\mathbf{T}(t))\to\mathbf{B}\) is a homomorphism because \(f\) is a homomorphism \(\Gamma(\mathbf{A})\to\mathbf{B}\). To prove that \(g\) preserves the relations, we define a witness for the image of an edge \(e\in R^{\mathbf{A}}\) under \(g^{R}\) (a component-wise action of \(g\) on \(R^{\mathbf{A}}\)). 
We denote this witness, with some abuse of notation, by \(g(e)\), and let \[g(e)_{t}=\{f\circ h\mid h\colon\mathbf{T}(t)\to\mathbf{A},h^{R}(r_{t})=e\}\] for \(t\in\mathscr{T}_{R}\). Note the similarity with the definition of \(g(u)_{t}\) -- this in particular means that \(g(e)\) is well-defined as all the components are sets of homomorphisms. We claim that if \(e=(u_{1},\ldots,u_{k})\in R^{\mathbf{A}}\), then \(g(e)\) witnesses that \((g(u_{1}),\ldots,g(u_{k}))\in R^{\Omega(\mathbf{B})}\). For (A1), we need to show \[g(e)_{t}=g(u_{1})_{t_{1}}\times\cdots\times g(u_{k})_{t_{k}}\] where \(t=\mathrm{edge}_{R}(t_{1},\ldots,t_{k})\). This is true since for any homomorphism \(h\colon\mathbf{T}(t)\to\mathbf{A}\) that maps the root edge to \(e=(u_{1},\ldots,u_{k})\), its restrictions \(h_{i}\colon\mathbf{T}(t_{i})\to\mathbf{A}\) map the roots to the respective \(u_{i}\) for all \(i\), and for any tuple of homomorphisms \(h_{i}\colon\mathbf{T}(t_{i})\to\mathbf{A}\) which map the roots to the respective \(u_{i}\)'s, their union is a homomorphism \(\mathbf{T}(t)\to\mathbf{A}\). For (A2), we need to check that \(g(e)_{t}\subseteq g(u_{i})_{\operatorname{pr}_{i}(t)}\). This is easy to see since any homomorphism \(h\colon\mathbf{T}(t)\to\mathbf{A}\) that maps \(r_{t}\) to \(e\) maps \(r_{\operatorname{pr}_{i}(t)}\), which is the \(i\)-th component of \(r_{t}\), to \(u_{i}\). The above lemma concludes one of the implications that we need for the adjunction. We turn to the other implication which we prove in two steps, each provided by one of the following two lemmas. **Lemma 5.4**.: _Let \(t\in\mathscr{T}\) and let \(h\colon\mathbf{T}(t)\to\Omega(\mathbf{B})\) be a homomorphism._ 1. _If_ \(t\) _is a_ \(V\)_-term, and_ \(a\colon T(t)\to B\) _is the mapping such that_ \(a(v)\) _is the value of the single map in_ \(h(v)_{\mathrm{vertex}}\) _for all vertices_ \(v\) _of_ \(\mathbf{T}(t)\)_, then_ \(a\in h(r_{t})_{t}\)_._ 2. _If_ \(t\) _is an_ \(R\)_-term for a symbol_ \(R\)_,_ \(h(r_{t})\) _denotes the witness for the edge_ \(h^{R}(r_{t})\)_, and_ \(a\colon T(t)\to B\) _is the mapping such that_ \(a(v)\) _is the value of the single map in_ \(h(v)_{\mathrm{vertex}}\) _for all vertices_ \(v\) _of_ \(\mathbf{T}(t)\)_, then_ \(a\in h(r_{t})_{t}\)_._ Proof.: We prove the statement by induction on \(t\). * \(t=\mathrm{vertex}\) is a trivial case. * \(t=\mathrm{edge}_{R}(t_{1},\ldots,t_{k})\). Note that restrictions of \(h\) to subtrees \(\mathbf{T}(t_{i})\) are homomorphisms, so we know that, for all \(i\), \(h(r_{t_{i}})_{t_{i}}\) contains the restrictions of \(a\) by the inductive assumption. The claim then follows from (A1). * \(t=\operatorname{pr}_{i}(t^{\prime})\). Since \(h\) is a homomorphism from \(\mathbf{T}(t)=\mathbf{T}(t^{\prime})\) to \(\Omega(\mathbf{B})\), we know that \(h(r_{t^{\prime}})_{t^{\prime}}\) contains \(a\), and the claim subsequently follows by (A2). **Lemma 5.5**.: _If \(g\colon\mathbf{A}\to\Omega(\mathbf{B})\) is a homomorphism, then there is a homomorphism \(f\colon\Gamma(\mathbf{A})\to\mathbf{B}\)._ Proof.: Without loss of generality, assume that \(\mathbf{Q}_{R}=\mathbf{T}(t_{R})\) for all \(\tau\)-symbols \(R\). We define \(f\) by setting \(f(a)\) to be the unique value attained by the single map in \(g(a)_{\text{vertex}}\). This is a well-defined mapping on the vertices of \(\Gamma(\mathbf{A})\). We need to show that it preserves the relations of \(\Gamma(\mathbf{A})\). 
To this end, assume that \(R\) is a \(\tau\)-symbol of arity \(k\) and \((u_{1},\dots,u_{k})\in R^{\Gamma(\mathbf{A})}\). This means that there is a homomorphism \(h\colon\mathbf{Q}_{R}\to\mathbf{A}\) such that \(h(x_{i})=u_{i}\) for all \(i\in[k]\). Observe that \(g\circ h\colon\mathbf{T}(t_{R})\to\Omega(\mathbf{B})\) is a homomorphism since it is obtained as a composition of two homomorphisms, so the previous lemma applies. Since \(fh(v)\) is the unique value attained by the single map in \(gh(v)_{\text{vertex}}\), the conclusion of the lemma gives that \(f\circ h\in gh(r_{t_{R}})_{t_{R}}\). In particular, \(f\circ h\) is a homomorphism from \(\Gamma(\mathbf{T}(t_{R}))\) to \(\mathbf{B}\), and therefore \[(f(u_{1}),\dots,f(u_{k}))=(fh(x_{1}),\dots,fh(x_{k}))\in R^{\mathbf{B}}\] since \((x_{1},\dots,x_{k})\in R^{\Gamma(\mathbf{T}(t_{R}))}\) and \(h(x_{i})=u_{i}\). Lemmas 5.3 and 5.5 together yield Theorem 5.2. ### Example: An oriented path In this example, we compare our construction to the construction introduced in [15, Definition 4.1]. Our goal is to construct the adjoint to the digraph Pultr functor \(\Gamma\) defined by the Pultr template where \(\mathbf{Q}_{E}\) is the oriented path \[x_{1}\longleftarrow\bullet\longrightarrow\bullet\longrightarrow x_{2}.\] The maps \(\epsilon_{1,E}\) and \(\epsilon_{2,E}\) map the singleton \(\mathbf{P}\) to \(x_{1}\) and \(x_{2}\), respectively. Let us start by fixing a term \(t_{E}\) representing \(\mathbf{Q}_{E}\). Namely, we let \[t_{E}=\operatorname{edge}_{E}(\operatorname{pr}_{1}(\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex})),\operatorname{pr}_{1}(\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex})))\] which represents \(\mathbf{Q}_{E}\) rooted in the middle edge. It has two \(V\)-subterms, namely \(s_{0}=\operatorname{vertex}\) and \(s_{1}=\operatorname{pr}_{1}(\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex}))\), and two \(E\)-subterms (including itself), namely \(t_{1}=\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex})\) and \(t_{E}\); the term \(s_{1}\) represents a single edge rooted in its source, and \(t_{1}\) represents a single edge rooted in the edge itself. For a directed graph \(\mathbf{H}\), the definition of \(\Omega(\mathbf{H})\) is spelled out as follows. The vertices of \(\Omega(\mathbf{H})\) are pairs \(U=(U_{s_{0}},U_{s_{1}})\) where \(U_{s_{0}}\) is the set containing the single map that sends the unique vertex of \(\mathbf{T}(\operatorname{vertex})\) to some \(u_{0}\in H\), and \(U_{s_{1}}\subseteq H^{T(s_{1})}\) is an arbitrary set of maps; the latter is because \(\Gamma(\mathbf{T}(s_{1}))\) has no edges. There is an edge from \(U=(U_{s_{0}},U_{s_{1}})\) to \(V=(V_{s_{0}},V_{s_{1}})\) if there exists \(E=(E_{t_{1}},E_{t_{E}})\) where \(E_{t_{1}}\subseteq H^{T(s_{1})}\) and \(E_{t_{E}}\subseteq\operatorname{hom}(\Gamma(\mathbf{Q}_{E}),\mathbf{H})\) such that 1. \(E_{t_{1}}=U_{s_{0}}\times V_{s_{0}}\), \(E_{t_{E}}=U_{s_{1}}\times V_{s_{1}}\), and 2. \(E_{t_{1}}\subseteq U_{s_{1}}\). Let us simplify this definition. First, we will write homomorphisms from the above paths as tuples, writing the values of such a homomorphism from left to right as the vertices appear on the path above (from \(x_{1}\) to \(x_{2}\)). In this way, we have \(U_{s_{0}}\subseteq H\), \(U_{s_{1}}\subseteq H\times H\) for each \(U\), and \(E_{t_{1}}=U_{s_{0}}\times V_{s_{0}}\), \[E_{t_{E}}=\{(u_{1},u_{0},v_{0},v_{1})\mid(u_{0},u_{1})\in U_{s_{1}}\text{ and }(v_{0},v_{1})\in V_{s_{1}}\}\] for each \(E\) witnessing that \((U,V)\in E^{\Omega(\mathbf{H})}\). 
Note that \(E_{t_{E}}\subseteq\hom(\Gamma(\mathbf{Q}_{E}),\mathbf{H})\), which implies that \((u_{1},v_{1})\in E^{\mathbf{H}}\) for every \((u_{1},u_{0},v_{0},v_{1})\in E_{t_{E}}\). We claim that this construction results (on the same input) in a digraph that is homomorphically equivalent to the one obtained by [15, Definition 4.1]. Namely, the adjoint constructed there, let us call it \(\Omega^{\prime}\), is as follows: The vertices of \(\Omega^{\prime}(\mathbf{H})\) are pairs \((a,A)\) where \(a\in H\) and \(A\subseteq H\), and there is an edge from \((a,A)\) to \((b,B)\) if \(b\in A\) and \(A\times B\subseteq E^{\mathbf{H}}\). We show that there is a homomorphism \(\alpha\colon\Omega^{\prime}(\mathbf{H})\to\Omega(\mathbf{H})\) for every graph \(\mathbf{H}\) defined by \(\alpha((a,A))=U\) where \(U_{s_{0}}=\{a\}\) and \(U_{s_{1}}=\{a\}\times A\). To show that \(\alpha\) preserves edges, assume that \((a,A)\) and \((b,B)\) are connected by an edge in \(\Omega^{\prime}(\mathbf{H})\), i.e., \(b\in A\) and \(A\times B\subseteq E^{\mathbf{H}}\), and \(U=\alpha((a,A))\), \(V=\alpha((b,B))\). We use (1) to define \(E_{t_{1}}\) and \(E_{t_{E}}\), i.e., \[E_{t_{1}}=U_{s_{0}}\times V_{s_{0}}=\{(a,b)\}\] \[E_{t_{E}}=U_{s_{1}}\times V_{s_{1}}=A\times\{(a,b)\}\times B,\] and claim that * \(E_{t_{1}}\subseteq U_{s_{1}}\). This is true since \(b\in A\). * \(E_{t_{E}}\subseteq\hom(\Gamma(\mathbf{Q}_{E}),\mathbf{H})\). This is true, since the only edge of \(\Gamma(\mathbf{Q}_{E})\) is \((x_{1},x_{2})\) and the projection of \(E_{t_{E}}\) to \(x_{1},x_{2}\) (the first and the last coordinate) is \(A\times B\subseteq E^{\mathbf{H}}\). A homomorphism \(\beta\colon\Omega(\mathbf{H})\to\Omega^{\prime}(\mathbf{H})\) is given by \(\beta(U)=(a,A)\) where \(a\) is the unique element of \(U_{s_{0}}\), and \[A=\{a^{\prime}\mid(a,a^{\prime})\in U_{s_{1}}\}.\] To show that it is a homomorphism, assume \((U,V)\in E^{\Omega(\mathbf{H})}\) is witnessed by \(E\), and let \(\beta(U)=(a,A)\) and \(\beta(V)=(b,B)\). Since \(E_{t_{1}}=U_{s_{0}}\times V_{s_{0}}=\{(a,b)\}\) and \(E_{t_{1}}\subseteq U_{s_{1}}\), we have \((a,b)\in U_{s_{1}}\), which implies that \(b\in A\). Also since \(E_{t_{E}}\subseteq\hom(\Gamma(\mathbf{Q}_{E}),\mathbf{H})\), \(\{a\}\times A\subseteq U_{s_{1}}\), and \(\{b\}\times B\subseteq V_{s_{1}}\), we have that \[A\times\{(a,b)\}\times B\subseteq E_{t_{E}}\subseteq\hom(\Gamma(\mathbf{Q}_{E}),\mathbf{H}),\] which in particular implies that \(A\times B\subseteq E^{\mathbf{H}}\). This concludes the proof of the homomorphic equivalence of \(\Omega(\mathbf{H})\) and \(\Omega^{\prime}(\mathbf{H})\). We note that our \(\Omega(\mathbf{H})\) can be reduced to a smaller homomorphically equivalent structure by requiring that vertices \(U\in\Omega(\mathbf{H})\) satisfy the following condition: for all \(s,s^{\prime}\in\mathcal{T}_{V}\) and homomorphisms \(h\colon\mathbf{T}(s)\to\mathbf{T}(s^{\prime})\) with \(h(r_{s})=r_{s^{\prime}}\), we have \[\{f\circ h\mid f\in U_{s^{\prime}}\}\subseteq U_{s}.\] Note that the elements constructed in the proof of Lemma 5.3 satisfy this property. In this particular example, this requirement would force that \(U_{s_{1}}=\{a\}\times A\) for some \(A\subseteq H\) and \(a\in U_{s_{0}}\) since \(\mathbf{T}(s_{0})\) embeds into \(\mathbf{T}(s_{1})\) as the root. This would then make the two homomorphisms defined above into isomorphisms. ### Example: A 4-ary relation defined by an oriented path Our definition works also for Pultr templates that are not just digraph templates. As an example for comparison, let us consider a Pultr template that is similar to the previous example, but in this case maps digraphs to structures over a signature containing one 4-ary relational symbol \(R\). 
Specifically, \(\mathbf{P}\) is still a singleton with no edges, and \(\mathbf{Q}_{R}\) is the same digraph as \(\mathbf{Q}_{E}\) above, but we now have 4 homomorphisms \(\epsilon_{i,R}\colon\mathbf{P}\to\mathbf{Q}_{R}\) for \(i=0,1,2,3\) which map the vertex of \(\mathbf{P}\) to \(0,1,2\), or \(3\) respectively. Pictorially, the digraph \(\mathbf{Q}_{R}\) together with its distinguished vertices \(x_{0}\),..., \(x_{3}\) is as follows. We let \(t_{R}=t_{E}\) where \(t_{E}\) is as above, and we use the same notation as in the previous example. For a structure \(\mathbf{B}\) with a 4-ary relation \(R^{\mathbf{B}}\), vertices of \(\Omega(\mathbf{B})\) are defined in a similar way as above, i.e., they are pairs \((U_{s_{0}},U_{s_{1}})\) where \(U_{s_{0}}=\{b\}\) for some \(b\in B\) and \(U_{s_{1}}\subseteq B\times B\). Two such vertices \((U,V)\) are connected by an edge if there is \(E\) with 1. \(E_{t_{R}}=U_{s_{1}}\times V_{s_{1}}\subseteq R^{\mathbf{B}}\), and 2. \(E_{t_{1}}=U_{s_{0}}\times V_{s_{0}}\subseteq U_{s_{1}}\). More precisely, the first condition should be written as \[\{(u_{1},u_{0},v_{0},v_{1})\mid(u_{0},u_{1})\in U_{s_{1}},(v_{0},v_{1})\in V_{ s_{1}}\}\subseteq R^{\mathbf{B}}.\] This condition is given by (A1) and the fact that \(E_{t_{R}}\subseteq\hom(\Gamma(\mathbf{Q}_{R}),\mathbf{B})\). The condition (2) is given by a combination of (A1) for \(t_{1}=\operatorname{edge}_{E}(s_{0},s_{0})\) and (A2) for \(s_{1}=\operatorname{pr}_{1}(t_{1})\). The only real difference from the above example is that \(E_{t_{R}}\subseteq R^{\mathbf{B}}\) instead of requiring that the projection of \(E_{t_{E}}\) on the first and the last coordinates is a subset of \(E^{\mathbf{H}}\). As above, we could create a homomorphically equivalent \(\Omega^{\prime}(\mathbf{B})\) whose vertices would be pairs \((u,U)\) where \(u\in B\) and \(U\subseteq B\). Two such vertices \((u,U)\) and \((v,V)\) would be then connected by an edge if \(v\in U\) and \[U\times\{(u,v)\}\times V\subseteq R^{\mathbf{B}}.\] ### Duals from adjoints In [15], the authors claim that for digraph Pultr templates with the edge relation defined by \(\mathbf{Q}_{E}\), the image of the digraph with a single vertex and no edges is a dual to \(\mathbf{Q}_{E}\). We may be slightly more precise when talking about our constructions, namely, we claim that, if \(\tau\) contains a single symbol \(R\), and we consider a \((\sigma,\tau)\)-Pultr template, then image of the \(\tau\)-structure \(\mathbf{V}_{1}\) with a single vertex and no \(R\)-edges under \(\Omega\) is isomorphic to \(\mathbf{D}(t_{E})\) when \(\mathbf{Q}_{E}\) is a core and a tree represented by \(t_{E}\). Observe that there is only one function from any set \(T\) to \(V_{1}=\{1\}\). This in particular means that there is at most one homomorphism \(\Gamma(\mathbf{T}(t))\to\mathbf{V}_{1}\) for any term \(t\), and it is not hard to observe that the single function is a homomorphism if (and only if) \(\Gamma(\mathbf{T}(t))\) has no edges, in particular for all \(t<t_{E}\). This is a key observation that will allow us to show that \(\mathbf{D}(t_{E})\) and \(\Omega(\mathbf{V}_{1})\) are isomorphic since the only information in the components of an element \(U\in\Omega(V_{1})\) is whether this component is empty or not. 
Armed with this observation, we define a map \(h\colon\Omega(\mathbf{V}_{1})\to\mathbf{D}(t_{E})\) by \(h(U)=u\) where \[u_{t}=\begin{cases}\text{true}&\text{if $U_{t}\neq\emptyset$, and}\\ \text{false}&\text{if $U_{t}=\emptyset$.}\end{cases}\] The same definition can be used for witnesses of edges. Observe that the condition (A1) for a witness \(E\) of an edge \((U_{1},\ldots,U_{k})\) implies (D1) for the witness \(e\) of the edge \((u_{1},\ldots,u_{k})\): a product of sets is non-empty if and only if all of the sets are non-empty. The same goes for (A2) and (D2): a subset of the empty set is empty. Finally, (D3) is satisfied since \(\Gamma(\mathbf{T}(t_{E}))\) contains an edge, and therefore there is no homomorphism from this structure to \(\mathbf{V}_{1}\), and consequently \(E_{t_{E}}=\emptyset\). This shows that \(h\) is a well-defined homomorphism. Finally, observe that this homomorphism is invertible since there is a unique non-empty subset of the singleton set \(\hom(\Gamma(\mathbf{T}(t)),\mathbf{V}_{1})\) for all \(t<t_{E}\). It is also easy to check that this inverse is a homomorphism. ## 6. Adjoints to functors with domains defined by a relation In this section, we construct an adjoint to the central Pultr functor defined by a \((\sigma,\tau)\)-Pultr template for which \(\mathbf{P}=\mathbf{S}_{1}\), for some choice of \(\sigma\)-symbol \(S\), i.e., \(\mathbf{P}\) is the \(\sigma\)-tree with \(\operatorname{ar}S\) elements related by \(S\). Again, for each \(k\)-ary \(\tau\)-symbol \(R\), we have a \(\sigma\)-tree \(\mathbf{Q}_{R}\) with a \(k\)-tuple \(\epsilon_{1,R}\),..., \(\epsilon_{k,R}\) of homomorphisms \(\mathbf{P}\to\mathbf{Q}_{R}\). Each of these homomorphisms selects an \(S\)-edge of \(\mathbf{Q}_{R}\); we denote these edges by \(x_{1}\),..., \(x_{k}\), respectively, i.e., \(x_{i}=\epsilon_{i,R}^{S}(e)\) for each \(i=1,\ldots,k\) where \(e\in S^{\mathbf{P}}\) is the unique \(S\)-edge of \(\mathbf{P}\). Let us repeat the definition of \(\Gamma(\mathbf{A})\) for a \(\sigma\)-structure \(\mathbf{A}\) using this notation. * The domain of \(\Gamma(\mathbf{A})\) is \(S^{\mathbf{A}}\); * for each \(\tau\)-symbol \(R\) of arity \(k\), the corresponding relation of \(\Gamma(\mathbf{A})\) is defined as \[R^{\Gamma(\mathbf{A})}=\{(h^{S}(x_{1}),\ldots,h^{S}(x_{k}))\mid h\colon\mathbf{Q}_{R}\to\mathbf{A}\}\] where \(h^{S}\colon S^{\mathbf{Q}_{R}}\to S^{\mathbf{A}}\) is the component-wise action of \(h\) on \(S\). We also note that, for each homomorphism \(h\colon\mathbf{A}\to\mathbf{B}\) between two \(\sigma\)-structures \(\mathbf{A}\) and \(\mathbf{B}\), \(h^{S}\colon\Gamma(\mathbf{A})\to\Gamma(\mathbf{B})\) is a homomorphism. The construction of \(\Omega\) in this case is very similar to the construction in Definition 5.1, with a few small changes. **Definition 6.1**.: Fix a \((\sigma,\tau)\)-Pultr template with \(\mathbf{P}=\mathbf{S}_{1}\) for some \(\sigma\)-symbol \(S\), and assume \(\mathbf{Q}_{R}\) is a \(\sigma\)-tree for each \(\tau\)-symbol \(R\). For each \(\tau\)-symbol \(R\), we pick a term \(t_{R}\) representing \(\mathbf{Q}_{R}\), and we let \(\mathcal{T}\) be the set of all subterms of any of the \(t_{R}\)'s. We use notation \(\mathcal{T}_{V}\) and \(\mathcal{T}_{R}\) for the \(V\)-terms and \(R\)-terms that belong to this set. Now, we are ready to define \(\Omega(\mathbf{B})\). The universe of \(\Omega(\mathbf{B})\) consists of elements \[U\in\prod_{t\in\mathcal{T}_{V}}\mathcal{P}(\hom(\Gamma(\mathbf{T}(t)),\mathbf{B}))\] such that \(U_{\text{vertex}}=\{\emptyset\}\). 
We impose this trivial definition of the component \(U_{\text{vertex}}\) to avoid complicated case distinction below. For each \(\sigma\)-symbol \(R\) of arity \(k\), different from \(S\), we define the relation \(R^{\Omega(\mathbf{B})}\) to be the set of all tuples \((U^{1},\ldots,U^{k})\) for which there exists a tuple \[E\in\prod_{t\in\mathcal{T}_{R}}\mathcal{P}(\hom(\Gamma(\mathbf{T}(t)),\mathbf{B}))\] such that

(B1) for all \(t=\operatorname{edge}_{R}(t_{1},\ldots,t_{k})\in\mathscr{T}_{R}\), we have \[E_{t}=U_{t_{1}}^{1}\times\cdots\times U_{t_{k}}^{k},\]

(B2) for all \(s\in\mathscr{T}_{V}\) of the form \(s=\operatorname{pr}_{i}(t)\) for some \(t\in\mathscr{T}_{R}\), we have \[E_{t}\subseteq U_{s}^{i}.\]

(Up to this point, this definition is almost identical to Definition 5.1). Finally, the relation \(S^{\Omega(\mathbf{B})}\) is defined to be the set of all tuples \((U^{1},\ldots,U^{k})\), where \(k=\operatorname{ar}S\), for which there exist \(e_{\bullet}\in B\) and a tuple \(E\in\prod_{t\in\mathscr{T}_{S}}\mathcal{P}(\hom(\Gamma(\mathbf{T}(t)),\mathbf{B}))\) satisfying

(B1\({}_{\bullet}\)) for all \(t=\operatorname{edge}_{S}(t_{1},\ldots,t_{k})\in\mathscr{T}_{S}\), we have \[E_{t}=U_{t_{1}}^{1}\times\cdots\times U_{t_{k}}^{k}\times\{r_{t}\mapsto e_{\bullet}\},\] where \(r_{t}\mapsto e_{\bullet}\) denotes the map from \(\{r_{t}\}\) to \(B\) that maps \(r_{t}\) to \(e_{\bullet}\);

and the condition (B2) as above with \(R\) replaced by \(S\). As before, we will call a tuple \(E\) that satisfies (B1) and (B2) a _witness_ for the \(R\)-edge \((U^{1},\ldots,U^{k})\). And similarly, we will call a pair \((e_{\bullet},E)\) satisfying (B1\({}_{\bullet}\)) and (B2) a witness for an \(S\)-edge \((U^{1},\ldots,U^{k})\). Note that a witness \((e_{\bullet},E)\) of an \(S\)-edge is not unique; nevertheless, \(E\) is uniquely determined by \(e_{\bullet}\) and \((U^{1},\ldots,U^{k})\). This justifies calling \(e_{\bullet}\) a witness of the corresponding \(S\)-edge as well when we do not need to refer to \(E\). The reason that the condition (B1\({}_{\bullet}\)) differs from (A1) and (B1) is that, in this case, when \(t=\operatorname{edge}_{S}(t_{1},\ldots,t_{k})\), we have \[S^{\mathbf{T}(t)}=S^{\mathbf{T}(t_{1})}\cup\cdots\cup S^{\mathbf{T}(t_{k})}\cup\{r_{t}\},\] and \(S^{\mathbf{T}(t)}\) is the domain of \(\Gamma(\mathbf{T}(t))\). This means that in order to define a homomorphism from \(\Gamma(\mathbf{T}(t))\), we need to specify not only values on vertices of \(\Gamma(\mathbf{T}(t_{i}))\)'s, as in (B1), but also at \(r_{t}\). This explains the necessity for the introduction of this new element \(e_{\bullet}\). We claim that this definition indeed constructs a right adjoint to \(\Gamma\). The proof is very similar to the proof of Theorem 5.2, and we leave it for the last subsection of this section. **Theorem 6.2**.: _Assume a \((\sigma,\tau)\)-Pultr template with \(\mathbf{P}\) being the \(\sigma\)-tree with \(\operatorname{ar}S\) vertices connected by an \(S\)-edge for some \(\sigma\)-symbol \(S\), and \(\mathbf{Q}_{R}\) being a \(\sigma\)-tree for all \(\tau\)-symbols \(R\). 
Further assume \(\Gamma\) is the central Pultr functor defined by this template, and \(\Omega\) is defined as in Definition 6.1._ _For every \(\sigma\)-structure \(\mathbf{A}\) and \(\tau\)-structure \(\mathbf{B}\), there is a homomorphism \(\Gamma(\mathbf{A})\to\mathbf{B}\) if and only if there is a homomorphism \(\mathbf{A}\to\Omega(\mathbf{B})\)._ ### Example: The arc graph construction Consider the arc-graph construction that is usually denoted by \(\delta\); given a (directed) graph \(\mathbf{G}=(G,E^{\mathbf{G}})\), the digraph \(\delta(\mathbf{G})\) is defined as the graph with the vertex set \(E^{\mathbf{G}}\) and edges \(((u,v),(v,w))\in E^{\mathbf{G}}\times E^{\mathbf{G}}\), i.e., the domain is defined by \(\mathbf{P}=(\{0,1\};\{(0,1)\})\), and the binary relation \(E\) is defined by the graph \(\mathbf{Q}_{E}=(\{0,1,2\};\{(0,1),(1,2)\})\) with \(\epsilon_{1}(0)=0\) and \(\epsilon_{1}(1)=1\), and \(\epsilon_{2}(0)=1\) and \(\epsilon_{2}(1)=2\). The right adjoint \(\Omega\) according to our definition above would be constructed in the following way: First, we choose a term \(t_{E}\) representing \(\mathbf{Q}_{E}\). We can pick \(t_{2}\) as in Section 4.1. \[t_{2}=\operatorname{edge}_{E}(\operatorname{pr}_{2}(\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex})),\operatorname{vertex})\] We also name all its subterms as in Section 4.1, i.e., \[s_{0}=\operatorname{vertex},\qquad t_{1}=\operatorname{edge}_{E}(s_{0},s_{0}),\qquad s_{1}=\operatorname{pr}_{2}(t_{1}).\] Vertices of \(\Omega(\mathbf{B})\) are defined as pairs \((U_{s_{0}},U_{s_{1}})\) such that \(U_{s_{0}}=\{\emptyset\}\) and \(U_{s_{1}}\subseteq\hom(\Gamma(\mathbf{T}(s_{1})),\mathbf{B})\). Two such vertices \(U,V\) are connected by an edge if there is \(e_{\bullet}\in B\) and a pair of sets \((E_{t_{1}},E_{t_{2}})\) such that \(E_{t_{i}}\subseteq\hom(\Gamma(\mathbf{T}(t_{i})),\mathbf{B})\) for \(i=1,2\), and 1. \(E_{t_{1}}=U_{s_{0}}\times\{r_{t_{1}}\mapsto e_{\bullet}\}\times V_{s_{0}}\) by (B1\({}_{\bullet}\)) since \(t_{1}=\operatorname{edge}_{E}(s_{0},s_{0})\); 2. \(E_{t_{1}}\subseteq V_{s_{1}}\) by (B2) since \(s_{1}=\operatorname{pr}_{2}(t_{1})\); 3. \(E_{t_{2}}=U_{s_{1}}\times\{r_{t_{2}}\mapsto e_{\bullet}\}\times V_{s_{0}}\) by (B1\({}_{\bullet}\)) since \(t_{2}=\operatorname{edge}_{E}(s_{1},s_{0})\). These conditions can be considerably simplified. First, since \(U_{s_{0}}=V_{s_{0}}=\{\emptyset\}\), both can be dropped from the above products, and also they carry no additional information. Further, since \(\Gamma(\mathbf{T}(s_{1}))=\Gamma(\mathbf{T}(t_{1}))\) is the graph with a single vertex and no edges, we can identify \(U_{s_{1}}\) and \(E_{t_{1}}\) with subsets of \(B\). Connecting this observation with the comment about \(U_{s_{0}}\), we can identify \(U\) with \(U_{s_{1}}\subseteq B\). Finally, the elements of \(E_{t_{2}}\) are homomorphisms from a directed edge to \(\mathbf{B}\), which correspond to the edges of \(\mathbf{B}\). Using this correspondence, we can identify \(E_{t_{2}}\) with a subset of \(E^{\mathbf{B}}\). Taking all of this into account, the conditions above simplify to 1. \(E_{t_{1}}=\{e_{\bullet}\}\subseteq V_{s_{1}}\), and 2. \(E_{t_{2}}=U_{s_{1}}\times\{e_{\bullet}\}\subseteq E^{\mathbf{B}}\). 
So, we can say vertices of \(\Omega(\mathbf{B})\) are subsets \(U\) of \(B\), and \((U,V)\) is an edge of \(\Omega(\mathbf{B})\) if \[\exists e_{\bullet}\in V\quad\text{such that}\quad U\times\{e_{\bullet}\}\subseteq E^{\mathbf{B}}.\] We compare this construction with the functor \(\delta_{R}\) described in [11, Definition 3.1] as a right adjoint to \(\delta\). For a digraph \(\mathbf{B}\), the vertices of the digraph \(\delta_{R}(\mathbf{B})\) are the complete bipartite subgraphs of \(\mathbf{B}\), i.e., pairs \((U^{-},U^{+})\) of subsets of vertices of \(\mathbf{B}\) such that \(U^{-}\times U^{+}\subseteq E^{\mathbf{B}}\). There is an edge from \((U^{-},U^{+})\) to \((V^{-},V^{+})\) if \(U^{+}\cap V^{-}\neq\emptyset\). Below, we show that \(\delta_{R}(\mathbf{B})\) and \(\Omega(\mathbf{B})\) are homomorphically equivalent. We start by constructing a homomorphism \(h\colon\delta_{R}(\mathbf{B})\to\Omega(\mathbf{B})\). We let \[h(U^{-},U^{+})=U^{-}.\] To show that it preserves edges, assume \(U^{+}\cap V^{-}\neq\emptyset\), i.e., there exists \(e_{\bullet}\in U^{+}\cap V^{-}\). We claim that this \(e_{\bullet}\) witnesses that \((U^{-},V^{-})\) is an edge in \(\Omega(\mathbf{B})\). Clearly, \(e_{\bullet}\in V^{-}\). Also, we have \[U^{-}\times\{e_{\bullet}\}\subseteq U^{-}\times U^{+}\subseteq E^{\mathbf{B}}.\] A homomorphism \(g\colon\Omega(\mathbf{B})\to\delta_{R}(\mathbf{B})\) is a bit harder to construct. Guided by the above, it is natural to choose the first component of \(g(U)\) to be \(g(U)^{-}=U\). We need to define the second component \(g(U)^{+}\). We let \(g(U)^{+}\) be the largest set such that \(g(U)^{-}\times g(U)^{+}\subseteq E^{\mathbf{B}}\), i.e., \[g(U)^{+}=\{v\in B\mid\forall u\in U,(u,v)\in E^{\mathbf{B}}\}.\] Now, assume that \(U\) and \(V\) are connected by an edge in \(\Omega(\mathbf{B})\) witnessed by \(e_{\bullet}\). We claim that \(e_{\bullet}\in g(U)^{+}\cap g(V)^{-}\). By definition of \(\Omega(\mathbf{B})\), we have \(e_{\bullet}\in V=g(V)^{-}\), and \(U\times\{e_{\bullet}\}\subseteq E^{\mathbf{B}}\), which implies that \(e_{\bullet}\in g(U)^{+}\). Altogether, \(e_{\bullet}\in g(U)^{+}\cap g(V)^{-}\), and hence \((g(U),g(V))\in E^{\delta_{R}(\mathbf{B})}\). This completes the proof. This example suggests that \(U_{\text{vertex}}\) can be generally ignored in the definition of \(\Omega(\mathbf{B})\), and it indeed only serves as a placeholder that avoids case distinction between some \(E\)-terms, e.g., between \(\operatorname{edge}_{E}(t,s)\) and \(\operatorname{edge}_{E}(t,\text{vertex})\) for \(s\neq\text{vertex}\). ### Example: Arc structure In this subsection, we consider a certain variant of the arc graph construction which we will call _arc structure_ and which encodes more information than the arc graph: The domain of the arc structure coincides with the domain of the arc graph, i.e., the set of all edges of the input graph, and we extend the signature with two more binary symbols that will relate those pairs of edges that are incident in a different sense. We fix \(\phi\) to be a signature with three binary relations \(D\), \(I\), and \(O\), and we let \(\gamma\) be the signature of digraphs. 
We define a central Pultr functor \(\partial\) using the \((\gamma,\phi)\)-Pultr template defined as follows: The digraph defining vertices is the digraph with a single oriented edge, i.e., \(\mathbf{P}=\mathbf{E}_{1}\), and the digraphs \(\mathbf{Q}_{D}\), \(\mathbf{Q}_{I}\), \(\mathbf{Q}_{O}\) each consist of two edges \(x_{1}\) and \(x_{2}\) (the images of the edge of \(\mathbf{P}\) under \(\epsilon_{1}\) and \(\epsilon_{2}\)): in \(\mathbf{Q}_{D}\) the edge \(x_{1}\) ends in the vertex where \(x_{2}\) starts, in \(\mathbf{Q}_{I}\) the two edges share their head, and in \(\mathbf{Q}_{O}\) the two edges share their tail. Note that the digraph \(\mathbf{Q}_{D}\) with the two distinguished edges defines the arc-graph functor. This means that for each digraph \(\mathbf{G}\), the reduct \((A,D^{\mathbf{A}})\) where \(\mathbf{A}=\partial(\mathbf{G})\) is the arc-graph of \(\mathbf{G}\) (if \(D\) is interpreted as \(E\)), see Figure 1.

Figure 1. A digraph, its arc graph and its arc structure. The \(I\) and \(O\) relations of \(\partial(\mathbf{G})\) are symmetric, and \(I\) and \(O\) loops on all vertices of \(\partial(\mathbf{G})\) are omitted for readability.

Now, to construct the right adjoint to \(\partial\), which we denote by \(\omega_{\partial}\), we fix the following terms representing the graphs \(\mathbf{Q}_{D}\), \(\mathbf{Q}_{I}\), and \(\mathbf{Q}_{O}\): \[t_{D}=\operatorname{edge}_{E}(s_{2},\operatorname{vertex}),\qquad t_{I}=\operatorname{edge}_{E}(\operatorname{vertex},s_{2}),\qquad t_{O}=\operatorname{edge}_{E}(s_{1},\operatorname{vertex}),\] where \(s_{i}=\operatorname{pr}_{i}(\operatorname{edge}_{E}(\operatorname{vertex},\operatorname{vertex}))\) for \(i=1,2\). We name all remaining subterms as follows, \(s_{0}=\text{vertex}\), \(t_{E}=\operatorname{edge}_{E}(\text{vertex},\text{vertex})\). The set of terms defining \(\omega_{\partial}\) is \(\mathscr{T}=\{s_{0},s_{1},s_{2},t_{E},t_{D},t_{I},t_{O}\}\). Of these, the \(V\)-terms are \(\mathscr{T}_{V}=\{s_{0},s_{1},s_{2}\}\). This means that the vertices of \(\omega_{\partial}(\mathbf{B})\) are triples \((U_{s_{0}},U_{s_{1}},U_{s_{2}})\) where \(U_{s_{0}}=\{\emptyset\}\), and \(U_{s_{1}},U_{s_{2}}\) are sets of functions from a \(1\)-element set to \(B\). We identify such a triple with a pair \((U^{+},U^{-})\) where \(U^{+}\subseteq B\) is the set of images of functions in \(U_{s_{1}}\) and \(U^{-}\subseteq B\) is the set of images of functions in \(U_{s_{2}}\). Using obvious simplifications of conditions (B1\({}_{\bullet}\)) and (B2), we get that two such pairs \((U^{+},U^{-})\) and \((V^{+},V^{-})\) are connected by an edge if there exists \(e_{\bullet}\in B\), so that the sets \(E_{t_{E}}=\{e_{\bullet}\}\), \(E_{t_{D}}=U^{-}\times\{e_{\bullet}\}\), \(E_{t_{I}}=\{e_{\bullet}\}\times V^{-}\), and \(E_{t_{O}}=U^{+}\times\{e_{\bullet}\}\) satisfy \(E_{t_{E}}\subseteq U^{+}\cap V^{-}\) and \(E_{t_{R}}\subseteq R^{\mathbf{B}}\) for \(R\in\{D,I,O\}\). This means that \((U^{+},U^{-})\) and \((V^{+},V^{-})\) are connected by an edge if there exists \(e_{\bullet}\in U^{+}\cap V^{-}\) such that \[U^{-}\times\{e_{\bullet}\}\subseteq D^{\mathbf{B}},\quad\{e_{\bullet}\}\times V^{-}\subseteq I^{\mathbf{B}},\quad U^{+}\times\{e_{\bullet}\}\subseteq O^{\mathbf{B}}.\] This completes the definition, though we can further refine it by requiring that for each vertex, the sets \(U^{+}\) and \(U^{-}\) satisfy additional properties that would automatically imply the conditions above. Namely, we require \[U^{-}\times U^{+}\subseteq D^{\mathbf{B}},\quad U^{-}\times U^{-}\subseteq I^{\mathbf{B}},\quad U^{+}\times U^{+}\subseteq O^{\mathbf{B}}.\] To sum up the refined definition, we let \(\omega_{\partial}(\mathbf{B})\) be the digraph with vertex set \[\{(U^{+},U^{-})\mid U^{-}\times U^{+}\subseteq D^{\mathbf{B}},\ U^{-}\times U^{-}\subseteq I^{\mathbf{B}},\ U^{+}\times U^{+}\subseteq O^{\mathbf{B}}\}\] where \((U^{+},U^{-})\) and \((V^{+},V^{-})\) form an edge if \(U^{+}\cap V^{-}\neq\emptyset\). 
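As a quick computational sanity check of the refined construction (under the reading of \(D\), \(I\), \(O\) fixed above: consecutive edges, edges sharing their head, and edges sharing their tail), the following Python sketch builds the arc structure \(\partial(\mathbf{G})\) of a small digraph, builds the refined \(\omega_{\partial}(\partial(\mathbf{G}))\), and verifies by brute force that \(\mathbf{G}\to\omega_{\partial}(\partial(\mathbf{G}))\), as adjointness requires. All function names are ours, and the sketch is only an illustration of the definitions above.

```python
from itertools import product, combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def arc_structure(E):
    """Arc structure of a digraph with edge set E: the domain is E, with
    D = consecutive edges, I = edges sharing their head, O = edges sharing
    their tail (the reading of Q_D, Q_I, Q_O used in this section)."""
    D = {(e, f) for e in E for f in E if e[1] == f[0]}
    I = {(e, f) for e in E for f in E if e[1] == f[1]}
    O = {(e, f) for e in E for f in E if e[0] == f[0]}
    return D, I, O

def omega(B, D, I, O):
    """Refined omega_d(B): vertices are pairs (U+, U-) with U- x U+ <= D,
    U- x U- <= I and U+ x U+ <= O; there is an edge from (U+, U-) to
    (V+, V-) iff U+ and V- intersect."""
    verts = [(P, M) for P in powerset(B) for M in powerset(B)
             if all((m, p) in D for m in M for p in P)
             and all((m, n) in I for m in M for n in M)
             and all((p, q) in O for p in P for q in P)]
    edges = {(u, v) for u in verts for v in verts if u[0] & v[1]}
    return verts, edges

def digraph_hom_exists(V1, E1, V2, E2):
    """Brute-force check for a digraph homomorphism (V1, E1) -> (V2, E2)."""
    V1, V2 = list(V1), list(V2)
    for image in product(V2, repeat=len(V1)):
        h = dict(zip(V1, image))
        if all((h[a], h[b]) in E2 for (a, b) in E1):
            return True
    return False

# The unit of the adjunction: since trivially arc(G) -> arc(G), adjointness
# forces G -> omega_d(arc(G)).  We check this for a small digraph G.
V = [0, 1, 2]
E = {(0, 1), (1, 2), (0, 2)}
D, I, O = arc_structure(E)
W, F = omega(E, D, I, O)
print(digraph_hom_exists(V, E, W, F))   # expected output: True
```

One such homomorphism sends a vertex \(u\) of \(\mathbf{G}\) to the pair (out-edges of \(u\), in-edges of \(u\)), mirroring the homomorphism constructed in the proof of Lemma 6.3.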
It is not hard to check that even after the refinements, we still get the right adjoint to \(\partial\), i.e., that indeed there is a homomorphism \(\mathbf{A}\to\omega_{\partial}(\mathbf{B})\) if and only if there is a homomorphism \(\partial(\mathbf{A})\to\mathbf{B}\) for any digraph \(\mathbf{A}\) and \(\phi\)-structure \(\mathbf{B}\). Note how this compares to the right adjoint \(\delta_{R}\) of the arc graph functor as defined in the previous subsection. ### Proof of Theorem 6.2 As noted above, the proof closely follows the proof of Theorem 5.2, and we present it with the same structure starting with the easier of the two implications. **Lemma 6.3**.: _If there is a homomorphism \(f\colon\Gamma(\mathbf{A})\to\mathbf{B}\) then there is a homomorphism \(g\colon\mathbf{A}\to\Omega(\mathbf{B})\)._ Proof.: Assuming \(f\colon\Gamma(\mathbf{A})\to\mathbf{B}\), we define a mapping \(g\colon A\to\Omega(B)\) by setting, for all \(u\in A\) and all \(t\in\mathcal{T}_{V}\), \[g(u)_{t}=\{f\circ h^{S}\mid h\colon\mathsf{T}(t)\to\mathbf{A},h(r_{t})=u\},\] and claim that it is a homomorphism from \(\mathbf{A}\) to \(\Omega(\mathbf{B})\). We need to check that \(g(u)\) is well-defined, i.e., that \(f\circ h^{S}\) is a homomorphism from \(\Gamma(\mathsf{T}(t))\) to \(\mathbf{B}\) for all \(t\in\mathcal{T}_{V}\) and that \(g(u)_{\mathrm{vertex}}=\{\emptyset\}\). Indeed, \(f\circ h^{S}\) is a homomorphism since it is a composition of homomorphisms \(h^{S}\colon\Gamma(\mathsf{T}(t))\to\Gamma(\mathbf{A})\) and \(f\colon\Gamma(\mathbf{A})\to\mathbf{B}\). And, \(g(u)_{\mathrm{vertex}}=\{\emptyset\}\) since \(\mathsf{T}(\mathrm{vertex})\) has no \(S\)-edges which implies that \(h^{S}=\emptyset\) (i.e., \(h^{S}\) is the empty map), and consequently \(f\circ h^{S}=\emptyset\), for the only mapping \(h\colon T(\mathrm{vertex})\to A\) such that \(h(r_{\mathrm{vertex}})=u\). To show that \(g\) is a homomorphism, assume first that \(R\neq S\) is a \(\sigma\)-symbol of arity \(k\), and \(e=(u_{1},\ldots,u_{k})\in R^{\mathbf{A}}\). We define the witness \(g(e)\) for the \(R\)-edge \(g^{R}(e)\) in \(\Omega(\mathbf{B})\) by \[g(e)_{t}=\{f\circ h^{S}\mid h\colon\mathsf{T}(t)\to\mathbf{A},h^{R}(r_{t})=e\}\] for \(t\in\mathcal{T}_{R}\). Again, we get that for each \(h\colon\mathsf{T}(t)\to\mathbf{A}\), \(f\circ h^{S}\) is a homomorphism by the same argument as in the previous paragraph. We claim that \(g(e)\) satisfies (B1) and (B2). For (B1), we need to show \[g(e)_{t}=g(u_{1})_{t_{1}}\times\cdots\times g(u_{k})_{t_{k}}\] where \(t=\operatorname{edge}_{R}(t_{1},\ldots,t_{k})\). Observe that homomorphisms \(h\colon\mathsf{T}(t)\to\mathbf{A}\) such that \(h^{R}(r_{t})=e\) are in \(1\)-to-\(1\) correspondence with \(k\)-tuples of homomorphisms \(h_{1},\ldots,h_{k}\) such that \(h_{i}\colon\mathsf{T}(t_{i})\to\mathbf{A}\) and \(h_{i}(r_{t_{i}})=u_{i}\) for all \(i\in[k]\) obtained as their restrictions to the respective subtrees. Finally, if \(h\) is the union of \(h_{i}\)'s then also \(h^{S}\) is the union of \(h^{S}_{i}\)'s. The claim then easily follows. For (B2), we want to check that \(g(e)_{t}\subseteq g(u_{i})_{s}\) where \(s=\operatorname{pr}_{i}(t)\). Observe that if \(h\colon\mathsf{T}(t)\to\mathbf{A}\) and \(h(r_{t})=e\) then \(h(r_{s})=u_{i}\) since the \(i\)-th component of \(r_{t}\) is \(r_{s}\). Again, the rest follows easily. Finally, for the case \(R=S\) and \(e=(u_{1},\ldots,u_{k})\in S^{\mathbf{A}}\), we need to define the element \(g(e)_{\bullet}\in B\). This is defined by \(g(e)_{\bullet}=f(e)\). 
The rest of the argument follows the previous paragraph with the following difference in (B1): \(h^{S}\) is no longer the union of \(h_{i}\)'s, but \[h^{S}=h^{S}_{1}\cup\cdots\cup h^{S}_{k}\cup(r_{t}\mapsto f(e)).\] This is because \(\mathbf{T}(t)\) contains an \(S\)-edge, namely \(r_{t}\), that is not contained in any of \(\mathbf{T}(t_{i})\)'s. This consequently gives (B1\({}_{\bullet}\)). The condition (B2) is proved the same way as above. The other implication is proved in the following two lemmas. **Lemma 6.4**.: _Let \(t\in\mathcal{T}\) and \(h\colon\mathbf{T}(t)\to\Omega(\mathbf{B})\)._ 1. _If_ \(t\) _is a_ \(V\)_-term, and_ \(a\colon S^{\mathbf{T}(t)}\to B\) _is such that, for all_ \(s\in S^{\mathbf{T}(t)}\)_,_ \(a(s)\) _is a witness for the edge_ \(h^{S}(s)\)_, then_ \(a\in h(r_{t})_{t}\)_._ 2. _If_ \(t\) _is an_ \(R\)_-term for a symbol_ \(R\neq S\)_, and_ \(a\colon S^{\mathbf{T}(t)}\to B\) _is such that, for all_ \(s\in S^{\mathbf{T}(t)}\)_,_ \(a(s)\) _is a witness for the edge_ \(h^{S}(s)\) _and_ \(h(r_{t})\) _is the witness for_ \(h^{R}(r_{t})\)_, then_ \(a\in h(r_{t})_{t}\)_._ 3. _If_ \(t\) _is an_ \(S\)_-term, and_ \(a\colon S^{\mathbf{T}(t)}\to B\) _is such that, for all_ \(s\in S^{\mathbf{T}(t)}\)_,_ \(a(s)\) _is a witness for the edge_ \(h^{S}(s)\) _and_ \((a(r_{t}),h(r_{t}))\) _is a witness for_ \(h^{S}(r_{t})\)_, then_ \(a\in h(r_{t})_{t}\)_._ Proof.: We prove all three parts simultaneously by induction on \(t\). * \(t=\text{vertex}\). Since \(S^{\mathbf{T}(\text{vertex})}=\emptyset\), the only map \(a\colon S^{\mathbf{T}(\text{vertex})}\to B\) is the empty map which is in \(h(r_{\text{vertex}})_{\text{vertex}}\) by definition. * \(t=\text{edge}_{S}(t_{1},\ldots,t_{k})\). Note that restrictions of \(h\) to subtrees \(\mathbf{T}(t_{i})\)'s are homomorphisms, so we know that, for all \(i\), \(h(r_{t_{i}})_{t_{i}}\) contains the corresponding restrictions of \(a\) by the inductive assumption. The claim then follows from (B1\({}_{\bullet}\)) since it claims that \[h(r_{t})_{t}=h(r_{t_{1}})_{t_{1}}\times\cdots\times h(r_{t_{k}})_{t_{k}}\times\{r_{t}\mapsto a(r_{t})\}\] and the restriction of \(a\) on \(S^{\mathbf{T}(t_{i})}\) is in \(h(r_{t_{i}})_{t_{i}}\) for all \(i\) by the inductive assumption. * \(t=\text{edge}_{R}(t_{1},\ldots,t_{k})\) where \(R\neq S\). This is proved in the same way as the above case using (B1) instead of (B1\({}_{\bullet}\)), with the exception that the last component of the product does not appear since \(r_{t}\notin S^{\mathbf{T}(t)}\). * \(t=\text{pr}_{i}(s)\). Since \(h\) is a homomorphism from \(\mathbf{T}(t)=\mathbf{T}(s)\) to \(\Omega(\mathbf{B})\), we know that \(h(r_{s})_{s}\) contains \(a\) by the inductive assumption. The claim then follows from (B2). **Lemma 6.5**.: _If there is a homomorphism \(g\colon\mathbf{A}\to\Omega(\mathbf{B})\), then there is a homomorphism \(f\colon\Gamma(\mathbf{A})\to\mathbf{B}\)._ Proof.: Let us assume without loss of generality that \(\mathbf{Q}_{R}=\mathbf{T}(t_{R})\) for each \(\tau\)-symbol \(R\). We define a mapping \(f\colon\Gamma(A)\to B\) by setting, for all \(s\in S^{\mathbf{A}}\), \(f(s)=e_{\bullet}\) for some witness \(e_{\bullet}\) of the edge \(g^{S}(s)\in S^{\Omega(\mathbf{B})}\), and claim that this \(f\) is a homomorphism from \(\Gamma(\mathbf{A})\) to \(\mathbf{B}\). We need to show that \(f\) preserves each relation \(R\). 
Assume that \((u_{1},\ldots,u_{k})\in R^{\mathbf{A}}\), i.e., there is a homomorphism \(h\colon\mathbf{Q}_{R}\to\mathbf{A}\), s.t., \[(h^{S}(x_{1}),\ldots,h^{S}(x_{k}))=(u_{1},\ldots,u_{k}).\] The previous lemma applied to the homomorphism \(g\circ h\colon\Upsilon(t_{R})\to\Omega(\mathbf{B})\) implies that, if the witness \((E,e_{\bullet})\) for the edge \((g\circ h)^{S}(r_{t_{R}})\) is chosen so that \(e_{\bullet}=fh^{S}(r_{t_{R}})\), then \(E_{t_{R}}\) contains the map \(f\circ h^{S}\). Consequently, \(f\circ h^{S}\colon\Gamma(\mathbf{Q}_{R})\to\mathbf{B}\) is a homomorphism which in turn implies \[(f(u_{1}),\ldots,f(u_{k}))=(fh^{S}(x_{1}),\ldots,fh^{S}(x_{k}))\in R^{\mathbf{ B}}\] since \((x_{1},\ldots,x_{k})\in R^{\Gamma(\mathbf{Q}_{R})}\). This concludes the proof of Theorem 6.2. ## 7. Composition of adjoints In this section, we give an example of what can be achieved by composing functors defined in Sections 5 and 6. The power of composing two adjoints to obtain more complicated construction was observed in [15, Section 5] where the authors considered composition of digraph functors with adjoints. This section gives several examples that show that we can obtain adjoints to more digraph functors by composing functors that go outside of the scope of digraphs into general relational structures. Naturally, our constructions also give more adjoints between general relational structures. We start with a few general observations. The key fact that makes composition of adjoints useful is the following well-known categorical observation. **Lemma 7.1**.: _Assume that \(\Lambda_{1},\Gamma_{1}\) and \(\Lambda_{2},\Gamma_{2}\) are two pairs of (thin) adjoint functors, then \(\Lambda_{1}\circ\Lambda_{2}\) is a left adjoint to \(\Gamma_{2}\circ\Gamma_{1}\)._ Proof.: The proof is straightforward. We get the following string of equivalences from the two adjoints: \(\Lambda_{1}\Lambda_{2}(\mathbf{A})\to\mathbf{B}\) if and only if \(\Lambda_{2}(\mathbf{A})\to\Gamma_{1}(\mathbf{B})\) if and only if \(\mathbf{A}\to\Gamma_{2}\Gamma_{1}(\mathbf{B})\) for any two structures \(\mathbf{A}\) and \(\mathbf{B}\) of the right signatures. Let us also note that two right adjoints to a functor \(\Lambda\) give homomorphically equivalent structures on the same input, i.e., if \(\Gamma_{1}\) and \(\Gamma_{2}\) are both right adjoints of \(\Lambda\) then for all \(\mathbf{A}\), \(\Gamma_{1}(\mathbf{A})\) and \(\Gamma_{2}(\mathbf{A})\) are homomorphically equivalent. Finally, we can also observe that a composition of Pultr functors again gives a Pultr functor. The template defining the composition \(\Gamma_{2}\circ\Gamma_{1}\) for two central Pultr functors is essentially the image of the template of \(\Gamma_{2}\) under the left adjoint \(\Lambda_{1}\) of \(\Gamma_{1}\). The following theorem is obtained by composing adjoints constructed in Sections 5 and 6. Though, it is not an exhaustive list of adjunctions that can be constructed by such compositions, it provides more adjoints to digraph functors on top of those provided in [15]. The theorem concerns a relatively general case of Pultr templates where \(\mathbf{P}\) is an arbitrary tree. We only require that copies of \(\mathbf{P}\) in the respective \(\mathbf{Q}_{R}\)'s intersect in at most a vertex. We also note that this theorem covers the cases of central Pultr functors whose adjoints are provided by Theorems 5.2 and 6.2. 
**Theorem 7.2**.: _Assume a \((\sigma,\tau)\)-Pultr template with \(\mathbf{P}\) and all \(\mathbf{Q}_{R}\)'s being \(\sigma\)-trees such that, for each \(\tau\)-symbol \(R\),_ 1. \(\epsilon_{i,R}\) _is injective for all_ \(i\in\{1,\ldots,\operatorname{ar}R\}\)_, and_ 2. _for each_ \(i\neq j\)_,_ \(i,j\in\{1,\ldots,\operatorname{ar}R\}\)_, the images of_ \(\mathbf{P}\) _under_ \(\epsilon_{i,R}\) _and_ \(\epsilon_{j,R}\) _intersect in at most a vertex._ _Then the corresponding central Pultr functor \(\Gamma\) has a right adjoint._ Proof.: Assume that \(P=\{1,\ldots,p\}\). The goal is to decompose the functor \(\Gamma\) into two Pultr functors \(\Gamma_{1}\) and \(\Gamma_{2}\). The intermediate step is to construct a structure of a new signature. This new signature \(v\) is obtained from \(\sigma\) by adding a new relational symbol \(S\) of arity \(p\) (the size of the domain of \(\mathbf{P}\)) while retaining all symbols in \(\sigma\). We define the first functor, \(\Gamma_{1}\), which maps \(\sigma\)-structures to \(v\)-structures. Essentially, this functor adds a new relation \(S\) that is defined by \(\mathbf{P}\), i.e., \(\Gamma_{1}(\mathbf{B})\) is the \(v\)-structure with domain \(B\), where the relations are defined as \[R^{\Gamma_{1}(\mathbf{B})} =R^{\mathbf{B}}\text{ for each $\sigma$-symbol $R$, and}\] \[S^{\Gamma_{1}(\mathbf{B})} =\{(h(1),\ldots,h(p))\mid h\colon\mathbf{P}\to\mathbf{B}\}.\] It is clear that this functor is defined by a \((\sigma,v)\)-Pultr template that satisfies the assumptions of Theorem 5.2. The second functor, \(\Gamma_{2}\), is defined by altering the original Pultr template for \(\Gamma\). We let \(\mathbf{P}^{2}=\mathbf{S}_{1}\), and for each \(\tau\)-symbol \(R\), we obtain \(\mathbf{Q}_{R}^{2}\) from \(\mathbf{Q}_{R}\) by replacing, for each \(i=1,\ldots,\operatorname{ar}R\), the copy of \(\mathbf{P}\) obtained as an image of \(\epsilon_{i,R}\) in \(\mathbf{Q}_{R}\) by a copy of \(\mathbf{P}^{2}=\mathbf{S}_{1}\) (keeping the vertices in place, but removing edges). Observe that since the \(\epsilon_{i,R}\)'s are injective, this does not introduce reflexive tuples, and since the images of \(\epsilon_{i,R}\) and \(\epsilon_{j,R}\) intersect in at most one vertex for \(i\neq j\), this does not introduce cycles into \(\mathbf{Q}_{R}^{2}\). Hence, \(\mathbf{Q}_{R}^{2}\) is still a tree for each \(\tau\)-symbol \(R\), and therefore Theorem 6.2 applies. The above two paragraphs show that \(\Gamma_{1}\) and \(\Gamma_{2}\) have right adjoints \(\Omega_{1}\) and \(\Omega_{2}\), and it is straightforward to check that \(\Gamma=\Gamma_{2}\circ\Gamma_{1}\). By Lemma 7.1, \(\Omega_{1}\circ\Omega_{2}\) is therefore a right adjoint to \(\Gamma\). ## 8. Conclusion We have studied the problem of characterising which central Pultr functors for arbitrary relational structures admit a right adjoint, and, for those that do, of giving an explicit construction of such an adjoint. There is a necessary condition for the existence of such an adjoint (cf. Theorem 2.8 and comments after it). We gave a sufficient condition in Theorem 7.2. These two conditions do not match; there is a gap between them, and it is not quite clear what the necessary and sufficient condition should be (even in the case of digraphs). Apart from the requirement that \(\mathbf{P}\) and all \(\mathbf{Q}_{R}\)'s are trees, Theorem 7.2 has two additional assumptions.
We believe that the second assumption (about intersection of images of \(\mathbf{P}\) in \(\mathbf{Q}_{R}\)) is a technicality that can be removed with some extra work. How essential is the first assumption (about injectivity of homomorphisms \(\epsilon_{i,R}\))? For example, is it true that for every central Pultr functor \(\Gamma\) that has a right adjoint, there is another central Pultr functor \(\Gamma^{\prime}\) such that (a) for every structure \(\mathbf{A}\) of appropriate signature, \(\Gamma(\mathbf{A})\) and \(\Gamma^{\prime}(\mathbf{A})\) are homomorphically equivalent, and (b) the Pultr template corresponding to \(\Gamma^{\prime}\) has all homomorphisms \(\epsilon_{i,R}\) injective?
2302.02982
Nearly toroidal, periodic and quasi-periodic motions of fluid particles driven by the Gavrilov solutions of the Euler equations
We consider the smooth, compactly supported solutions of the steady 3D Euler equations of incompressible fluids constructed by Gavrilov in 2019, and we study the corresponding fluid particle dynamics. This is an ode analysis, which contributes to the description of Gavrilov's vector field.
Pietro Baldi
2023-02-06T18:17:38Z
http://arxiv.org/abs/2302.02982v1
# Nearly toroidal, periodic and quasi-periodic motions of fluid particles driven by the Gavrilov solutions of the Euler equations Pietro Baldi **Abstract.** We consider the smooth, compactly supported solutions of the steady 3D Euler equations of incompressible fluids constructed by Gavrilov in 2019, and we study the corresponding fluid particle dynamics. This is an ode analysis, which contributes to the description of Gavrilov's vector field. ###### Contents * 1 Introduction and main result * 1.1 The Gavrilov solutions of the steady Euler equations * 1.2 Main result: description of the fluid particle dynamics * 1.3 Related literature * 2 Dual role of the parameter \(R\) in Gavrilov's solutions * 3 Conjugation to a linear flow on \(\mathbb{T}^{2}\) * 3.1 Cylindrical coordinates * 3.2 Elimination of the factor \(1/\rho\) and canonical Hamiltonian structure * 3.3 Symplectic polar coordinates in the radial-vertical plane * 3.4 Level sets * 3.5 Integration of the system in the radial-vertical plane * 3.6 Rotation period in the radial-vertical plane * 3.7 Angle-action variables in the radial-vertical plane * 3.8 Reduction to a constant rotation in the tangential direction * 3.9 Ratio of the two rotation periods * 3.10 Motion of the fluid particles * 3.11 The pressure in terms of the action * 4 Taylor expansions and transversality * 4.1 Expansion of \(\alpha\) * 4.2 Expansion of \(\alpha_{2}\) * 4.3 Expansion of \(\gamma_{c}(\vartheta)\) * 4.4 Expansion of the average of \(\gamma_{c}(\vartheta)\) and \(\partial_{c}\gamma_{c}(\vartheta)\) * 4.5 Expansion of \(J(c)\) * 4.6 Expansion of \(h(I)\) and \(\Omega_{1}(I)\) * 4.7 Expansion of the frequency ratio \(\Omega_{2}(I)/\Omega_{1}(I)\) * 4.8 Expansion of \(\Phi(\sigma,\beta,I)\) * 4.9 Smallness conditions ## 1 Introduction and main result In the remarkable paper [7], Gavrilov proved the existence of a nontrivial solution of \(C^{\infty}\) class, with compact support, of the steady Euler equations of incompressible fluids in \(\mathbb{R}^{3}\). The result in [7] is important and surprising, because previously, on the basis of some negative partial results, it was conjectured that compactly supported, nontrivial, smooth solutions of the 3D steady Euler equations cannot exist: see the clear explanation at the beginning of [4] and the general discussion about existence of compactly supported smooth solutions in [10]. In addition, another reason for interest in the fruitful construction of [7] is that recently it has been used as a building block to produce other interesting solutions, both stationary and time-dependent, of the Euler equations of fluid dynamics; see Section 1.3 below. Now suppose that a fluid moves according to the Gavrilov solution of the Euler equations, that is, suppose that the fluid particles are driven by Gavrilov's velocity vector field. What motion of the fluid do we observe? Of course the particles outside the support of the vector field do not move at all, but how do they move in the region where the field is nonzero? In the present paper we deal with this question. It turns out that every fluid particle travels along a trajectory that lies on a nearly toroidal surface, which is a level set of the pressure. The motion of every fluid particle is periodic or quasi-periodic in time; we prove that there are both periodic and quasi-periodic motions, and the value of the pressure determines whether the trajectories on its level set are all periodic or all quasi-periodic.
In fact, the system of differential equations in \(\mathbb{R}^{3}\) describing the motion of the fluid particles turns out to be integrable, as it can be transformed into a system consisting of a Hamiltonian system of one degree of freedom and a third equation that can be directly solved by integration. We write the Hamiltonian system in angle-action coordinates \((\sigma,I)\), and prove that there exists a change of variables on a neighborhood of the support of Gavrilov's vector field such that the equations of motion in the new coordinates \((\sigma,\beta,I)\) become \[\dot{\sigma}=\Omega_{1}(I),\ \ \ \ \dot{\beta}=\Omega_{2}(I),\ \ \ \ \dot{I}=0,\] where \(\sigma\) and \(\beta\) are angle variables rotating with constant angular velocities \(\Omega_{1}(I)\), \(\Omega_{2}(I)\), and \(I\) is a constant action variable, which is, in fact, a reparametrization of the pressure. The full statement is in Theorem 1.1. ### The Gavrilov solutions of the steady Euler equations In the main part of the construction in [7], given any \(R>0\), the circle \[\mathcal{C}:=\{(x,y,z)\in\mathbb{R}^{3}:x^{2}+y^{2}=R^{2},\ z=0\}\] in \(\mathbb{R}^{3}\) is considered, and, in an open neighborhood \(\mathcal{N}\) of \(\mathcal{C}\), two functions \(U,P\) are defined, \(U:\mathcal{N}\to\mathbb{R}^{3}\) and \(P:\mathcal{N}\to\mathbb{R}\), both in \(C^{\infty}(\mathcal{N}\setminus\mathcal{C})\), solving the steady Euler equations \[U\cdot\nabla U+\nabla P=0,\ \ \ \ \operatorname{div}U=0 \tag{1.1}\] in \(\mathcal{N}\setminus\mathcal{C}\), together with the fundamental "localizability condition" \[U\cdot\nabla P=0. \tag{1.2}\] As a final step of the proof, the functions \(U,P\) are multiplied by smooth cut-off functions to obtain \(C^{\infty}(\mathbb{R}^{3})\) functions \(\tilde{U},\tilde{P}\), where \(\tilde{U}\) and \(\nabla\tilde{P}\) have compact support contained in \(\mathcal{N}\setminus\mathcal{C}\), solving (1.1) (and also (1.2)) in \(\mathbb{R}^{3}\). Let us be more precise. Denote \(\rho:=\sqrt{x^{2}+y^{2}}\). For \(\delta\in(0,R)\), let \[\mathcal{N}=\{(x,y,z)\in\mathbb{R}^{3}:(\rho-R)^{2}+z^{2}<\delta^{2}\}. \tag{1.3}\] In \(\mathcal{N}\), the solution \((U,P)\) of [7] is given by \[U(x,y,z)=u_{\rho}(\rho,z)e_{\rho}(x,y)+u_{\varphi}(\rho,z)e_{\varphi}(x,y)+u_{z}(\rho,z)e_{z},\ \ \ P(x,y,z)=p(\rho,z), \tag{1.4}\] where \[e_{\rho}(x,y) =\frac{1}{\rho}(x,y,0), e_{\varphi}(x,y) =\frac{1}{\rho}(-y,x,0), e_{z} =(0,0,1),\] \[u_{\rho}(\rho,z) =\frac{\partial_{z}p(\rho,z)}{\rho}, u_{\varphi}(\rho,z) =\frac{b(\rho,z)}{\rho}, u_{z}(\rho,z) =-\frac{\partial_{\rho}p(\rho,z)}{\rho},\] \[b(\rho,z) =\frac{R^{3}}{4}\sqrt{H(a(\rho,z))}, p(\rho,z) =\frac{R^{4}}{4}a(\rho,z), a(\rho,z) =\alpha\Big{(}\frac{\rho}{R},\frac{z}{R}\Big{)}, \tag{1.5}\] and \(\alpha,H\) are functions defined in [7] in terms of solutions of certain differential equations; \(H(0)=0\), and \(H\) is analytic in a neighborhood of \(0\); \(\alpha\) has a strict local minimum at \((1,0)\), with \(\alpha(1,0)=0\), and it is analytic in a neighborhood of \((1,0)\). Hence \(\alpha\) and \(H\circ\alpha\) are both well-defined and analytic in a disc of \(\mathbb{R}^{2}\) of center \((1,0)\) and radius \(r_{0}\), for some universal constant \(r_{0}>0\) (where "universal" means that \(r_{0}\) does not depend on anything).
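As a quick sanity check of the ansatz (1.4)-(1.5) (our illustration, independent of the specific \(\alpha\) and \(H\) constructed in [7]), the incompressibility \(\operatorname{div}U=0\) and the localizability condition \(U\cdot\nabla P=0\) hold identically for arbitrary smooth profiles \(p(\rho,z)\) and \(b(\rho,z)\); the short symbolic computation below, a minimal sketch in cylindrical coordinates, confirms this.

```python
# Sketch (not from [7]): verify that the ansatz (1.4)-(1.5) with generic smooth
# profiles p(rho, z), b(rho, z) automatically satisfies div U = 0 and U . grad P = 0.
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
p = sp.Function('p')(rho, z)   # pressure profile p(rho, z), kept generic
b = sp.Function('b')(rho, z)   # swirl profile b(rho, z), kept generic

u_rho = sp.diff(p, z) / rho    # radial component
u_phi = b / rho                # tangential component (independent of phi)
u_z = -sp.diff(p, rho) / rho   # vertical component

# localizability: U . grad P = u_rho * dp/drho + u_z * dp/dz  (P has no phi-dependence)
localizability = u_rho * sp.diff(p, rho) + u_z * sp.diff(p, z)

# incompressibility in cylindrical coordinates:
# div U = (1/rho) d(rho u_rho)/drho + (1/rho) d(u_phi)/dphi + d(u_z)/dz, and d(u_phi)/dphi = 0
divergence = sp.diff(rho * u_rho, rho) / rho + sp.diff(u_z, z)

print(sp.simplify(localizability))  # 0
print(sp.simplify(divergence))      # 0
```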
If \(\delta\) in (1.3) satisfies \[\delta\leq r_{0}R,\] then \(a(\rho,z)\) and \(H(a(\rho,z))\), where \(\rho=\sqrt{x^{2}+y^{2}}\), are well-defined and analytic in \(\mathcal{N}\) (note that, in \(\mathcal{N}\), one has \(0<R-\delta<\rho<R+\delta\); in particular, \(\rho\) is bounded away from zero). Also, \(b(\rho,z)\) is well-defined and continuous in \(\mathcal{N}\), and (because of the square root \(\sqrt{H}\)) it is analytic in \(\mathcal{N}\setminus\mathcal{C}\). Hence \(P\) is analytic in \(\mathcal{N}\), while \(U\) is continuous in \(\mathcal{N}\) and analytic in \(\mathcal{N}\setminus\mathcal{C}\). The solution \((\tilde{U},\tilde{P})\) in [7] is defined in \(\mathcal{N}\) as \[\tilde{U}(x,y,z)=\omega(P(x,y,z))U(x,y,z),\ \ \ \ \tilde{P}(x,y,z)=W(P(x,y,z)), \tag{1.6}\] where \(\omega:\mathbb{R}\to\mathbb{R}\) is any \(C^{\infty}\) function vanishing outside the interval \([\varepsilon,2\varepsilon]\), with \(\varepsilon>0\) small enough, and \(W:\mathbb{R}\to\mathbb{R}\) is a primitive of \(\omega^{2}\); for example, \[W(s)=\int_{0}^{s}\omega^{2}(\sigma)\,d\sigma. \tag{1.7}\] Then \((\tilde{U},\tilde{P})\) is extended to \(\mathbb{R}^{3}\) by defining \((\tilde{U},\tilde{P})=(0,W(2\varepsilon))\) in \(\mathbb{R}^{3}\setminus\mathcal{N}\). Note that \(\tilde{U}\) and \(\nabla\tilde{P}\) can be nonzero only in the set \[\mathcal{S}:=\{(x,y,z)\in\mathcal{N}:\varepsilon<P(x,y,z)<2\varepsilon\}, \tag{1.8}\] and, if \(\omega(s)\) is nonzero at some \(s\in(\varepsilon,2\varepsilon)\), then both \(\tilde{U}\) and \(\nabla\tilde{P}\) are actually nonzero at the corresponding points in \(\mathcal{S}\). Moreover, \(P=0\) on the circle \(\mathcal{C}\), and, for \(\varepsilon\) small enough, \(P>3\varepsilon\) at all points of \(\mathcal{N}\) sufficiently far from \(\mathcal{C}\); more precisely, to fix the details of the construction, we introduce a parameter \(\tau>0\) and assume that \[P(x,y,z)>\tau\ \ \ \ \forall(x,y,z)\in\mathcal{N}\setminus\mathcal{N}^{\prime} \tag{1.9}\] and \(\tau\geq 3\varepsilon\), where \[\mathcal{N}^{\prime}:=\{(x,y,z)\in\mathbb{R}^{3}:(\rho-R)^{2}+z^{2}<(\delta/4 )^{2}\}.\] Thus, the closure of \(\mathcal{S}\) is contained in the open set \[\mathcal{S}^{*}:=\{(x,y,z)\in\mathcal{N}:0<P(x,y,z)<\tau\}, \tag{1.10}\] and \(\mathcal{S}^{*}\subseteq(\mathcal{N}^{\prime}\setminus\mathcal{C})\subseteq( \mathcal{N}\setminus\mathcal{C})\). ### Main result: description of the fluid particle dynamics A preliminary, basic observation regarding the solutions of Section 1.1 is that any such solution with a given \(R>0\) can be obtained, by rescaling, from another one having \(R=1\). Even more, we show that Gavrilov's solutions are a \(1\)-parameter subset of a \(2\)-parameter family of solutions, where \(R\) plays a dual role related to two different scaling invariances of the Euler equations, both preserving the localizability condition (1.2). These basic observations are in Section 2 (see Lemma 2.1). Thanks to these properties, we study the motion of the fluid particles in the normalized case \(R=1\); the motion for any other \(R>0\) is immediately obtained by rescaling the amplitude and the time variable, as explained in Lemma 2.2. To study the motion of the fluid particles driven by the Gavrilov vector field \(\tilde{U}\) means to study the solutions \(\mathbb{R}\to\mathbb{R}^{3}\), \(t\mapsto(x(t),y(t),z(t))\) of the system \[(\dot{x}(t),\dot{y}(t),\dot{z}(t))=\tilde{U}(x(t),y(t),z(t)), \tag{1.11}\] which is an autonomous ode in \(\mathbb{R}^{3}\). 
The dot above a function denotes its time derivative. Before dealing with system (1.11), we recall some definitions about quasi-periodic functions. A vector \(\Omega=(\Omega_{1},\dots,\Omega_{n})\in\mathbb{R}^{n}\), \(n\geq 1\), is said to be _rationally independent_ if \(\Omega\cdot k=\Omega_{1}k_{1}+\dots+\Omega_{n}k_{n}\) is nonzero for all integer vectors \(k=(k_{1},\dots,k_{n})\in\mathbb{Z}^{n}\setminus\{0\}\). Given a set \(X\), a function \(v:\mathbb{R}\to X\), \(t\mapsto v(t)\) is said to be _quasi-periodic with frequency vector_\(\Omega=(\Omega_{1},\dots,\Omega_{n})\in\mathbb{R}^{n}\), \(n\geq 2\), if \(\Omega\) is rationally independent and there exists a function \(w:\mathbb{R}^{n}\to X\), \(2\pi\)-periodic in each real variable, such that \(v(t)=w(\Omega_{1}t,\dots,\Omega_{n}t)\) for all \(t\in\mathbb{R}\). Moreover, to ensure that the number \(n\) is not higher than necessary, we add the condition that there does not exist any vector \(\tilde{\Omega}=(\tilde{\Omega}_{1},\dots,\tilde{\Omega}_{m})\in\mathbb{R}^{m}\), with \(m<n\), and any function \(\tilde{w}:\mathbb{R}^{m}\to X\), \(2\pi\)-periodic in each real variable, such that \(v(t)=\tilde{w}(\tilde{\Omega}_{1}t,\dots,\tilde{\Omega}_{m}t)\) for all \(t\in\mathbb{R}\). For example, for \(X=\mathbb{R}\), \(n=3\), and \(\Omega=(\Omega_{1},\Omega_{2},\Omega_{3})\in\mathbb{R}^{3}\) a rationally independent vector, if \(w(\vartheta_{1},\vartheta_{2},\vartheta_{3})=\cos(\vartheta_{1}+\vartheta_{2} )+\cos(\vartheta_{3})\), then \(n=3\) is not minimal, as it can be reduced to \(n=2\) by taking \(\tilde{w}(\vartheta_{1},\vartheta_{2})=\cos(\vartheta_{1})+\cos(\vartheta_{2})\), \(\tilde{\Omega}=(\tilde{\Omega}_{1},\tilde{\Omega}_{2})\in\mathbb{R}^{2}\), with \(\tilde{\Omega}_{1}=\Omega_{1}+\Omega_{2}\) and \(\tilde{\Omega}_{2}=\Omega_{3}\), while \(n=2\) cannot be further reduced. Hence the function \(v(t)=\tilde{w}(\tilde{\Omega}_{1}t,\tilde{\Omega}_{2}t)=\cos(\tilde{\Omega}_{ 1}t)+\cos(\tilde{\Omega}_{2}t)\) is quasi-periodic with frequency vector \(\tilde{\Omega}\in\mathbb{R}^{2}\). For \(n=2\), a vector \(\Omega=(\Omega_{1},\Omega_{2})\in\mathbb{R}^{2}\) is rationally independent if and only if \(\Omega_{1}\) is nonzero and the ratio \(\Omega_{2}/\Omega_{1}\) is irrational. Hence a function \(v(t)=w(\Omega_{1}t,\Omega_{2}t)\) is quasi-periodic with frequency vector \(\Omega=(\Omega_{1},\Omega_{2})\) if \(w(\vartheta_{1},\vartheta_{2})\) is \(2\pi\)-periodic in \(\vartheta_{1}\) and in \(\vartheta_{2}\), \(\Omega_{1}\) is nonzero, \(\Omega_{2}/\Omega_{1}\) is irrational, and \(v(t)\) is not a periodic function. The main result of this paper is the following description of Gavrilov's fluid particle dynamics. **Theorem 1.1**.: _There exist universal positive constants \(\delta_{0},\tau_{0},\varepsilon_{0},I^{*}\) with the following properties. Let_ \[\mathcal{C},\delta,\mathcal{N},U,P,\varepsilon,\omega,W,\tilde{U},\tilde{P}, \mathcal{S},\tau,\mathcal{N}^{\prime},\mathcal{S}^{*}\] _be the sets, constants, and functions defined in Section 1.1 for \(R=1\), with \(\delta=\delta_{0}\), \(\tau=\tau_{0}\), and \(0<\varepsilon\leq\varepsilon_{0}\)._ \((i)\) _There exists an analytic diffeomorphism_ \[\Phi:\mathbb{T}\times\mathbb{T}\times(0,I^{*})\to\mathcal{S}^{*},\] _where \(\mathbb{T}:=\mathbb{R}/2\pi\mathbb{Z}\), such that the change of variable \((x,y,z)=\Phi(\sigma,\beta,I)\) transforms system (1.11) into a system of the form_ \[\dot{\sigma}=\Omega_{1}(I),\ \ \ \ \dot{\beta}=\Omega_{2}(I),\ \ \ \ \dot{I}=0. 
\tag{1.12}\] _As a consequence, the solution \((x(t),y(t),z(t))\) of the Cauchy problem (1.11) with initial datum_ \[(x(0),y(0),z(0))=(x_{0},y_{0},z_{0})=\Phi(\sigma_{0},\beta_{0},I_{0})\in \mathcal{S}^{*} \tag{1.13}\] _is the function_ \[(x(t),y(t),z(t))=\Phi(\sigma(t),\beta(t),I(t)), \tag{1.14}\] _defined for all \(t\in\mathbb{R}\), where_ \[\sigma(t)=\sigma_{0}+\Omega_{1}(I_{0})t,\ \ \ \ \beta(t)=\beta_{0}+\Omega_{2}(I_{0})t,\ \ \ \ I(t)=I_{0}. \tag{1.15}\] _The angle variables \(\sigma(t),\beta(t)\in\mathbb{T}\) rotate with constant angular frequencies \(\Omega_{1}(I_{0}),\Omega_{2}(I_{0})\) respectively, and the variable \(I(t)=I_{0}\) is constant in time._ \((ii)\) _The first and third equations of the transformed system (1.12) form a Hamiltonian system_ \[\dot{\sigma}=\partial_{I}\mathcal{H}(\sigma,I),\quad\dot{I}=-\partial_{\sigma} \mathcal{H}(\sigma,I) \tag{1.16}\] _with Hamiltonian \(\mathcal{H}(\sigma,I)=\mathcal{H}(I)\) independent of the angle variable \(\sigma\); hence (1.16) is a Hamiltonian system in angle-action variables._ \((iii)\) _The frequency \(\Omega_{1}(I)\) is given by_ \[\Omega_{1}(I)=\omega(\mathcal{K}(I))\,\mathcal{K}^{\prime}(I)\] _where \(\mathcal{K}(I)\) is the restriction to the interval \((0,I^{*})\) of a function defined and analytic in the interval \((-I^{*},I^{*})\), with Taylor expansion_ \[\mathcal{K}(I)=I+\frac{1065}{1024}I^{3}+O(I^{4})\] _around \(I=0\), and strictly increasing in \((-I^{*},I^{*})\). The frequency \(\Omega_{2}(I)\) is given by_ \[\Omega_{2}(I)=\sqrt{I}\,\mathcal{R}(I)\Omega_{1}(I)\] _where \(\mathcal{R}(I)\) is the restriction to the interval \((0,I^{*})\) of a function defined and analytic in the interval \((-I^{*},I^{*})\), with Taylor expansion_ \[\mathcal{R}(I)=1+\frac{7}{4}I+O(I^{2})\] _around \(I=0\). If \(\omega(\mathcal{K}(I))\neq 0\), then both \(\Omega_{1}(I)\) and \(\Omega_{2}(I)\) are nonzero, with ratio_ \[\frac{\Omega_{2}(I)}{\Omega_{1}(I)}=\sqrt{I}\,\mathcal{R}(I).\] _The function \(I\mapsto\sqrt{I}\,\mathcal{R}(I)\) is strictly increasing and analytic in \((0,I^{*})\). Therefore it is rational for infinitely many values of \(I\), and irrational for infinitely many other values of \(I\). More precisely, the set \(\{I\in(0,I^{*}):\sqrt{I}\,\mathcal{R}(I)\in\mathbb{Q}\}\) is a countable set, while the set \(\{I\in(0,I^{*}):\sqrt{I}\,\mathcal{R}(I)\notin\mathbb{Q}\}\) has full Lebesgue measure._ \((iv)\) _For \(\omega(\mathcal{K}(I_{0}))\neq 0\), the solution (1.14) of the Cauchy problem (1.11), (1.13) is periodic in time if \(\sqrt{I_{0}}\,\mathcal{R}(I_{0})\) is rational, and it is quasi-periodic in time with frequency vector \((\Omega_{1}(I_{0}),\Omega_{2}(I_{0}))\) if \(\sqrt{I_{0}}\,\mathcal{R}(I_{0})\) is irrational._ \((v)\) _The map \(\Phi(\sigma,\beta,I)\) admits a converging expansion in powers of \(\sqrt{I}\) around \(I=0\); more precisely, there exists a map \(\Psi(\sigma,\beta,\mu)\), defined and analytic in \(\mathbb{T}^{2}\times(-\mu_{0},\mu_{0})\), where \(\mu_{0}=\sqrt{I^{*}}\), such that \(\Phi(\sigma,\beta,I)=\Psi(\sigma,\beta,\sqrt{I})\) for all \((\sigma,\beta,I)\in\mathbb{T}^{2}\times(0,I^{*})\). 
The map \(\Phi(\sigma,\beta,I)\) has the form_ \[\Phi(\sigma,\beta,I)=\begin{pmatrix}\rho(\sigma,I)\cos(\beta+\eta(\sigma,I)) \\ \rho(\sigma,I)\sin(\beta+\eta(\sigma,I))\\ \zeta(\sigma,I)\end{pmatrix}\] _where the functions \(\rho(\sigma,I),\eta(\sigma,I),\zeta(\sigma,I)\) have expansion_ \[\rho(\sigma,I)=1+\sqrt{2I}\sin\sigma+O(I),\quad\ \eta(\sigma,I)=O(I),\quad\ \zeta( \sigma,I)=\sqrt{2I}\cos\sigma+O(I).\] \((vi)\) _The action variable \(I\) and the pressure \(P\) in (1.4) are related by the identity_ \[P(\Phi(\sigma,\beta,I))=\mathcal{K}(I)\quad\ \forall(\sigma,\beta,I)\in \mathbb{T}^{2}\times(0,I^{*}).\] _The pressure level and the action are in bijective correspondence; thus, the action is a reparametrization of the pressure. The frequencies \(\Omega_{1}(I),\Omega_{2}(I)\) could also be expressed in terms of the pressure level. The pressure \(\tilde{P}\) in (1.6) satisfies \(\tilde{P}(\Phi(\sigma,\beta,I))=W(\mathcal{K}(I))\)._ \((vii)\) The trajectory \(\{(x(t),y(t),z(t)):t\in\mathbb{R}\}\) of the solution (1.14) lies in the level set_ \[\mathcal{T}_{\ell}=\{(x,y,z)\in\mathbb{R}^{3}:P(x,y,z)=\ell\}\] _of the pressure, where the value \(\ell=P(x_{0},y_{0},z_{0})\) is determined by the initial datum (1.13). The level set \(\mathcal{T}_{\ell}\) has a nearly toroidal shape, because \(P(x,y,z)\) is given by (1.4), (1.5), and_ \[\alpha(\rho,z)=2(\rho-1)^{2}+2z^{2}+O((|\rho-1|+|z|)^{3})\] _around \((\rho,z)=(1,0)\). More precisely, \(\mathcal{T}_{\ell}\) is the diffeomorphic image_ \[\mathcal{T}_{\ell}=\Phi(\mathbb{T}^{2}\times\{I\})\] _of the 2-dimensional torus \(\mathbb{T}^{2}\times\{I\}=\{(\sigma,\beta,I):(\sigma,\beta)\in\mathbb{T}^{2}\}\), where \(I\) is determined by the identity_ \[\ell=\mathcal{K}(I). \tag{1.17}\] _The pressure level \(\ell\) determines whether the solution (1.14), (1.15) of the Cauchy problem (1.11) with initial datum (1.13) on the surface \(\mathcal{T}_{\ell}\) is periodic or quasi-periodic, depending on the rationality/irrationality of the ratio \(\Omega_{2}(I)/\Omega_{1}(I)\), where \(I\) and \(\ell\) are related by (1.17). Different solutions of system (1.11) lying on the same surface \(\mathcal{T}_{\ell}\), i.e., having the same pressure level, share the same frequencies \(\Omega_{1}(I),\Omega_{2}(I)\) and, therefore, the same frequency ratio \(\Omega_{2}(I)/\Omega_{1}(I)\)._ **Remark 1.2**.: Theorem 1.1 is stated for cut-off functions \(\omega\) supported in \([\varepsilon,2\varepsilon]\), like those in [7]; however, Theorem 1.1, as well as the result of [7], also holds for \(\omega\) supported in any interval \([\varepsilon_{1},\varepsilon_{2}]\) with \(0<\varepsilon_{1}<\varepsilon_{2}\leq\varepsilon_{0}\), without changing anything in the proof. **Remark 1.3**.: By (1.16), system (1.12) is equivalent to the integrable Hamiltonian system with two degrees of freedom in angle-action variables \[\dot{\sigma}=\partial_{I}\mathcal{H}^{+},\quad\dot{I}=-\partial_{\sigma} \mathcal{H}^{+},\quad\dot{\beta}=\partial_{K}\mathcal{H}^{+},\quad\dot{K}=- \partial_{\beta}\mathcal{H}^{+}\] with Hamiltonian \(\mathcal{H}^{+}(\sigma,I,\beta,K)=\mathcal{H}^{+}(I,K)=\mathcal{H}(I)+\Omega_ {2}(I)K\), restricted to the invariant set \(K=0\). Theorem 1.1 is proved in Sections 3 and 4, splitting the proof into several short simple steps. The proof uses only basic tools from the classical theory of odes and dynamical systems. 
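For completeness, here is the routine check behind Remark 1.3 (our sketch). With \(\mathcal{H}^{+}(I,K)=\mathcal{H}(I)+\Omega_{2}(I)K\), Hamilton's equations give \[\dot{K}=-\partial_{\beta}\mathcal{H}^{+}=0,\qquad\dot{I}=-\partial_{\sigma}\mathcal{H}^{+}=0,\qquad\dot{\beta}=\partial_{K}\mathcal{H}^{+}=\Omega_{2}(I),\qquad\dot{\sigma}=\partial_{I}\mathcal{H}^{+}=\mathcal{H}^{\prime}(I)+\Omega_{2}^{\prime}(I)K,\] so the set \(\{K=0\}\) is invariant, and on it \(\dot{\sigma}=\mathcal{H}^{\prime}(I)=\Omega_{1}(I)\) by (1.16) and (1.12); the restricted system is therefore exactly (1.12).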
### Related literature A general discussion about the existence of compactly supported smooth solutions of pdes, also in comparison with the result of Gavrilov for the steady Euler equations, can be found in the recent preprint [10]; in particular, for Navier-Stokes equations, see [9]. The nice, explicit construction of Gavrilov's original paper [7] has been revisited and generalized by Constantin, La and Vicol in [4]. The paper [4] also uses the result by Gavrilov as a building block to prove the existence of compactly supported solutions of the steady Euler equations that have a given Hölder regularity \(C^{\alpha}\) and are not in \(C^{\beta}\) for any \(\beta>\alpha\). The proof employs the invariances of the Euler equations and the fact that the sum of compactly supported solutions with disjoint supports is itself a solution. The result by Gavrilov has also been used recently by Enciso, Peralta-Salas and Torres de Lizaur in [6] to produce time-quasi-periodic solutions of the 3D Euler equations. In [6] the authors extend to the 3D case, and to the \(n\)-dimensional case for all \(n\) even, the construction by Crouseilles and Faou [5] of time-quasi-periodic solutions of the 2D Euler equations. Both [5] and [6] use in a clever way the compactly supported solutions of the steady equations as the main ingredients of the construction. The time-quasi-periodic solutions in [5] and [6] are not of kam type, that is, they are constructed outside the context of the Kolmogorov-Arnold-Moser perturbation theory of nearly integrable dynamical systems, where small divisor problems typically appear. Time-quasi-periodic solutions of the 3D Euler equations of kam type are obtained in [3] by Montalto and the author in the presence of a forcing term, using pseudo-differential calculus and techniques of kam theory for pdes. By Theorem 1.1, the domain \(\mathcal{S}^{*}\) is foliated by the 2-dimensional tori \(\mathcal{T}_{\ell}\), invariant for the vector field \(\tilde{U}\), on which the motion is periodic or quasi-periodic. This is true not only for the Gavrilov vector field, but for all steady 3D Euler flows under suitable assumptions, by a theorem of Arnold [1], [2]. Arnold's theorem, and its role in the study of the mixing property for Euler flows, is discussed by Khesin, Kuksin and Peralta-Salas in [8]. ## 2 Dual role of the parameter \(R\) in Gavrilov's solutions Two different roles are played simultaneously by the parameter \(R\) in Gavrilov's solutions, because \(R\) is * both a rescaling factor for the independent variable \((x,y,z)\in\mathbb{R}^{3}\), appearing as \(R^{-1}\) in the argument of \(\alpha\) in (1.5), * and an amplitude factor multiplying the vector fields \(U,\tilde{U}\) and the pressures \(P,\tilde{P}\), appearing as powers \(R^{3}\) and \(R^{4}\) in the definition of \(b\) and \(p\) in (1.5). Separating these two different scalings helps to clarify the role of the parameters in Gavrilov's construction. In fact, we observe that there exists a family of solutions described by two real parameters \((\lambda,\mu)\) such that the solutions of Section 1.1 are obtained in the special case \((\lambda,\mu)=(R,R^{2})\). This means that, regarding the parameter \(R\), Gavrilov's solutions form a 1-parameter subset of a 2-parameter family of solutions. Each element of the family is obtained from any other element of the family by two simple rescalings, which correspond to two basic invariances of the Euler equations, also preserving the localizability condition.
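Concretely, the two invariances are the amplitude rescaling \((U,P)\mapsto(\mu U,\mu^{2}P)\) and the space rescaling \((U,P)(\cdot)\mapsto(U,P)(\cdot/\lambda)\); a direct check (our sketch, anticipating the definitions below) shows that, for \(U_{\lambda,\mu}(x,y,z):=\mu U(x/\lambda,y/\lambda,z/\lambda)\) and \(P_{\lambda,\mu}(x,y,z):=\mu^{2}P(x/\lambda,y/\lambda,z/\lambda)\), \[U_{\lambda,\mu}\cdot\nabla U_{\lambda,\mu}+\nabla P_{\lambda,\mu}=\frac{\mu^{2}}{\lambda}\big(U\cdot\nabla U+\nabla P\big)\Big(\frac{x}{\lambda},\frac{y}{\lambda},\frac{z}{\lambda}\Big),\quad\operatorname{div}U_{\lambda,\mu}=\frac{\mu}{\lambda}(\operatorname{div}U)\Big(\frac{x}{\lambda},\frac{y}{\lambda},\frac{z}{\lambda}\Big),\quad U_{\lambda,\mu}\cdot\nabla P_{\lambda,\mu}=\frac{\mu^{3}}{\lambda}(U\cdot\nabla P)\Big(\frac{x}{\lambda},\frac{y}{\lambda},\frac{z}{\lambda}\Big),\] so (1.1) and (1.2) are preserved for every \(\lambda,\mu>0\).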
This allows us to study only one element of the family (in particular, a normalized one), obtaining directly a description of all the other elements. Given \(R>0\), let \[\mathcal{U}:=(\mathcal{C},\delta,\mathcal{N},U,P,\varepsilon,\omega,W,\tilde {U},\tilde{P},\mathcal{S},\tau,\mathcal{N}^{\prime},\mathcal{S}^{*}) \tag{2.1}\] denote the list of the elements (sets, constants, and functions) defined in Section 1.1. For every \(\lambda>0\) and \(\mu>0\), we define a rescaled version of each element of the list \(\mathcal{U}\) in the following way. We define \[\mathcal{A}_{\lambda,\mu}\mathcal{C} :=\{(x,y,z)\in\mathbb{R}^{3}:\rho=\lambda R,\ z=0\},\] \[\mathcal{A}_{\lambda,\mu}\delta :=\lambda\delta,\] \[\mathcal{A}_{\lambda,\mu}\mathcal{N} :=\{(x,y,z)\in\mathbb{R}^{3}:(\rho-\lambda R)^{2}+z^{2}<(\lambda \delta)^{2}\},\] \[(\mathcal{A}_{\lambda,\mu}U)(x,y,z) :=\mu U\Big{(}\frac{x}{\lambda},\frac{y}{\lambda},\frac{z}{\lambda }\Big{)}\quad\forall(x,y,z)\in\mathcal{A}_{\lambda,\mu}\mathcal{N},\] \[(\mathcal{A}_{\lambda,\mu}P)(x,y,z) :=\mu^{2}P\Big{(}\frac{x}{\lambda},\frac{y}{\lambda},\frac{z}{ \lambda}\Big{)}\quad\forall(x,y,z)\in\mathcal{A}_{\lambda,\mu}\mathcal{N},\] \[\mathcal{A}_{\lambda,\mu}\varepsilon :=\mu^{2}\varepsilon,\] \[(\mathcal{A}_{\lambda,\mu}\omega)(s) :=\omega\Big{(}\frac{s}{\mu^{2}}\Big{)}\quad\forall s\in\mathbb{R},\] \[(\mathcal{A}_{\lambda,\mu}W)(s) :=\mu^{2}W\Big{(}\frac{s}{\mu^{2}}\Big{)}\quad\forall s\in\mathbb{ R},\] \[(\mathcal{A}_{\lambda,\mu}\tilde{U})(x,y,z) :=\mu\tilde{U}\Big{(}\frac{x}{\lambda},\frac{y}{\lambda},\frac{z} {\lambda}\Big{)}\quad\forall(x,y,z)\in\mathbb{R}^{3},\] \[(\mathcal{A}_{\lambda,\mu}\tilde{P})(x,y,z) :=\mu^{2}\tilde{P}\Big{(}\frac{x}{\lambda},\frac{y}{\lambda},\frac {z}{\lambda}\Big{)},\quad\forall(x,y,z)\in\mathbb{R}^{3},\] \[\mathcal{A}_{\lambda,\mu}\mathcal{S} :=\{(x,y,z)\in\mathcal{A}_{\lambda,\mu}\mathcal{N}:\mathcal{A}_{ \lambda,\mu}\varepsilon<(\mathcal{A}_{\lambda,\mu}P)(x,y,z)<2\mathcal{A}_{ \lambda,\mu}\varepsilon\},\] \[\mathcal{A}_{\lambda,\mu}\tau :=\mu^{2}\tau,\] \[\mathcal{A}_{\lambda,\mu}\mathcal{N}^{\prime} :=\{(x,y,z)\in\mathbb{R}^{3}:(\rho-\lambda R)^{2}+z^{2}<(\lambda \delta/4)^{2}\},\] \[\mathcal{A}_{\lambda,\mu}\mathcal{S}^{*} :=\{(x,y,z)\in\mathcal{A}_{\lambda,\mu}\mathcal{N}:0<(\mathcal{A} _{\lambda,\mu}P)(x,y,z)<3\mathcal{A}_{\lambda,\mu}\varepsilon\}, \tag{2.2}\] where \(\rho=\sqrt{x^{2}+y^{2}}\). We denote by \(\mathcal{A}_{\lambda,\mu}\mathcal{U}:=(\mathcal{A}_{\lambda,\mu}\mathcal{C},\dots,\mathcal{A}_{\lambda,\mu}\mathcal{S}^{*})\) the list of the rescaled elements. The properties of \(\mathcal{U}\) in Section 1.1 become the following properties for \(\mathcal{A}_{\lambda,\mu}\mathcal{U}\). * The constant \(\mathcal{A}_{\lambda,\mu}\delta=\lambda\delta\) satisfies \(0<\lambda\delta<\lambda R\) and \(\lambda\delta\leq\lambda r_{0}R\). * The rescaled pressure \(\mathcal{A}_{\lambda,\mu}P\) is analytic in \(\mathcal{A}_{\lambda,\mu}\mathcal{N}\). * The rescaled vector field \(\mathcal{A}_{\lambda,\mu}U\) is continuous in \(\mathcal{A}_{\lambda,\mu}\mathcal{N}\) and analytic in \((\mathcal{A}_{\lambda,\mu}\mathcal{N})\setminus(\mathcal{A}_{\lambda,\mu} \mathcal{C})\). * The pair \((\mathcal{A}_{\lambda,\mu}U,\,\mathcal{A}_{\lambda,\mu}P)\) satisfies the Euler equations and the localizability condition in \((\mathcal{A}_{\lambda,\mu}\mathcal{N})\setminus(\mathcal{A}_{\lambda,\mu} \mathcal{C})\). 
* The function \(\mathcal{A}_{\lambda,\mu}\omega\) is \(C^{\infty}(\mathbb{R},\mathbb{R})\) with support contained in \([\mathcal{A}_{\lambda,\mu}\varepsilon,2\mathcal{A}_{\lambda,\mu}\varepsilon] =[\mu^{2}\varepsilon,2\mu^{2}\varepsilon]\). * The function \(\mathcal{A}_{\lambda,\mu}W\) satisfies \[(\mathcal{A}_{\lambda,\mu}W)(s)=\int_{0}^{s}(\mathcal{A}_{\lambda,\mu}\omega) ^{2}(\sigma)\,d\sigma\ \ \ \ \forall s\in\mathbb{R}.\] * The vector field \(\mathcal{A}_{\lambda,\mu}\tilde{U}\) satisfies \[(\mathcal{A}_{\lambda,\mu}\tilde{U})(x,y,z) =\mu\omega\Big{(}P\Big{(}\frac{x}{\lambda},\frac{y}{\lambda}, \frac{z}{\lambda}\Big{)}\Big{)}U\Big{(}\frac{x}{\lambda},\frac{y}{\lambda}, \frac{z}{\lambda}\Big{)}\] \[=(\mathcal{A}_{\lambda,\mu}\omega)\big{(}(\mathcal{A}_{\lambda,\mu }P)(x,y,z)\big{)}\cdot(\mathcal{A}_{\lambda,\mu}U)(x,y,z)\ \ \ \ \forall(x,y,z)\in\mathcal{A}_{\lambda,\mu} \mathcal{N}\] and \((\mathcal{A}_{\lambda,\mu}\tilde{U})(x,y,z)=0\) for all \((x,y,z)\in\mathbb{R}^{3}\setminus(\mathcal{A}_{\lambda,\mu}\mathcal{N})\). * The pressure \(\mathcal{A}_{\lambda,\mu}\tilde{P}\) satisfies \[(\mathcal{A}_{\lambda,\mu}\tilde{P})(x,y,z)=\mu^{2}W\Big{(}P\Big{(}\frac{x}{ \lambda},\frac{y}{\lambda},\frac{z}{\lambda}\Big{)}\Big{)}=(\mathcal{A}_{ \lambda,\mu}W)\big{(}(\mathcal{A}_{\lambda,\mu}P)(x,y,z)\big{)}\ \ \ \ \forall(x,y,z)\in\mathcal{A}_{\lambda,\mu} \mathcal{N}\] and \((\mathcal{A}_{\lambda,\mu}\tilde{P})(x,y,z)=\mu^{2}W(2\varepsilon)=( \mathcal{A}_{\lambda,\mu}W)(2\mathcal{A}_{\lambda,\mu}\varepsilon)\) for all \((x,y,z)\in\mathbb{R}^{3}\setminus(\mathcal{A}_{\lambda,\mu}\mathcal{N})\). * Both \(\mathcal{A}_{\lambda,\mu}\tilde{U}\) and \(\mathcal{A}_{\lambda,\mu}\tilde{P}\) are \(C^{\infty}(\mathbb{R}^{3})\) and satisfy the Euler equations and the localizability condition in \(\mathbb{R}^{3}\). * Both \(\mathcal{A}_{\lambda,\mu}\tilde{U}\) and \(\nabla(\mathcal{A}_{\lambda,\mu}\tilde{P})\) vanish outside the bounded set \(\mathcal{A}_{\lambda,\mu}\mathcal{S}\). * One has \(\mathcal{A}_{\lambda,\mu}P>\mathcal{A}_{\lambda,\mu}\tau\) in \((\mathcal{A}_{\lambda,\mu}\mathcal{N})\setminus(\mathcal{A}_{\lambda,\mu} \mathcal{N}^{\prime})\). * One has \(\mathcal{A}_{\lambda,\mu}\tau\geq 3\mathcal{A}_{\lambda,\mu}\varepsilon\). Thus, we have obtained the 2-parameter family \(\{\mathcal{A}_{\lambda,\mu}\mathcal{U}\}_{\lambda,\mu}\). One has the group property \[\mathcal{A}_{\lambda_{1},\mu_{1}}(\mathcal{A}_{\lambda_{2},\mu_{2}}\mathcal{U}) =\mathcal{A}_{\lambda_{1}\lambda_{2},\,\mu_{1}\mu_{2}}\mathcal{U},\ \ \ \ \mathcal{A}_{\lambda,\mu}(\mathcal{A}_{\frac{1}{\lambda},\frac{1}{\mu} \mathcal{U}})=\mathcal{A}_{1,1}\mathcal{U}=\mathcal{U} \tag{2.3}\] for all \(\lambda_{1},\lambda_{2},\lambda,\mu_{1},\mu_{2},\mu\in(0,\infty)\). The check of (2.3) is straightforward. **Lemma 2.1**.: _Given \(R>0\), let \(\mathcal{U}_{R}\) be the list (2.1) of the elements (sets, constants, and functions) defined in Section 1.1. Then_ \[\mathcal{U}_{R}=\mathcal{A}_{R,R^{2}}\,\mathcal{U}_{1}\] _where \(\mathcal{U}_{1}\) is a list of elements constructed in Section 1.1 for \(R=1\), and \(\mathcal{A}_{R,R^{2}}\) is the rescaling operator \(\mathcal{A}_{\lambda,\mu}\), defined in (2.2), at \(\lambda=R\), \(\mu=R^{2}\)._ Proof.: Let \(\mathcal{U}_{R}\) be the list (2.1) of elements constructed in Section 1.1 for a given \(R>0\). 
We observe that the list \(\mathcal{A}_{\lambda,\mu}\mathcal{U}_{R}\) with \(\lambda=1/R\) and \(\mu=1/R^{2}\) coincides with a list of elements that one obtains by choosing \(R=1\) in Section 1.1, which we call \(\mathcal{U}_{1}\). The check is elementary; for example, regarding the vector field \(U\) and the pressure \(P\), by (1.4) and (1.5) one has \[\frac{u_{\rho}(R\rho,Rz)}{R^{2}}=\frac{\partial_{z}\alpha(\rho,z)} {4\rho},\qquad\qquad\frac{u_{\varphi}(R\rho,Rz)}{R^{2}}=\frac{\sqrt{H(\alpha( \rho,z))}}{4\rho},\] \[\frac{u_{z}(R\rho,Rz)}{R^{2}}=-\frac{\partial_{\rho}\alpha(\rho,z )}{4\rho},\qquad\frac{P(Rx,Ry,Rz)}{R^{4}}=\frac{\alpha(\rho,z)}{4}.\] Then, by (2.3), \(\mathcal{U}_{R}=\mathcal{A}_{R,R^{2}}(\mathcal{A}_{\frac{1}{R},\frac{1}{R^{2} }}\,\mathcal{U}_{R})=\mathcal{A}_{R,R^{2}}\,\mathcal{U}_{1}\). Regarding the fluid particle system (1.11), the consequence of Lemma 2.1 is the following lemma, whose proof is trivial. **Lemma 2.2**.: _Let \(\tilde{U}_{R}=\mathcal{A}_{R,R^{2}}\tilde{U}_{1}\), where \(\tilde{U}_{R}\) is given by Section 1.1 for some \(R>0\) and \(\tilde{U}_{1}\) is given by Section 1.1 for \(R=1\). Then a function \((x(t),y(t),z(t))\) solves the fluid particle system (1.11) with velocity field \(\tilde{U}=\tilde{U}_{R}\) if and only if_ \[x(t)=Rx_{1}(Rt),\quad\ y(t)=Ry_{1}(Rt),\quad\ z(t)=Rz_{1}(Rt),\] _where \((x_{1}(t),y_{1}(t),z_{1}(t))\) solves_ \[(\dot{x}_{1}(t),\dot{y}_{1}(t),\dot{z}_{1}(t))=\tilde{U}_{1}(x_{1}(t),y_{1}(t ),z_{1}(t)).\] ## 3 Conjugation to a linear flow on \(\mathbb{T}^{2}\) In this section and in Section 4 we prove Theorem 1.1. Thus, assume that \(\mathcal{U}\) in (2.1) is given by Section 1.1 for \(R=1\). Hence, in particular, \[\mathcal{C} =\{(x,y,z)\in\mathbb{R}^{3}:\rho=1,\ z=0\},\] \[\mathcal{N} =\{(x,y,z)\in\mathbb{R}^{3}:(\rho-1)^{2}+z^{2}<\delta^{2}\},\] \[\mathcal{N}^{\prime} =\{(x,y,z)\in\mathbb{R}^{3}:(\rho-1)^{2}+z^{2}<(\delta/4)^{2}\}, \tag{3.1}\] where \(\rho=\sqrt{x^{2}+y^{2}}\). The constant \(\delta\) satisfies \(0<\delta<1\) and \(\delta\leq r_{0}\). The vector field \(U\) and the pressure \(P\) in \(\mathcal{N}\) are given by (1.4), (1.5), where \(a(\rho,z)=\alpha(\rho,z)\) and \[p(\rho,z)=\frac{\alpha(\rho,z)}{4},\quad\ b(\rho,z)=\frac{1}{4}\sqrt{H( \alpha(\rho,z))}. \tag{3.2}\] The constants \(\tau\) and \(\varepsilon\) satisfy (1.9) and \(\tau\geq 3\varepsilon\). The function \(\omega\) is supported in \([\varepsilon,2\varepsilon]\), and \(W\) is in (1.7). The vector field \(\tilde{U}\) and the pressure \(\tilde{P}\) are given by (1.6) in \(\mathcal{N}\), and \((\tilde{U},\tilde{P})=(0,W(2\varepsilon))\) in \(\mathbb{R}^{3}\setminus\mathcal{N}\). The sets \(\mathcal{S},\mathcal{S}^{*}\) are given by (1.8), (1.10). Outside the set \(\mathcal{S}\), the vector field \(\tilde{U}\) is zero, and the solutions of system (1.11) are constant in time. Hence, by the uniqueness property of the solution of Cauchy problems, any solution of (1.11) that is in \(\mathcal{S}\) at some time \(t_{0}\) remains in \(\mathcal{S}\) for its entire lifespan; in other words, \(\mathcal{S}\) is an invariant set for (1.11). Moreover \(\mathcal{S}\) is bounded, and therefore, by basic ode theory, the solutions of (1.11) in \(\mathcal{S}\) are all global in time. Trivially, any subset of \(\mathbb{R}^{3}\setminus\mathcal{S}\) is also an invariant set for (1.11). Hence any subset of \(\mathbb{R}^{3}\) containing \(\mathcal{S}\) is invariant for (1.11). In particular, \(\mathcal{S}^{*}\) and \(\mathcal{N}\) are invariant for (1.11). 
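Throughout Section 3 we repeatedly use the elementary change-of-variables rule for autonomous odes, recorded here as a sketch since it is applied, with different diffeomorphisms, in the next subsections: if \(\Phi\) is a diffeomorphism, \(F\) is a vector field, and \(\tilde{v}(t)=\Phi(v(t))\), then, by the chain rule, \(\tilde{v}(t)\) solves \(\dot{\tilde{v}}=F(\tilde{v})\) if and only if \[D\Phi(v(t))\,\dot{v}(t)=F(\Phi(v(t))),\quad\text{i.e.,}\quad\dot{v}(t)=(D\Phi(v(t)))^{-1}F(\Phi(v(t))),\] which is exactly how the transformed vector fields \(V_{1},V_{2},V_{3}\) of Sections 3.1-3.3 are defined.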
### Cylindrical coordinates To study system (1.11) in \(\mathcal{N}\), first of all we move on to cylindrical coordinates, that is, we consider the diffeomorphism \[\Phi_{1}:\mathcal{N}_{1}\to\mathcal{N},\quad\ \Phi_{1}(\rho, \varphi,z):=(\rho\cos\varphi,\,\rho\sin\varphi,\,z),\] \[\mathcal{N}_{1}:=\{(\rho,\varphi,z)\in\mathbb{R}\times\mathbb{T} \times\mathbb{R}:(\rho-1)^{2}+z^{2}<\delta^{2}\}, \tag{3.3}\] and the change of variables \(\tilde{v}=\Phi_{1}(v)\), with \(\tilde{v}=(x,y,z)\), \(v=(\rho,\varphi,z)\), i.e., \(x=\rho\cos\varphi\), \(y=\rho\sin\varphi\). Using cylindrical coordinates is a natural choice because the quantities \(u_{\rho},u_{\varphi},u_{z},p\) in (1.5) are already expressed in terms of \(\rho,z\). Now a function \(\tilde{v}(t)=\Phi_{1}(v(t))\) solves (1.11) in \(\mathcal{N}\) if and only if \(v(t)\) solves \[\dot{v}(t)=V_{1}(v(t)) \tag{3.4}\] in \(\mathcal{N}_{1}\), where \(V_{1}\) is the vector field \[V_{1}(v):=(D\Phi_{1}(v))^{-1}\tilde{U}(\Phi_{1}(v)),\ \ \ \ v=(\rho,\varphi,z)\in \mathcal{N}_{1}.\] The Jacobian matrix \(D\Phi_{1}(v)\) and its inverse matrix are \[D\Phi_{1}(v)=\begin{bmatrix}\cos\varphi&-\rho\sin\varphi&0\\ \sin\varphi&\rho\cos\varphi&0\\ 0&0&1\end{bmatrix},\ \ \ \ (D\Phi_{1}(v))^{-1}=\begin{bmatrix}\cos\varphi&\sin \varphi&0\\ -\frac{1}{\rho}\sin\varphi&\frac{1}{\rho}\cos\varphi&0\\ 0&0&1\end{bmatrix},\] the composition \(\tilde{U}(\Phi_{1}(v))\) is \[\tilde{U}(\Phi_{1}(v))=\omega(p(\rho,z))U(\Phi_{1}(v)),\ \ \ \ U(\Phi_{1}(v))= \begin{pmatrix}u_{\rho}(\rho,z)\cos\varphi-u_{\varphi}(\rho,z)\sin\varphi\\ u_{\rho}(\rho,z)\sin\varphi+u_{\varphi}(\rho,z)\cos\varphi\\ u_{z}(\rho,z)\end{pmatrix},\] and therefore \[V_{1}(\rho,\varphi,z) =\omega(p(\rho,z))\Big{(}u_{\rho}(\rho,z),\ \frac{u_{\varphi}(\rho,z)}{\rho},\ u_{z}(\rho,z)\Big{)}\] \[=\Big{(}\frac{\omega(p(\rho,z))\partial_{z}p(\rho,z)}{\rho},\ \frac{\omega(p(\rho,z))b(\rho,z)}{\rho^{2}},\ -\frac{\omega(p(\rho,z))\partial_{\rho}p(\rho,z)}{\rho}\Big{)}. \tag{3.5}\] Since \(\alpha\) and \(H\) are the functions constructed and studied in [7], it is convenient to express the other quantities in terms of them. Hence, we define \[\chi(s):=\frac{1}{4}\omega\Big{(}\frac{s}{4}\Big{)}\ \ \ \ \forall s\in\mathbb{R}, \tag{3.6}\] and, recalling (3.2), we rewrite (3.5) as \[V_{1}(\rho,\varphi,z)=\chi(\alpha(\rho,z))\Big{(}\frac{\partial_{z}\alpha( \rho,z)}{\rho},\ \frac{\sqrt{H(\alpha(\rho,z))}}{\rho^{2}},\ -\frac{\partial_{\rho}\alpha(\rho,z)}{\rho}\Big{)}. \tag{3.7}\] The curve \(\mathcal{C}\) and the sets \(\mathcal{S}^{*},\mathcal{N}^{\prime}\) become \[\mathcal{C}_{1}:=\Phi_{1}^{-1}(\mathcal{C})=\{(1,\varphi,0): \varphi\in\mathbb{T}\},\] \[\mathcal{S}_{1}^{*}:=\Phi_{1}^{-1}(\mathcal{S}^{*})=\{(\rho, \varphi,z)\in\mathcal{N}_{1}:0<\alpha(\rho,z)<4\tau\},\] \[\mathcal{N}_{1}^{\prime}:=\Phi_{1}^{-1}(\mathcal{N}^{\prime})=\{( \rho,\varphi,z)\in\mathcal{N}_{1}:(\rho-1)^{2}+z^{2}<(\delta/4)^{2}\}.\] By (1.9) and (3.2), one has \[\alpha(\rho,z)>4\tau\ \ \ \ \forall(\rho,z)\in\mathcal{N}_{1}\setminus \mathcal{N}_{1}^{\prime}. \tag{3.8}\] Moreover, the map \(\Phi_{1}\) is analytic in \(\mathcal{N}_{1}\). The vector field \(U\) satisfies the localizability condition (1.2) in \(\mathcal{N}\setminus\mathcal{C}\), and therefore \(\tilde{U}\cdot\nabla P=0\) in \(\mathcal{N}\). Hence the pressure \(P\) is a prime integral of system (1.11) in \(\mathcal{N}\). In cylindrical coordinates, this means that \(p(\rho,z)\), and therefore \(\alpha(\rho,z)\) too, are prime integrals of (3.4) in \(\mathcal{N}_{1}\).
This can also be verified directly: by (3.4) and (3.7), \[\frac{d}{dt}\big{\{}\alpha(\rho(t),z(t))\big{\}}=\partial_{\rho}\alpha(\rho,z) \dot{\rho}+\partial_{z}\alpha(\rho,z)\dot{z}=0.\] Hence every trajectory \(\{v(t)=(\rho(t),\varphi(t),z(t)):t\in\mathbb{R}\}\) of system (3.4) in \(\mathcal{N}_{1}\) lies in a level set \[\mathcal{P}_{c}:=\{(\rho,\varphi,z)\in\mathcal{N}_{1}:\alpha(\rho,z)=c\}. \tag{3.9}\] In particular, every trajectory of (3.4) in \(\mathcal{S}_{1}^{*}\) lies in a level set \(\mathcal{P}_{c}\) with \(0<c<4\tau\). In fact, the set \(\mathcal{S}_{1}^{*}\) is exactly the union of all the level sets \(\mathcal{P}_{c}\) with \(c\in(0,4\tau)\). ### Elimination of the factor \(1/\rho\) and canonical Hamiltonian structure We want to remove the factor \(1/\rho\) appearing the first and third component of \(V_{1}\) in (3.5). We consider the diffeomorphism \[\Phi_{2}:\mathcal{N}_{2}\to\mathcal{N}_{1},\ \ \ \ \Phi_{2}(\rho,\varphi,z)= \Big{(}\rho,\varphi,\frac{z}{\rho}\Big{)},\] \[\mathcal{N}_{2}:=\{(\rho,\varphi,z)\in\mathbb{R}\times\mathbb{T} \times\mathbb{R}:(\rho-1)^{2}+z^{2}\rho^{-2}<\delta^{2}\}, \tag{3.10}\] and the change of variables \(\tilde{v}=\Phi_{2}(v)\), with \(\tilde{v}=(\tilde{\rho},\tilde{\varphi},\tilde{z})\), \(v=(\rho,\varphi,z)\). A function \(\tilde{v}(t)=\Phi_{2}(v(t))\) solves (3.4) in \(\mathcal{N}_{1}\) if and only if \(v(t)\) solves \[\dot{v}(t)=V_{2}(v(t)) \tag{3.11}\] in \(\mathcal{N}_{2}\), where \[V_{2}(v):=(D\Phi_{2}(v))^{-1}V_{1}(\Phi_{2}(v)),\ \ \ \ v=(\rho,\varphi,z)\in \mathcal{N}_{2}. \tag{3.12}\] The Jacobian matrix \(D\Phi_{2}(v)\) and its inverse are \[D\Phi_{2}(v)=\begin{bmatrix}1&0&0\\ 0&1&0\\ -z\rho^{-2}&0&\rho^{-1}\end{bmatrix},\ \ \ \ (D\Phi_{2}(v))^{-1}=\begin{bmatrix}1&0&0\\ 0&1&0\\ z\rho^{-1}&0&\rho\end{bmatrix}.\] We define \[\alpha_{2}(\rho,z):=\alpha\Big{(}\rho,\frac{z}{\rho}\Big{)} \tag{3.13}\] for all \((\rho,z)\in\mathcal{D}_{2}\). Hence \[\partial_{z}\alpha_{2}(\rho,z)=(\partial_{z}\alpha)\Big{(}\rho,\frac{z}{\rho} \Big{)}\frac{1}{\rho},\ \ \ \ \partial_{\rho}\alpha_{2}(\rho,z)=(\partial_{\rho}\alpha)\Big{(}\rho,\frac{z}{ \rho}\Big{)}-(\partial_{z}\alpha)\Big{(}\rho,\frac{z}{\rho}\Big{)}\frac{z}{ \rho^{2}},\] and, recalling the second identity in (3.2), \[V_{1}(\Phi_{2}(\rho,\varphi,z))=\chi(\alpha_{2}(\rho,z))\Big{(}\partial_{z} \alpha_{2}(\rho,z),\,\frac{\sqrt{H(\alpha_{2}(\rho,z))}}{\rho^{2}},\,-\frac{ \partial_{\rho}\alpha_{2}(\rho,z)}{\rho}-\frac{z\partial_{z}\alpha_{2}(\rho,z )}{\rho^{2}}\Big{)}.\] Then the vector field \(V_{2}\) in (3.12) is \[V_{2}(\rho,\varphi,z)=\chi(\alpha_{2}(\rho,z))\Big{(}\partial_{z}\alpha_{2}( \rho,z),\,\frac{\sqrt{H(\alpha_{2}(\rho,z))}}{\rho^{2}},\,-\partial_{\rho} \alpha_{2}(\rho,z)\Big{)}.\] The sets \(\mathcal{C}_{1},\mathcal{S}_{1}^{*},\mathcal{N}_{1}^{\prime}\) become \[\mathcal{C}_{2} :=\Phi_{2}^{-1}(\mathcal{C}_{1})=\{(1,\varphi,0):\varphi\in\mathbb{ T}\}=\mathcal{C}_{1},\] \[\mathcal{S}_{2}^{*} :=\Phi_{2}^{-1}(\mathcal{S}_{1}^{*})=\{(\rho,\varphi,z)\in \mathcal{N}_{2}:0<\alpha_{2}(\rho,z)<4\tau\},\] \[\mathcal{N}_{2}^{\prime} :=\Phi_{2}^{-1}(\mathcal{N}_{1}^{\prime})=\{(\rho,\varphi,z)\in \mathcal{N}_{2}:(\rho-1)^{2}+z^{2}\rho^{-2}<(\delta/4)^{2}\}.\] Note that \(\Phi_{2}\) leaves \(\mathcal{C}_{2}=\mathcal{C}_{1}\) invariant, because \(z/\rho=z\) at \(\rho=1\). By (3.8), one has \[\alpha_{2}(\rho,z)>4\tau\ \ \ \ \forall(\rho,\varphi,z)\in\mathcal{N}_{2} \setminus\mathcal{N}_{2}^{\prime}. 
\tag{3.14}\] The level set \(\mathcal{P}_{c}\) of \(\alpha\) in (3.9) becomes the level set \[\mathcal{P}_{2,c}=\{(\rho,\varphi,z)\in\mathcal{N}_{2}:\alpha_{2}(\rho,z)=c\} \tag{3.15}\] of \(\alpha_{2}\). The set \(\mathcal{S}_{2}^{*}\) is the union of the level sets \(\mathcal{P}_{2,c}\) with \(c\in(0,4\tau)\). The map \(\Phi_{2}\) is analytic in \(\mathcal{N}_{2}\). The function \(\alpha_{2}(\rho,z)\) is a prime integral of system (3.11), and it is also analytic; its Taylor expansion around \((1,0)\) is in (4.14). The first and third equation of system (3.11) are the Hamiltonian system \[\dot{\rho}=\partial_{z}\mathcal{H}_{2}(\rho,z),\ \ \ \ \dot{z}=-\partial_{\rho} \mathcal{H}_{2}(\rho,z), \tag{3.16}\] where \[\mathcal{H}_{2}(\rho,z):=\Gamma(\alpha_{2}(\rho,z)),\ \ \ \ \Gamma(s):=\int_{0}^{s}\chi(\sigma)\,d\sigma. \tag{3.17}\] ### Symplectic polar coordinates in the radial-vertical plane Now we move on to polar coordinates (in their symplectic version) to describe the pair \((\rho,z)\). The pairs \((\rho,z)\) such that \((\rho,\varphi,z)\in\mathcal{N}_{2}\) do not form a disc; thus, it is convenient to consider a subset of \(\mathcal{N}_{2}\) that fits with polar coordinates better than how \(\mathcal{N}_{2}\) does. We consider the open sets \[\mathcal{B}_{2} :=\{(\rho,\varphi,z)\in\mathbb{R}\times\mathbb{T}\times\mathbb{R} :0<(\rho-1)^{2}+z^{2}<\delta_{2}^{2}\},\] \[\mathcal{B}_{2}^{\prime} :=\{(\rho,\varphi,z)\in\mathbb{R}\times\mathbb{T}\times\mathbb{R} :0<(\rho-1)^{2}+z^{2}<(\delta_{2}/2)^{2}\}, \tag{3.18}\] where \(\delta_{2}=C\delta\), \(C>0\), and we observe that \[(\mathcal{B}_{2}\setminus\mathcal{B}_{2}^{\prime})\subseteq( \mathcal{N}_{2}\setminus\mathcal{N}_{2}^{\prime}) \tag{3.19}\] if \(C(1+\delta)\leq 1\) and \(1+(\delta/4)\leq 2C\). This holds, for example, for \(C=2/3\) and all \(0<\delta\leq 1/2\). _Proof of (3.19)._ Let \((\rho,\varphi,z)\in\mathcal{B}_{2}\setminus\mathcal{B}_{2}^{\prime}\), where \(\delta_{2}=C\delta\), with \(C(1+\delta)\leq 1\) and \(1+(\delta/4)\leq 2C\). Since \((\rho,\varphi,z)\in\mathcal{B}_{2}\), one has \((\rho-1)^{2}<\delta_{2}^{2}\), whence \(\rho>1-\delta_{2}\). Moreover \(1-\delta_{2}=1-C\delta>0\) because \(C\delta\leq 1-C<1\). Hence \[(\rho-1)^{2}+\frac{z^{2}}{\rho^{2}}\leq(\rho-1)^{2}+\frac{z^{2}}{(1-\delta_{2 })^{2}}\leq\frac{(\rho-1)^{2}+z^{2}}{(1-\delta_{2})^{2}}<\frac{\delta_{2}^{2} }{(1-\delta_{2})^{2}}\leq\delta^{2},\] where the last inequality holds because \(C(1+\delta)\leq 1\). Hence \((\rho,\varphi,z)\in\mathcal{N}_{2}\). Now assume, by contradiction, that \((\rho,\varphi,z)\in\mathcal{N}_{2}^{\prime}\). Then \((\rho-1)^{2}<(\delta/4)^{2}\), whence \(\rho<1+(\delta/4)\), and \[(\rho-1)^{2}+z^{2}\leq\Big{(}(\rho-1)^{2}+\frac{z^{2}}{\rho^{2}} \Big{)}\Big{(}1+\frac{\delta}{4}\Big{)}^{2}<\frac{\delta^{2}}{16}\Big{(}1+ \frac{\delta}{4}\Big{)}^{2}\leq\frac{\delta_{2}^{2}}{4},\] where the last inequality holds because \(1+(\delta/4)\leq 2C\). Also, \(0<(\rho-1)^{2}+z^{2}\) because \((\rho,\varphi,z)\in\mathcal{B}_{2}\). Therefore \((\rho,\varphi,z)\in\mathcal{B}_{2}^{\prime}\), a contradiction. This proves that \((\rho,\varphi,z)\notin\mathcal{N}_{2}^{\prime}\). By (3.19) and (3.14), one has \[\alpha_{2}(\rho,z)>4\tau\ \ \ \ \forall(\rho,\varphi,z)\in\mathcal{B}_{2} \setminus\mathcal{B}_{2}^{\prime}. \tag{3.20}\] Thus \(\mathcal{S}_{2}^{*}\subseteq\mathcal{B}_{2}^{\prime}\subseteq\mathcal{B}_{2}\), and they are invariant sets for system (3.11). 
We consider the diffeomorphism \[\Phi_{3}:\mathcal{B}_{3}\to\mathcal{B}_{2},\ \ \ \ \Phi_{3}(\vartheta,\varphi,\xi)=(1+ \sqrt{2\xi}\sin\vartheta,\,\varphi,\,\sqrt{2\xi}\cos\vartheta),\ \ \ \ \mathcal{B}_{3}:=\mathbb{T}\times\mathbb{T}\times(0,\xi_{3}), \tag{3.21}\] where \(\xi_{3}:=\delta_{2}^{2}/2=2\delta^{2}/9\), and the change of variables \(\tilde{v}=\Phi_{3}(v)\), with \(\tilde{v}=(\tilde{\rho},\tilde{\varphi},\tilde{z})\), \(v=(\vartheta,\varphi,\xi)\). A function \(\tilde{v}(t)=\Phi_{3}(v(t))\) solves (3.11) in \(\mathcal{B}_{2}\) if and only if \(v(t)\) solves \[\dot{v}(t)=V_{3}(v(t)) \tag{3.22}\] in \(\mathcal{B}_{3}\), where \[V_{3}(v):=(D\Phi_{3}(v))^{-1}V_{2}(\Phi_{3}(v)),\ \ \ \ v=(\vartheta,\varphi,\xi)\in \mathcal{B}_{3}.\] The Jacobian matrix \(D\Phi_{3}(v)\) and its inverse are \[D\Phi_{3}(v)=\begin{bmatrix}\sqrt{2\xi}\cos\vartheta&0&\frac{1}{\sqrt{2\xi}} \sin\vartheta\\ 0&1&0\\ -\sqrt{2\xi}\sin\vartheta&0&\frac{1}{\sqrt{2\xi}}\cos\vartheta\end{bmatrix}, \ \ \ (D\Phi_{3}(v))^{-1}=\begin{bmatrix}\frac{1}{\sqrt{2\xi}}\cos\vartheta&0&-\frac{1}{ \sqrt{2\xi}}\sin\vartheta\\ 0&1&0\\ \sqrt{2\xi}\sin\vartheta&0&\sqrt{2\xi}\cos\vartheta\end{bmatrix}.\] We define \[\alpha_{3}(\vartheta,\xi):=\alpha_{2}\big{(}1+\sqrt{2\xi}\sin \vartheta,\sqrt{2\xi}\cos\vartheta\big{)}, \tag{3.23}\] for all \((\vartheta,\xi)\in\mathbb{T}\times[0,\xi_{3})\). In fact, in the set \(\mathcal{B}_{3}\), \(\xi\) varies in the interval \((0,\xi_{3})\), but it is convenient to consider \(\alpha_{3}\) also for \(\xi=0\); one has \(\alpha_{3}(\vartheta,0)=\alpha_{2}(1,0)=\alpha(1,0)=0\) (see (3.13)). From (3.23) it follows that \[\partial_{\vartheta}\alpha_{3}(\vartheta,\xi) =(\partial_{\rho}\alpha_{2})(\rho,z)\sqrt{2\xi}\cos\vartheta-( \partial_{z}\alpha_{2})(\rho,z)\sqrt{2\xi}\sin\vartheta,\] \[\partial_{\xi}\alpha_{3}(\vartheta,\xi) =(\partial_{\rho}\alpha_{2})(\rho,z)\frac{1}{\sqrt{2\xi}}\sin \vartheta+(\partial_{z}\alpha_{2})(\rho,z)\frac{1}{\sqrt{2\xi}}\cos\vartheta \tag{3.24}\] for all \((\vartheta,\xi)\in\mathbb{T}\times(0,\xi_{3})\), where \((\rho,z)=(1+\sqrt{2\xi}\sin\vartheta,\sqrt{2\xi}\cos\vartheta)\). Hence \[V_{3}(\vartheta,\varphi,\xi)=\chi(\alpha_{3}(\vartheta,\xi))\Big{(}\partial_{ \xi}\alpha_{3}(\vartheta,\xi),\ \frac{\sqrt{H(\alpha_{3}(\vartheta,\xi))}}{(1+\sqrt{2\xi}\sin \vartheta)^{2}},\ -\partial_{\vartheta}\alpha_{3}(\vartheta,\xi)\Big{)}. \tag{3.25}\] The set \(\mathcal{C}_{2}\) is out of \(\mathcal{B}_{2}\); the sets \(\mathcal{S}_{2}^{*}\), \(\mathcal{B}_{2}^{\prime}\) become \[\mathcal{S}_{3}^{*} :=\Phi_{3}^{-1}(\mathcal{S}_{2}^{*})=\{(\vartheta,\varphi,\xi) \in\mathcal{B}_{3}:0<\alpha_{3}(\vartheta,\xi)<4\tau\},\] \[\mathcal{B}_{3}^{\prime} :=\Phi_{3}^{-1}(\mathcal{B}_{2}^{\prime})=\mathbb{T}\times \mathbb{T}\times(0,\xi_{3}/4).\] By (3.20), one has \[\alpha_{3}(\vartheta,\xi)>4\tau\ \ \ \ \forall(\vartheta,\varphi,\xi)\in \mathcal{B}_{3}\setminus\mathcal{B}_{3}^{\prime}=\mathbb{T}\times\mathbb{T} \times[\xi_{3}/4,\xi_{3}). \tag{3.26}\] Thus \(\mathcal{S}_{3}^{*}\subset\mathcal{B}_{3}^{\prime}\). The level set \(\mathcal{P}_{2,c}\) of \(\alpha_{2}\) in (3.15) becomes the level set \[\mathcal{P}_{3,c}=\{(\vartheta,\varphi,\xi)\in\mathcal{B}_{3}:\alpha_{3}( \vartheta,\xi)=c\}\] of \(\alpha_{3}\). The set \(\mathcal{S}_{3}^{*}\) is the union of the level sets \(\mathcal{P}_{3,c}\) with \(c\in(0,4\tau)\). The map \(\Phi_{3}\) is analytic in \(\mathcal{B}_{3}\). 
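As anticipated by Remark 3.1(i) below, the \((\vartheta,\xi)\mapsto(\rho,z)\) part of \(\Phi_{3}\) is symplectic; a one-line check (our sketch), using the entries of \(D\Phi_{3}(v)\) displayed above, gives \[d\rho\wedge dz=\big(\partial_{\vartheta}\rho\,\partial_{\xi}z-\partial_{\xi}\rho\,\partial_{\vartheta}z\big)\,d\vartheta\wedge d\xi=\Big(\sqrt{2\xi}\cos\vartheta\,\frac{\cos\vartheta}{\sqrt{2\xi}}+\frac{\sin\vartheta}{\sqrt{2\xi}}\,\sqrt{2\xi}\sin\vartheta\Big)\,d\vartheta\wedge d\xi=d\vartheta\wedge d\xi,\] which is why the canonical Hamiltonian form of the \((\rho,z)\) equations (3.16) is preserved in the \((\vartheta,\xi)\) variables (see (3.27) below).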
The function \(\alpha_{3}(\vartheta,\xi)\) is a prime integral of system (3.22); its behavior near \(\xi=0\) is studied in Sections 3.4 and 4.3. The first and third equation of system (3.22) are the Hamiltonian system \[\dot{\vartheta}=\partial_{\xi}\mathcal{H}_{3}(\vartheta,\xi),\ \ \ \ \dot{\xi}=-\partial_{\vartheta}\mathcal{H}_{3}(\vartheta,\xi), \tag{3.27}\] where \[\mathcal{H}_{3}(\vartheta,\xi):=\mathcal{H}_{2}(1+\sqrt{2\xi}\sin\vartheta, \sqrt{2\xi}\cos\vartheta)=\Gamma(\alpha_{3}(\vartheta,\xi)), \tag{3.28}\] where \(\mathcal{H}_{2}\) and \(\Gamma\) are defined in (3.17). Moreover \(\vartheta\) (as well as \(\varphi\)) is an angle variable, i.e., it varies in \(\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}\). **Remark 3.1**.: \((i)\) We use the symplectic transformation \((\rho,z)=(1+\sqrt{2\xi}\sin\vartheta,\sqrt{2\xi}\cos\vartheta)\), instead of the simpler polar coordinates \((\rho,z)=(1+r\cos\vartheta,r\sin\vartheta)\) or \((\rho,z)=(1+r\sin\vartheta,r\cos\vartheta)\), in order to preserve the canonical Hamiltonian structure of system (3.16). \((ii)\) Using \(\cos\vartheta\) for \(z\) and \(\sin\vartheta\) for \(\rho\) in the definition (3.21) of \(\Phi_{3}\), instead of vice versa, is just a matter of convenience: in this way, we get a positive angular velocity for the solution \(\vartheta(t)\). ### Level sets Since the square root function \(\xi\mapsto\sqrt{\xi}\) is analytic in \((0,\infty)\), the function \(\alpha_{3}(\vartheta,\xi)\) defined in (3.23) is analytic in \((\vartheta,\xi)\in\mathbb{T}\times(0,\xi_{3})\). By the Taylor expansion (4.14) of \(\alpha_{2}(\rho,z)\), by (3.23) and (3.24), one has \[\alpha_{3}(\vartheta,\xi)=4\xi+O(\xi^{\frac{3}{2}}),\ \ \ \ \partial_{\xi}\alpha_{3}( \vartheta,\xi)=4+O(\xi^{\frac{1}{2}})\ \ \ \ \text{as }\xi\to 0,\ \xi>0, \tag{3.29}\] uniformly in \(\vartheta\in\mathbb{T}\). Note that \(\alpha_{3}(\vartheta,\xi)\) is not analytic around \(\xi=0\), namely \(\alpha_{3}(\vartheta,\xi)\), as a function of \(\xi\), is not a power series of \(\xi\) centered at zero; in fact, (3.29) is deduced from the analyticity of \(\alpha_{2}(\rho,z)\) (and of its partial derivatives \(\partial_{\rho}\alpha_{2}(\rho,z)\), \(\partial_{z}\alpha_{2}(\rho,z)\)) around \((1,0)\), and then from the evaluation at \((\rho,z)=(1+\sqrt{2\xi}\sin\vartheta,\sqrt{2\xi}\cos\vartheta)\). See Section 4.3 for more details. Moreover, \(\alpha_{3}(\vartheta,0)=\alpha_{2}(1,0)=0\), and, by (3.29), \[\partial_{\xi}\alpha_{3}(\vartheta,0)=\lim_{\xi\to 0^{+}}\frac{\alpha_{3}( \vartheta,\xi)-\alpha_{3}(\vartheta,0)}{\xi}=4=\lim_{\xi\to 0^{+}}\partial_{\xi} \alpha_{3}(\vartheta,\xi). \tag{3.30}\] Thus \(\alpha_{3}\) is \(C^{1}\) in \(\mathbb{T}\times[0,\xi_{3})\). Taking \(\xi_{3}\) smaller (i.e., \(\delta\) smaller) if necessary, we have \(\partial_{\xi}\alpha_{3}(\vartheta,\xi)>0\) for all \((\vartheta,\xi)\in\mathbb{T}\times[0,\xi_{3})\). Hence the function \(\xi\mapsto\alpha_{3}(\vartheta,\xi)\) is strictly increasing on \([0,\xi_{3})\). Moreover \(\alpha_{3}(\vartheta,0)=0\) and, by (3.26), \(\alpha_{3}(\vartheta,\xi_{3}/4)>4\tau\) for all \(\vartheta\in\mathbb{T}\). As a consequence, for every \(c\in[0,4\tau]\), for every \(\vartheta\in\mathbb{T}\), there exists a unique \(\xi\in[0,\xi_{3}/4)\) such that \(\alpha_{3}(\vartheta,\xi)=c\). We denote \(\gamma_{c}(\vartheta)\) the unique solution \(\xi\) of the equation \(\alpha_{3}(\vartheta,\xi)=c\). Thus, \[\alpha_{3}(\vartheta,\gamma_{c}(\vartheta))=c\ \ \ \ \forall(\vartheta,c)\in \mathbb{T}\times[0,4\tau]. 
\tag{3.31}\] Moreover, \(\gamma_{0}(\vartheta)=0\). Since \(\alpha_{3}\) is analytic in \((\vartheta,\xi)\in\mathbb{T}\times(0,\xi_{3})\), by the implicit function theorem, the function \(\gamma_{c}(\vartheta)\) is analytic in \((\vartheta,c)\in\mathbb{T}\times(0,4\tau)\). The behavior of \(\gamma_{c}(\vartheta)\) around \(c=0\) is studied in Section 4.3. From (3.31), \[(\partial_{\vartheta}\alpha_{3})(\vartheta,\gamma_{c}(\vartheta)) +(\partial_{\xi}\alpha_{3})(\vartheta,\gamma_{c}(\vartheta)) \partial_{\vartheta}\gamma_{c}(\vartheta)=0, \tag{3.32}\] \[(\partial_{\xi}\alpha_{3})(\vartheta,\gamma_{c}(\vartheta)) \partial_{c}\gamma_{c}(\vartheta)=1 \tag{3.33}\] for all \((\vartheta,c)\in\mathbb{T}\times(0,4\tau)\). Since \(\partial_{\xi}\alpha_{3}(\vartheta,\xi)\) is positive for all \((\vartheta,\xi)\in\mathbb{T}\times[0,\xi_{3})\), by (3.33) it follows that \[\partial_{c}\gamma_{c}(\vartheta)>0\ \ \ \ \forall(\vartheta,c)\in\mathbb{T} \times(0,4\tau). \tag{3.34}\] Thus, for all \(c\in[0,4\tau]\), the level set \(\mathcal{P}_{3,c}\) is globally described as the graph \[\mathcal{P}_{3,c}=\{(\vartheta,\varphi,\xi)\in\mathbb{T}\times\mathbb{T} \times[0,\xi_{3}):\xi=\gamma_{c}(\vartheta)\}=\{(\vartheta,\varphi,\gamma_{c}( \vartheta)):(\vartheta,\varphi)\in\mathbb{T}^{2}\}. \tag{3.35}\] ### Integration of the system in the radial-vertical plane We consider the Cauchy problem of system (3.22) with initial datum \((\vartheta_{0},\varphi_{0},\xi_{0})\in\mathcal{S}_{3}^{*}\) at the initial time \(t=0\). The components \(\vartheta_{0}\) and \(\xi_{0}\) of the initial datum determine the level \(c=\alpha_{3}(\vartheta_{0},\xi_{0})\); then the solution \((\vartheta(t),\varphi(t),\xi(t))\) of the Cauchy problem is in the level set \(\mathcal{P}_{3,c}\) for all \(t\in\mathbb{R}\), and, by (3.35), \(\xi(t)=\gamma_{c}(\vartheta(t))\) for all \(t\in\mathbb{R}\). The first and third equations of system (3.22) are the Hamiltonian system (3.27), i.e., \[\dot{\vartheta}=\chi(\alpha_{3}(\vartheta,\xi))\partial_{\xi}\alpha_{3}( \vartheta,\xi),\ \ \ \ \dot{\xi}=-\chi(\alpha_{3}(\vartheta,\xi))\partial_{\vartheta}\alpha_{3}( \vartheta,\xi). \tag{3.36}\] Since \(\xi(t)=\gamma_{c}(\vartheta(t))\), (3.36) becomes \[\dot{\vartheta}=\chi(c)(\partial_{\xi}\alpha_{3})(\vartheta,\gamma_{c}( \vartheta)),\ \ \ \ \partial_{\vartheta}\gamma_{c}(\vartheta)\dot{\vartheta}=-\chi(c)(\partial_{ \vartheta}\alpha_{3})(\vartheta,\gamma_{c}(\vartheta)). \tag{3.37}\] By (3.32), the second equation in (3.37) is equal to the first equation in (3.37) multiplied by \(\partial_{\vartheta}\gamma_{c}(\vartheta)\), and therefore system (3.37) is equivalent to its first equation alone. Moreover, by (3.33), the first equation in (3.37) is also \[\dot{\vartheta}=\frac{\chi(c)}{\partial_{c}\gamma_{c}(\vartheta)}, \tag{3.38}\] where the denominator is nonzero by (3.33) or (3.34). Equation (3.38) with initial datum \(\vartheta(0)=\vartheta_{0}\in\mathbb{T}\) is an autonomous Cauchy problem for the function \(\vartheta(t)\) taking values in \(\mathbb{T}\), and it can be integrated by basic calculus. The function \(\gamma_{c}(\vartheta)\) and its partial derivative \(\partial_{c}\gamma_{c}(\vartheta)\) are defined for \(\vartheta\in\mathbb{T}\), and hence they can also be considered as functions defined for \(\vartheta\in\mathbb{R}\) that are \(2\pi\)-periodic in \(\vartheta\). 
Thus, we first solve (3.38) considered as an equation for a function \(\vartheta^{r}(t)\) taking values in \(\mathbb{R}\); then the equivalence class of \(\vartheta^{r}(t)\) mod \(2\pi\) will be a function \(\vartheta(t)\) taking values in \(\mathbb{T}\) and solving (3.38). For all \(c\in(0,4\tau)\), we define \[F_{c}:\mathbb{R}\to\mathbb{R},\ \ \ \ F_{c}(\vartheta):=\int_{0}^{\vartheta}\partial_{c}\gamma_{c}(\sigma)\,d\sigma. \tag{3.39}\] By (3.34), \(F_{c}\) is a diffeomorphism of \(\mathbb{R}\). Let \(\vartheta_{0}^{r}\in\mathbb{R}\) be a representative of the equivalence class \(\vartheta_{0}\in\mathbb{T}\). If \(\vartheta^{r}:\mathbb{R}\to\mathbb{R}\), \(t\mapsto\vartheta^{r}(t)\) solves (3.38) with \(\vartheta^{r}(0)=\vartheta_{0}^{r}\), then \[\frac{d}{dt}\{F_{c}(\vartheta^{r}(t))\}=F_{c}^{\prime}(\vartheta^{r}(t))\dot{\vartheta}^{r}(t)=\chi(c),\ \ \ \ F_{c}(\vartheta^{r}(t))=F_{c}(\vartheta_{0}^{r})+\chi(c)t, \tag{3.40}\] and \[\vartheta^{r}(t)=F_{c}^{-1}\big{(}F_{c}(\vartheta_{0}^{r})+\chi(c)t\big{)}. \tag{3.41}\] Hence \(\vartheta^{r}(t)\) in (3.41) is the unique solution of equation (3.38) with initial datum \(\vartheta^{r}(0)=\vartheta_{0}^{r}\). Now let \(\vartheta(t)\) be the equivalence class mod \(2\pi\) of \(\vartheta^{r}(t)\), i.e., \[\vartheta(t):=\{\vartheta^{r}(t)+2k\pi:k\in\mathbb{Z}\}, \tag{3.42}\] for all \(t\in\mathbb{R}\). Then the function \(\vartheta(t)\) in (3.42) is a function of time taking values in \(\mathbb{T}\), and it solves (3.38) with \(\vartheta(0)=\vartheta_{0}\). ### Rotation period in the radial-vertical plane We show that the function \(\vartheta(t)\) in (3.42) is periodic, and we calculate its period. We begin by observing that the function \(F_{c}\) defined in (3.39) satisfies \[F_{c}(2\pi+\vartheta)=F_{c}(2\pi)+F_{c}(\vartheta)\ \ \ \ \forall\vartheta\in\mathbb{R}. \tag{3.43}\] _Proof of (3.43)_. Consider the definition (3.39), and split the integral over \([0,2\pi+\vartheta]\) into the sum of \((i)\) the integral over \([0,2\pi]\) and \((ii)\) the integral over \([2\pi,2\pi+\vartheta]\); then \((i)=F_{c}(2\pi)\) by definition, while \((ii)=F_{c}(\vartheta)\) because the function \(\partial_{c}\gamma_{c}(\sigma)\) is \(2\pi\)-periodic in \(\sigma\) and therefore \((ii)\) is equal to the integral over \([0,\vartheta]\). Suppose that \(\chi(c)\) is nonzero. Then the function \(\vartheta^{r}(t)\) defined in (3.41) satisfies \[\vartheta^{r}(t+T_{c})=\vartheta^{r}(t)+2\pi\ \ \ \forall t\in\mathbb{R},\ \ \ \ \ \ \ \ \ \ \ \ T_{c}:=\frac{F_{c}(2\pi)}{\chi(c)}. \tag{3.44}\] _Proof of (3.44)_. Applying (3.40) twice, one has \[F_{c}(\vartheta^{r}(t+T_{c}))=F_{c}(\vartheta_{0}^{r})+\chi(c)(t+T_{c})=F_{c}(\vartheta^{r}(t))+\chi(c)T_{c}.\] By the definition of \(T_{c}\) in (3.44) and by identity (3.43), \[F_{c}(\vartheta^{r}(t))+\chi(c)T_{c}=F_{c}(\vartheta^{r}(t))+F_{c}(2\pi)=F_{c}(\vartheta^{r}(t)+2\pi).\] Hence \(F_{c}(\vartheta^{r}(t+T_{c}))=F_{c}(\vartheta^{r}(t)+2\pi)\), and, since \(F_{c}\) is invertible, we obtain (3.44). From (3.44) it follows that \(\vartheta(t)\) defined in (3.42) is periodic in time with period \(T_{c}\).
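As a rough illustration of the size of this period (not needed in the sequel): by (3.33) and (3.30), \(\partial_{c}\gamma_{c}(\vartheta)=1/\partial_{\xi}\alpha_{3}(\vartheta,\gamma_{c}(\vartheta))\to 1/4\) as \(c\to 0^{+}\), uniformly in \(\vartheta\), so that for small levels \(c\) \[F_{c}(2\pi)=\int_{0}^{2\pi}\partial_{c}\gamma_{c}(\sigma)\,d\sigma\approx\frac{\pi}{2},\qquad T_{c}=\frac{F_{c}(2\pi)}{\chi(c)}\approx\frac{\pi}{2\,\chi(c)},\] in agreement with the expansion \(h_{1}^{\prime}(c)=F_{c}(2\pi)/2\pi=\frac{1}{4}+O(c^{2})\) obtained in (4.50) below.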
### Angle-action variables in the radial-vertical plane The Hamiltonian system (3.27) has one degree of freedom, and therefore, by the classical theory of Hamiltonian systems, it is completely integrable both in the sense that the differential equations can be solved by quadrature (i.e., they can be transformed into a problem of finding primitives of functions), and in the sense that they admit "angle-action variables" (this is the one-dimensional, simplest case of the Liouville-Arnold theorem). System (3.27) has been integrated in Section 3.5; now we construct its angle-action variables. Given a point \((\vartheta^{*},\varphi^{*},\xi^{*})\in\mathcal{S}_{3}^{*}\), we calculate its level \(c=\alpha_{3}(\vartheta^{*},\xi^{*})\), which is in the interval \((0,12\varepsilon)\), and we consider the Cauchy problem \[\dot{\vartheta}=\frac{1}{\partial_{c}\gamma_{c}(\vartheta)},\ \ \ \ \vartheta(0)=0. \tag{3.45}\] The first equation in (3.45) is equation (3.38) in which \(\chi(c)\) is replaced by \(1\). Hence, following Section 3.5 with \(1\) instead of \(\chi(c)\), the \(\mathbb{R}\)-valued solution of (3.45) is given by (3.41) in which \(\chi(c)\) is replaced by \(1\) (note that the functions \(\alpha_{3}(\vartheta,\xi)\), \(\gamma_{c}(\vartheta)\), \(F_{c}(\vartheta)\) are all independent of \(\chi(c)\)). We denote by \(\vartheta_{c}^{r}(t)\) the \(\mathbb{R}\)-valued solution of (3.45). Thus, since \(F_{c}(0)=0\), \[\vartheta_{c}^{r}(t)=F_{c}^{-1}(t)\quad\ \forall t\in\mathbb{R}. \tag{3.46}\] Moreover, following Section 3.6 with \(1\) instead of \(\chi(c)\), one has \[\vartheta_{c}^{r}(t+T_{c}^{*})=\vartheta_{c}^{r}(t)+2\pi\quad\forall t\in\mathbb{R},\qquad\qquad T_{c}^{*}:=F_{c}(2\pi). \tag{3.47}\] We fix the representative \(\vartheta_{1}^{*}\) of the class \(\vartheta^{*}\in\mathbb{T}\) such that \(\vartheta_{1}^{*}\in[0,2\pi)\), and we take the unique real number \(s_{1}^{*}\in[0,T_{c}^{*})\) such that \(\vartheta_{c}^{r}(s_{1}^{*})=\vartheta_{1}^{*}\), i.e., by (3.46), \(s_{1}^{*}=F_{c}(\vartheta_{1}^{*})\). Then we define \(\sigma_{1}^{*}=s_{1}^{*}2\pi/T_{c}^{*}\), and we note that \(\sigma_{1}^{*}\in[0,2\pi)\) because \(s_{1}^{*}\in[0,T_{c}^{*})\). By the definition of \(\sigma_{1}^{*},s_{1}^{*},T_{c}^{*}\), one has \(\sigma_{1}^{*}=f_{c}(\vartheta_{1}^{*})\), where \[f_{c}:\mathbb{R}\to\mathbb{R},\quad\ f_{c}(\vartheta):=\frac{F_{c}(\vartheta)2\pi}{F_{c}(2\pi)}\quad\forall\vartheta\in\mathbb{R}. \tag{3.48}\] Also, \(\vartheta_{1}^{*}=\vartheta_{c}^{r}(s_{1}^{*})\) and \(s_{1}^{*}=\sigma_{1}^{*}T_{c}^{*}/2\pi\), whence \(\vartheta_{1}^{*}=g_{c}(\sigma_{1}^{*})\), where \[g_{c}:\mathbb{R}\to\mathbb{R},\quad\ g_{c}(\sigma):=\vartheta_{c}^{r}\Big{(}\frac{\sigma T_{c}^{*}}{2\pi}\Big{)}\quad\forall\sigma\in\mathbb{R}. \tag{3.49}\] The function \(f_{c}\) is a diffeomorphism of \(\mathbb{R}\) because it is a multiple of \(F_{c}\), and it satisfies \[f_{c}(\vartheta+2\pi)=f_{c}(\vartheta)+2\pi\quad\ \forall\vartheta\in\mathbb{R} \tag{3.50}\] because \(F_{c}\) satisfies (3.43). The function \(g_{c}\) also satisfies \[g_{c}(\sigma+2\pi)=g_{c}(\sigma)+2\pi\quad\ \forall\sigma\in\mathbb{R} \tag{3.51}\] because \(\vartheta_{c}^{r}\) satisfies (3.47). By (3.46), (3.48), (3.49), \(g_{c}(f_{c}(\vartheta))=\vartheta\) for all \(\vartheta\in\mathbb{R}\) and \(f_{c}(g_{c}(\sigma))=\sigma\) for all \(\sigma\in\mathbb{R}\), i.e., \(g_{c}\) is the inverse diffeomorphism of \(f_{c}\).
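For completeness, the chain behind the last assertion is the one-line computation \[g_{c}(f_{c}(\vartheta))\overset{(3.49)}{=}\vartheta_{c}^{r}\Big(\frac{f_{c}(\vartheta)\,T_{c}^{*}}{2\pi}\Big)\overset{(3.48),(3.47)}{=}\vartheta_{c}^{r}\big(F_{c}(\vartheta)\big)\overset{(3.46)}{=}F_{c}^{-1}\big(F_{c}(\vartheta)\big)=\vartheta,\] and, in the same way, \(f_{c}(g_{c}(\sigma))=f_{c}\big(F_{c}^{-1}(\sigma T_{c}^{*}/2\pi)\big)=\sigma\) by (3.46), (3.48) and \(T_{c}^{*}=F_{c}(2\pi)\).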
By (3.50) and (3.51), \(f_{c}\) and \(g_{c}\) induce diffeomorphisms of the torus \[f_{c}:\mathbb{T}\to\mathbb{T},\quad\ g_{c}:\mathbb{T}\to\mathbb{T},\quad\ g_{c}(f_{c}( \vartheta))=\vartheta\quad\forall\vartheta\in\mathbb{T},\quad\ f_{c}(g_{c}( \sigma))=\sigma\quad\forall\sigma\in\mathbb{T}\] (with a common little abuse, we use the same notation for the diffeomorphisms of \(\mathbb{R}\) and for the corresponding induced diffeomorphism of \(\mathbb{T}\)). Now \(\sigma_{1}^{*}\) and \(\vartheta_{1}^{*}\) are related by the identities \(\vartheta_{1}^{*}=g_{c}(\sigma_{1}^{*})\), \(\sigma_{1}^{*}=f_{c}(\vartheta_{1}^{*})\). By the definition of \(\vartheta_{1}^{*}\), \(\vartheta^{*}\in\mathbb{T}\) is the equivalence class of \(\vartheta_{1}^{*}\) mod \(2\pi\); let \(\sigma^{*}\in\mathbb{T}\) be the equivalence class of \(\sigma_{1}^{*}\) mod \(2\pi\). Then \[\vartheta^{*}=g_{c}(\sigma^{*}),\quad\ \sigma^{*}=f_{c}(\vartheta^{*}),\quad\ \vartheta^{*},\sigma^{*}\in\mathbb{T}.\] By (3.31), \(\xi^{*}=\gamma_{c}(\vartheta^{*})\). Thus, we have expressed \((\vartheta^{*},\xi^{*})\) in terms of \((\sigma^{*},c)\). From now on, we write \(\sigma\) instead of \(\sigma^{*}\). Now we introduce a variable \(I\) and a function \(h\) to express the level \(c\) in terms of \(I\), i.e., \(c=h(I)\); the function \(h\) is a diffeomorphism (to be determined) of some interval \((0,I^{*})\) (to be determined) onto the interval \((0,4\tau)\) to which \(c\) belongs. We consider the map \[\Phi_{4}(\sigma,\varphi,I):=(g_{c}(\sigma),\varphi,\gamma_{c}(g_{c}(\sigma)))|_{ c=h(I)}=(g_{h(I)}(\sigma),\,\varphi,\,\gamma_{h(I)}(g_{h(I)}(\sigma))), \tag{3.52}\] defined on the set \[\mathcal{S}_{4}^{*}:=\{(\sigma,\varphi,I):\sigma,\varphi\in\mathbb{T},\,I\in(0, I^{*})\}=\mathbb{T}\times\mathbb{T}\times(0,I^{*}), \tag{3.53}\] and we want to calculate its Jacobian determinant. One has \[\partial_{\sigma}\{g_{c}(\sigma)\} =g_{c}^{\prime}(\sigma),\quad\ \partial_{I}\{g_{c}(\sigma)\}=\partial_{c}g_{c}(\sigma)h^{\prime}(I),\quad\ \partial_{\sigma}\{\gamma_{c}(g_{c}(\sigma))\}=\gamma_{c}^{\prime}(g_{c}(\sigma))g_{c }^{\prime}(\sigma),\] \[\partial_{I}\{\gamma_{c}(g_{c}(\sigma))\} =\big{\{}(\partial_{c}\gamma_{c})(g_{c}(\sigma))+\gamma_{c}^{ \prime}(g_{c}(\sigma))\partial_{c}g_{c}(\sigma)\big{)}\}h^{\prime}(I)\] where \(c=h(I)\). Hence the Jacobian matrix is \[D\Phi_{4}(\sigma,\varphi,I)=\begin{bmatrix}g_{c}^{\prime}(\sigma)&0&\partial_{c} g_{c}(\sigma)h^{\prime}(I)\\ 0&1&0\\ \gamma_{c}^{\prime}(g_{c}(\sigma))g_{c}^{\prime}(\sigma)&0&\big{\{}(\partial_{c }\gamma_{c})(g_{c}(\sigma))+\gamma_{c}^{\prime}(g_{c}(\sigma))\partial_{c}g_{c }(\sigma))\big{\}}h^{\prime}(I)\end{bmatrix},\] and its determinant is \[\det D\Phi_{4}(\sigma,\varphi,I)=g_{c}^{\prime}(\sigma)\,(\partial_{c}\gamma_ {c})(g_{c}(\sigma))\,h^{\prime}(I)\] where \(c=h(I)\). By (3.49) and (3.45), \[g_{c}^{\prime}(\sigma)=\frac{1}{(\partial_{c}\gamma_{c})(g_{c}(\sigma))} \frac{T_{c}^{*}}{2\pi},\] and \(T_{c}^{*}=F_{c}(2\pi)\), see (3.47). Hence \[\det D\Phi_{4}(\sigma,\varphi,I)=\frac{F_{c}(2\pi)}{2\pi}h^{\prime}(I) \tag{3.54}\] where \(c=h(I)\). We want the determinant (3.54) to be \(=1\), so that the transformation \(\Phi_{4}\) is symplectic and the Hamiltonian structure is preserved. The average \[h_{1}(c):=\frac{1}{2\pi}\int_{0}^{2\pi}\gamma_{c}(\vartheta)\,d\vartheta \tag{3.55}\] is a strictly increasing function of \(c\in[0,4\tau]\) because its derivative \(h_{1}^{\prime}(c)=F_{c}(2\pi)/2\pi\) is positive for all \(c\in(0,4\tau)\) by (3.34) and (3.39). 
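For completeness, the identity \(h_{1}^{\prime}(c)=F_{c}(2\pi)/2\pi\) used in the last sentence follows by differentiating (3.55) under the integral sign and recalling the definition (3.39) of \(F_{c}\): \[h_{1}^{\prime}(c)=\frac{1}{2\pi}\int_{0}^{2\pi}\partial_{c}\gamma_{c}(\vartheta)\,d\vartheta=\frac{F_{c}(2\pi)}{2\pi}.\] Consequently, once \(h\) is chosen as the inverse of \(h_{1}\) (as is done next), the determinant in (3.54) equals \(h_{1}^{\prime}(h(I))\,h^{\prime}(I)=1\), which is precisely the required normalization.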
Since \(\gamma_{0}(\vartheta)=0\), one has \(h_{1}(0)=0\). We define \[I^{*}:=h_{1}(4\tau), \tag{3.56}\] and we note that, by the definition of \(\gamma_{c}(\vartheta)\), one has \(0<I^{*}<\xi_{3}/4\). Thus, the interval \((0,I^{*})\) is the image \(\{h_{1}(c):c\in(0,4\tau)\}\) of the interval \((0,4\tau)\). We define \(h:(0,I^{*})\to(0,4\tau)\) as the inverse of \(h_{1}:(0,4\tau)\to(0,I^{*})\). Thus, \[h^{\prime}(I)>0\ \ \ \ \forall I\in(0,I^{*}). \tag{3.57}\] This concludes the definition of the map \(\Phi_{4}\) in (3.52), which is a diffeomorphism of \(\mathcal{S}_{4}^{*}\) onto \(\mathcal{S}_{3}^{*}\). Now that the transformation \(\Phi_{4}\) has been defined, we use it to transform system (3.22). We consider the change of variables \(\tilde{v}=\Phi_{4}(v)\), where \(\tilde{v}=(\vartheta,\varphi,\xi)\in\mathcal{S}_{3}^{*}\), \(v=(\sigma,\varphi,I)\in\mathcal{S}_{4}^{*}\). This means that \(\vartheta=g_{h(I)}(\sigma)\) and \(\xi=\gamma_{h(I)}(g_{h(I)}(\sigma))\), that is, \(\vartheta=g_{c}(\sigma)\) and \(\xi=\gamma_{c}(g_{c}(\sigma))\) where \(c=h(I)\). A function \(\tilde{v}(t)=\Phi_{4}(v(t))\) solves (3.22) in \(\mathcal{S}_{3}^{*}\) if and only if \(v(t)\) solves \[\dot{v}(t)=V_{4}(v(t)) \tag{3.58}\] in \(\mathcal{S}_{4}^{*}\), where \[V_{4}(v):=(D\Phi_{4}(v))^{-1}V_{3}(\Phi_{4}(v)),\ \ \ \ v=(\sigma,\varphi,I) \in\mathcal{S}_{4}^{*}. \tag{3.59}\] The second row of the inverse matrix \((D\Phi_{4}(v))^{-1}\) is \((0,1,0)\), and therefore the second component of the vector field \(V_{4}\) is simply the second component of \(V_{3}\) (see (3.25)) evaluated at \(\Phi_{4}(\sigma,\varphi,I)\), which is \(\chi(h(I))Q(\sigma,I)\), where \[Q(\sigma,I):=\frac{\sqrt{H(h(I))}}{\big{(}1+\sqrt{2\gamma_{h(I)}(g_{h(I)}( \sigma))}\sin(g_{h(I)}(\sigma))\big{)}^{2}}. \tag{3.60}\] Since the first and third equations of (3.58) are the Hamiltonian system (3.27) and \(\Phi_{4}\) is symplectic in the \((\sigma,I)\) variables, the first and third components of the vector field \(V_{4}\) are \(\partial_{I}\mathcal{H}_{4}(\sigma,I)\) and \(-\partial_{\sigma}\mathcal{H}_{4}(\sigma,I)\) respectively, where \(\mathcal{H}_{4}:=\mathcal{H}_{3}\circ\Phi_{4}\) (of course this can also be checked directly, without using the properties of the symplectic transformations). Now \(\mathcal{H}_{3}\) is defined in (3.28), with \(\Gamma\) defined in (3.17). Hence \[\mathcal{H}_{4}(\sigma,I)=\mathcal{H}_{4}(I)=\Gamma(h(I)),\ \ \ \ \partial_{I}\mathcal{H}_{4}(I)=\chi(h(I))h^{\prime}(I), \tag{3.61}\] and system (3.58) is \[\dot{\sigma}=\partial_{I}\mathcal{H}_{4}(I),\ \ \ \ \dot{\varphi}=\chi(h(I))Q(\sigma,I),\ \ \ \ \dot{I}=0 \tag{3.62}\] in \(\mathcal{S}_{4}^{*}\), with \(Q(\sigma,I)\) defined in (3.60). The action \(I\) is constant in time. The angle \(\sigma\) rotates with constant angular velocity \(\partial_{I}\mathcal{H}_{4}(I)=\chi(h(I))h^{\prime}(I)\). ### Reduction to a constant rotation in the tangential direction Now that the equations of the motion in the radial-vertical plane have been written in angle-action variables \((\sigma,I)\) in (3.62), we want to obtain a similar simplification for the equation of the motion in the tangential direction \(\dot{\varphi}\). Recalling the definition (3.53) of \(\mathcal{S}_{4}^{*}\), we consider the diffeomorphism \[\Phi_{5}:\mathcal{S}_{4}^{*}\to\mathcal{S}_{4}^{*},\ \ \ \ \Phi_{5}(\sigma,\beta,I)=( \sigma,\beta+\eta(\sigma,I),I), \tag{3.63}\] where \(\eta:\mathbb{T}\times(0,I^{*})\to\mathbb{R}\) is a function to be determined. 
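The role of \(\eta\) can be anticipated without computing Jacobians: along a solution of (3.62) with \(\varphi=\beta+\eta(\sigma,I)\) and \(\dot{I}=0\), one has \[\dot{\varphi}=\dot{\beta}+\partial_{\sigma}\eta(\sigma,I)\,\dot{\sigma}=\dot{\beta}+\partial_{\sigma}\eta(\sigma,I)\,\chi(h(I))h^{\prime}(I),\] so \(\dot{\beta}=\chi(h(I))\big(Q(\sigma,I)-\partial_{\sigma}\eta(\sigma,I)h^{\prime}(I)\big)\), and \(\dot{\beta}\) becomes independent of \(\sigma\) precisely when \(\partial_{\sigma}\eta(\sigma,I)h^{\prime}(I)\) equals the zero-average (in \(\sigma\)) part of \(Q(\sigma,I)\); this is exactly how \(\eta\) is chosen below.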
We consider the change of variables \(\tilde{v}=\Phi_{5}(v)\), where \(\tilde{v}=(\sigma,\varphi,I)\in\mathcal{S}_{4}^{*}\), \(v=(\sigma,\beta,I)\in\mathcal{S}_{4}^{*}\). A function \(\tilde{v}(t)=\Phi_{5}(v(t))\) solves (3.58) in \(\mathcal{S}_{4}^{*}\) if and only if \(v(t)\) solves \[\dot{v}(t)=V_{5}(v(t)) \tag{3.64}\] in \(\mathcal{S}_{4}^{*}\), where \[V_{5}(v):=(D\Phi_{5}(v))^{-1}V_{4}(\Phi_{5}(v)),\ \ \ \ v=(\sigma,\beta,I) \in\mathcal{S}_{4}^{*}.\] The Jacobian matrix and its inverse are \[D\Phi_{5}(v)=\begin{bmatrix}1&0&0\\ \partial_{\sigma}\eta&1&\partial_{I}\eta\\ 0&0&1\end{bmatrix},\ \ \ \ (D\Phi_{5}(v))^{-1}=\begin{bmatrix}1&0&0\\ -\partial_{\sigma}\eta&1&-\partial_{I}\eta\\ 0&0&1\end{bmatrix}.\] By (3.61) and (3.62), the vector field \(V_{4}\) in (3.59) is \[V_{4}(\sigma,\varphi,I)=\chi(h(I))\,(h^{\prime}(I),\ Q(\sigma,I),\ 0).\] Hence \[V_{5}(\sigma,\beta,I)=\chi(h(I))\,(h^{\prime}(I),\ Q(\sigma,I)-\partial_{ \sigma}\eta(\sigma,I)h^{\prime}(I),\ 0).\] We decompose \(Q(\sigma,I)\) as the sum of its average in \(\sigma\in\mathbb{T}\) and its zero-average remainder, that is, \[Q(\sigma,I)=Q_{0}(I)+\tilde{Q}(\sigma,I),\ \ \ \ Q_{0}(I):=\frac{1}{2\pi}\int_{0 }^{2\pi}Q(\sigma,I)\,d\sigma,\ \ \ \ \tilde{Q}:=Q-Q_{0}, \tag{3.65}\] and define \[\eta(\sigma,I):=\frac{1}{h^{\prime}(I)}\int_{0}^{\sigma}\tilde{Q}(s,I)\,ds. \tag{3.66}\] Note that \(h^{\prime}(I)\) is nonzero by (3.57). The function \(\eta\) is \(2\pi\)-periodic in \(\sigma\) because \(\tilde{Q}\) is \(2\pi\)-periodic in \(\sigma\) with zero average on \(\mathbb{T}\). By (3.66) and (3.65), \[\partial_{\sigma}\eta(\sigma,I)h^{\prime}(I)=\tilde{Q}(\sigma,I),\ \ \ \ Q( \sigma,I)-\partial_{\sigma}\eta(\sigma,I)h^{\prime}(I)=Q_{0}(I).\] Thus, the vector field \(V_{5}\) depends only on the action variable \(I\), and it is \[V_{5}(\sigma,\beta,I)=V_{5}(I)=(\Omega_{1}(I),\Omega_{2}(I),0),\] where \[\Omega_{1}(I):=\partial_{I}\mathcal{H}_{4}(I)=\chi(h(I))h^{\prime}(I),\ \ \ \ \Omega_{2}(I):=\chi(h(I))Q_{0}(I). \tag{3.67}\] Then system (3.64) is \[\dot{\sigma}=\Omega_{1}(I),\ \ \ \ \dot{\beta}=\Omega_{2}(I),\ \ \ \ \dot{I}=0. \tag{3.68}\] ### Ratio of the two rotation periods Let \((\sigma_{0},\beta_{0},I_{0})\in\mathcal{S}_{4}^{*}=\mathbb{T}^{2}\times(0,I^{*})\). Let \(\sigma_{0}^{r}\in\mathbb{R}\) be a representative of the equivalence class \(\sigma_{0}\in\mathbb{T}\), and let \(\beta_{0}^{r}\in\mathbb{R}\) be a representative of \(\beta_{0}\in\mathbb{T}\). The solution of the Cauchy problem for (3.68) in \(\mathbb{R}^{2}\times(0,I^{*})\) with initial data \((\sigma_{0}^{r},\beta_{0}^{r},I_{0})\) -- i.e., system (3.68) where the first two equations are considered as equations for functions \(\sigma^{r}(t)\), \(\beta^{r}(t)\) taking values in \(\mathbb{R}\) with initial data \(\sigma_{0}^{r},\beta_{0}^{r}\) -- is \[\sigma^{r}(t)=\sigma_{0}^{r}+\Omega_{1}(I_{0})t,\ \ \ \ \beta^{r}(t)=\beta_{0}^{r}+\Omega_{2}(I_{0})t,\ \ \ \ I(t)=I_{0}\ \ \ \ \forall t\in\mathbb{R}. \tag{3.69}\] Let \(\sigma(t),\beta(t)\) be the equivalence classes of \(\sigma^{\tau}(t),\beta^{\tau}(t)\) mod \(2\pi\), i.e., \[\sigma(t):=\{\sigma^{\tau}(t)+2k\pi:k\in\mathbb{Z}\},\ \ \ \ \beta(t):=\{\beta^{\tau}(t)+2k\pi:k\in\mathbb{Z}\}. \tag{3.70}\] Then \((\sigma(t),\beta(t),I(t))\) is a function of time, taking values in \(\mathcal{S}_{4}^{*}=\mathbb{T}^{2}\times(0,I^{*})\), solving (3.68) with initial datum \((\sigma_{0},\beta_{0},I_{0})\). 
If \(\chi(h(I_{0}))\) is zero, then both \(\Omega_{1}(I_{0})\) and \(\Omega_{2}(I_{0})\) are zero by (3.67), and the solution \((\sigma(t),\beta(t),I(t))\) is constant in time. If, instead, \(\chi(h(I_{0}))\) is nonzero, then \(\Omega_{1}(I_{0})\) is also nonzero by (3.67) and (3.57), and the function \(\sigma(t)\) is periodic in time, with frequency \(\Omega_{1}(I_{0})\) and period \(T_{1}(I_{0}):=2\pi/\Omega_{1}(I_{0})\). By (3.67), and recalling that the determinant in (3.54) is \(=1\), the period \(T_{1}(I_{0})\) is equal to the period \(T_{c}\) defined in (3.44) with \(c=h(I_{0})\). If \(\Omega_{2}(I_{0})\) is nonzero, then the function \(\beta(t)\) is also periodic in time, with frequency \(\Omega_{2}(I_{0})\) and period \(T_{2}(I_{0}):=2\pi/\Omega_{2}(I_{0})\). We observe that the factor \(\chi(h(I))\) cancels out in the frequency ratio \[\frac{\Omega_{2}(I)}{\Omega_{1}(I)}=\frac{Q_{0}(I)}{h^{\prime}(I)}, \tag{3.71}\] so that \(Q_{0}(I)/h^{\prime}(I)\) is well-defined for all \(I\in(0,I^{*})\), even where \(\chi(h(I))\) vanishes. By (3.65) and (3.60), \[\frac{Q_{0}(I)}{h^{\prime}(I)}=\frac{\sqrt{H(h(I))}}{h^{\prime}(I)2\pi}\int_{0}^{2\pi}\frac{1}{\left(1+\sqrt{2\gamma_{h(I)}(g_{h(I)}(\sigma))}\sin(g_{h(I)}(\sigma))\right)^{2}}\,d\sigma.\] We make the change of variable \(g_{h(I)}(\sigma)=\vartheta\) in the integral, that is, \(\sigma=f_{c}(\vartheta)\), where \(c=h(I)\) and \(f_{c}\) is defined in (3.48). Then \(d\sigma=f_{c}^{\prime}(\vartheta)d\vartheta\), and, by (3.48) and (3.39), \(f_{c}^{\prime}(\vartheta)=\partial_{c}\gamma_{c}(\vartheta)2\pi/F_{c}(2\pi)\). Hence \[\int_{0}^{2\pi}\frac{1}{\left(1+\sqrt{2\gamma_{h(I)}(g_{h(I)}(\sigma))}\sin(g_{h(I)}(\sigma))\right)^{2}}\,d\sigma=\frac{2\pi}{F_{c}(2\pi)}\int_{0}^{2\pi}\frac{\partial_{c}\gamma_{c}(\vartheta)}{[1+\sqrt{2\gamma_{c}(\vartheta)}\sin\vartheta]^{2}}\,d\vartheta\] where \(c=h(I)\). Moreover \(h^{\prime}(I)F_{c}(2\pi)=2\pi\) because \(h^{\prime}(I)F_{c}(2\pi)/2\pi\) is the determinant in (3.54), which is \(1\). Therefore \[\frac{Q_{0}(I)}{h^{\prime}(I)}=A(h(I)),\ \ \ \ \ \ \ A(c):=\sqrt{H(c)}\,J(c),\] \[J(c):=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\partial_{c}\gamma_{c}(\vartheta)}{[1+\sqrt{2\gamma_{c}(\vartheta)}\sin\vartheta]^{2}}\,d\vartheta. \tag{3.72}\] ### Motion of the fluid particles We consider the composition \[\Phi:=\Phi_{1}\circ\Phi_{2}\circ\Phi_{3}\circ\Phi_{4}\circ\Phi_{5} \tag{3.73}\] of the transformations defined in (3.3), (3.10), (3.21), (3.52), (3.63). The map \(\Phi\) is defined on \(\mathcal{S}_{4}^{*}=\mathbb{T}^{2}\times(0,I^{*})\), its image \(\Phi(\mathcal{S}_{4}^{*})\) is the set \(\mathcal{S}^{*}\) in (1.10), and \(\Phi\) is a diffeomorphism of \(\mathcal{S}_{4}^{*}\) onto \(\mathcal{S}^{*}\). The image \((x,y,z)=\Phi(\sigma,\beta,I)\in\mathcal{S}^{*}\) of a point \((\sigma,\beta,I)\in\mathcal{S}_{4}^{*}\) is \[x=\rho(\sigma,I)\cos(\beta+\eta(\sigma,I)),\ \ \ \ y=\rho(\sigma,I)\sin(\beta+\eta(\sigma,I)),\ \ \ \ z=\zeta(\sigma,I),\] with \[\rho(\sigma,I):=1+\sqrt{2\gamma_{c}(g_{c}(\sigma))}\sin(g_{c}(\sigma)),\ \ \ \ \zeta(\sigma,I):=\frac{\sqrt{2\gamma_{c}(g_{c}(\sigma))}\cos(g_{c}(\sigma))}{1+\sqrt{2\gamma_{c}(g_{c}(\sigma))}\sin(g_{c}(\sigma))}, \tag{3.74}\] where \(c=h(I)\). The analytic regularity of the map \(\Phi\) and its expansion around \(I=0\) are studied in Section 4.8.
By construction, a function \(\tilde{v}(t)=\Phi(v(t))\) is the solution of the Cauchy problem (1.11) with initial datum \(\tilde{v}_{0}=\Phi(v_{0})\in\mathcal{S}^{*}\) if and only if the function \(v(t)\) is the solution of (3.64), i.e., (3.68), with initial datum \(v_{0}\in\mathcal{S}_{4}^{*}\). As a consequence, recalling (3.69), (3.70), the solution \(\tilde{v}(t)\) of the Cauchy problem (1.11) with initial datum \(\tilde{v}_{0}=(x_{0},y_{0},z_{0})=\Phi(\sigma_{0},\beta_{0},I_{0})\) is the function \[\tilde{v}(t)=(x(t),y(t),z(t))=\Phi(\sigma(t),\beta(t),I(t))=\Phi(\sigma_{0}+\Omega_{1}(I_{0})t,\beta_{0}+\Omega_{2}(I_{0})t,I_{0}). \tag{3.75}\] The function in (3.75) has the form \(\tilde{v}(t)=w(\Omega_{1}t,\Omega_{2}t)\) where \(w:\mathbb{T}^{2}\to\mathcal{S}^{*}\) is the function \(w(\vartheta_{1},\vartheta_{2})=\Phi(\sigma_{0}+\vartheta_{1},\beta_{0}+\vartheta_{2},I_{0})\) and \(\Omega_{i}=\Omega_{i}(I_{0})\), \(i=1,2\). Hence \(\tilde{v}(t)\) is quasi-periodic with frequency vector \((\Omega_{1},\Omega_{2})\) provided that \(\Omega_{1}\neq 0\), that the ratio \(\Omega_{2}/\Omega_{1}\) is irrational, and that the number of frequencies cannot be reduced, i.e., that \(\tilde{v}(t)\) is not a periodic function. Now suppose that \(\Omega_{1}\) is nonzero, that \(\Omega_{2}/\Omega_{1}\) is irrational, and that \(\tilde{v}(t)\) in (3.75) is periodic with a certain period \(T>0\). Then, by (3.75), \[\begin{pmatrix}\sigma_{0}+\Omega_{1}(t+T)\\ \beta_{0}+\Omega_{2}(t+T)\\ I_{0}\end{pmatrix}=\Phi^{-1}(\tilde{v}(t+T))=\Phi^{-1}(\tilde{v}(t))=\begin{pmatrix}\sigma_{0}+\Omega_{1}t\\ \beta_{0}+\Omega_{2}t\\ I_{0}\end{pmatrix}\quad\forall t\in\mathbb{R}.\] Hence \(\sigma_{0}+\Omega_{1}(t+T)\) and \(\sigma_{0}+\Omega_{1}t\) are the same element of \(\mathbb{T}\). This means that \(\Omega_{1}T=2\pi n\) for some \(n\in\mathbb{Z}\). Similarly, \(\beta_{0}+\Omega_{2}(t+T)=\beta_{0}+\Omega_{2}t\) in \(\mathbb{T}\), and \(\Omega_{2}T=2\pi m\) for some \(m\in\mathbb{Z}\). Since \(\Omega_{1}\) is nonzero, \(n\) is also nonzero, and \(\Omega_{2}/\Omega_{1}=m/n\) is rational, a contradiction. This proves that, for \(\Omega_{1}\) nonzero and \(\Omega_{2}/\Omega_{1}\) irrational, the function \(\tilde{v}(t)\) in (3.75) is not periodic, and therefore it is quasi-periodic with frequency vector \((\Omega_{1},\Omega_{2})\). ### The pressure in terms of the action The pressure and the action are related in the following way. By (1.4), (3.2) and (3.3), one has \(P(\Phi_{1}(\rho,\varphi,z))=p(\rho,z)=\frac{1}{4}\alpha(\rho,z)\). By (3.10), (3.13), \[P(\Phi_{1}(\Phi_{2}(\rho,\varphi,z)))=P(\Phi_{1}(\rho,\varphi,z\rho^{-1}))=\frac{1}{4}\alpha(\rho,z\rho^{-1})=\frac{1}{4}\alpha_{2}(\rho,z).\] By (3.21) and (3.23), \[P(\Phi_{1}(\Phi_{2}(\Phi_{3}(\vartheta,\varphi,\xi))))=\frac{1}{4}\alpha_{2}\big{(}1+\sqrt{2\xi}\sin\vartheta,\sqrt{2\xi}\cos\vartheta\big{)}=\frac{1}{4}\alpha_{3}(\vartheta,\xi).\] By (3.31) and (3.52), \[P(\Phi_{1}(\Phi_{2}(\Phi_{3}(\Phi_{4}(\sigma,\varphi,I)))))=\frac{1}{4}\alpha_{3}(g_{c}(\sigma),\gamma_{c}(g_{c}(\sigma)))|_{c=h(I)}=\frac{1}{4}c|_{c=h(I)}=\frac{1}{4}h(I).\] By (3.63) and (3.73), \[P(\Phi(\sigma,\beta,I))=\frac{1}{4}h(I).
\tag{3.76}\] ## 4 Taylor expansions and transversality In this section we prove that the frequency \(\Omega_{1}(I)\) and the ratio \(\Omega_{2}(I)/\Omega_{1}(I)\), see (3.67) and (3.71), admit a Taylor expansion around \(I=0\) (which, in principle, is not obvious because of the square roots in the construction), and we calculate the first nonzero coefficient in their expansion after the constant term. This forces us to expand the function \(\alpha\) to degree \(6\). As a consequence, we obtain that \(\Omega_{1}(I)\) and \(\Omega_{2}(I)/\Omega_{1}(I)\) really change as \(I\) changes, and they vary in a smooth, strictly monotonic way, passing across every value of an interval exactly one time. This can be geometrically viewed as a transversality property. We also calculate the expansion of \(\Phi(\sigma,\beta,I)\) around \(I=0\). ### Expansion of \(\alpha\) The function \(\alpha\) is constructed in [7] by solving the pde system (4.4), where the functions \(F\) and \(G\) are expressed in terms of a function \(\psi\), see (4.2) and (4.3). The function \(\psi\) is defined in [7] as the solution of a degenerate ode problem. In Section 2.1 of [7] one finds the Taylor expansion of \(\psi\) of order \(5\) around zero; here we only use its expansion of order \(3\), which is \[\psi(s)=1-\frac{3}{4}s+\frac{9}{128}s^{2}-\frac{21}{1024}s^{3}+O(s^{4})\quad \text{as $s\to 0$.} \tag{4.1}\] In Section 2.1 of [7] the functions \(H,F,G\) are defined in terms of \(\psi\) as \[H(s) =6s\Big{(}\frac{1}{\psi^{\prime}(s)}+2\psi(s)\Big{)},\qquad F(x,s )=-2x\psi(s)+2x^{3}, \tag{4.2}\] \[G(x,s) =12x^{2}s-F^{2}(x,s)-H(s). \tag{4.3}\] In Lemma 3 of [7] the analytic function \(\alpha(x,y)\) is defined as the unique solution of the system \[\partial_{x}\alpha(x,y)=F(x,\alpha(x,y)),\quad\ \big{(}\partial_{y}\alpha(x,y) \big{)}^{2}=G(x,\alpha(x,y)) \tag{4.4}\] in a neighborhood of \((x,y)=(1,0)\) such that \(\alpha(1,0)=0\), with \(\partial_{y}\alpha\) not identically zero. In Remark 2 of [7] it is observed that \(\alpha\) is even in \(y\), i.e., \(\alpha(x,y)=\alpha(x,-y)\). The coefficients of the monomials of degree \(2\) and \(3\) in the Taylor series \[\alpha(x,y)=2(x-1)^{2}+2y^{2}+3(x-1)^{3}+3(x-1)y^{2}+\sum_{\begin{subarray}{ c}k,j\geq 0\\ k+2j\geq 4\end{subarray}}\alpha_{k,2j}(x-1)^{k}y^{2j} \tag{4.5}\] are given in Remark 4 of [7]; here we want to calculate the coefficients of the monomials of degree \(4,5\) and \(6\). Monomials with odd exponent \(2j+1\) are not present in (4.5) because \(\alpha\) is even in \(y\). One has \[\partial_{x}\alpha(x,y) =4(x-1)+9(x-1)^{2}+3y^{2}+4\alpha_{40}(x-1)^{3}+2\alpha_{22}(x-1) y^{2}\] \[\quad+5\alpha_{50}(x-1)^{4}+3\alpha_{32}(x-1)^{2}y^{2}+\alpha_{1 4}y^{4}+6\alpha_{60}(x-1)^{5}\] \[\quad+4\alpha_{42}(x-1)^{3}y^{2}+2\alpha_{24}(x-1)y^{4}+O_{6}, \tag{4.6}\] where \(O_{n}\) denotes terms with homogeneity \(\geq n\) in \((x-1,y)\). 
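The monomial matching carried out by hand in the next subsections lends itself to a symbolic cross-check. The following sketch is ours (not part of the construction) and assumes the Python library sympy; it only sets up the truncated matching for the first equation of (4.4), leaving \(\alpha_{04}\) and \(\alpha_{06}\) undetermined, since those are fixed by the second equation (see the computation at \(x=1\) below).

```python
# Sketch (not from the paper): automate the matching of monomials in the first
# equation of (4.4), d/dx alpha = F(x, alpha), using the series (4.1) for psi
# and the known low-order part of alpha from (4.5).
import sympy as sp

u, y = sp.symbols('u y')          # u stands for x - 1
x = 1 + u

def psi(s):                       # Taylor polynomial of psi, cf. (4.1)
    return (1 - sp.Rational(3, 4)*s + sp.Rational(9, 128)*s**2
            - sp.Rational(21, 1024)*s**3)

# unknown coefficients alpha_{k,2j} with 4 <= k + 2j <= 6
unknowns = {(k, e): sp.Symbol(f'a{k}{e}')
            for k in range(7) for e in (0, 2, 4, 6) if 4 <= k + e <= 6}
alpha = (2*u**2 + 2*y**2 + 3*u**3 + 3*u*y**2
         + sum(coef*u**k*y**e for (k, e), coef in unknowns.items()))

F = -2*x*psi(alpha) + 2*x**3      # cf. (4.2)
residual = sp.expand(sp.diff(alpha, u) - F)

# all monomials of total degree <= 5 in (u, y) must vanish identically;
# a04 and a06 remain free here and are fixed by the second equation of (4.4)
equations = [coef for mono, coef in sp.Poly(residual, u, y).terms()
             if sum(mono) <= 5]
print(sp.solve(equations, list(unknowns.values()), dict=True))
```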
Since \(\alpha(x,y)=O_{2}\), by (4.1) and (4.5) one has \[\psi(\alpha(x,y)) =1-\frac{3}{4}\alpha(x,y)+\frac{9}{128}\alpha^{2}(x,y)+O_{6}\] \[=1-\frac{3}{2}(x-1)^{2}-\frac{3}{2}y^{2}-\frac{9}{4}(x-1)^{3}- \frac{9}{4}(x-1)y^{2}+\Big{(}\frac{9}{32}-\frac{3}{4}\alpha_{40}\Big{)}(x-1) ^{4}\] \[\quad+\Big{(}\frac{9}{16}-\frac{3}{4}\alpha_{22}\Big{)}(x-1)^{2} y^{2}+\Big{(}\frac{9}{32}-\frac{3}{4}\alpha_{04}\Big{)}y^{4}+\Big{(}\frac{27}{32}- \frac{3}{4}\alpha_{50}\Big{)}(x-1)^{5}\] \[\quad+\Big{(}\frac{27}{16}-\frac{3}{4}\alpha_{32}\Big{)}(x-1)^{3} y^{2}+\Big{(}\frac{27}{32}-\frac{3}{4}\alpha_{14}\Big{)}(x-1)y^{4}+O_{6}.\] Expanding \(x\) and \(x^{3}\) around \(x=1\), and recalling the definition (4.2) of \(F\), we calculate \[F(x,\alpha(x,y)) =-2x\psi(\alpha(x,y))+2x^{3}\] \[=-2\psi(\alpha(x,y))-2(x-1)\psi(\alpha(x,y))+2+6(x-1)+6(x-1)^{2}+2 (x-1)^{3}\] \[=4(x-1)+9(x-1)^{2}+3y^{2}+\frac{19}{2}(x-1)^{3}+\frac{15}{2}(x-1) y^{2}\] \[\quad+\Big{(}\frac{63}{16}+\frac{3}{2}\alpha_{40}\Big{)}(x-1)^{4}+ \Big{(}\frac{27}{8}+\frac{3}{2}\alpha_{22}\Big{)}(x-1)^{2}y^{2}+\Big{(}-\frac {9}{16}+\frac{3}{2}\alpha_{04}\Big{)}y^{4}\] \[\quad+\Big{(}-\frac{9}{4}+\frac{3}{2}\alpha_{50}+\frac{3}{2} \alpha_{40}\Big{)}(x-1)^{5}+\Big{(}\frac{9}{4}+\frac{3}{2}\alpha_{32}+\frac{3 }{2}\alpha_{22}\Big{)}(x-1)^{3}y^{2}\] \[\quad+\Big{(}\frac{9}{8}+\frac{3}{2}\alpha_{14}+\frac{3}{2}\alpha_ {04}\Big{)}(x-1)y^{4}+O_{6}. \tag{4.7}\] From (4.6), (4.7) and the identity of each monomial in the differential equation \(\partial_{x}\alpha=F(x,\alpha)\) we get \[\alpha_{40}=\frac{19}{8},\ \ \ \ \alpha_{22}=\frac{15}{4},\ \ \ \ \alpha_{50}=\frac{3}{2},\ \ \ \ \alpha_{32}=3,\ \ \ \ \alpha_{14}=-\frac{9}{16}+\frac{3}{4}\alpha_{04},\] \[\alpha_{60}=\frac{19}{32},\ \ \ \ \alpha_{42}=\frac{99}{32},\ \ \ \ \alpha_{24}=\frac{9}{16}+\frac{3}{4}(\alpha_{14}+\alpha_{04}).\] To find the value of \(\alpha_{04}\) and \(\alpha_{06}\), we consider \(\partial_{y}\alpha(x,y)\) and \(G(x,\alpha(x,y))\) at \(x=1\) and we expand them around \(y=0\). By (4.5), since \(\alpha\) is even in \(y\), one has \[\alpha(1,y)=2y^{2}+\alpha_{04}y^{4}+\alpha_{06}y^{6}+O(y^{8}), \tag{4.8}\] whence \[\partial_{y}\alpha(1,y)=4y+4\alpha_{04}y^{3}+6\alpha_{06}y^{5}+O (y^{7}),\] \[(\partial_{y}\alpha(1,y))^{2}=16y^{2}+32\alpha_{04}y^{4}+(48 \alpha_{06}+16\alpha_{04}^{2})y^{6}+O(y^{8}). \tag{4.9}\] By (4.7), \[F(1,\alpha(1,y)) =3y^{2}+\Big{(}-\frac{9}{16}+\frac{3}{2}\alpha_{04}\Big{)}y^{4}+ O(y^{6}),\] \[F^{2}(1,\alpha(1,y)) =9y^{4}+\Big{(}-\frac{27}{8}+9\alpha_{04}\Big{)}y^{6}+O(y^{8}).\] By (4.1) and (4.2), \[H(s)=4s-\frac{21}{2}s^{2}+\frac{39}{32}s^{3}+O(s^{4}) \tag{4.10}\] (it is to obtain (4.10) that we use the coefficient of \(s^{3}\) in (4.1)). Therefore, by (4.8), \[H(\alpha(1,y))=8y^{2}+(4\alpha_{04}-42)y^{4}+\Big{(}4\alpha_{06}-42\alpha_{04 }+\frac{39}{4}\Big{)}y^{6}+O(y^{8}).\] Hence \[G(1,\alpha(1,y)) =12\alpha(1,y)-F^{2}(1,\alpha(1,y))-H(\alpha(1,y))\] \[=16y^{2}+(8\alpha_{04}+33)y^{4}+\Big{(}8\alpha_{06}+33\alpha_{04} +\frac{105}{8}\Big{)}y^{6}+O(y^{8}). \tag{4.11}\] By (4.9), (4.11) and the identity of each monomial in the differential equation \((\partial_{y}\alpha)^{2}=G(x,\alpha)\) at \(x=1\) we get \(\alpha_{04}=11/8\) and \(\alpha_{06}=113/160\). 
Hence \(\alpha_{14}=15/32\), \(\alpha_{24}=249/128\), and \[\alpha(x,y) =2(x-1)^{2}+2y^{2}+3(x-1)^{3}+3(x-1)y^{2}+\frac{19}{8}(x-1)^{4}+ \frac{15}{4}(x-1)^{2}y^{2}\] \[\quad+\frac{11}{8}y^{4}+\frac{3}{2}(x-1)^{5}+3(x-1)^{3}y^{2}+ \frac{15}{32}(x-1)y^{4}+\frac{19}{32}(x-1)^{6}\] \[\quad+\frac{99}{32}(x-1)^{4}y^{2}+\frac{249}{128}(x-1)^{2}y^{4}+ \frac{113}{160}y^{6}+O_{7}. \tag{4.12}\] ### Expansion of \(\alpha_{2}\) The function \(\alpha_{2}(\rho,z)=\alpha(\rho,z/\rho)\) defined in (3.13) is even in \(z\), it is analytic around \((\rho,z)=(1,0)\), and its Taylor series \[\alpha_{2}(\rho,z)=\sum_{\begin{subarray}{c}k,j\geq 0\\ k+2j\geq 2\end{subarray}}(\alpha_{2})_{k,2j}(\rho-1)^{k}y^{2j} \tag{4.13}\] matches with the expansion in (4.12) in which \(x=\rho\) and \(y=z/\rho\). We expand \[\rho^{-2} =1-2(\rho-1)+3(\rho-1)^{2}-4(\rho-1)^{3}+5(\rho-1)^{4}+O((\rho-1) ^{5}),\] \[\rho^{-4} =1-4(\rho-1)+10(\rho-1)^{2}+O((\rho-1)^{3}),\] and we obtain \[\alpha_{2}(\rho,z) =2(\rho-1)^{2}+2z^{2}+3(\rho-1)^{3}-(\rho-1)z^{2}+\frac{19}{8}(\rho- 1)^{4}+\frac{15}{4}(\rho-1)^{2}y^{2}\] \[\quad+\frac{11}{8}z^{4}+\frac{3}{2}(\rho-1)^{5}-\frac{7}{2}(\rho- 1)^{3}z^{2}-\frac{161}{32}(\rho-1)z^{4}+\frac{19}{32}(\rho-1)^{6}\] \[\quad+\frac{203}{32}(\rho-1)^{4}z^{2}+\frac{1529}{128}(\rho-1)^{2} z^{4}+\frac{113}{160}z^{6}+O_{7}. \tag{4.14}\] ### Expansion of \(\gamma_{c}(\vartheta)\) Recalling the definition (3.18) of \(\mathcal{B}_{2}\) and the first inclusion in (3.19), the function \(\alpha_{2}(\rho,z)\) in (3.13) is defined and analytic in the disc \((\rho-1)^{2}+z^{2}<\delta_{2}^{2}\). We define \[\phi:\mathbb{T}\times(-\delta_{2},\delta_{2})\to\mathbb{R}^{2},\ \ \ \ \phi(\vartheta,r):=(1+r\sin\vartheta,\,r\cos \vartheta),\ \ \ \ \widetilde{\alpha}_{3}(\vartheta,r):=\alpha_{2}(\phi(\vartheta,r)). \tag{4.15}\] The function \(\widetilde{\alpha}_{3}\) is well-posed and analytic in \((\vartheta,r)\in\mathbb{T}\times(-\delta_{2},\delta_{2})\), and, by (4.13), it is the power series \[\widetilde{\alpha}_{3}(\vartheta,r)=\sum_{n=2}^{\infty}P_{n}(\vartheta)r^{n}, \ \ \ \ P_{n}(\vartheta)=\sum_{\begin{subarray}{c}k,j\geq 0\\ k+2j=n\end{subarray}}(\alpha_{2})_{k,2j}(\sin\vartheta)^{k}(\cos\vartheta)^{2 j}. \tag{4.16}\] All \(P_{n}(\vartheta)\) are trigonometric polynomials, and \(P_{n}(-\vartheta)=(-1)^{n}P_{n}(\vartheta)\), i.e., \(P_{n}\) is even for \(n\) even and \(P_{n}\) is odd for \(n\) odd, because \((-1)^{k}=(-1)^{n}\) for \(k+2j=n\). Hence \[\widetilde{\alpha}_{3}(-\vartheta,-r)=\widetilde{\alpha}_{3}(\vartheta,r)\ \ \ \ \forall( \vartheta,r)\in\mathbb{T}\times(-\delta_{2},\delta_{2}), \tag{4.17}\] namely \(\widetilde{\alpha}_{3}\) is an even function of the pair \((\vartheta,r)\). In fact, (4.17) is the symmetry property \(\alpha_{2}(\rho,-z)=\alpha_{2}(\rho,z)\) expressed in terms of the function \(\widetilde{\alpha}_{3}(\vartheta,r)\). 
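For instance, the lowest-order polynomial in (4.16) is obtained directly from the quadratic part of (4.14): \(P_{2}(\vartheta)=(\alpha_{2})_{2,0}\sin^{2}\vartheta+(\alpha_{2})_{0,2}\cos^{2}\vartheta=2\sin^{2}\vartheta+2\cos^{2}\vartheta=2\), in accordance with the first line of (4.18) below.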
By (4.14), \[P_{2}(\vartheta) =2, \tag{4.18}\] \[P_{3}(\vartheta) =3\sin^{3}\vartheta-\sin\vartheta\cos^{2}\vartheta,\] \[P_{4}(\vartheta) =\frac{19}{8}\sin^{4}\vartheta+\frac{15}{4}\sin^{2}\vartheta\cos ^{2}\vartheta+\frac{11}{8}\cos^{4}\vartheta,\] \[P_{5}(\vartheta) =\frac{3}{2}\sin^{5}\vartheta-\frac{7}{2}\sin^{3}\vartheta\cos^{ 2}\vartheta-\frac{161}{32}\sin\vartheta\cos^{4}\vartheta,\] \[P_{6}(\vartheta) =\frac{19}{32}\sin^{6}\vartheta+\frac{203}{32}\sin^{4}\vartheta \cos^{2}\vartheta+\frac{1529}{128}\sin^{2}\vartheta\cos^{4}\vartheta+\frac{113 }{160}\cos^{6}\vartheta.\] Hence \[P_{3}(\vartheta) =2\sin(\vartheta)-\sin(3\vartheta), \tag{4.19}\] \[P_{4}(\vartheta) =\frac{15}{8}-\frac{1}{2}\cos(2\vartheta),\] (4.20) \[P_{5}(\vartheta) =-\frac{33}{256}\sin(\vartheta)-\frac{835}{512}\sin(3\vartheta)+ (P_{5})_{5}\sin(5\vartheta),\] (4.21) \[P_{6}(\vartheta) =\frac{3173}{2048}+(P_{6})_{2}\cos(2\vartheta)+(P_{6})_{4}\cos(4 \vartheta)+(P_{6})_{6}\cos(6\vartheta). \tag{4.22}\] The Fourier coefficients \((P_{5})_{5}\) in (4.21) and \((P_{6})_{k}\), \(k=2,4,6\), in (4.22) are not involved in the calculations we are going to make, and therefore we avoid to calculate their numerical value. When \(\widetilde{\alpha}_{3}(\vartheta,r)\) is evaluated at \(r=\sqrt{2\xi}\), we obtain the function \(\alpha_{3}(\vartheta,\xi)\) defined in (3.23), i.e., \[\alpha_{3}(\vartheta,\xi)=\widetilde{\alpha}_{3}(\vartheta,\sqrt{2\xi}). \tag{4.23}\] The function \(\gamma_{c}(\vartheta)\) is defined in (3.31) as the unique solution \(\xi\) of the equation \(\alpha_{3}(\vartheta,\xi)=c\). Because of the square root in the construction, \(\gamma_{c}(\vartheta)\), as a function of \(c\), is not analytic around \(c=0\), i.e., it is not a power series of the form \(\sum Q_{n}(\vartheta)c^{n}\) for some analytic functions \(Q_{n}(\vartheta)\). However, \(\gamma_{c}(\vartheta)\) is a power series of the form \(\sum Q_{n}(\vartheta)c^{n/2}\), namely there exists a function \(\widetilde{\gamma}(\vartheta,\mu)\), analytic around \(\mu=0\), such that \(\gamma_{c}(\vartheta)\) is \(\widetilde{\gamma}(\vartheta,\mu)\) evaluated at \(\mu=\sqrt{c}\). To prove it, we use the implicit function theorem for analytic functions, taking into account the degeneracy of the problem. We define \[\mathcal{F}(\vartheta,\mu,w):=\mu^{-2}\widetilde{\alpha}_{3}(\vartheta,\mu w) -1\quad\text{ if }\mu\neq 0;\hskip 28.452756pt\mathcal{F}(\vartheta,0,w):=2w^{2}-1. \tag{4.24}\] The function \(\mathcal{F}\) in (4.24) is well-defined and analytic in \(\mathbb{T}\times(-\mu_{0},\mu_{0})\times(-w_{0},w_{0})\), for some \(\mu_{0},w_{0}>0\) small enough, and, by (4.16), \[\mathcal{F}(\vartheta,\mu,w)=\sum_{n=2}^{\infty}P_{n}(\vartheta)w^{n}\mu^{n-2 }-1=2w^{2}-1+\sum_{n=3}^{\infty}P_{n}(\vartheta)w^{n}\mu^{n-2}.\] Moreover, by (4.17) and (4.24), \[\mathcal{F}(-\vartheta,-\mu,w)=\mathcal{F}(\vartheta,\mu,w)\quad\ \forall(\vartheta,\mu,w)\in\mathbb{T}\times(-\mu_{0},\mu_{0})\times(-w_{0},w_{ 0}). 
\tag{4.25}\] For every \(\vartheta\in\mathbb{T}\) one has \[\mathcal{F}(\vartheta,0,2^{-\frac{1}{2}})=0,\quad\ \partial_{w}\mathcal{F}( \vartheta,0,2^{-\frac{1}{2}})=4\cdot 2^{-\frac{1}{2}}\neq 0.\] Hence there exist two constants \(\mu_{1},w_{1}\), with \(0<\mu_{1}\leq\mu_{0}\), \(0<w_{1}\leq w_{0}\), and a function \(w(\vartheta,\mu)\), defined and analytic in \(\mathbb{T}\times(-\mu_{1},\mu_{1})\), taking values in \((-w_{1},w_{1})\), such that \[w(\vartheta,0)=2^{-\frac{1}{2}}\quad\ \forall\vartheta\in\mathbb{T},\qquad \mathcal{F}(\vartheta,\mu,w(\vartheta,\mu))=0\quad\ \forall(\vartheta,\mu)\in\mathbb{T}\times(-\mu_{1},\mu_{1}), \tag{4.26}\] and such that if a point \((\vartheta,\mu,a)\in\mathbb{T}\times(-\mu_{1},\mu_{1})\times(-w_{1},w_{1})\) is a zero of \(\mathcal{F}\), then \(a=w(\vartheta,\mu)\). By (4.26) and (4.25), for all \((\vartheta,\mu)\) one has \[0=\mathcal{F}(-\vartheta,-\mu,w(-\vartheta,-\mu))=\mathcal{F}(\vartheta,\mu,w (-\vartheta,-\mu)).\] Hence the point \((\vartheta,\mu,w(-\vartheta,-\mu))\in\mathbb{T}\times(-\mu_{1},\mu_{1})\times (-w_{1},w_{1})\) is a zero of \(\mathcal{F}\), and therefore it belongs to the graph of the implicit function, i.e., \[w(-\vartheta,-\mu)=w(\vartheta,\mu)\quad\ \forall(\vartheta,\mu)\in\mathbb{T} \times(-\mu_{1},\mu_{1}). \tag{4.27}\] From the second identity in (4.26) and formula (4.16) it follows that \(w(\vartheta,r)\) is the power series \[w(\vartheta,\mu)=\sum_{n=0}^{\infty}W_{n}(\vartheta)\mu^{n}, \tag{4.28}\] where the functions \(W_{n}(\vartheta)\) are determined by the identity \(\sum_{n=2}^{\infty}P_{n}(\vartheta)w^{n}(\vartheta,\mu)\mu^{n-2}=1\), i.e., \(W_{0}(\vartheta)=2^{-\frac{1}{2}}\) and \(W_{n}(\vartheta)\) are trigonometric polynomials recursively determined by the system \[\sum_{\begin{subarray}{c}n\geq 2,\ j\geq 0,\\ n-2+j=m\end{subarray}}\sum_{\begin{subarray}{c}k_{1},\ldots,k_{n}\geq 0,\\ k_{1}+\ldots+k_{n}=j\end{subarray}}P_{n}(\vartheta)W_{k_{1}}(\vartheta)W_{k_{2} }(\vartheta)\cdots W_{k_{n}}(\vartheta)=0\quad\ \forall m\geq 1. \tag{4.29}\] By (4.27) and (4.28), \(W_{n}(-\vartheta)=(-1)^{n}W_{n}(\vartheta)\), i.e., \(W_{n}\) is even for \(n\) even and \(W_{n}\) is odd for \(n\) odd. We will make use of equations (4.29) for \(m=1,2,3,4\), which are \[2P_{2}W_{0}W_{1}+P_{3}W_{0}^{3}=0, \tag{4.30}\] \[P_{2}(2W_{0}W_{2}+W_{1}^{2})+3P_{3}W_{0}^{2}W_{1}+P_{4}W_{0}^{4} =0,\] (4.31) \[P_{2}(2W_{0}W_{3}+2W_{1}W_{2})+P_{3}(3W_{0}^{2}W_{2}+3W_{0}W_{1} ^{2})+4P_{4}W_{0}^{3}W_{1}+P_{5}W_{0}^{5}=0,\] (4.32) \[P_{2}(2W_{0}W_{4}+2W_{1}W_{3}+W_{2}^{2})+P_{3}(3W_{0}^{2}W_{3}+ 6W_{0}W_{1}W_{2}+W_{1}^{3})\] \[\ \ +P_{4}(4W_{0}^{3}W_{2}+6W_{0}^{2}W_{1}^{2})+5P_{5}W_{0}^{4}W_ {1}+P_{6}W_{0}^{6}=0. \tag{4.33}\] By the definition (4.24) of \({\cal F}\), the second identity in (4.26) implies that \[\widetilde{\alpha}_{3}(\vartheta,\mu w(\vartheta,\mu))=\mu^{2}\ \ \ \ \forall( \vartheta,\mu)\in\mathbb{T}\times(-\mu_{1},\mu_{1}) \tag{4.34}\] with \(\mu\neq 0\). Identity (4.34) also holds for \(\mu=0\) because \(\widetilde{\alpha}_{3}(\vartheta,0)=\alpha_{2}(\phi(\vartheta,0))=\alpha_{2}( 1,0)=0\). By the first identity in (4.26), taking \(\mu_{1}\) smaller if necessary, one has \(w(\vartheta,\mu)>0\) for all \((\vartheta,\mu)\in\mathbb{T}\times(-\mu_{1},\mu_{1})\). Given any \(c\in[0,\mu_{1}^{2})\), there exists a unique \(\mu\in[0,\mu_{1})\) such that \(c=\mu^{2}\), that is, \(\mu=\sqrt{c}\). 
Also, \(\mu w(\vartheta,\mu)\geq 0\), and there exists a unique \(\xi\geq 0\) such that \(\mu w(\vartheta,\mu)=\sqrt{2\xi}\), that is, \(\xi=\frac{1}{2}\mu^{2}w^{2}(\vartheta,\mu)\). As a consequence, by (4.23) and (4.34), \[\alpha_{3}(\vartheta,\xi)=\widetilde{\alpha}_{3}(\vartheta,\sqrt{2\xi})= \widetilde{\alpha}_{3}(\vartheta,\mu w(\vartheta,\mu))=\mu^{2}=c. \tag{4.35}\] Hence \(\xi=\gamma_{c}(\vartheta)\), and therefore \(\gamma_{c}(\vartheta)=\frac{1}{2}\mu^{2}w^{2}(\vartheta,\mu)\) where \(\mu=\sqrt{c}\). In other words, we have proved that \[\gamma_{c}(\vartheta)=\widetilde{\gamma}(\vartheta,\sqrt{c})\ \ \ \ \forall( \vartheta,c)\in\mathbb{T}\times[0,\mu_{1}^{2}), \tag{4.36}\] where \(\widetilde{\gamma}\) is the analytic function \[\widetilde{\gamma}(\vartheta,\mu):=\frac{1}{2}\mu^{2}w^{2}(\vartheta,\mu). \tag{4.37}\] By (4.27), \[\widetilde{\gamma}(-\vartheta,-\mu)=\widetilde{\gamma}(\vartheta,\mu)\ \ \ \ \forall( \vartheta,\mu)\in\mathbb{T}\times(-\mu_{1},\mu_{1}). \tag{4.38}\] By (4.37) and (4.28), \[\widetilde{\gamma}(\vartheta,\mu)=\sum_{n=2}^{\infty}Q_{n}(\vartheta)\mu^{n},\ \ \ \ Q_{n}(\vartheta):=\frac{1}{2}\sum_{\begin{subarray}{c}k,j\geq 0\\ k+j+2=n\end{subarray}}W_{k}(\vartheta)W_{j}(\vartheta). \tag{4.39}\] By (4.38) and (4.39), one has \(Q_{n}(-\vartheta)=(-1)^{n}Q_{n}(\vartheta)\). ### Expansion of the average of \(\gamma_{c}(\vartheta)\) and \(\partial_{c}\gamma_{c}(\vartheta)\) We study the average of \(\widetilde{\gamma}(\vartheta,\mu)\), \(\gamma_{c}(\vartheta)\) and \(\partial_{c}\gamma_{c}(\vartheta)\) over \(\vartheta\in[0,2\pi]\). To shorten the notation, given any \(2\pi\)-periodic function \(f(\vartheta)\), we denote \(\langle f\rangle\) its average over the period, i.e., \[\langle f\rangle:=\frac{1}{2\pi}\int_{0}^{2\pi}f(\vartheta)\,d\vartheta.\] For \(n\) odd, the trigonometric polynomial \(Q_{n}(\vartheta)\) in (4.39) is \(2\pi\)-periodic and odd, and therefore \(\langle Q_{n}\rangle=0\). As a consequence, by (4.39), \[\frac{1}{2\pi}\int_{0}^{2\pi}\widetilde{\gamma}(\vartheta,\mu)\,d\vartheta= \sum_{k=1}^{\infty}\langle Q_{2k}\rangle\mu^{2k}. \tag{4.40}\] By (3.55), (4.36) and (4.40), one has \[h_{1}(c)=\frac{1}{2\pi}\int_{0}^{2\pi}\gamma_{c}(\vartheta)\,d\vartheta= \frac{1}{2\pi}\int_{0}^{2\pi}\widetilde{\gamma}(\vartheta,\sqrt{c})\,d \vartheta=\sum_{k=1}^{\infty}\langle Q_{2k}\rangle c^{k}. \tag{4.41}\] Hence \(h_{1}(c)\) is analytic around \(c=0\), in the sense that \(h_{1}(c)\), which is defined for \(c\in[0,\mu_{1}^{2})\), coincides in \([0,\mu_{1}^{2})\) with the power series in (4.41), which is a function defined for \(c\in(-\mu_{1}^{2},\mu_{1}^{2})\) and analytic in that interval. Note that the average of \(\gamma_{c}(\vartheta)\) is analytic around \(c=0\) even if the function \(\gamma_{c}(\vartheta)\) itself is not analytic in \(c\) around \(c=0\). Now we calculate the averages \(\langle Q_{n}\rangle\) for \(n=2,4,6\). By (4.39), \[Q_{2}=\frac{1}{2}W_{0}^{2}=\frac{1}{4},\ \ \ \ Q_{4}=\frac{1}{2}(2W_{0}W_{2}+W_{1} ^{2}),\ \ \ \ Q_{6}=\frac{1}{2}(2W_{0}W_{4}+2W_{1}W_{3}+W_{2}^{2}). \tag{4.42}\] Hence \(\langle Q_{2}\rangle=1/4\). Since \(P_{2}=2\) and \(W_{0}=2^{-\frac{1}{2}}\), from (4.30) we get \[W_{1}=-\frac{1}{8}P_{3}. \tag{4.43}\] Using (4.31) to substitute \((2W_{0}W_{2}+W_{1}^{2})\), and using also (4.43), we calculate \[Q_{4}=-\frac{1}{4}(3P_{3}W_{0}^{2}W_{1}+P_{4}W_{0}^{4})=\frac{1}{64}(3P_{3}^{2 }-4P_{4}).\] By (4.19) and (4.20), we obtain \[\langle Q_{4}\rangle=0. 
\tag{4.44}\] Using (4.33) to substitute \((2W_{0}W_{4}+2W_{1}W_{3}+W_{2}^{2})\), and also (4.43), we calculate \[Q_{6} =-\frac{1}{4}\big{[}P_{3}(3W_{0}^{2}W_{3}+6W_{0}W_{1}W_{2}+W_{1}^ {3})+P_{4}(4W_{0}^{3}W_{2}+6W_{0}^{2}W_{1}^{2})+5P_{5}W_{0}^{4}W_{1}+P_{6}W_{0 }^{6}\big{]}\] \[=-\frac{3}{8}P_{3}W_{3}+\frac{3\sqrt{2}}{32}P_{3}^{2}W_{2}+\frac {1}{2048}P_{3}^{4}-\frac{\sqrt{2}}{4}P_{4}W_{2}-\frac{3}{256}P_{3}^{2}P_{4}+ \frac{5}{128}P_{3}P_{5}-\frac{1}{32}P_{6}. \tag{4.45}\] From (4.31) we obtain \[W_{2}=\frac{5\sqrt{2}}{128}P_{3}^{2}-\frac{\sqrt{2}}{16}P_{4}=\frac{\sqrt{2}} {256}\Big{(}-5-32\cos(2\vartheta)+20\cos(4\vartheta)-5\cos(6\vartheta)\Big{)} \tag{4.46}\] and, from (4.32) and (4.43), \[W_{3}=\frac{P_{3}P_{4}}{16}-\frac{P_{3}W_{2}}{2\sqrt{2}}-\frac{3P_{3}^{3}}{256 }-\frac{P_{5}}{16}=\frac{225}{4096}\sin(\vartheta)+\frac{1251}{8192}\sin(3 \vartheta)+\sum_{k=5,7,9}(W_{3})_{k}\sin(k\vartheta). \tag{4.47}\] The Fourier coefficients \((W_{3})_{k}\), \(k=5,7,9\), are not involved in the calculation of \(\langle Q_{6}\rangle\), and therefore we do not calculate them. By (4.19), \(\ldots\), (4.22), (4.46), (4.47), we calculate \[\langle P_{3}W_{3}\rangle =-\frac{351}{16384}, \langle P_{3}^{2}W_{2}\rangle =\frac{291\sqrt{2}}{1024}, \langle P_{3}^{4}\rangle =\frac{131}{8}, \langle P_{4}W_{2}\rangle =-\frac{11\sqrt{2}}{2048},\] \[\langle P_{3}^{2}P_{4}\rangle =\frac{91}{16}, \langle P_{3}P_{5}\rangle =\frac{703}{1024}, \langle P_{6}\rangle =\frac{3173}{2048}.\] Hence, integrating (4.45), we obtain \[\langle Q_{6}\rangle=-\frac{1065}{65536} \tag{4.48}\] (where \(65536=2^{16}\)), and, by (4.41), \[h_{1}(c)=\frac{1}{2\pi}\int_{0}^{2\pi}\gamma_{c}(\vartheta)\,d\vartheta=\frac {c}{4}+\langle Q_{6}\rangle c^{3}+O(c^{4}). \tag{4.49}\] Since \(\langle Q_{6}\rangle\) in (4.48) is nonzero, \(h_{1}(c)\) is a nonlinear, analytic function of \(c\). Its derivative is \[h_{1}^{\prime}(c)=\frac{1}{2\pi}\int_{0}^{2\pi}\partial_{c}\gamma_{c}( \vartheta)\,d\vartheta=\frac{1}{4}+3\langle Q_{6}\rangle c^{2}+O(c^{3}), \tag{4.50}\] which is a nonconstant, analytic function of \(c\). ### Expansion of \(J(c)\) The average \(J(c)\) is defined in (3.72). By (4.36), the partial derivative \(\partial_{c}\gamma_{c}(\vartheta)\) satisfies \[\partial_{c}\gamma_{c}(\vartheta)=\nu(\vartheta,\sqrt{c}), \tag{4.51}\] where \(\nu(\vartheta,\mu)\) is the analytic function \[\nu(\vartheta,\mu):=\frac{\partial_{\mu}\widetilde{\gamma}(\vartheta,\mu)}{2 \mu}=\sum_{n=2}^{\infty}Q_{n}(\vartheta)\frac{n}{2}\mu^{n-2}, \tag{4.52}\] with \(Q_{n}(\vartheta)\) defined in (4.39). By (4.38), one has \(\nu(-\vartheta,-\mu)=\nu(\vartheta,\mu)\). Moreover \[\nu(\vartheta,\mu)=Q_{2}+\frac{3}{2}Q_{3}(\vartheta)\mu+2Q_{4}(\vartheta)\mu^ {2}+O(\mu^{3}), \tag{4.53}\] and \(Q_{2}=1/4\) by (4.42), \(Q_{3}=W_{0}W_{1}\) by (4.39), \(\langle Q_{3}\rangle=0\) because \(Q_{3}\) is odd, and \(\langle Q_{4}\rangle=0\) by (4.44). Regarding the denominator in the definition of \(J(c)\), one has \(\sqrt{2\gamma_{c}(\vartheta)}=\sqrt{2\xi}=\sqrt{c}\,w(\vartheta,\sqrt{c})\) by construction (see (4.35) and the lines preceding it), and, by (4.28), (4.43), (4.19), \[\big{(}1+\mu w(\vartheta,\mu)\sin\vartheta\big{)}^{-2} =1-2\mu w(\vartheta,\mu)\sin\vartheta+3\mu^{2}w^{2}(\vartheta, \mu)\sin^{2}\vartheta+O(\mu^{3})\] \[=1-2W_{0}\sin(\vartheta)\mu+\big{(}3W_{0}^{2}\sin^{2}\vartheta-2W _{1}(\vartheta)\sin\vartheta\big{)}\mu^{2}+O(\mu^{3})\] \[=1-\sqrt{2}\sin(\vartheta)\mu+\Big{(}1-\frac{9}{8}\cos(2\vartheta )+\frac{1}{8}\cos(4\vartheta)\Big{)}\mu^{2}+O(\mu^{3}). 
\tag{4.54}\] Taking a smaller \(\mu_{1}\) if necessary, the function \[m(\vartheta,\mu):=\frac{\nu(\vartheta,\mu)}{[1+\mu w(\vartheta,\mu)\sin \vartheta]^{2}} \tag{4.55}\] is defined and analytic in \(\mathbb{T}\times(-\mu_{1},\mu_{1})\), it satisfies \[m(-\vartheta,-\mu)=m(\vartheta,\mu)\hskip 14.226378pt\forall(\vartheta,\mu) \in\mathbb{T}\times(-\mu_{1},\mu_{1}), \tag{4.56}\] and it has expansion \(m(\vartheta,\mu)=\sum_{n=0}^{\infty}M_{n}(\vartheta)\mu^{n}\) for some trigonometric polynomials \(M_{n}(\vartheta)\). From (4.56) it follows that \(M_{n}(-\vartheta)=(-1)^{n}M_{n}(\vartheta)\). Therefore \(\langle M_{n}\rangle=0\) for \(n\) odd, and \[\frac{1}{2\pi}\int_{0}^{2\pi}m(\vartheta,\mu)\,d\vartheta=\sum_{k=0}^{\infty} \langle M_{2k}\rangle\mu^{2k}.\] By (4.53) and (4.54) one has \[M_{0}=\frac{1}{4},\hskip 14.226378ptM_{2}(\vartheta)=2Q_{4}(\vartheta)- \frac{3}{2}W_{1}(\vartheta)\sin\vartheta+\frac{1}{4}-\frac{9}{32}\cos(2 \vartheta)+\frac{1}{32}\cos(4\vartheta).\] By (4.43) and (4.19), the average of \(W_{1}(\vartheta)\sin\vartheta\) is \(-1/8\). Hence \(\langle M_{2}\rangle=7/16\), and \[J(c) =\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\partial_{c}\gamma_{c}( \vartheta)}{[1+\sqrt{2\gamma_{c}(\vartheta)}\sin\vartheta]^{2}}\,d\vartheta\] \[=\frac{1}{2\pi}\int_{0}^{2\pi}m(\vartheta,\sqrt{c})\,d\vartheta= \sum_{k=0}^{\infty}\langle M_{2k}\rangle c^{k}=\frac{1}{4}+\frac{7}{16}c+O(c ^{2}). \tag{4.57}\] Thus, the average \(J(c)\) is an analytic function of \(c\) around \(c=0\) (even if the integrand function \(m(\vartheta,\sqrt{c})\) is not). Moreover, taking \(\mu_{1}\) smaller if necessary, \(J(c)\) is strictly increasing in \([0,\mu_{1}^{2})\). ### Expansion of \(h(I)\) and \(\Omega_{1}(I)\) We have already proved that \(h_{1}(c)\) in (3.55) is analytic around \(c=0\), with expansion (4.49). Hence its inverse function \(h(I)=h_{1}^{-1}(I)\) is also analytic around \(I=0\), and it satisfies \[h(I)=4I-256(Q_{6})I^{3}+O(I^{4}),\ \ \ \ h^{\prime}(I)=4-768\langle Q_{6}\rangle I ^{2}+O(I^{3}). \tag{4.58}\] The frequency \(\Omega_{1}(I)\) is defined in (3.67), and it is the product of the \(C^{\infty}\) cut-off function \(\chi(h(I))\) times the analytic function \(h^{\prime}(I)\) in (4.58). ### Expansion of the frequency ratio \(\Omega_{2}(I)/\Omega_{1}(I)\) By (3.71) and (3.72), the frequency ratio \(\Omega_{2}(I)/\Omega_{1}(I)\) coincides with the function \[A(h(I))=\sqrt{H(h(I))}\,J(h(I)).\] By its definition in [7], the function \(H(c)\) is analytic around \(c=0\). Hence the composition \(H(h(I))\) is analytic around \(I=0\), and, by (4.10) and (4.58), \[H(h(I))=16I-168I^{2}+O(I^{3}).\] We write its square root as the product \[\sqrt{H(h(I))}=\sqrt{I}\,B(I), \tag{4.59}\] where \[B(I):=4\Big{(}\frac{H(h(I))}{16I}\Big{)}^{\frac{1}{2}}=4\Big{(}1-\frac{21}{2}I +O(I^{2})\Big{)}^{\frac{1}{2}}=4-21I+O(I^{2}). \tag{4.60}\] Since the function \(x\mapsto\sqrt{1+x}\) is analytic around \(x=0\), the function \(B(I)\) is analytic around \(I=0\). The function \(J(h(I))\) is also analytic around \(I=0\), and, by (4.57) and (4.58), \[J(h(I))=\frac{1}{4}+\frac{7}{4}I+O(I^{2}). \tag{4.61}\] Hence \[A(h(I))=\sqrt{I}\,\mathcal{R}(I),\ \ \ \ \mathcal{R}(I):=B(I)J(h(I))=1+\frac{7 }{4}I+O(I^{2}),\] and the function \(\mathcal{R}(I)\) is analytic around \(I=0\). Taking a smaller \(I^{*}\) if necessary, both \(\mathcal{R}(I)\) and \(A(h(I))\) are strictly increasing functions of \(I\in[0,I^{*})\). 
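The trigonometric reductions (4.19)-(4.20) and averages such as (4.44), which feed into the expansions above, can be cross-checked mechanically. A small sketch (ours, not part of the paper; it assumes sympy):

```python
# Sketch: verify the Fourier forms (4.19)-(4.20) and the average <Q_4> = 0
# from Section 4.4, starting from the raw expressions (4.18).
import sympy as sp

t = sp.symbols('theta')
P3 = 3*sp.sin(t)**3 - sp.sin(t)*sp.cos(t)**2
P4 = (sp.Rational(19, 8)*sp.sin(t)**4 + sp.Rational(15, 4)*sp.sin(t)**2*sp.cos(t)**2
      + sp.Rational(11, 8)*sp.cos(t)**4)

print(sp.simplify(P3 - (2*sp.sin(t) - sp.sin(3*t))))          # expect 0, cf. (4.19)
print(sp.simplify(P4 - (sp.Rational(15, 8) - sp.cos(2*t)/2))) # expect 0, cf. (4.20)

avg = lambda f: sp.integrate(f, (t, 0, 2*sp.pi)) / (2*sp.pi)
Q4 = (3*P3**2 - 4*P4) / 64        # Q_4 = (3 P_3^2 - 4 P_4)/64, cf. Section 4.4
print(avg(Q4))                    # expect 0, cf. (4.44)
```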
### Expansion of \(\Phi(\sigma,\beta,I)\) Since \(Q_{2}=1/4\), by (4.36), (4.39), (4.51), (4.52) one has \[\gamma_{c}(\vartheta)=\frac{c}{4}+O(c^{\frac{3}{2}}),\ \ \ \ \partial_{c}\gamma_{c}(\vartheta)=\frac{1}{4}+O(c^{\frac{1}{2}}).\] Hence, by (3.39), \(F_{c}(\vartheta)=\frac{\vartheta}{4}+O(c^{\frac{1}{2}})\) and therefore, by (3.48), \(f_{c}(\vartheta)=\vartheta+O(c^{\frac{1}{2}})\). As a consequence, \(g_{c}\) in (3.49), being the inverse of \(f_{c}\), satisfies \[g_{c}(\vartheta)=\vartheta+O(c^{\frac{1}{2}}).\] By (4.58), \(h(I)=4I+O(I^{3})\) and \(h^{\prime}(I)=4+O(I^{2})\). Hence, at \(c=h(I)\), one has \[\gamma_{c}(\vartheta)=I+O(I^{\frac{3}{2}}),\ \ \ \ \ \ g_{c}(\vartheta)=\vartheta+O(I^{\frac{1}{2}}),\] and therefore the functions \(\rho(\sigma,I),\zeta(\sigma,I)\) in (3.74) have expansion \[\rho(\sigma,I)=1+\sqrt{2I}\,\sin(\sigma)+O(I),\ \ \ \ \ \ \zeta(\sigma,I)=\sqrt{2I}\,\cos(\sigma)+O(I). \tag{4.62}\] By (4.59) and (4.60), \[\sqrt{H(h(I))}=4\sqrt{I}+O(I^{\frac{3}{2}}). \tag{4.63}\] By (4.62) and (4.63), the function \(Q(\sigma,I)\) in (3.60) satisfies \[Q(\sigma,I)=4I^{\frac{1}{2}}+O(I).\] Hence \(Q_{0},\widetilde{Q},\eta\) in (3.65), (3.66) satisfy \[Q_{0}(I)=4I^{\frac{1}{2}}+O(I),\hskip 14.226378pt\widetilde{Q}(\sigma,I)=O(I),\hskip 14.226378pt\eta(\sigma,I)=O(I).\] The map \(\Phi_{1}\) in (3.3) is analytic in \(\mathcal{N}_{1}\) because \(\rho\) does not vanish in \(\mathcal{N}_{1}\); the map \(\Phi_{2}\) in (3.10) is also analytic in \(\mathcal{N}_{2}\) because \(\rho\) does not vanish in \(\mathcal{N}_{2}\). The map \(\Phi_{3}\) in (3.21) is analytic in \(\mathcal{B}_{3}\) because \(\xi\) is positive in \(\mathcal{B}_{3}\); however, \(\Phi_{3}\) is not analytic in \(\xi\) around \(\xi=0\). Nonetheless, \(\Phi_{3}\) can be obtained by evaluating at \(r=\sqrt{2\xi}\) a map that is analytic around \(r=0\), exactly like \(\alpha_{3}(\vartheta,\xi)\) in (4.23), like \(\gamma_{c}(\vartheta)\) in (4.36), and like \(\partial_{c}\gamma_{c}(\vartheta)\) in (4.51). Similarly, both the map \(\Phi_{4}\) in (3.52) and the map \(\Phi_{5}\) in (3.63) are analytic in \(\mathcal{S}_{4}^{*}\), they are not analytic in \(I\) around \(I=0\), but they can be obtained by evaluating at \(\mu=\sqrt{I}\) some suitable maps that are analytic around \(\mu=0\), because \(\gamma_{c},F_{c},f_{c},g_{c},\sqrt{H(c)}\) are all functions of this type. As a consequence, the map \(\Phi\) in (3.73) is analytic in \(\mathcal{S}_{4}^{*}\), it is not analytic in \(I\) around \(I=0\), and it can be obtained by evaluating at \(\mu=\sqrt{I}\) a map that is analytic around \(\mu=0\). Hence \(\Phi\) admits a converging expansion in powers of \(\sqrt{I}\) around \(I=0\). ### Smallness conditions The parameter \(\delta\) in the definition (3.1) of the set \(\mathcal{N}\) is subject to the following smallness conditions. After (3.1) we have taken \(\delta\in(0,1)\) to obtain that \(\mathcal{N}\) is an open neighborhood of the circle \(\mathcal{C}\) in (3.1) with \(\rho>1-\delta>0\) in \(\mathcal{N}\), and \(\delta\leq r_{0}\) to obtain that the functions \(\alpha(\sqrt{x^{2}+y^{2}},z)\) and \(H(\alpha(\sqrt{x^{2}+y^{2}},z))\) are analytic in \((x,y,z)\in\mathcal{N}\), where \(r_{0}\) is a universal constant given by the definition of \(\alpha\) and \(H\), i.e., by Gavrilov's construction in [7]. After (3.18) we have defined \(\delta_{2}=2\delta/3\), and we have assumed \(\delta\leq 1/2\) to obtain the inclusion (3.19). After (3.21) we have defined \(\xi_{3}=\delta_{2}^{2}/2=2\delta^{2}/9\).
In (3.30) we have proved that \(\partial_{\xi}\alpha_{3}(\vartheta,0)=4\) for all \(\vartheta\in\mathbb{T}\), and that \(\partial_{\xi}\alpha_{3}(\vartheta,\xi)\) is continuous in \(\mathbb{T}\times[0,\xi_{3})\) -- in fact, in Section 4.3 we have proved more, because \(\alpha_{3}(\vartheta,\xi)=\widetilde{\alpha}_{3}(\vartheta,\sqrt{2\xi})\), see (4.23), and \(\widetilde{\alpha}_{3}(\vartheta,r)\) is the analytic function in (4.15), (4.16); hence \(\partial_{\xi}\alpha_{3}(\vartheta,\xi)=\sum_{n=2}^{\infty}P_{n}(\vartheta)n(2 \xi)^{(n-2)/2}=4+3P_{3}(\vartheta)\sqrt{2\xi}+O(\xi)\). Hence, by continuity, there exists a universal constant \(\xi^{*}>0\) such that \(\partial_{\xi}\alpha_{3}(\vartheta,\xi)>0\) in \(\mathbb{T}\times[0,\xi^{*})\). Therefore the condition on \(\xi_{3}\) after (3.30) (where we say "Taking \(\xi_{3}\) smaller if necessary") is \(\xi_{3}\leq\xi^{*}\). In terms of \(\delta\), this means \(\delta\leq 3\sqrt{\xi^{*}/2}\), which is a universal constant. The constants \(w_{0},\mu_{0}\) after the definition (4.24) of \(\mathcal{F}\) are universal, and the constants \(w_{1},\mu_{1}\) after the application of the implicit function theorem in (4.26) are universal too. By (4.26), \(w(\vartheta,0)>0\), and therefore, by continuity, there exists a universal constant \(\mu_{1}^{*}>0\) such that \(w(\vartheta,\mu)>0\) for all \((\vartheta,\mu)\in\mathbb{T}\times(-\mu_{1}^{*},\mu_{1}^{*})\). Hence the condition on \(\mu_{1}\) after (4.34) (where we say "taking \(\mu_{1}\) smaller if necessary") is \(\mu_{1}\leq\mu_{1}^{*}\). The same happens for the condition on \(\mu_{1}\) after (4.54), to obtain that the function \(m(\vartheta,\mu)\) in (4.55) is well-defined and analytic in \(\mathbb{T}\times(-\mu_{1},\mu_{1})\), and for the condition on \(\mu_{1}\) after (4.57), to obtain that \(J(c)\) is strictly increasing in \([0,\mu_{1}^{2})\). Thus, we fix \(\mu_{1}\) as the smallest of these three constants, and we obtain that \(\mu_{1}\) is a universal positive constant. The parameter \(\tau\) is related to \(\mu_{1}\) by the inequality \(4\tau\leq\mu_{1}^{2}\), because \([0,4\tau)\) is the interval where \(c\) varies, and the functions \(\gamma_{c}(\vartheta)\), \(\partial_{c}\gamma_{c}(\vartheta)\) and \(J(c)\) are obtained by evaluating at \(\mu=\sqrt{c}\) some functions of \((\vartheta,\mu)\) that are well-defined, analytic and monotonic in \(\mathbb{T}\times(-\mu_{1},\mu_{1})\), see (4.36), (4.51), (4.57). Thus, we want that, for all \(c\in[0,4\tau)\), the square root \(\sqrt{c}\) belongs to the domain \((-\mu_{1},\mu_{1})\) of those analytic functions, and this is true if \(4\tau\leq\mu_{1}^{2}\). Regarding the parameter \(\tau\), there are two other conditions to consider. The first one is (1.9), which holds if \(\tau\) is smaller than the infimum of the pressure \(P\) on the set \(\mathcal{N}\setminus\mathcal{N}^{\prime}\). Since \(P(x,y,z)=p(\rho,z)=\frac{1}{4}\alpha(\rho,z)\) (see (3.2)), by the definition (3.1) of \(\mathcal{N}\) and \(\mathcal{N}^{\prime}\) it follows that that infimum depends only on \(\delta\). The last condition for \(\tau\) is after (4.61), where we say "Taking a smaller if necessary", to obtain that \(\mathcal{R}(I)\) is strictly increasing in \([0,I^{*})\). Since the function \(\mathcal{R}\) does not depend on any parameter, this is a condition of the form \(I^{*}\leq I_{0}\) for some universal constant \(I_{0}>0\). 
Moreover, the invertible function \(h_{1}\) expressing \(I^{*}\) in terms of \(\tau\) in (3.56) is also independent on any parameter (see the definition of \(h_{1}\) in (3.55) and its expansion in (4.49)), and therefore this condition for \(\tau\) is satisfied for \(\tau\) smaller than a universal constant. Regarding \(\varepsilon\), the only condition to consider is that \(0<\varepsilon\leq\tau/3\), see after (1.9). In conclusion, the parameters \(\delta,\tau\) and \(\varepsilon\) must satisfy \[0<\delta\leq\delta_{0},\ \ \ \ 0<\tau\leq\tau_{0}(\delta),\ \ \ \ 0< \varepsilon\leq\tau/3,\] where \(\delta_{0}\) is a universal constant, and \(\tau_{0}(\delta)\) depends only on \(\delta\). We fix \(\delta=\delta_{0}\) and \(\tau=\tau_{0}(\delta_{0})\). Both \(\delta_{0}\) and \(\tau_{0}(\delta_{0})\) are universal constants. We rename \(\tau_{0}:=\tau_{0}(\delta_{0})\) and \(\varepsilon_{0}:=\tau_{0}/3\). Since \(\tau=\tau_{0}\), by (3.56), \(I^{*}\) is also a universal constant. For notation convenience, we denote \[\mathcal{K}(I):=\frac{1}{4}h(I).\] Hence, by (3.67) and (3.6), \(\Omega_{1}(I)=\frac{1}{4}\omega(\frac{1}{4}h(I))h^{\prime}(I)=\omega(\mathcal{ K}(I))\mathcal{K}^{\prime}(I)\); by (4.58) and (4.48), \[\mathcal{K}(I)=\frac{1}{4}\Big{(}4I+256\frac{1065}{2^{16}}I^{3}+O(I^{4})\Big{)} =I+\frac{1065}{1024}I^{3}+O(I^{4});\] (3.76) becomes \(P(\Phi(\sigma,\beta,I))=\mathcal{K}(I)\). For all \(\varepsilon\in(0,\varepsilon_{0}]\), the proof of Theorem 1.1 is complete. **Acknowledgements.** Supported by the Project PRIN 2020XB3EFL _Hamiltonian and dispersive PDEs._
2301.12019
A Greedy Sensor Selection Algorithm for Hyperparameterized Linear Bayesian Inverse Problems
We consider optimal sensor placement for a family of linear Bayesian inverse problems characterized by a deterministic hyper-parameter. The hyper-parameter describes distinct configurations in which measurements can be taken of the observed physical system. To optimally reduce the uncertainty in the system's model with a single set of sensors, the initial sensor placement needs to account for the non-linear state changes of all admissible configurations. We address this requirement through an observability coefficient which links the posteriors' uncertainties directly to the choice of sensors. We propose a greedy sensor selection algorithm to iteratively improve the observability coefficient for all configurations through orthogonal matching pursuit. The algorithm allows explicitly correlated noise models even for large sets of candidate sensors, and remains computationally efficient for high-dimensional forward models through model order reduction. We demonstrate our approach on a large-scale geophysical model of the Perth Basin, and provide numerical studies regarding optimality and scalability with regard to classic optimal experimental design utility functions.
Nicole Aretz, Peng Chen, Denise Degen, Karen Veroy
2023-01-27T23:04:26Z
http://arxiv.org/abs/2301.12019v1
# A Greedy Sensor Selection Algorithm for Hyperparameterized Linear Bayesian Inverse Problems ###### Abstract We consider optimal sensor placement for a family of linear Bayesian inverse problems characterized by a deterministic hyper-parameter. The hyper-parameter describes distinct configurations in which measurements can be taken of the observed physical system. To optimally reduce the uncertainty in the system's model with a single set of sensors, the initial sensor placement needs to account for the non-linear state changes of all admissible configurations. We address this requirement through an observability coefficient which links the posteriors' uncertainties directly to the choice of sensors. We propose a greedy sensor selection algorithm to iteratively improve the observability coefficient for all configurations through orthogonal matching pursuit. The algorithm allows explicitly correlated noise models even for large sets of candidate sensors, and remains computationally efficient for high-dimensional forward models through model order reduction. We demonstrate our approach on a large-scale geophysical model of the Perth Basin, and provide numerical studies regarding optimality and scalability with regard to classic optimal experimental design utility functions. ## 1 Introduction In the Bayesian approach to inverse problems (c.f. [1]), the uncertainty in a parameter is described via a probability distribution. With Bayes' Theorem, the prior belief in a parameter is updated when new information is revealed such that the posterior distribution describes the parameter with improved certainty. Bayes' posterior is optimal in the sense that it is the unique minimizer of the sum of the relative entropy between the posterior and the prior, and the mean squared error between the model prediction and the experimental data. The noise model drives, along with the measurements, how the posterior's uncertainty is reduced in comparison to the prior. A critical aspect - espe cially for expensive experimental data1 - is how to select the measurements to improve the posterior's credibility best. The selection of adequate sensors meeting individual applications' needs is, therefore, a big goal of the optimal experimental design (OED) research field and its surrounding community. We refer to the literature (e.g., [3, 4, 5]) for introductions. Footnote 1: For instance, for projects harvesting geothermal energy, the development costs (e.g., drilling, stimulation, and tests) take up \(50-70\%\) of the total budget ([2]). As each borehole can cost several million dollars, it is essential to plan their location carefully. The analysis and algorithm presented in this work significantly extend our initial ideas presented in [6] in which we seek to generalize the 3D-VAR stability results from [7] to the probabilistic Bayesian setting. Our proposed algorithm is directly related to the orthogonal matching pursuit (OMP) algorithm [8, 9] for the parameterized-background data-weak (PBDW) method and the empirical interpolation method (EIM) ([10, 11]). Closely related OED methods for linear Bayesian inverse problems over partial differential equations (PDEs) include [12, 13, 14, 15, 16, 17], mostly for A- and D-OED and uncorrelated noise. In recent years, these methods have also been extended to non-linear Bayesian inverse problems, e.g., [18, 19, 20, 21, 22], while an advance to correlated noise has been made in [23]. 
In particular, [21, 22] use similar algorithmic approaches to this work by applying a greedy algorithm to maximize the expected information gain. Common strategies for dealing with the high dimensions imposed by the PDE model use the framework in [24] for discretization, combined with parameter reduction methods (e.g., [25, 26, 27, 28, 29, 30, 31]) and model order reduction (MOR) methods for uncertainty quantification (UQ) problems (e.g., [32, 33, 34, 35, 36]). In this paper, we consider inverse problem settings, in which a deterministic hyper-parameter describes anticipated system configurations such as material properties or loading conditions. Each configuration changes the model non-linearly, so we obtain a _family_ of possible posterior distributions for any measurement data. Supposing data can only be obtained with a single set of sensors regardless of the system's configuration, the OED task becomes to reduce the uncertainty in each posterior uniformly over all hyper-parameters. This task is challenging for high-dimensional models since 1) each configuration requires its own computationally expensive model solve, and 2) for large sets of admissible measurements, the comparison between sensors requires the inversion of the associated, possibly dense noise covariance matrix. By building upon [6], this paper addresses both challenges and proposes in detail a sensor selection algorithm that remains efficient even for correlated noise models. The main contributions are as follows: First, we identify an observability coefficient as a link between the sensor choice and the maximum eigenvalue of each posterior distribution. We also provide an analysis of its sensitivity to model approximations. Second, we decompose the noise covariance matrix for any observation operator to allow fast computation of the observability gain under expansion with additional sensors. Third, we propose a sensor selection algorithm that iteratively constructs an observation operator from a large set of sensors to increase the observability coefficient over all hyper-parameters. The algorithm is applicable to correlated noise models, and requires, through the efficient use of MOR techniques, only a _single_ full-order model evaluation per selected sensor. While the main idea and derivation of the observability coefficient are similar to [6], this work additionally features 1) an analysis of the observability coefficient regarding model approximations, 2) explicit computational details for treating correlated noise models, and 3) a comprehensive discussion of the individual steps in the sensor selection algorithm. Moreover, the proposed method is tested using a large-scale geophysical model of the Perth Basin. This paper is structured as follows: In Section 2 we introduce the hyper-parameterized inverse problem setting, including all assumptions for the prior distribution, the noise model, and the forward model. In Section 3, we then establish and analyze the connection between the observability coefficient and the posterior uncertainty. We finally propose our sensor selection algorithm in Section 4 which exploits the presented analysis to choose sensors that improve the observability coefficient even in a hyper-parameterized setting. We demonstrate the applicability and scalability of our approach on a high-dimensional geophysical model in Section 5 before concluding in Section 6. 
## 2 Problem setting Let \(\mathcal{X}\) be a Hilbert space with inner product \(\langle\cdot,\cdot\rangle_{\mathcal{X}}\) and induced norm \(\|x\|_{\mathcal{X}}^{2}:=\langle x,x\rangle_{\mathcal{X}}\). We consider the problem of identifying unknown states \(x_{\text{true}}(\theta)\in\mathcal{X}\) of a single physical system under changeable configurations \(\theta\) from noisy measurements \[\mathbf{d}(\theta)\approx[\ell_{1}(x_{\text{true}}(\theta)),\ldots,\ell_{K}(x_{\text{true}}(\theta))]^{T}\in\mathbb{R}^{K}.\] The measurements are obtained by a set of \(K\) unique _sensors_ (or _experiments_) \(\ell_{1},\ldots,\ell_{K}\in\mathcal{X}^{\prime}\). Our goal is to choose these sensors from a large _sensor library_ \(\mathcal{L}\subset\mathcal{X}^{\prime}\) of options in a way that optimizes how much information is gained from their measurements for any configuration \(\theta\). #### Hyper-parameterized forward model We consider the unknown state \(x_{\text{true}}\) to be uniquely characterized by two sources of information: * an unknown parameter \(\mathbf{u}_{\text{true}}\in\mathbb{R}^{M}\) describing uncertainties in the governing physical laws, and * a hyper-parameter (or configuration2) \(\theta\in\mathcal{P}\subset\mathbb{R}^{p}\) describing dependencies on controllable configurations under which the system may be observed (such as material properties or loading conditions), where \(\mathcal{P}\) is a given compact set enclosing all possible configurations. Footnote 2: We call \(\theta\) interchangeably hyper-parameter or configuration to either stress its role in the mathematical model or its physical interpretation. For any given \(\mathbf{u}\in\mathbb{R}^{M}\) and \(\theta\in\mathcal{P}\), we let \(x_{\theta}(\mathbf{u})\in\mathcal{X}\) be the solution of an abstract model equation \(\mathcal{M}_{\theta}(x_{\theta}(\mathbf{u});\mathbf{u})=0\) and assume that the map \(\mathbf{u}\to x_{\theta}(\mathbf{u})\) is well-defined, linear, and uniformly continuous in \(\mathbf{u}\), i.e. \[\exists\ \bar{\eta}>0:\quad\overline{\eta}(\theta):=\sup_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}}{\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}}<\bar{\eta}\qquad\forall\ \theta\in\mathcal{P}. \tag{1}\] **Remark 1**.: _Although we assumed that \(\mathbf{u}_{\mathrm{true}}\) lies in the Euclidean space \(\mathbb{R}^{M}\), any other linear space can be considered via an affine transformation onto an appropriate basis (see [12, 37]). For infinite-dimensional spaces, we first discretize with appropriate treatment of the adjoint operator (c.f. [24])._ **Remark 2**.: _By keeping the model equation general, we stress the applicability of our approach to a wide range of problems. For instance, time-dependent states can be treated by choosing \(\mathcal{X}\) as a Bochner space or its discretization (c.f. [38]). We also do not formally restrict the dimension of \(\mathcal{X}\), though any implementation relies on the ability to compute \(x_{\theta}(\mathbf{u})\) with sufficient accuracy.
To this end, we note that the analysis in Section 3.2 can be applied to determine how discretization errors affect the observability criterion in the sensor selection._ Following a probabilistic approach to inverse problems, we express the initial uncertainty in \(\mathbf{u}_{\mathrm{true}}=\mathbf{u}_{\mathrm{true}}(\theta)\) of any \(x_{\mathrm{true}}=x_{\theta}(\mathbf{u}_{\mathrm{true}})\) in configuration \(\theta\) through a random variable \(\mathbf{u}\) with Gaussian prior \(\mu_{\mathrm{pr}}=\mathcal{N}\left(\mathbf{u}_{\mathrm{pr}},\Sigma_{\mathrm{pr}}\right)\), where \(\mathbf{u}_{\mathrm{pr}}\in\mathbb{R}^{M}\) is the prior mean and \(\Sigma_{\mathrm{pr}}\in\mathbb{R}^{M\times M}\) is a symmetric positive definite (s.p.d.) covariance matrix. The latter defines the inner product \(\langle\cdot,\cdot\rangle_{\Sigma_{\mathrm{pr}}}\) and its induced norm \(\|\cdot\|_{\Sigma_{\mathrm{pr}}^{-1}}\) through \[\langle\mathbf{u},\mathbf{v}\rangle_{\Sigma_{\mathrm{pr}}^{-1}}:=\mathbf{u}^{ T}\Sigma_{\mathrm{pr}}^{-1}\bar{\mathbf{u}}, \hskip 42.679134pt\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}^{2}:=\langle \mathbf{u},\mathbf{u}\rangle_{\Sigma_{\mathrm{pr}}^{-1}}\,,\hskip 42.679134pt \forall\ \mathbf{u},\mathbf{v}\in\mathbb{R}^{M}. \tag{2}\] With these definitions, the probability density function (pdf) for \(\mu_{\mathrm{pr}}\) is \[\pi_{\mathrm{pr}}(\mathbf{u})=\frac{1}{\sqrt{(2\pi)^{M}\det\Sigma_{\mathrm{pr }}}}\exp\left(-\frac{1}{2}\|\mathbf{u}-\mathbf{u}_{\mathrm{pr}}\|_{\Sigma_{ \mathrm{pr}}^{-1}}^{2}\right).\] For simplicity, we assume \(\{\mathbf{u}_{\mathrm{true}}(\theta)\}_{\theta\in\mathcal{P}}\) to be independent realizations of \(\mathbf{u}\) such that we may consider the same prior for all \(\theta\) without accounting for a possible history of measurements at different configurations. #### Sensor library and noise model For taking measurements of the unknown states \(\{x_{\mathrm{true}}(\theta)\}_{\theta}\), we call any linear functional \(\ell\in\mathcal{X}^{\prime}\) a _sensor_, and its application to a state \(x\in\mathcal{X}\) its _measurement_\(\ell(x)\in\mathbb{R}\). We model experimental measurements \(d_{\ell}\in\mathbb{R}\) of the actual physical state \(x_{\mathrm{true}}\) as \(d_{\ell}=\ell(x_{\mathrm{true}})+\varepsilon_{\ell}\) where \(\varepsilon_{\ell}\sim\mathcal{N}(0,\mathbf{cov}(\varepsilon_{\ell}, \varepsilon_{\ell}))\) is a Gaussian random variable. We permit noise in different sensor measurements to be correlated with a known covariance function \(\mathbf{cov}\). In a slight overload of notation, we write \(\mathbf{cov}:\mathcal{L}\times\mathcal{L}\to\mathbb{R}\), \(\mathbf{cov}(\ell_{i},\ell_{j}):=\mathbf{cov}(\varepsilon_{\ell_{i}}, \varepsilon_{\ell_{j}})\) as a symmetric bilinear form over the sensor library. 
Any ordered subset \(\mathcal{S}=\{\ell_{1},\dots,\ell_{K}\}\subset\mathcal{L}\) of sensors can then form a (linear and continuous) _observation operator_ through \[L:=[\ell_{1},\dots,\ell_{K}]^{T}:\mathcal{X}\to\mathbb{R}^{K},\qquad Lx:=[\ell_{1}(x),\dots,\ell_{K}(x)]^{T}\,.\] The experimental measurements of \(L\) have the form \[\mathbf{d}=[\ell_{1}(x_{\text{true}})+\varepsilon_{\ell_{1}},\ldots,\ell_{K}(x_{\text{true}})+\varepsilon_{\ell_{K}}]^{T}=Lx_{\text{true}}+\varepsilon\qquad\text{with}\qquad\varepsilon=[\varepsilon_{\ell_{1}},\ldots,\varepsilon_{\ell_{K}}]^{T}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\Sigma_{L}), \tag{3}\] where \(\sigma^{2}\Sigma_{L}\) is the noise covariance matrix defined through \[\Sigma_{L}\in\mathbb{R}^{K\times K},\qquad\text{such that}\qquad\left[\sigma^{2}\Sigma_{L}\right]_{i,j}:=\mathbf{cov}(\ell_{j},\ell_{i})=\mathbf{cov}(\varepsilon_{\ell_{j}},\varepsilon_{\ell_{i}}) \tag{4}\] with an auxiliary scaling parameter3\(\sigma^{2}>0\). We assume that the library \(\mathcal{L}\) and the noise covariance function \(\mathbf{cov}\) have been chosen such that \(\Sigma_{L}\) is s.p.d. for any combination of sensors in \(\mathcal{L}\). This assumption gives rise to the \(L\)-dependent inner product and its induced norm Footnote 3: We introduce \(\sigma^{2}\) here as an additional variable to ease the discussion of scaling in Section 13. However, we can set \(\sigma^{2}=1\) without loss of generality (w.l.o.g.). \[\left\langle\mathbf{d},\tilde{\mathbf{d}}\right\rangle_{\Sigma_{L}^{-1}}:=\mathbf{d}^{T}\Sigma_{L}^{-1}\tilde{\mathbf{d}},\qquad\qquad\|\mathbf{d}\|_{\Sigma_{L}^{-1}}^{2}:=\left\langle\mathbf{d},\mathbf{d}\right\rangle_{\Sigma_{L}^{-1}},\qquad\qquad\forall\ \mathbf{d},\tilde{\mathbf{d}}\in\mathbb{R}^{K}. \tag{5}\] Measured with respect to this norm, the largest observation of any (normalized) state is thus \[\gamma_{L}:=\sup_{\|x\|_{\mathcal{X}}=1}\|Lx\|_{\Sigma_{L}^{-1}}=\sup_{x\in\mathcal{X}}\frac{\|Lx\|_{\Sigma_{L}^{-1}}}{\|x\|_{\mathcal{X}}}. \tag{6}\] We show in Section 4.1 that \(\gamma_{L}\) increases under expansion of \(L\) with additional sensors despite the change in norm, and is therefore bounded by \(\gamma_{L}\leq\gamma_{\mathcal{L}}\). We also define the _parameter-to-observable map_ \[G_{L,\theta}:\mathbb{R}^{M}\to\mathbb{R}^{K},\quad\text{such that}\quad G_{L,\theta}\left(\mathbf{u}\right):=Lx_{\theta}(\mathbf{u}). \tag{7}\] With the assumptions above - in particular the linearity and uniform continuity (1) of \(x\) in \(\mathbf{u}\) - the map \(G_{L,\theta}\) is linear and uniformly bounded in \(\mathbf{u}\). We let \(\mathbf{G}_{L,\theta}\in\mathbb{R}^{K\times M}\) denote its matrix representation with respect to the unit basis \(\{\mathbf{e}_{m}\}_{m=1}^{M}\). The likelihood of \(\mathbf{d}\in\mathbb{R}^{K}\) obtained through the observation operator \(L\) for the parameter \(\mathbf{u}\in\mathbb{R}^{M}\) and the system configuration \(\theta\) is then \[\Phi_{L}\left(\mathbf{d}\ \big{|}\ \mathbf{u},\theta\right):=\frac{1}{\sqrt{(2\pi\sigma^{2})^{K}\det\Sigma_{L}}}\exp\left(-\frac{1}{2\sigma^{2}}\left\|\mathbf{d}-G_{L,\theta}\left(\mathbf{u}\right)\right\|_{\Sigma_{L}^{-1}}^{2}\right).\] Note that \(G_{L,\theta}\) and \(\mathbf{G}_{L,\theta}\) may depend non-linearly on \(\theta\).
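To make these objects concrete, the following minimal Python sketch assembles an observation operator, a correlated noise covariance \(\Sigma_{L}\) from a hypothetical squared-exponential covariance function between sensor locations, and the parameter-to-observable matrix \(\mathbf{G}_{L,\theta}\) for an assumed linear toy forward map. It is an illustrative stand-in, not the paper's implementation; all names and numerical values are assumptions.

```python
# Toy sketch of Section 2's quantities: states live in R^N, sensors are local
# averages, the forward map is linear in u for each configuration theta, and
# sensor noise is correlated through an assumed squared-exponential kernel.
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 4                                  # state and parameter dimensions
grid = np.linspace(0.0, 1.0, N)                # "spatial" grid for the toy state

def forward_matrix(theta):
    """Toy linear map u -> x_theta(u) = A(theta) @ u (stands in for the PDE solve)."""
    return np.column_stack([np.sin((m + 1) * np.pi * theta * grid) for m in range(M)])

def sensor_matrix(locations):
    """Each sensor l_k(x) is a smoothed point evaluation of x near a location."""
    W = np.exp(-0.5 * ((grid[None, :] - np.asarray(locations)[:, None]) / 0.02) ** 2)
    return W / W.sum(axis=1, keepdims=True)    # rows are the functionals l_k

def noise_cov(locations, corr_len=0.1, sigma2=1.0):
    """Hypothetical covariance function cov(l_i, l_j) between sensor noises."""
    loc = np.asarray(locations)
    K = np.exp(-0.5 * ((loc[:, None] - loc[None, :]) / corr_len) ** 2)
    return sigma2 * K + 1e-8 * np.eye(len(loc))   # jitter keeps Sigma_L s.p.d.

locations = [0.1, 0.35, 0.6, 0.85]
L = sensor_matrix(locations)                   # observation operator L (K x N)
Sigma_L = noise_cov(locations)                 # noise covariance Sigma_L (K x K)
C_L = np.linalg.cholesky(Sigma_L)              # Sigma_L = C_L C_L^T

theta = 0.7
G = L @ forward_matrix(theta)                  # parameter-to-observable matrix G_{L,theta}
u = rng.standard_normal(M)
d = G @ u                                      # noiseless observation L x_theta(u)
norm_d = np.linalg.norm(np.linalg.solve(C_L, d))   # ||d||_{Sigma_L^{-1}} via Cholesky
print(G.shape, norm_d)
```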
_Posterior distribution_ Once noisy measurement data \(\mathbf{d}\approx Lx_{\text{true}}(\theta)\) is available, Bayes' theorem yields the posterior pdf as \[\pi_{\text{post}}^{L,\theta}(\mathbf{u}\,|\,\mathbf{d})=\frac{1}{Z(\theta)} \exp\left(-\frac{1}{2\sigma^{2}}\left\|G_{L,\theta}\left(\mathbf{u}\right)- \mathbf{d}\right\|_{\Sigma_{L}^{-1}}^{2}-\frac{1}{2}\|\mathbf{u}-\mathbf{u}_{ \text{pr}}\|_{\Sigma_{\text{pr}}^{-1}}^{2}\right)\propto\pi_{\text{pr}}( \mathbf{u})\cdot\Phi_{L}\left(\mathbf{d}\ \big{|}\ \mathbf{u},\theta\right), \tag{8}\] with normalization constant \[Z(\theta):=\int_{\mathbb{R}^{p}}\exp\left(-\frac{1}{2\sigma^{2}}\left\|G_{L,\theta }\left(\mathbf{u}\right)-\mathbf{d}\right\|_{\Sigma_{L}^{-1}}^{2}\right)\,d\mu_{ \mathrm{pr}}.\] Due to the linearity of the parameter-to-observable map, the posterior measure \(\mu_{\mathrm{post}}^{L,\theta}\) is a Gaussian \[\mu_{\mathrm{post}}^{L,\theta}=\mathcal{N}(\mathbf{u}_{\mathrm{post}}^{L, \theta}(\mathbf{d}),\Sigma_{\mathrm{post}}^{L,\theta})\] with known (c.f. [1]) mean and covariance matrix \[\mathbf{u}_{\mathrm{post}}^{L,\theta}(\mathbf{d})=\Sigma_{ \mathrm{post}}^{L,\theta}\left(\frac{1}{\sigma^{2}}\mathbf{G}_{L,\theta}^{T} \Sigma_{L}^{-1}\mathbf{d}+\Sigma_{\mathrm{pr}}^{-1}\mathbf{u}_{\mathrm{pr}}\right) \in\mathbb{R}^{M}, \tag{9}\] \[\Sigma_{\mathrm{post}}^{L,\theta}=\left(\frac{1}{\sigma^{2}} \mathbf{G}_{L,\theta}^{T}\Sigma_{L}^{-1}\mathbf{G}_{L,\theta}+\Sigma_{ \mathrm{pr}}^{-1}\right)^{-1} \in\mathbb{R}^{M\times M}. \tag{10}\] The posterior \(\mu_{\mathrm{post}}^{L,\theta}\) thus depends not only on the choice of sensors, but also on the configuration \(\theta\) under which their measurements were obtained. Therefore, to decrease the uncertainty in all possible posteriors with a single, \(\theta\)-independent observation operator \(L\), the construction of \(L\) should account for all admissible configurations \(\theta\in\mathcal{P}\) under which \(x_{\mathrm{true}}\) may be observed. **Remark 3**.: _The linearity of \(x_{\theta}(\mathbf{u})\) in \(\mathbf{u}\) is a strong assumption that dictates the Gaussian posterior. However, in combination with the hyper-parameter \(\theta\), our setting here can be re-interpreted as the Laplace-approximation for a non-linear state map \(\theta\mapsto x(\theta)\) (c.f. [39; 21; 40]). The sensor selection presented here is then an intermediary step for OED over non-linear forward models._ ## 3 The Observability Coefficient In this section, we characterize how the choice of sensors in the observation operator \(L\) and its associated noise covariance matrix \(\Sigma_{L}\) influence the uncertainty in the posteriors \(\mu_{\mathrm{post}}^{L,\theta}\), \(\theta\in\mathcal{P}\). We identify an observability coefficient that bounds the eigenvalues of the posterior covariance matrices \(\Sigma_{\mathrm{post}}^{L,\theta}\), \(\theta\in\mathcal{P}\) with respect to \(L\), and facilitates the sensor selection algorithm presented in Section 4. ### Eigenvalues of the Posterior Covariance Matrix The uncertainty in the posterior \(\pi_{\mathrm{post}}^{L,\theta}\) for any configuration \(\theta\in\mathcal{P}\) is uniquely characterized by the posterior covariance matrix \(\Sigma_{\mathrm{post}}^{L,\theta}\), which is in turn connected to the observation operator \(L\) through the parameter-to-observable map \(G_{L,\theta}\) and the noise covariance matrix \(\Sigma_{L}\). 
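Before quantifying this connection, a minimal numerical sketch of the Gaussian update (9)-(10) may help: the snippet below computes the posterior mean and covariance for one configuration from random stand-in matrices. Sizes, values, and variable names are assumptions of this sketch, not quantities from the paper's application.

```python
# Minimal sketch of the Gaussian posterior update in eqs. (9)-(10) with random
# stand-ins for G_{L,theta}, Sigma_L, Sigma_pr and the data d.
import numpy as np

rng = np.random.default_rng(1)
K, M, sigma2 = 6, 4, 0.5

G = rng.standard_normal((K, M))                  # parameter-to-observable matrix G_{L,theta}
A = rng.standard_normal((K, K))
Sigma_L = A @ A.T + K * np.eye(K)                # s.p.d. noise covariance Sigma_L
Sigma_pr = np.diag(rng.uniform(0.5, 2.0, M))     # prior covariance Sigma_pr
u_pr = np.zeros(M)                               # prior mean (assumed zero)

u_true = rng.standard_normal(M)
d = G @ u_true + np.linalg.cholesky(sigma2 * Sigma_L) @ rng.standard_normal(K)

H = G.T @ np.linalg.solve(Sigma_L, G) / sigma2 + np.linalg.inv(Sigma_pr)  # posterior precision
Sigma_post = np.linalg.inv(H)                                              # eq. (10)
u_post = Sigma_post @ (G.T @ np.linalg.solve(Sigma_L, d) / sigma2
                       + np.linalg.solve(Sigma_pr, u_pr))                  # eq. (9)
print(u_post, np.linalg.eigvalsh(Sigma_post))
```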
To measure the uncertainty in \(\Sigma_{\mathrm{post}}^{L,\theta}\), the OED literature suggests a variety of different utility functions to be minimized over \(L\) in order to optimize the sensor choice. Many of these utility functions can be expressed in terms of the eigenvalues \(\lambda_{L}^{\theta,1}\geq\cdots\geq\lambda_{L}^{\theta,M}>0\) of \(\Sigma_{\text{post}}^{L,\theta}\), e.g., A-OED: \[\text{trace}(\Sigma_{\text{post}}^{L,\theta})=\sum_{m=1}^{M}\lambda_{L}^{\theta,m}\qquad\text{(mean variance)}\] D-OED: \[\det(\Sigma_{\text{post}}^{L,\theta})=\prod_{m=1}^{M}\lambda_{L}^{\theta,m}\qquad\text{(volume)}\] E-OED: \[\lambda_{\text{max}}(\Sigma_{\text{post}}^{L,\theta})=\lambda_{L}^{\theta,1}\qquad\text{(spectral radius)}.\] In practice, the choice of the utility function is dictated by the application. In E-optimal experimental design (E-OED), for instance, posteriors whose uncertainty ellipsoids stretch out into any one direction are avoided, whereas D-OED minimizes the overall volume of the uncertainty ellipsoid regardless of the uncertainty in any one parameter direction. We refer to [3] for a detailed introduction and other OED criteria. Considering the hyper-parameterized setting where each configuration \(\theta\) influences the posterior uncertainty, we seek to choose a single observation operator \(L\) such that the selected utility function remains small for _all_ configurations \(\theta\in\mathcal{P}\), e.g., for E-OED, minimizing \[\min_{\ell_{1},\ldots,\ell_{K}\in\mathcal{L}}\ \max_{\theta\in\mathcal{P}}\ \lambda_{\text{max}}(\Sigma_{\text{post}}^{L,\theta})\quad\text{such that}\quad L=[\ell_{1},\ldots,\ell_{K}]^{T}\] guarantees that the longest axis of each posterior covariance matrix \(\Sigma_{\text{post}}^{L,\theta}\) for any \(\theta\in\mathcal{P}\) has the same guaranteed upper bound. The difficulty here is that the minimization over \(\mathcal{P}\) necessitates repeated, cost-intensive model evaluations to compute the utility function for many different configurations \(\theta\). In the following, we therefore introduce an upper bound to the posterior eigenvalues that can be optimized through an observability criterion with far fewer model solves. The bound's optimization indirectly reduces the different utility functions through the posterior eigenvalues. Recalling that \(\Sigma_{\text{post}}^{L,\theta}\) is s.p.d., let \(\{\psi_{m}\}_{m=1}^{M}\) be an orthonormal eigenvector basis of \(\Sigma_{\text{post}}^{L,\theta}\), i.e. \(\psi_{m}^{T}\psi_{n}=\delta_{m,n}\) and \[\Sigma_{\text{post}}^{L,\theta}\psi_{m}=\lambda_{L}^{\theta,m}\psi_{m},\qquad m=1,\ldots,M. \tag{11}\] Using the representation (10), any eigenvalue \(\lambda_{L}^{\theta,m}\) can be written in the form \[\frac{1}{\lambda_{L}^{\theta,m}}=\psi_{m}^{T}\left[\Sigma_{\text{post}}^{L,\theta}\right]^{-1}\psi_{m}=\psi_{m}^{T}\left[\frac{1}{\sigma^{2}}\mathbf{G}_{L,\theta}^{T}\Sigma_{L}^{-1}\mathbf{G}_{L,\theta}+\Sigma_{\text{pr}}^{-1}\right]\psi_{m}=\frac{1}{\sigma^{2}}\left\|G_{L,\theta}\left(\psi_{m}\right)\right\|_{\Sigma_{L}^{-1}}^{2}+\left\|\psi_{m}\right\|_{\Sigma_{\text{pr}}^{-1}}^{2}. \tag{12}\] Since \(\psi_{m}\) depends implicitly on \(L\) and \(\theta\) through (11), we cannot use this representation directly to optimize over \(L\). To take out the dependency on \(\psi_{m}\), we bound \(\left\|\psi_{m}\right\|_{\Sigma_{\text{pr}}^{-1}}^{2}\geq\frac{1}{\lambda_{\text{pr}}^{\max}}\) in terms of the maximum eigenvalue \(\lambda_{\text{pr}}^{\max}\) of the prior covariance matrix \(\Sigma_{\text{pr}}\).
Likewise, we define \[\beta_{G}(\theta):=\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\left\|G_{L,\theta}\left(\mathbf{u}\right)\right\|_{\Sigma_{L}^{-1}}}{\left\|\mathbf{u}\right\|_{\Sigma_{\text{pr}}^{-1}}}=\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\left\|Lx_{\theta}(\mathbf{u})\right\|_{\Sigma_{L}^{-1}}}{\left\|\mathbf{u}\right\|_{\Sigma_{\text{pr}}^{-1}}}, \tag{13}\] as the minimum ratio of the observation for a parameter \(\mathbf{u}\) relative to the prior's covariance norm. From (12) and (13) we obtain the upper bound \[\lambda_{L}^{\theta,m}=\left(\frac{1}{\sigma^{2}}\frac{\left\|G_{L,\theta}\left(\psi_{m}\right)\right\|_{\Sigma_{L}^{-1}}^{2}}{\left\|\psi_{m}\right\|_{\Sigma_{\text{pr}}^{-1}}^{2}}+1\right)^{-1}\left\|\psi_{m}\right\|_{\Sigma_{\text{pr}}^{-1}}^{-2}\leq\left(\frac{1}{\sigma^{2}}\beta_{G}(\theta)^{2}+1\right)^{-1}\lambda_{\text{pr}}^{\max}.\] Geometrically, this bound means that the radius \(\lambda_{L}^{\theta,1}\) of the outer ball around the posterior uncertainty ellipsoid is smaller than that of the prior uncertainty ellipsoid by at least the factor \(\left(\frac{1}{\sigma^{2}}\beta_{G}(\theta)^{2}+1\right)^{-1}\). By choosing \(L\) to maximize \(\min_{\theta}\beta_{G}(\theta)\), we therefore minimize this outer ball containing all uncertainty ellipsoids (i.e., for any \(\theta\in\mathcal{P}\)). As expected, the influence of \(L\) is strongest when the measurement noise is small such that data can be trusted (\(\sigma^{2}\ll 1\)), and diminishes with increasing noise levels (\(\sigma^{2}\gg 1\)). ### Parameter Restriction An essential property of \(\beta_{G}(\theta)\) is that \(\beta_{G}(\theta)=0\) if \(K<M\), i.e., the number of sensors in \(L\) is smaller than the number of parameter dimensions. In this case, \(\beta_{G}(\theta)\) cannot distinguish between sensors during the first \(M-1\) steps of an iterative algorithm, or in general when less than a total of \(M\) sensors are supposed to be chosen. For medium-dimensional parameter spaces (\(M\in\mathcal{O}(10)\)), we mitigate this issue by restricting \(\mathbf{u}\) to the subspace \(\text{span}\{\varphi_{1},\ldots,\varphi_{\min\{K,M\}}\}\subset\mathbb{R}^{M}\) spanned by the first \(\min\{K,M\}\) eigenvectors of \(\Sigma_{\text{pr}}\) corresponding to its largest eigenvalues, i.e., the subspace with the largest prior uncertainty.
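A quick numerical sanity check of the eigenvalue bound above (and of the A-, D-, and E-OED utilities it controls) can be done with random stand-in matrices. The whitening by Cholesky factors of \(\Sigma_{L}\) and \(\Sigma_{\mathrm{pr}}\) used below is an assumption of this sketch, chosen because it turns \(\beta_{G}(\theta)\) into a smallest singular value; it is not a step prescribed by the paper.

```python
# Sanity check: beta_G(theta) via whitened SVD, posterior eigenvalues, OED
# utilities, and the bound lambda_max(Sigma_post) <= lambda_pr_max/(beta_G^2/sigma^2+1).
import numpy as np

rng = np.random.default_rng(3)
K, M, sigma2 = 8, 4, 0.3
G = rng.standard_normal((K, M))                              # stand-in for G_{L,theta}
A = rng.standard_normal((K, K)); Sigma_L = A @ A.T + K * np.eye(K)
B = rng.standard_normal((M, M)); Sigma_pr = B @ B.T + M * np.eye(M)

C_L = np.linalg.cholesky(Sigma_L)                            # Sigma_L  = C_L  C_L^T
L_pr = np.linalg.cholesky(Sigma_pr)                          # Sigma_pr = L_pr L_pr^T
# beta_G(theta) = inf_u ||G u||_{Sigma_L^{-1}} / ||u||_{Sigma_pr^{-1}}
beta_G = np.linalg.svd(np.linalg.solve(C_L, G @ L_pr), compute_uv=False).min()

Sigma_post = np.linalg.inv(G.T @ np.linalg.solve(Sigma_L, G) / sigma2
                           + np.linalg.inv(Sigma_pr))        # eq. (10)
lam = np.linalg.eigvalsh(Sigma_post)
utilities = {"A-OED": lam.sum(), "D-OED": lam.prod(), "E-OED": lam.max()}

bound = np.linalg.eigvalsh(Sigma_pr).max() / (beta_G**2 / sigma2 + 1.0)
print(utilities, lam.max() <= bound + 1e-12)
```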
For high-dimensional parameter spaces or when the model \(\mathcal{M}_{\theta}\) has a non-trivial null-space, we bound \(\beta_{G}(\theta)\) further \[\beta_{G}(\theta)=\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\left\|Lx_{\theta}(\mathbf{u})\right\|_{\Sigma_{L}^{-1}}}{\left\|x_{\theta}(\mathbf{u})\right\|_{\mathcal{X}}}\frac{\left\|x_{\theta}(\mathbf{u})\right\|_{\mathcal{X}}}{\left\|\mathbf{u}\right\|_{\Sigma_{\mathrm{pr}}^{-1}}}\geq\inf_{x\in\mathcal{W}_{\theta}}\frac{\left\|Lx\right\|_{\Sigma_{L}^{-1}}}{\left\|x\right\|_{\mathcal{X}}}\ \inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\left\|x_{\theta}(\mathbf{u})\right\|_{\mathcal{X}}}{\left\|\mathbf{u}\right\|_{\Sigma_{\mathrm{pr}}^{-1}}}=\beta_{L\mathcal{W}}(\theta)\ \underline{\eta}(\theta) \tag{14}\] where we define the linear space \(\mathcal{W}_{\theta}\) of all achievable states \[\mathcal{W}_{\theta}:=\{x_{\theta}(\mathbf{u})\in\mathcal{X}:\ \mathbf{u}\in\mathbb{R}^{M}\}\] and the coefficients \[\beta_{L\mathcal{W}}(\theta):=\inf_{x\in\mathcal{W}_{\theta}}\frac{\left\|Lx\right\|_{\Sigma_{L}^{-1}}}{\left\|x\right\|_{\mathcal{X}}},\qquad\qquad\underline{\eta}(\theta):=\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\left\|x_{\theta}(\mathbf{u})\right\|_{\mathcal{X}}}{\left\|\mathbf{u}\right\|_{\Sigma_{\mathrm{pr}}^{-1}}}. \tag{15}\] The value of \(\underline{\eta}(\theta)\) describes the minimal state change that a parameter \(\mathbf{u}\) can achieve relative to its prior-induced norm \(\left\|\mathbf{u}\right\|_{\Sigma_{\mathrm{pr}}^{-1}}\). It can filter out parameter directions that have little influence on the states \(x_{\theta}(\mathbf{u})\). In contrast, the observability coefficient \(\beta_{L\mathcal{W}}(\theta)\) depends on the prior only implicitly via \(\mathcal{W}_{\theta}\); it quantifies the minimum amount of information (measured with respect to the noise model) that can be obtained on any state in \(\mathcal{W}_{\theta}\) relative to its norm. Future work will investigate how to optimally restrict the parameter space based on \(\underline{\eta}(\theta)\) before choosing sensors that maximize \(\beta_{L\mathcal{W}}(\theta)\). Existing parameter reduction approaches in a similar context include [28; 41; 42; 27]. In this work, however, we solely focus on the maximization of \(\beta_{G}(\theta)\) and, by extension, \(\beta_{L\mathcal{W}}(\theta)\), and henceforth assume that \(M\) is sufficiently small and \(\underline{\eta}:=\inf_{\theta\in\mathcal{P}}\underline{\eta}(\theta)>0\) is bounded away from zero. ### Observability under model approximations To optimize the observability coefficient \(\beta_{G}(\theta)\) or \(\beta_{L\mathcal{W}}(\theta)\), it must be computed for many different configurations \(\theta\in\mathcal{P}\). The accumulating computational cost motivates the use of _reduced-order_ surrogate models, which typically yield considerable computational savings versus the original _full-order_ model. However, this leads to errors in the state approximation. In the following, we thus quantify the influence of state approximation error on the observability coefficients \(\beta_{G}(\theta)\) and \(\beta_{L\mathcal{W}}(\theta)\). An analysis of the change in posterior distributions when the entire model \(\mathcal{M}_{\theta}\) is substituted in the inverse problem can be found in [1].
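One common way to obtain such a surrogate is a projection-based reduced-order model. The toy sketch below builds a POD basis from snapshots of an assumed linear forward map and measures the relative state error that the accuracy requirement introduced next (eq. (16)) asks for; it is an illustration under stated assumptions, not the ROM used for the Perth Basin model.

```python
# Toy POD surrogate: project full-order states onto a basis V built from
# snapshots, and report the relative state error (the eps_theta of eq. (16)).
import numpy as np

rng = np.random.default_rng(7)
N, M = 300, 4

def x_theta(u, theta):                       # toy full-order map (stand-in for the PDE solve)
    A = np.column_stack([np.cos((m + 1) * theta * np.linspace(0, 1, N)) for m in range(M)])
    return A @ u

# snapshots over a coarse sample of configurations and parameters (assumed ranges)
snapshots = np.column_stack([x_theta(rng.standard_normal(M), th)
                             for th in np.linspace(0.5, 1.5, 20) for _ in range(3)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :8]                                 # POD basis of (assumed) dimension 8

def x_tilde(u, theta):
    """Projection of the full-order state onto the POD basis.
    (A genuine ROM would avoid the full solve; this is only for error checking.)"""
    x = x_theta(u, theta)
    return V @ (V.T @ x)

u, th = rng.standard_normal(M), 1.1
x, xr = x_theta(u, th), x_tilde(u, th)
eps_theta = np.linalg.norm(x - xr) / np.linalg.norm(x)   # relative error in (16)
print(eps_theta)
```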
Suppose a reduced-order surrogate model \(\tilde{\mathcal{M}}_{\theta}(\tilde{x}_{\theta}(\mathbf{u});\mathbf{u})=0\) is available that yields for any configuration \(\theta\in\mathcal{P}\) and parameter \(\mathbf{u}\in\mathbb{R}^{M}\) a unique solution \(\tilde{x}_{\theta}(\mathbf{u})\in\mathcal{X}\) such that \[\|x_{\theta}(\mathbf{u})-\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}\leq \varepsilon_{\theta}\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}\quad\text{ with accuracy}\quad 0\leq\varepsilon_{\theta}\leq\varepsilon<1. \tag{16}\] Analogously to (13) and (15), we define the reduced-order observability coefficients \[\tilde{\beta}_{G}(\theta):=\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\|L\tilde{ x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{u}\|_{\Sigma_{\mathcal{P}}^{-1 }}},\qquad\qquad\tilde{\beta}_{L\mathcal{W}}(\theta):=\inf_{\mathbf{u}\in \mathbb{R}^{M}}\frac{\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{ \|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}} \tag{17}\] to quantify the smallest observations of the surrogate states. For many applications, it is possible to choose a reduced-order model whose solution can be computed at a significantly reduced cost such that \(\tilde{\beta}_{G}(\theta)\) and \(\tilde{\beta}_{L\mathcal{W}}(\theta)\) are much cheaper to compute than their full-order counterparts \(\beta_{G}(\theta)\) and \(\beta_{L\mathcal{W}}(\theta)\). Since the construction of such a surrogate model depends strongly on the application itself, we refer to the literature (e.g., [43; 44; 45; 46; 47]) for tangible approaches. Recalling the definition of \(\gamma_{L}\) in (6), we start by bounding how closely the surrogate observability coefficient \(\tilde{\beta}_{L\mathcal{W}}(\theta)\) approximates the full-order \(\beta_{L\mathcal{W}}(\theta)\). **Proposition 1**.: _Let \(\eta(\theta)>0\) hold, and let \(\tilde{x}_{\theta}(\mathbf{u})\in\mathcal{X}\) be an approximation to \(x_{\theta}(\mathbf{u})\) such that (16) holds for all \(\theta\in\mathcal{P}\), \(\mathbf{u}\in\mathbb{R}^{M}\). Then_ \[(1-\varepsilon_{\theta})\,\tilde{\beta}_{L\mathcal{W}}(\theta)-\gamma_{L} \varepsilon_{\theta}\;\leq\;\beta_{L\mathcal{W}}(\theta)\;\leq\;(1+\varepsilon _{\theta})\,\tilde{\beta}_{L\mathcal{W}}(\theta)+\gamma_{L}\varepsilon_{ \theta}. \tag{18}\] Proof.: Let \(\mathbf{u}\in\mathbb{R}^{M}\setminus\{\mathbf{0}\}\) be arbitrary. Using (16) and the (reversed) triangle inequality, we obtain the bound \[\frac{\|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}{\|x_{\theta}(\mathbf{u })\|_{\mathcal{X}}}\geq\frac{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}-\|x_{ \theta}(\mathbf{u})-\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}{\|x_{ \theta}(\mathbf{u})\|_{\mathcal{X}}}\geq 1-\varepsilon_{\theta}. \tag{19}\] Note here that \(\eta(\theta)>0\) implies \(\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}>0\) so the quotient is indeed well defined. 
The ratio of observation to state can now be bounded from below by \[\frac{\|Lx_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}}\geq\frac{\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}}-\frac{\|L(x_{\theta}(\mathbf{u})-\tilde{x}_{\theta}(\mathbf{u}))\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}}\] \[\geq\frac{\|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}}\frac{\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}-\gamma_{L}\frac{\|x_{\theta}(\mathbf{u})-\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}{\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}}\] \[\geq(1-\varepsilon_{\theta})\frac{\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}-\gamma_{L}\varepsilon_{\theta}\] \[\geq(1-\varepsilon_{\theta})\tilde{\beta}_{L\mathcal{W}}(\theta)-\gamma_{L}\varepsilon_{\theta},\] where we have applied the reverse triangle inequality, definition (6), the bounds (16), (19), and definition (17) of \(\tilde{\beta}_{L\mathcal{W}}(\theta)\). Since \(\mathbf{u}\) is arbitrary, the lower bound in (18) follows from definition (15) of \(\beta_{L\mathcal{W}}(\theta)\). The upper bound in (18) follows analogously. For the observability of the parameter-to-observable map \(G_{L,\theta}\) and its approximation \(\mathbf{u}\mapsto L\tilde{x}_{\theta}(\mathbf{u})\), we obtain a similar bound. It uses the norm \(\overline{\eta}(\theta)\) of \(x_{\theta}:\mathbf{u}\mapsto x_{\theta}(\mathbf{u})\) as a map from the parameter to the state space, see (1). **Proposition 2**.: _Let \(\tilde{x}_{\theta}(\mathbf{u})\in\mathcal{X}\) be an approximation to \(x_{\theta}(\mathbf{u})\) such that (16) holds for all \(\theta\in\mathcal{P}\), \(\mathbf{u}\in\mathbb{R}^{M}\). Then_ \[\tilde{\beta}_{G}(\theta)-\gamma_{L}\overline{\eta}(\theta)\varepsilon_{\theta}\leq\beta_{G}(\theta)\leq\tilde{\beta}_{G}(\theta)+\gamma_{L}\overline{\eta}(\theta)\varepsilon_{\theta}. \tag{20}\] Proof.: Let \(\mathbf{u}\in\mathbb{R}^{M}\setminus\{\mathbf{0}\}\) be arbitrary. Then \[\|Lx_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}\geq\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}-\|L(x_{\theta}(\mathbf{u})-\tilde{x}_{\theta}(\mathbf{u}))\|_{\Sigma_{L}^{-1}}\] \[\geq\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}-\gamma_{L}\,\|x_{\theta}(\mathbf{u})-\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}\] \[\geq\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}-\gamma_{L}\varepsilon_{\theta}\,\|x_{\theta}(\mathbf{u})\|_{\mathcal{X}}\] \[\geq\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}-\gamma_{L}\varepsilon_{\theta}\overline{\eta}(\theta)\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}},\] where we have used the reverse triangle inequality, followed by (6), (16), and (1). We divide by \(\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}\) and take the infimum over \(\mathbf{u}\) to obtain \[\beta_{G}(\theta)=\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\|Lx_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}}\geq\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}}-\gamma_{L}\,\overline{\eta}(\theta)\,\varepsilon_{\theta}=\tilde{\beta}_{G}(\theta)-\gamma_{L}\,\overline{\eta}(\theta)\,\varepsilon_{\theta}.\] The upper bound in (20) follows analogously.
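The bound of Proposition 2 is easy to verify numerically on a toy model: in the sketch below the surrogate states are constructed as \(\tilde{x}=(I+\varepsilon P)x\) with \(\|P\|_{2}\leq 1\), so assumption (16) holds with \(\varepsilon_{\theta}=\varepsilon\) by construction. All matrices are random stand-ins chosen for this check, not model quantities.

```python
# Numerical check of Proposition 2: |beta_G - beta_G_tilde| <= gamma_L * eta_bar * eps.
import numpy as np

rng = np.random.default_rng(4)
N, K, M, eps = 60, 7, 4, 1e-2
A = rng.standard_normal((N, M))                    # toy full-order map u -> x_theta(u)
L = rng.standard_normal((K, N))                    # observation operator
S = rng.standard_normal((K, K)); Sigma_L = S @ S.T + K * np.eye(K)
B = rng.standard_normal((M, M)); Sigma_pr = B @ B.T + M * np.eye(M)
C_L, L_pr = np.linalg.cholesky(Sigma_L), np.linalg.cholesky(Sigma_pr)

P = rng.standard_normal((N, N)); P /= np.linalg.svd(P, compute_uv=False).max()
A_tilde = (np.eye(N) + eps * P) @ A                # surrogate map, relative state error <= eps

def beta_G(F):
    """inf_u ||L F u||_{Sigma_L^{-1}} / ||u||_{Sigma_pr^{-1}} via whitened SVD."""
    return np.linalg.svd(np.linalg.solve(C_L, L @ F @ L_pr), compute_uv=False).min()

gamma_L = np.linalg.svd(np.linalg.solve(C_L, L), compute_uv=False).max()   # eq. (6)
eta_bar = np.linalg.svd(A @ L_pr, compute_uv=False).max()                  # eq. (1)
lhs, rhs = abs(beta_G(A) - beta_G(A_tilde)), gamma_L * eta_bar * eps
print(lhs <= rhs, lhs, rhs)
```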
If \(\varepsilon_{\theta}\) is sufficiently small, Propositions 1 and 2 justify employing the surrogates \(\tilde{\beta}_{L\mathcal{W}}(\theta)\) and \(\tilde{\beta}_{G}(\theta)\) instead of the original full-order observability coefficients \(\beta_{L\mathcal{W}}(\theta)\) and \(\beta_{G}(\theta)\). This substitution becomes especially necessary when the computation of \(x_{\theta}(\mathbf{u})\) is too expensive to evaluate \(\beta_{L\mathcal{W}}(\theta)\) or \(\beta_{G}(\theta)\) repeatedly for different configurations \(\theta\). Another approximation step in our sensor selection algorithm relies on the identification of a parameter direction \(\mathbf{v}\in\mathbb{R}^{M}\) with comparatively small observability, i.e. \[\frac{\|Lx_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{v}\|_{\Sigma_{\mathrm{pr}}^{-1}}}\approx\inf_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\|Lx_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}}=\beta_{G}(\theta)\qquad\text{or}\qquad\frac{\|Lx_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{v})\|_{\mathcal{X}}}\approx\inf_{x\in\mathcal{W}_{\theta}}\frac{\|Lx\|_{\Sigma_{L}^{-1}}}{\|x\|_{\mathcal{X}}}=\beta_{L\mathcal{W}}(\theta).\] The ideal choice would be the infimizer of respectively \(\beta_{G}(\theta)\) or \(\beta_{L\mathcal{W}}(\theta)\), but its computation involves \(M\) full-order model evaluations (c.f. Section 4.2). To avoid these costly computations, we instead choose \(\mathbf{v}\) as the infimizer of the respective reduced-order observability coefficient. This choice is justified for small \(\varepsilon_{\theta}<1\) by the following proposition: **Proposition 3**.: _Let \(\underline{\eta}(\theta)>0\) hold, and let \(\tilde{x}_{\theta}(\mathbf{u})\in\mathcal{X}\) be an approximation to \(x_{\theta}(\mathbf{u})\) such that (16) holds for all \(\theta\in\mathcal{P}\), \(\mathbf{u}\in\mathbb{R}^{M}\). Suppose \(\mathbf{v}\in\arg\inf_{\mathbf{u}\in\mathbb{R}^{M}}\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}^{-1}\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}\), then_ \[\beta_{G}(\theta)\leq\frac{\|Lx_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{v}\|_{\Sigma_{\mathrm{pr}}^{-1}}}\leq\beta_{G}(\theta)+2\gamma_{L}\overline{\eta}(\theta)\varepsilon_{\theta}. \tag{21}\] _Likewise, if \(\mathbf{v}\in\arg\inf_{\mathbf{u}\in\mathbb{R}^{M}}\|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}^{-1}\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}\), then_ \[\beta_{L\mathcal{W}}(\theta)\leq\frac{\|Lx_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{v})\|_{\mathcal{X}}}\leq\frac{1+\varepsilon_{\theta}}{1-\varepsilon_{\theta}}\ \left(\beta_{L\mathcal{W}}(\theta)+\gamma_{L}\varepsilon_{\theta}\right)+\gamma_{L}\varepsilon_{\theta}. \tag{22}\] Proof.: For both (21) and (22) the lower bound follows directly from definitions (13) and (15). To prove the upper bound in (21), let \(\mathbf{v}\in\arg\inf_{\mathbf{u}\in\mathbb{R}^{M}}\|\mathbf{u}\|_{\Sigma_{\mathrm{pr}}^{-1}}^{-1}\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}\).
Following the same steps as in the proof of Proposition 2, we can then bound \[\frac{\|Lx_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{v}\|_{\Sigma_{\mathrm{pr}}^{-1}}}\leq\frac{\|L\tilde{x}_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|\mathbf{v}\|_{\Sigma_{\mathrm{pr}}^{-1}}}+\frac{\|L(x_{\theta}(\mathbf{v})-\tilde{x}_{\theta}(\mathbf{v}))\|_{\Sigma_{L}^{-1}}}{\|\mathbf{v}\|_{\Sigma_{\mathrm{pr}}^{-1}}}\leq\tilde{\beta}_{G}(\theta)+\gamma_{L}\overline{\eta}(\theta)\varepsilon_{\theta}.\] The upper bound in (21) then follows with Proposition 2. To prove the upper bound in (22), let \(\mathbf{v}\in\arg\inf_{\mathbf{u}\in\mathbb{R}^{M}}\|\tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}^{-1}\|L\tilde{x}_{\theta}(\mathbf{u})\|_{\Sigma_{L}^{-1}}\). Then \[\frac{\|Lx_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{v})\|_{\mathcal{X}}}\leq\frac{\|L\tilde{x}_{\theta}(\mathbf{v})\|_{\Sigma_{L}^{-1}}}{\|\tilde{x}_{\theta}(\mathbf{v})\|_{\mathcal{X}}}\frac{\|\tilde{x}_{\theta}(\mathbf{v})\|_{\mathcal{X}}}{\|x_{\theta}(\mathbf{v})\|_{\mathcal{X}}}+\frac{\|L(x_{\theta}(\mathbf{v})-\tilde{x}_{\theta}(\mathbf{v}))\|_{\Sigma_{L}^{-1}}}{\|x_{\theta}(\mathbf{v})\|_{\mathcal{X}}}\leq(1+\varepsilon_{\theta})\,\tilde{\beta}_{L\mathcal{W}}(\theta)+\gamma_{L}\varepsilon_{\theta}.\] The result then follows with Proposition 1. ## 4 Sensor selection In the following, we present a sensor selection algorithm that iteratively increases the minimal observability coefficient \(\min_{\theta\in\mathcal{P}}\beta_{G}(\theta)\) and thereby decreases the upper bound for the eigenvalues of the posterior covariance matrix for all admissible system configurations \(\theta\in\mathcal{P}\). The iterative approach is relatively easy to implement, allows a simple way of dealing with combinatorial restrictions, and can deal with large4 sensor libraries. Footnote 4: For instance, in Section 5.3 we apply the presented algorithm to a library with \(K_{\mathcal{L}}=11,045\) available sensor positions. ### Cholesky decomposition The covariance function \(\mathbf{cov}\) connects an observation operator \(L\) to its observability coefficients \(\beta_{G}(\theta)\) and \(\beta_{L\mathcal{W}}(\theta)\) through the noise covariance matrix \(\Sigma_{L}\). Its inverse enters the norm \(\|\cdot\|_{\Sigma_{L}^{-1}}\) and the posterior covariance matrix \(\Sigma_{\mathrm{post}}^{L,\theta}\). The inversion poses a challenge when the noise is correlated, i.e., when \(\Sigma_{L}\) is not diagonal, as even the expansion of \(L\) with a single sensor \(\ell\in\mathcal{L}\) changes each entry of \(\Sigma_{L}^{-1}\). In naive computations of the observability coefficients and the posterior covariance matrix, this leads to \(M\) dense linear system solves of order \(\mathcal{O}((K+1)^{3})\) each time the observation operator is expanded. In the following, we therefore expound on how \(\Sigma_{L}^{-1}\) changes under expansion of \(L\) to exploit its structure when comparing potential sensor choices.
Suppose \(L=[\ell_{1},\ldots,\ell_{K}]^{T}\) has already been chosen with sensors \(\ell_{k}\in\mathcal{X}^{\prime}\), but shall be expanded by another sensor \(\ell\) to \[[L,\ell]:=[\ell_{1},\ldots,\ell_{K},\ell]^{T}:\mathcal{X}\to\mathbb{R}^{K+1}.\] Following definition (4), the noise covariance matrix \(\Sigma_{[L,\ell]}\) of the expanded operator \([L,\ell]\) has the form \[\Sigma_{[L,\ell]}=\left(\begin{array}{cc}\Sigma_{L}&\mathbf{v}_{L,\ell}\\ \mathbf{v}_{L,\ell}^{T}&v_{\ell,\ell}\end{array}\right)=\left(\begin{array}[] {cc}\mathbf{C}_{L}&\mathbf{0}\\ \mathbf{c}_{L,\ell}^{T}&c_{\ell,\ell}\end{array}\right)\left(\begin{array}[] {cc}\mathbf{C}_{L}^{T}&\mathbf{c}_{L,\ell}\\ \mathbf{0}&c_{\ell,\ell}\end{array}\right),\] where \(\mathbf{C}_{L}\mathbf{C}_{L}^{T}=\Sigma_{L}\in\mathbb{R}^{K\times K}\) is the Cholesky decomposition of the s.p.d. noise covariance matrix \(\Sigma_{L}\) for the original observation operator \(L\), and \(\mathbf{v}_{L,\ell},\mathbf{c}_{L,\ell}\in\mathbb{R}^{K}\), \(v_{\ell,\ell},c_{\ell,\ell}\in\mathbb{R}\) are defined through \[\left[\mathbf{v}_{L,\ell}\right]_{i} :=\mathbf{cov}(\ell_{i},\ell), \mathbf{c}_{L,\ell} :=\mathbf{C}_{L}^{-1}\mathbf{v}_{L,\ell},\] \[v_{\ell,\ell} :=\mathbf{cov}(\ell,\ell), c_{\ell,\ell} :=\sqrt{v_{\ell,\ell}-\mathbf{c}_{L,\ell}^{T}\mathbf{c}_{L,\ell}}.\] Note that \(\Sigma_{[L,\ell]}\) is s.p.d. by the assumptions posed on \(\mathbf{cov}\) in Section 2; consequently, \(c_{\ell,\ell}\) is well-defined and strictly positive. With this factorization, the expanded Cholesky matrix \(\mathbf{C}_{[L,\ell]}\) with \(\mathbf{C}_{[L,\ell]}\mathbf{C}_{[L,\ell]}^{T}=\Sigma_{[L,\ell]}\) can be computed in \(\mathcal{O}(K^{2})\), dominated by the linear system solve with the triangular \(\mathbf{C}_{L}\) for obtaining \(\mathbf{c}_{L,\ell}\). It is summarized in Algorithm 1 for later use in the sensor selection algorithm. 
``` Input: observation operator \(L=[\ell_{1},\ldots,\ell_{K}]^{T}\), noise covariance matrix \(\Sigma_{L}\), Cholesky matrix \(\mathbf{C}_{L}\), new sensor \(\ell\in\mathcal{L}\) Compute \(\left[\mathbf{v}_{L,\ell}\right]_{i}=\mathbf{cov}(\ell_{i},\ell)\) and \(v_{\ell,\ell}=\mathbf{cov}(\ell,\ell)\) // new covariance entries \(\mathbf{c}_{L,\ell}\leftarrow\mathbf{C}_{L}^{-1}\mathbf{v}_{L,\ell}\), \(c_{\ell,\ell}\leftarrow\sqrt{v_{\ell,\ell}-\mathbf{c}_{L,\ell}^{T}\mathbf{c}_{L,\ell}}\) // triangular solve in \(\mathcal{O}(K^{2})\) Assemble \(\Sigma_{[L,\ell]}\) and \(\mathbf{C}_{[L,\ell]}\) from \(\Sigma_{L}\), \(\mathbf{C}_{L}\), \(\mathbf{v}_{L,\ell}\), \(\mathbf{c}_{L,\ell}\), \(v_{\ell,\ell}\), \(c_{\ell,\ell}\) return \([L,\ell]\), \(\Sigma_{[L,\ell]}\), \(\mathbf{C}_{[L,\ell]}\) ``` **Algorithm 1**CholeskyExpansion The inverse of the expanded Cholesky factor is again lower triangular, \[\mathbf{C}_{[L,\ell]}^{-1}=\left(\begin{array}{cc}\mathbf{C}_{L}^{-1}&\mathbf{0}\\ \mathbf{r}_{L,\ell}^{T}&1/c_{\ell,\ell}\end{array}\right),\qquad\text{where}\qquad\mathbf{r}_{L,\ell}:=-\frac{1}{c_{\ell,\ell}}\mathbf{C}_{L}^{-T}\mathbf{c}_{L,\ell}=-\frac{1}{c_{\ell,\ell}}\mathbf{C}_{L}^{-T}\mathbf{C}_{L}^{-1}\mathbf{v}_{L,\ell}=-\frac{1}{c_{\ell,\ell}}\Sigma_{L}^{-1}\mathbf{v}_{L,\ell}.\] For an arbitrary state \(x\in\mathcal{X}\), the norm of the extended observation \([L,\ell](x)=\left[(Lx)^{T},\ell(x)\right]^{T}\in\mathbb{R}^{K+1}\) in the corresponding norm \(\left\|\cdot\right\|_{\Sigma_{[L,\ell]}^{-1}}\) is hence connected to the original observation \(Lx\in\mathbb{R}^{K}\) in the original norm \(\left\|\cdot\right\|_{\Sigma_{L}^{-1}}\) via \[\left\|\left[L,\ell\right](x)\right\|_{\Sigma_{[L,\ell]}^{-1}}^{2}=\left(\begin{array}{c}Lx\\ \ell(x)\end{array}\right)^{T}\left(\begin{array}{cc}\Sigma_{L}&\mathbf{v}_{L,\ell}\\ \mathbf{v}_{L,\ell}^{T}&v_{\ell,\ell}\end{array}\right)^{-1}\left(\begin{array}{c}Lx\\ \ell(x)\end{array}\right) \tag{23}\] \[=\left(\begin{array}{c}Lx\\ \ell(x)\end{array}\right)^{T}\left(\begin{array}{cc}\mathbf{C}_{L}^{-T}&\mathbf{r}_{L,\ell}\\ \mathbf{0}&1/c_{\ell,\ell}\end{array}\right)\left(\begin{array}{cc}\mathbf{C}_{L}^{-1}&\mathbf{0}\\ \mathbf{r}_{L,\ell}^{T}&1/c_{\ell,\ell}\end{array}\right)\left(\begin{array}{c}Lx\\ \ell(x)\end{array}\right)\] \[=\left(\begin{array}{c}\mathbf{C}_{L}^{-1}Lx\\ \mathbf{r}_{L,\ell}^{T}Lx+\ell(x)/c_{\ell,\ell}\end{array}\right)^{T}\left(\begin{array}{c}\mathbf{C}_{L}^{-1}Lx\\ \mathbf{r}_{L,\ell}^{T}Lx+\ell(x)/c_{\ell,\ell}\end{array}\right)\] \[=(Lx)^{T}\mathbf{C}_{L}^{-T}\mathbf{C}_{L}^{-1}Lx+(\mathbf{r}_{L,\ell}^{T}Lx+\ell(x)/c_{\ell,\ell})^{2}\] \[=\left\|Lx\right\|_{\Sigma_{L}^{-1}}^{2}+(\mathbf{r}_{L,\ell}^{T}Lx+\ell(x)/c_{\ell,\ell})^{2}\] \[\geq\left\|Lx\right\|_{\Sigma_{L}^{-1}}^{2}.\] We conclude from this result that the norm \(\left\|Lx\right\|_{\Sigma_{L}^{-1}}\) of any observation, and therefore also the continuity coefficient \(\gamma_{L}\) defined in (6), is increasing under expansion of \(L\) despite the change in norms. For any configuration \(\theta\), the observability coefficients \(\beta_{G}(\theta)\) and \(\beta_{L\mathcal{W}}(\theta)\) are thus non-decreasing when sensors are selected iteratively. Given a state \(x\in\mathcal{X}\) and an observation operator \(L\), we can determine the sensor \(\ell_{K+1}\in\mathcal{L}\) that increases the observation of \(x\) the most by comparing the increase \((\mathbf{r}_{L,\ell}^{T}Lx+\ell(x)/c_{\ell,\ell})^{2}\) for all \(\ell\in\mathcal{L}\). Algorithm 2 summarizes the computation of this observability gain for use in the sensor selection algorithm (see Section 4.3). Its general runtime is determined by \(K+1\) sensor evaluations and two linear solves with the triangular Cholesky matrix \(\mathbf{C}_{L}\) in \(\mathcal{O}(K^{2})\). When called with the same \(L\) and the same state \(x\) for different candidate sensors \(\ell\), the preparation step must only be performed once, which reduces the runtime to one sensor evaluation and one linear system solve in all subsequent calls. Compared to computing \(\|\left[L,\ell\right]\!(x)\|_{\Sigma_{[L,\ell]}^{-1}}^{2}\) for all \(K_{\mathcal{L}}\) candidate sensors in the library \(\mathcal{L}\), we save \(\mathcal{O}(K_{\mathcal{L}}K^{2})\).
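For readers who prefer code, the following Python sketch mirrors the two building blocks just described: the expansion of the Cholesky factor when a sensor is added, and the scoring of a candidate's observability gain \((\mathbf{r}_{L,\ell}^{T}Lx+\ell(x)/c_{\ell,\ell})^{2}\). The function names, argument conventions, and the random test data are assumptions of this sketch, not the authors' implementation.

```python
# Sketch of the Section 4.1 building blocks with dense NumPy arrays.
import numpy as np

def cholesky_expansion(Sigma_L, C_L, v_new, v_nn):
    """Append one sensor to Sigma_L = C_L C_L^T; v_new[i] = cov(l_i, l), v_nn = cov(l, l).
    Assumes at least one sensor has already been chosen."""
    c = np.linalg.solve(C_L, v_new)              # c_{L,l} = C_L^{-1} v_{L,l}
    c_nn = np.sqrt(v_nn - c @ c)                 # strictly positive if Sigma_[L,l] is s.p.d.
    K = C_L.shape[0]
    C_new = np.zeros((K + 1, K + 1))
    C_new[:K, :K], C_new[K, :K], C_new[K, K] = C_L, c, c_nn
    Sigma_new = np.block([[Sigma_L, v_new[:, None]],
                          [v_new[None, :], np.array([[v_nn]])]])
    return Sigma_new, C_new

def observability_gain(C_L, Lx, v_new, v_nn, lx):
    """Increase of ||[L,l](x)||^2 in the expanded norm over ||Lx||^2, cf. eq. (23)."""
    c = np.linalg.solve(C_L, v_new)
    c_nn = np.sqrt(v_nn - c @ c)
    r = -np.linalg.solve(C_L.T, c) / c_nn        # r_{L,l} = -(1/c_ll) C_L^{-T} c_{L,l}
    return float((r @ Lx + lx / c_nn) ** 2)

# tiny usage example with a random s.p.d. covariance
rng = np.random.default_rng(5)
K = 3
A = rng.standard_normal((K + 1, K + 1)); Sigma_full = A @ A.T + (K + 1) * np.eye(K + 1)
Sigma_L, C_L = Sigma_full[:K, :K], np.linalg.cholesky(Sigma_full[:K, :K])
v_new, v_nn = Sigma_full[:K, K], Sigma_full[K, K]
Sigma_new, C_new = cholesky_expansion(Sigma_L, C_L, v_new, v_nn)
assert np.allclose(C_new @ C_new.T, Sigma_new)
Lx, lx = rng.standard_normal(K), rng.standard_normal()
gain = observability_gain(C_L, Lx, v_new, v_nn, lx)
x_full = np.concatenate([Lx, [lx]])
assert np.isclose(Lx @ np.linalg.solve(Sigma_L, Lx) + gain,
                  x_full @ np.linalg.solve(Sigma_new, x_full))
```

In a loop over many candidate sensors, the triangular solve for \(\mathbf{C}_{L}^{-1}Lx\) would of course be computed once and reused, which is exactly the saving quantified above.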
### Computation of the observability coefficient We next discuss the computation of the observability coefficient \(\beta_{G}(\theta)\) for a given configuration \(\theta\) and observation operator \(L\). Let \(\Sigma_{\mathrm{pr}}=\mathbf{U}\mathbf{D}_{\mathrm{pr}}\mathbf{U}^{T}\) be the eigenvalue decomposition of the s.p.d. prior covariance matrix with \(\mathbf{U}=[\varphi_{1},\ldots,\varphi_{M}]\in\mathbb{R}^{M\times M}\), \(\varphi_{j}\in\mathbb{R}^{M}\) orthonormal in the Euclidean inner product, and \(\mathbf{D}_{\mathrm{pr}}=\mathrm{diag}(\lambda_{\mathrm{pr}}^{1},\ldots,\lambda_{\mathrm{pr}}^{M})\) a diagonal matrix containing the eigenvalues \(\lambda_{\mathrm{pr}}^{1}\geq\cdots\geq\lambda_{\mathrm{pr}}^{M}>0\) in decreasing order. Using the eigenvector basis \(\{\varphi_{m}\}_{m=1}^{M}\), we define the matrix \[\mathbf{M}(\theta):=\left[Lx_{\theta}(\varphi_{1}),\ldots,Lx_{\theta}(\varphi_{M})\right]\in\mathbb{R}^{K\times M} \tag{24}\] featuring all observations of the associated states \(x_{\theta}(\varphi_{j})\) for the configuration \(\theta\). The observability coefficient \(\beta_{G}(\theta)\) can then be computed as the square root of the minimum eigenvalue \(\lambda^{\mathrm{min}}\) of the generalized eigenvalue problem \[\mathbf{M}(\theta)^{T}\mathbf{C}_{L}^{-T}\mathbf{C}_{L}^{-1}\mathbf{M}(\theta)\mathbf{u}_{\mathrm{min}}=\lambda^{\mathrm{min}}\mathbf{D}_{\mathrm{pr}}^{-1}\mathbf{u}_{\mathrm{min}}. \tag{25}\] Note that (25) has \(M\) real, non-negative eigenvalues because the matrix on the left is symmetric positive semi-definite, and \(\mathbf{D}_{\mathrm{pr}}\) is s.p.d. (c.f. [48]). The eigenvector \(\mathbf{u}_{\mathrm{min}}\) contains the basis coefficients in the eigenvector basis \(\{\varphi_{m}\}_{m=1}^{M}\) of the "worst-case" parameter, i.e. the infimizer of \(\beta_{G}(\theta)\). **Remark 4**.: _For computing \(\beta_{L\mathcal{W}}(\theta)\), we exchange the right-hand side matrix \(\mathbf{D}_{\mathrm{pr}}^{-1}\) in (25) with the \(\mathcal{X}\)-inner-product matrix for the states \(x_{\theta}(\varphi_{1}),\ldots,x_{\theta}(\varphi_{M})\)._ The solution of the eigenvalue problem can be computed in \(\mathcal{O}(M^{3})\), with an additional \(\mathcal{O}(MK^{2}+M^{2}K)\) for the computation of the left-hand side matrix in (25). The dominating cost is hidden in \(\mathbf{M}(\theta)\) since it requires \(KM\) sensor observations and \(M\) full-order model solves. To reduce the computational cost, we therefore approximate \(\beta_{G}(\theta)\) with \(\tilde{\beta}_{G}(\theta)\) by exchanging the full-order states \(x_{\theta}(\varphi_{j})\) in (24) with their reduced-order approximations \(\tilde{x}_{\theta}(\varphi_{j})\). The procedure is summarized in Algorithm 3. **Remark 5**.: _If \(K<M\), Algorithm 3 restricts the parameter space, as discussed in Section 3.2, to the span of the first \(K\) eigenvectors \(\varphi_{1},\ldots,\varphi_{K}\) encoding the least certain directions in the prior. A variation briefly discussed in [8] in the context of the PBDW method to prioritize the least certain parameters even further is to only expand the parameter space once the observability coefficient on the subspace surpasses a predetermined threshold._ ### Sensor selection In our sensor selection algorithm, we iteratively expand the observation operator \(L\) and thereby increase the observability coefficient \(\beta_{G}(\theta)\) for all \(\theta\in\mathcal{P}\).
Although this procedure cannot guarantee finding the maximum observability over all sensor combinations, the underlying greedy searches are well-established in practice, and can be shown to perform with exponentially decreasing error rates in closely related settings, see [49; 8; 50; 51; 52]. In each iteration, the algorithm performs two main steps: * A **greedy search** over a training set \(\Xi_{\text{train}}\subset\mathcal{P}\) to identify the configuration \(\theta\in\Xi_{\text{train}}\) for which the observability coefficient \(\beta_{G}(\theta)\) is minimal; * A **data-matching step** to identify the sensor in the library that maximizes the observation of the "worst-case" parameter at the selected configuration \(\theta\). The procedure is summarized in Algorithm 4. It terminates when \(K_{\max}\leq K_{\mathcal{L}}\) sensors have been selected.5 In the following, we explain its computational details. Footnote 5: This termination criterion can easily be adapted to prescribe a minimum value of the observability coefficient. This value should be chosen with respect to the observability \(\beta_{G}(\mathcal{L})\) achieved with the entire sensor library. _Preparations_ In order to increase \(\beta_{G}(\theta)\) uniformly over the hyper-parameter domain \(\mathcal{P}\), we consider a finite training set, \(\Xi_{\text{train}}\subset\mathcal{P}\), that is chosen to be fine enough to capture the \(\theta\)-dependent variations in \(x_{\theta}(\mathbf{u})\). We assume a reduced-order model is available such that we can compute approximations \(\tilde{x}_{\theta}(\varphi_{m})\approx x_{\theta}(\varphi_{m})\) for each \(\theta\in\Xi_{\text{train}}\) and \(1\leq m\leq M\) within an acceptable computation time while guaranteeing the accuracy requirement (16). If necessary, the two criteria can be balanced via adaptive training domains (e.g., [53; 54]). **Remark 6**.: _If storage allows (e.g., with projection-based surrogate models), we only compute the surrogate states once and avoid unnecessary re-computations when updating the surrogate observability coefficients \(\tilde{\beta}_{G}(\theta)\) in each iteration._ As a first "worst-case" parameter direction, \(\mathbf{u}_{0}\), we choose the vector \(\varphi_{1}\) with the largest prior uncertainty. Likewise, we choose the "worst-case" configuration \(\theta_{K}\in\mathcal{P}\) as the one for which the corresponding state \(\tilde{x}_{\theta}(\varphi_{1})\) is the largest. 
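The surrogate observability values \(\tilde{\beta}(\theta)\) and "worst-case" directions \(\mathbf{u}_{\min}(\theta)\) used in the loop below (Algorithm 4) come from the generalized eigenvalue problem (25). The sketch that follows shows one way this computation could look in Python with SciPy's generalized symmetric eigensolver; the matrix \(\mathbf{M}(\theta)\) is replaced by a random stand-in, and all names are assumptions of this sketch rather than the authors' code.

```python
# Sketch of the Section 4.2 computation: beta_G(theta) and the worst-case
# parameter direction from the generalized eigenvalue problem (25).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
K, M = 7, 4
S = rng.standard_normal((K, K)); Sigma_L = S @ S.T + K * np.eye(K)
C_L = np.linalg.cholesky(Sigma_L)
B = rng.standard_normal((M, M)); Sigma_pr = B @ B.T + M * np.eye(M)
lam_pr, Phi = np.linalg.eigh(Sigma_pr)                  # eigenpairs of Sigma_pr
order = np.argsort(lam_pr)[::-1]                        # decreasing eigenvalues
lam_pr, Phi = lam_pr[order], Phi[:, order]

def surrogate_observability(M_theta, C_L, lam_pr, Phi):
    """M_theta: K x M matrix with columns L x~_theta(phi_m), cf. eq. (24)."""
    W = np.linalg.solve(C_L, M_theta)                   # C_L^{-1} M(theta)
    A = W.T @ W                                         # left-hand side of (25)
    B_mat = np.diag(1.0 / lam_pr)                       # D_pr^{-1}
    vals, vecs = eigh(A, B_mat)                         # ascending generalized eigenvalues
    beta = np.sqrt(max(vals[0], 0.0))                   # beta(theta) = sqrt(lambda_min)
    u_min = Phi @ vecs[:, 0]                            # worst-case parameter direction
    return beta, u_min

M_theta = rng.standard_normal((K, M))                   # stand-in for [L x~_theta(phi_m)]_m
beta, u_min = surrogate_observability(M_theta, C_L, lam_pr, Phi)
print(beta, u_min)
```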
``` Input: sensor library \(\mathcal{L}\subset\mathcal{X}^{\prime}\), training set \(\Xi_{\text{train}}\subset\mathcal{P}\), maximum number of sensors \(K_{\max}\leq K_{\mathcal{L}}\), surrogate model \(\tilde{\mathcal{M}}_{\theta}\), covariance function cov : \(\mathcal{L}\times\mathcal{L}\rightarrow\mathbb{R}\) Compute \(\Sigma_{\text{pr}}=[\varphi_{1},\ldots,\varphi_{M}]\,\mathbf{D}_{\text{pr}}\,[\varphi_{1},\ldots,\varphi_{M}]^{T}\) // eigenvalue decomposition For all \(\theta\in\Xi_{\text{train}}\), \(1\leq m\leq M\), compute \(\tilde{x}_{\theta}(\varphi_{m})\) // preparation \(K\gets 0\), \(\theta_{0}\leftarrow\arg\max_{\theta\in\Xi_{\text{train}}}\|\tilde{x}_{\theta}(\varphi_{1})\|_{\mathcal{X}}\), \(\mathbf{u}_{0}\leftarrow\varphi_{1}\) // initialization while \(K<K_{\max}\) do Solve full-order equation \(\mathcal{M}_{\theta_{K}}(x_{K};\mathbf{u}_{K})=0\) for \(x_{K}\) // "worst-case" state \(\ell_{K+1}\leftarrow\arg\max_{\ell\in\mathcal{L}}\texttt{ObservabilityGain}(L,\mathbf{C}_{L},x_{K},\ell)\) // sensor selection \(L,\Sigma_{L},\mathbf{C}_{L}\leftarrow\texttt{CholeskyExpansion}(L,\Sigma_{L},\mathbf{C}_{L},\ell_{K+1})\) // expansion \(K\gets K+1\) for \(\theta\in\Xi_{\text{train}}\) do \(\tilde{\beta}_{L\mathcal{W}}(\theta),\mathbf{u}_{\min}(\theta)\leftarrow\texttt{SurrogateObservability}(\theta,L,\mathbf{C}_{L})\) // update coefficients \(\theta_{K}\leftarrow\arg\min_{\theta\in\Xi_{\text{train}}}\tilde{\beta}_{L\mathcal{W}}(\theta)\) // greedy step \(\mathbf{u}_{K}\leftarrow\sum_{m=1}^{\min(M,K)}\left[\mathbf{u}_{\min}(\theta_{K})\right]_{m}\varphi_{m}\) return \(L\), \(\mathbf{C}_{L}\) ``` **Algorithm 4**SensorSelection #### Data-matching step In each iteration, we first compute the full-order state \(x_{K}=x_{\theta_{K}}(\mathbf{u}_{K})\) at the "worst-case" parameter \(\mathbf{u}_{K}\) and configuration \(\theta_{K}\). We then choose the sensor \(\ell_{K+1}\) which most improves the observation of the "worst-case" state \(x_{K}\) under the expanded observation operator \([L^{T},\ell_{K+1}]^{T}\) and its associated norm. We thereby iteratively approximate the information that would be obtained by measuring with all sensors in the library \(\mathcal{L}\). For fixed \(\theta_{K}\) and in combination with selecting \(x\) to have the smallest observability in \(\mathcal{W}_{\theta}\), we arrive at an algorithm similar to worst-case orthogonal matching pursuit (c.f. [8; 9]) but generalized to deal with the covariance function **cov** in the noise model (3). **Remark 7**.: _We use the full-order state \(x_{\theta_{K}}(\mathbf{u}_{K})\) rather than its reduced-order approximation in order to avoid training on local approximation inaccuracies in the reduced-order model. Here, by using the "worst-case" parameter direction \(\mathbf{u}_{K}\), we only require a single full-order solve per iteration instead of the \(M\) required for setting up the entire posterior covariance matrix \(\Sigma_{\text{post}}^{L,\theta}\)._ #### Greedy step We train the observation operator \(L\) on all configurations \(\theta\in\Xi_{\text{train}}\) by varying for which \(\theta\) the "worst-case" state is computed. Specifically, we follow a greedy approach where, in iteration \(K\), we choose the minimizer \(\theta_{K}\) of \(\beta_{G}(\theta)\) over the training domain \(\Xi_{\text{train}}\), i.e., the configuration for which the current observation operator \(L\) is the least advantageous.
The corresponding "worst-case" parameter \(\mathbf{u}_{K}\) is the parameter direction for which the least significant observation is achieved. By iteratively increasing the observability at the "worst-case" parameters and hyper-parameters, we increase the minimum of \(\beta_{G}(\theta)\) throughout the training domain. **Remark 8**.: _Since the computation of \(\tilde{\beta}_{G}(\theta)\) requires as many reduced-order model solves as needed for the posterior covariance matrix over the surrogate model, it is possible to directly target an (approximated) OED utility function in the greedy step in place of \(\tilde{\beta}_{L\mathcal{W}}(\theta)\) without major concessions in the computational efficiency. The OMP step can then still be performed for the "worst-case" parameter with only one full-order model solve, though its benefit for the utility function should be evaluated carefully._ #### Runtime Assuming the dominating computational restriction is the model evaluation to solve for \(x_{\theta}(\mathbf{u})\) - as is usually the case for PDE models - then the runtime of each iteration in Algorithm 4 is determined by one full-order model evaluation, and \(K_{\mathcal{L}}\) sensor measurements of the full-order state. Compared to computed the posterior covariance matrix for the chosen configuration, the OMP step saves \(N-1\) full-order model solves. The other main factor in the runtime of Algorithm 4 is the \(|\Xi_{\text{train}}|M\) reduced-order model evaluations with \(K_{\mathcal{L}}\) sensor evaluations each that need to be performed in each iteration (unless they can be pre-computed). The parameter dimension \(M\) not only enters as a scaling factor, but also affects the cost of the reduced-order model itself since larger values of \(M\) generally require larger or more complicated reduced-order models to achieve the desired accuracy (16). In turn, the computational cost of the reduced-order model indicates how large \(\Xi_{\text{train}}\) may be chosen for a given computational budget. While some cost can be saved through adaptive training sets and models, overall, this connection to \(M\) stresses the need for an adequate initial parameter reduction as discussed in Section 3.2. ## 5 Numerical Results We numerically confirm the validity of our sensor selection approach using a geophysical model of a section of the Perth Basin in Western Australia. The basin has raised interest in the geophysics community due to its high potential for geothermal energy, e.g., [55; 56; 57; 58; 59]. We focus on a subsection that spans an area of \(63\text{ km}\times 70\text{ km}\) and reaches \(19\text{ km}\) below the surface. The model was introduced in [60] and the presented section of the model was discussed extensively in the context of MOR in [61; 62]. In particular, the subsurface temperature distribution is described through a steady-state heat conduction problem with different subdomains for the geological layers, and local measurements may be obtained through boreholes. The borehole locations need to be chosen carefully due to their high costs (typically several million dollars, [63]), which in turn motivates our application of Algorithm 4. For demonstration purposes, we make the following simplifications to our test model: 1) We neglect radiogenic heat production; 2) we merge geological layers with similar conductive behaviors; and 3) we scale the prior to emphasize the influence of different sensor measurements on the posterior. 
All computations were performed in Python 3.7 on a computer with a 2.3 GHz Quad-Core Intel Core i5 processor and 16 GB of RAM. The code will be available in a public GitHub repository for another geophysical test problem.6

Footnote 6: The Perth Basin Model is available upon request from the third author.

### Model Description

We model the temperature distribution \(x_{\theta}\) with the steady-state PDE

\[-\nabla\cdot\left(\theta\nabla x_{\theta}\right)=0\qquad\text{in}\ \Omega:=(0,0.2714)\times(0,0.9)\times(0,1)\subset\mathbb{R}^{3}, \tag{26}\]

where the domain \(\Omega\) is a non-dimensionalized representation of the basin, and \(\theta:\Omega\rightarrow\mathbb{R}_{>0}\) is the local thermal conductivity. The section comprises three main geological layers \(\Omega=\bigcup_{i=1,2,3}\Omega_{i}\), each characterized by different rock properties, i.e., the thermal conductivity \(\theta|_{\Omega_{i}}\equiv\theta_{i}\) shown in Figure 1. We consider the position of the geological layers to be fixed, as these are often determined beforehand by geological and geophysical surveys, but allow the thermal conductivity to vary. In a slight abuse of notation, this lets us identify the field \(\theta\) with the vector

\[\theta=(\theta_{1},\theta_{2},\theta_{3})\in\mathcal{P}:=[0.453,1.360]\times[0.448,1.343]\times[0.360,1.081]\]

in the hyper-parameter domain \(\mathcal{P}\).

Figure 1: Schematic overview of the Perth Basin section including (merged) geological layers, depths for potential measurements, and configuration range for thermal conductivity \(\theta\) on each subdomain. The bounds are obtained from the reference values (c.f. [60; 61]) with a \(\pm 50\%\) margin. Adapted from [61].

We impose zero-Dirichlet boundary conditions at the surface7, and zero-Neumann ("no-flow") boundary conditions at the lateral faces of the domain. The remaining boundary \(\Gamma_{\text{In}}\) corresponds to a \(63\text{ km}\times 70\text{ km}\) area of the Perth Basin 19 km below the surface. At this depth, local variations in the heat flux have mostly stabilized, which makes modeling possible; but since most boreholes - often originating from hydrocarbon exploration - are found in the uppermost 2 km, we treat it as uncertain. Specifically, we model it as a Neumann boundary condition

\[\mathbf{n}\cdot\nabla x_{\theta}=\mathbf{u}\cdot\mathbf{p}\qquad\qquad\qquad \text{a.e. on }\Gamma_{\text{In}}:=\{0\}\times[0,0.9]\times[0,1],\]

where \(\mathbf{n}:\Gamma_{\text{In}}\to\mathbb{R}^{3}\) is the outward pointing unit normal on \(\Omega\), \(\mathbf{p}:\Gamma_{\text{In}}\to\mathbb{R}^{5}\) is a vector composed of \(L^{2}(\Gamma_{\text{In}})\)-orthonormal polynomials up to quadratic order on the basal boundary that vary either in north-south or east-west direction, and \(\mathbf{u}\sim\pi_{\text{pr}}=\mathcal{N}(\mathbf{u}_{\text{pr}},\Sigma_{\text{pr}})\) is a random variable. The prior is chosen such that the largest uncertainty is attributed to the constant entry in \(\mathbf{p}\), and the quadratic terms are treated as the most certain with prior mean zero. This setup reflects typical geophysical boundary conditions, where it is most common to assume a constant Neumann heat flux (e.g., [61]), and sometimes a linear one (e.g., [60]). With the quadratic functions, we allow one additional degree of freedom compared to what is typically considered. The problem is discretized using a linear finite element (FE) basis of dimension 132,651. The underlying mesh was created with GemPy ([64]) and MOOSE ([65]).
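For illustration, the uncertain boundary data can be set up along the following lines. The concrete polynomials, normalization and prior variances used in the paper are not reproduced here; the values below are placeholders, and the construction is only a sketch of the general idea.

```
import numpy as np

def boundary_flux_basis(n_grid=64):
    """Illustrative M=5 L^2-orthonormal polynomial basis on Gamma_In:
    constant, linear and quadratic terms in the two horizontal directions,
    orthonormalized by Gram-Schmidt with a simple quadrature rule."""
    s = np.linspace(0.0, 0.9, n_grid)    # north-south coordinate
    t = np.linspace(0.0, 1.0, n_grid)    # east-west coordinate
    S, T = np.meshgrid(s, t, indexing="ij")
    w = (s[1] - s[0]) * (t[1] - t[0])    # quadrature weight per grid cell
    raw = [np.ones_like(S), S, T, S**2, T**2]
    basis = []
    for q in raw:
        for b in basis:                  # remove components along earlier modes
            q = q - w * np.sum(q * b) * b
        basis.append(q / np.sqrt(w * np.sum(q * q)))
    return basis

# prior with the largest uncertainty on the constant mode and nearly certain,
# zero-mean quadratic modes (variances are placeholder values)
Sigma_pr = np.diag([1.0, 0.25, 0.25, 1e-2, 1e-2])
u_pr = np.zeros(5)
```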
Since the FE matrices decouple in \(\theta\), we precompute and store an affine decomposition using DwarfElephant ([61]). Given a configuration \(\theta\) and a coefficient vector \(\mathbf{u}\) for the heat flux at \(\Gamma_{\text{In}}\), the computation of a full-order solution \(x_{\theta}(\mathbf{u})\in\mathcal{X}\) then takes 2.96 s on average. We then exploit the affine decomposition further to construct a reduced basis (RB) surrogate model via a greedy algorithm (c.f. [49, 66]). Using the inner product8\(\langle x,\phi\rangle_{\mathcal{X}}:=\int_{\Omega}\nabla x\cdot\nabla\phi d\Omega\) and an _a posteriori_ error bound \(\Delta(\theta)\), we prescribe the relative target accuracy Footnote 8: Note that \(\langle\cdot,\cdot\rangle_{\mathcal{X}}\) is indeed an inner product due to the Dirichlet boundary conditions. \[\max_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\|x_{\theta}(\mathbf{u})-\tilde{x}_{ \theta}(\mathbf{u})\|_{\mathcal{X}}}{\|\tilde{x}_{\theta}(\mathbf{u})\|_{ \mathcal{X}}}\leq\max_{\mathbf{u}\in\mathbb{R}^{M}}\frac{\Delta(\theta)}{\| \tilde{x}_{\theta}(\mathbf{u})\|_{\mathcal{X}}}<\varepsilon:=1\mathbf{e}-4 \tag{27}\] to be reached for 511,000 consecutively drawn, uniformly distributed samples of \(\theta\). The training phase and final computational performance of the RB surrogate model are summarized in Figure 2. The speedup of the surrogate model (approximately a factor of 3,000 without error bounds) justifies its offline training time, with computational savings expected already after 152 approximations of \(\beta_{G}(\theta)\). For taking measurements, we consider a \(47\times 47\) grid over the surface to represent possible drilling sites. At each, a single point evaluation9 of the basin's temperature distribution may be made at any one of five possible depths as shown in Figure 1. In total, we obtain a set \(\mathcal{L}\subset\Omega\) of \(11,045\) admissible points for measurements. We model the noise covariance between sensors \(\ell_{\mathcal{X}},\ell_{\tilde{\chi}}\in\mathcal{L}\) at points \(\chi,\tilde{\chi}\in\Omega\) via Footnote 9: Point evaluations are standard for geophysical models because a borehole (diameter approximately 1 m) is very small compared to the size of the model. \[\mathbf{cov}(\ell_{\chi},\ell_{\tilde{\chi}}):=a+b-y(h)\] with the exponential variogram model \[y(h):=a+(b-a)\left(\frac{3}{2}\max\{\frac{h}{c},1\}-\frac{1}{2}\max\{\frac{h} {c},1\}^{3}\right)\] where \(h^{2}:=(\chi_{2}-\bar{\chi}_{2})^{2}+(\chi_{3}-\bar{\chi}_{3})^{2}\) is the horizontal distance between the points and \[a :=2.2054073480730403\] (sill) \[b :=1.6850672040263555\] (nugget) \[c :=20.606782733391228\] (range) The covariance function was computed via kriging (c.f. [67]) from the existing measurements [68]. With this covariance function, the noise between measurements at any two sensor locations is increasingly correlated the closer they are on the horizontal plane. Note that for any subset of sensor locations, the associated noise covariance matrix remains regular as long as each sensor is placed at a distinct drilling location. We choose this experimental setup because measurements in typical geothermal data sets are often made at the bottom of a borehole ("bottom hole temperature measurements") within the first 2 km below the surface. 
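A behavioral sketch of this noise model is given below. We write the variogram in the standard spherical form with \(\min\{h/c,1\}\), i.e., nugget \(b\) at \(h=0\) and sill \(a\) for \(h\geq c\); under this assumption the resulting covariance is largest at zero horizontal distance and levels off beyond the range \(c\), consistent with the statement that measurements are the more correlated the closer they lie on the horizontal plane.

```
import numpy as np

# fitted parameters from the text: sill a, nugget b, range c
A_SILL   = 2.2054073480730403
B_NUGGET = 1.6850672040263555
C_RANGE  = 20.606782733391228

def spherical_variogram(h):
    # assumed standard spherical form: y(0) = nugget, y(h >= c) = sill
    r = np.minimum(h / C_RANGE, 1.0)
    return B_NUGGET + (A_SILL - B_NUGGET) * (1.5 * r - 0.5 * r**3)

def noise_covariance(points):
    """cov(l_chi, l_chi~) = a + b - y(h); points are (depth, chi_2, chi_3)
    coordinates and only the horizontal distance h enters the model."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, 1:] - pts[None, :, 1:]     # horizontal offsets
    h = np.linalg.norm(diff, axis=-1)
    return A_SILL + B_NUGGET - spherical_variogram(h)
```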
### Restricted Library

To test the feasibility of the observability coefficient for sensor selection, we first consider a small sensor library (denoted as \(\mathcal{L}_{5\times 5}\) below) with 25 drilling locations positioned on a \(5\times 5\) grid. We consider the problem of choosing 8 pair-wise different, unordered sensor locations out of the given 25 positions; this is a combinatorial problem with 1,081,575 possible combinations.

#### Sensor selection

We run Algorithm 4, using the RB surrogate model and a training set \(\Xi_{\text{train}}\subset\mathcal{P}\) with 512,000 configurations on an \(80\times 80\times 80\) regular grid on \(\mathcal{P}\). When new sensors are chosen, the surrogate observability coefficient \(\tilde{\beta}_{G}(\theta)\) increases monotonically with a strong incline just after the initial \(M=5\) sensors, followed by a visible stagnation (see Figure 3a), as is often observed for similar OMP-based sensor selection algorithms (e.g., [8, 69, 70, 7]).

Figure 2: Training of the RB surrogate model for the Perth Basin section. On the left: Maximum relative error bound (27) in the course of the greedy algorithm, computed over the training set \(\Xi_{\text{train}}\) together with the true relative error at the corresponding configuration \(\theta\). On the right: Performance pointers for the obtained RB model after (27) was reached; online computation times and speedups are averages computed over 1000 randomly drawn configurations \(\theta\).

Algorithm 4 terminates in 7.93 min with a minimum reduced-order observability of \(\tilde{\beta}_{G}(\theta)=7.3227\)e-2 and an average of 1.0995e-1. At the reference configuration \(\theta_{\text{ref}}\), the full-order observability coefficient is \(\beta_{G}(\theta_{\text{ref}})=1.0985\)e-1, slightly below the reduced-order average. We call this training procedure "\(\Xi_{\text{train}}\)-training" hereafter and denote the chosen sensors as the "\(\Xi_{\text{train}}\)-trained sensor set" in the subsequent text and as "proposal" in the plots.

In order to get an accurate understanding of how the surrogate model \(\tilde{x}_{\theta}(\mathbf{u})\) and the large configuration training set \(\Xi_{\text{train}}\) influence the sensor selection, we run Algorithm 4 again, this time restricted to the full-order FE model \(x_{\theta_{\text{ref}}}(\mathbf{u})\) at only the reference configuration \(\theta_{\text{ref}}\). The increase in \(\beta_{G}(\theta_{\text{ref}})\) in the course of the algorithm is shown in Figure 3a. The curve starts significantly above the average for \(\Xi_{\text{train}}\)-training, presumably because conflicting configurations cannot occur, e.g., when one sensor would significantly increase the observability at one configuration but cause little change in another. However, in the stagnation phase, the curve comes closer to the average achieved with \(\Xi_{\text{train}}\)-training. The computation finishes within 12.53 s, showing that the long runtime before can be attributed to the size of \(\Xi_{\text{train}}\). The final observability coefficient with 8 sensors is \(\beta_{G}(\theta_{\text{ref}})=1.2647\)e-1, above the average over \(\tilde{\beta}_{G}(\theta)\) achieved when training on \(\Xi_{\text{train}}\). We call this training procedure "\(\theta_{\text{ref}}\)-training" hereafter, and the sensor configuration "\(\theta_{\text{ref}}\)-trained" in the text or "proposal, fixed config." in the plots.
#### Comparison at the reference configuration

For comparing the performance of the \(\Xi_{\text{train}}\)- and \(\theta_{\text{ref}}\)-trained sensor combinations, we compute - at the reference configuration \(\theta_{\text{ref}}\) - all 1,081,575 posterior covariance matrices \(\Sigma_{\text{post}}^{\theta_{\text{ref}},L}\) for all unordered combinations \(L\) of 8 distinct sensors in the sensor library \(\mathcal{L}_{5\times 5}\). For each matrix, we compute the trace (A-OED criterion), the determinant (D-OED criterion), the maximum eigenvalue (E-OED criterion), and the observability coefficient \(\beta_{G}(\theta_{\text{ref}})\). This lets us identify the A-, D-, and E-optimal sensor combinations.

Figure 3: Observability coefficient for different methods when choosing 8 out of 25 sensor locations. Left: Minimum and mean over \(\theta\) of \(\tilde{\beta}_{G}(\theta)\) as well as \(\beta_{G}(\theta_{\text{ref}})\) obtained in the course of running Algorithm 4 once for 512,000 configurations and once for the training set \(\{\theta_{\text{ref}}\}\). Right: Distribution of \(\beta_{G}(\theta_{\text{ref}})\) over all possible sensor combinations with indicators for the A-, D-, and E-optimal choices, the combination with maximum observability, and the sensors chosen by Algorithm 4 with \(\Xi_{\text{train}}\)-training (“proposal”, purple, marked “x”) and \(\theta_{\text{ref}}\)-training (“proposal, fixed”, turquoise, marked “+”). Note that the height of the indicator line was chosen solely for readability.

The total runtime for these computations is 4 min - well above the 12.53 s of \(\theta_{\text{ref}}\)-training. The (almost) 8 min for \(\Xi_{\text{train}}\)-training remain reasonable considering it is trained on \(|\Xi_{\text{train}}|=512,000\) configurations and not only \(\theta_{\text{ref}}\). A histogram for the distribution of \(\beta_{G}(\theta_{\text{ref}})\) is given in Figure 3b with markers for the values of the A-, D-, and E-optimal choices and the \(\Xi_{\text{train}}\)- and \(\theta_{\text{ref}}\)-trained observation operators. Out of these five, the D-optimal choice has the smallest observability coefficient, since the posterior determinant is influenced less by the maximum posterior eigenvalue and hence by the observability coefficient. In contrast, both the A- and E-optimal sensor choices are among the 700 combinations with the largest \(\beta_{G}(\theta_{\text{ref}})\) (this corresponds to the top 0.065%). The \(\theta_{\text{ref}}\)-trained sensors have similar observability and are even among the top 500 combinations. For the \(\Xi_{\text{train}}\)-trained sensors, the observability coefficient is smaller, presumably because \(\Xi_{\text{train}}\)-training is not as optimized for \(\theta_{\text{ref}}\). Still, it ranks among the top 0.705% of sensor combinations with the largest observability.

In order to visualize the connection between the observability coefficient \(\beta_{G}(\theta_{\text{ref}})\) and the classic A-, D-, and E-OED criteria, we plot the distribution of the posterior covariance matrix's trace, determinant, and maximum eigenvalue over all sensor combinations against \(\beta_{G}(\theta_{\text{ref}})\) in Figures 4, 5, and 6. Overall we observe a strong correlation between the respective OED criteria and \(\beta_{G}(\theta_{\text{ref}})\): it is the most pronounced in Figure 6 for E-optimality, and the least pronounced for D-optimality in Figure 5.
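For this brute-force comparison, each criterion can be evaluated directly from the posterior covariance matrix. The sketch below assumes the standard linear-Gaussian form \(\Sigma_{\text{post}}=(G^{T}\Sigma_{\text{noise}}^{-1}G+\Sigma_{\text{pr}}^{-1})^{-1}\), where \(G\) collects the responses of the \(M\) prior modes at the candidate sensors for \(\theta_{\text{ref}}\); the exact scaling in the paper's definition may differ, and all names are illustrative.

```
import numpy as np
from itertools import combinations

def posterior_cov(G_rows, noise_cov, Sigma_pr):
    """Linear-Gaussian posterior covariance (assumed standard form)."""
    G = np.asarray(G_rows)                    # (K, M) sensor responses of the M modes
    H = G.T @ np.linalg.solve(noise_cov, G)   # data-misfit Hessian
    return np.linalg.inv(H + np.linalg.inv(Sigma_pr))

def oed_criteria_over_combinations(sensor_states, noise_cov_fn, Sigma_pr, k=8):
    """Brute force over all k-subsets of the library (feasible for 25 sensors).

    sensor_states[i] : length-M response vector of library sensor i at theta_ref
    noise_cov_fn(idx): noise covariance matrix for the chosen subset
    """
    results = {}
    for idx in combinations(range(len(sensor_states)), k):
        S_post = posterior_cov([sensor_states[i] for i in idx],
                               noise_cov_fn(idx), Sigma_pr)
        eig = np.linalg.eigvalsh(S_post)
        results[idx] = dict(A=np.trace(S_post),   # A-criterion: trace
                            D=np.prod(eig),       # D-criterion: determinant
                            E=eig.max())          # E-criterion: max eigenvalue
    return results
```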
For all OED criteria, the correlation becomes stronger for smaller scaling factors \(\sigma^{2}\) and weakens for large \(\sigma^{2}\) when the prior is prioritized (plots not shown). This behavior aligns with the discussion in Section 3.1 that \(\beta_{G}(\theta)\) primarily targets the largest posterior eigenvalue and is most decisive for priors with higher uncertainty.

Figure 4: Distribution of \(\text{trace}(\Sigma_{\text{post}}^{L,\theta})\) for \(\theta=\theta_{\text{ref}}\) over all 1,081,575 combinations for choosing 8 out of the 25 sensor locations. On the left: distribution of \(\text{trace}(\Sigma_{\text{post}}^{L,\theta})\) against the observability coefficient \(\beta_{G}(\theta_{\text{ref}})\). Note that the marginal distribution of the horizontal axis is provided in Figure 3b. On the right: histogram of \(\text{trace}(\Sigma_{\text{post}}^{L,\theta})\) (marginal distribution for the plot on the left) for the different sensor combinations (in percent out of 1,081,575 combinations). The plots include markers for the A-optimal sensor choice, the sensors chosen by Algorithm 4 with \(\Xi_{\text{train}}\)-training (“proposal”) and with \(\{\theta_{\text{ref}}\}\)-training (“proposal, fixed configuration”), the sensor combination with maximum observability \(\beta_{G}(\theta_{\text{ref}})\), and the case when all 25 sensors are included.

Figure 5: Distribution of the posterior determinant \(\det(\Sigma^{L,\theta}_{\text{post}})\) for \(\theta=\theta_{\text{ref}}\). See Figure 4 for details about the plot structure.

Figure 6: Distribution of the maximum eigenvalue of the posterior covariance matrix \(\Sigma^{L,\theta}_{\text{post}}\) for \(\theta=\theta_{\text{ref}}\). See Figure 4 for details about the plot structure. Note that the \(\theta_{\text{ref}}\)-trained sensor combination has the 101st smallest maximum posterior eigenvalue among all 1,081,575 possibilities.

#### Comparison for different libraries

We finally evaluate the influence of the library \(\mathcal{L}_{5\times 5}\) on our results. To this end, we randomly select 200 sets of new measurement positions, each consisting of 25 drilling locations with an associated drilling depth. For each library, we run Algorithm 4 to choose 8 sensors, once with \(\Xi_{\text{train}}\)-training on the surrogate model, and once with the full-order model at \(\theta_{\text{ref}}\) only. For comparison, we then consider in each library every possible unordered combination of 8 sensors and compute the trace, determinant, and maximum eigenvalue of the associated posterior covariance matrix at the reference configuration \(\theta_{\text{ref}}\), together with its observability coefficient. This lets us identify the A-, D-, and E-optimal sensor combinations. Figure 7 shows how \(\beta_{G}(\theta_{\text{ref}})\) is distributed over the 200 libraries, with percentiles provided in the adjacent table. For 75% of the libraries, the A- and E-optimal as well as the \(\Xi_{\text{train}}\)- and \(\theta_{\text{ref}}\)-trained sensor choices rank among the top 1% of combinations with the largest observability. Due to its non-optimized training for \(\theta_{\text{ref}}\), the \(\Xi_{\text{train}}\)-trained sensor set performs slightly worse than what is achieved with \(\theta_{\text{ref}}\)-training, but still yields a comparatively large value for \(\beta_{G}(\theta_{\text{ref}})\).
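The rankings reported in Figures 7 and 8 amount to simple percentile computations over the brute-force values; a minimal sketch:

```
import numpy as np

def observability_rank(beta_all, beta_chosen):
    """Percentile rank of a chosen sensor set: fraction (in percent) of all
    combinations with an observability coefficient at least as large."""
    beta_all = np.asarray(beta_all)
    return 100.0 * np.mean(beta_all >= beta_chosen)
```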
In contrast, overall, the D-optimal sensor choices have smaller observability coefficients, presumably because the minimization of the posterior determinant is influenced less by the maximum posterior eigenvalue. The ranking of the \(\Xi_{\text{train}}\)- and \(\theta_{\text{ref}}\)-trained sensor configurations in terms of the posterior covariance matrix's trace, determinant, and maximum eigenvalue over the 200 libraries is given in Figure 8. Both perform well and lie for 75% of the libraries within the top 1% of combinations. As the ranking is performed for the configuration parameter \(\theta_{\text{ref}}\), the \(\theta_{\text{ref}}\)-trained sensor combination performs better, remaining in 95% of the libraries within the top 5% of sensor combinations. Figure 7: Ranking in \(\beta_{G}(\theta_{\text{ref}})\) of the A-, D-, E- optimal and the \(\theta_{\text{ref}}\)- and \(\Xi_{\text{train}}\)-trained sensor choices for all possible combinations of choosing 8 unordered sensors in the library. Left: Boxplots obtained over 200 random sensor libraries. Right: worst-case ranking (in percent) of the corresponding percentiles (”pctl”). ## 6 Conclusion Figure 8: Ranking of the posterior covariance matrix \(\Sigma^{\theta_{\text{ref}}\cdot L}_{\text{post}}\) in terms of the A-, D-, E-OED criteria and the observability coefficient \(\beta_{G}(\theta_{\text{ref}})\) when the observation operator \(G_{L,\theta}\) is chosen with Algorithm 4 and \(\Xi_{\text{train}}\)-training (top) or \(\theta_{\text{ref}}\)-training (bottom). The ranking is obtained by comparing all possible unordered combinations of 8 sensors in each sensor library. On the left: Boxplots of the ranking over 200 sensor libraries; on the right: ranking (in percent) among different percentiles. ### Unrestricted Library We next verify the scalability of Algorithm 4 to large sensor libraries by permitting all 2,209 drilling locations, at each of which at most one measurement may be taken at any of the 5 available measurement depths. Choosing 10 unordered sensors yields approximately 7.29e+33 possible combinations. Using the RB surrogate model from before, we run Algorithm 4 once on a training grid \(\Xi_{\text{train}}\subset\mathcal{P}\) consisting of 10,000 randomly chosen configurations using only the surrogate model (runtime 14.19 s), and once on the reference configuration \(\theta_{\text{ref}}\) using the full-order model (runtime 15.85 s) for comparison. We terminate the algorithm whenever 10 sensors are selected. Compared to the training time on \(\mathcal{L}_{5\times 5}\) before, the results confirm that the size of the library itself has little influence on the overall runtime but that the full-order computations and the size of \(\Xi_{\text{train}}\) relative to the surrogate compute dominate. The sensors chosen by the two runs of Algorithm 4 are shown in Figure 9. They share many structural similarities: * **Depth:** Despite the availability of 5 measurement depths, sensors have only been chosen on the lowest and the upmost layers with 5 sensors each. The lower sensors were chosen first (with one exception, sensor 3 in \(\theta_{\text{ref}}\)-training), presumably Figure 9: Sensor positions chosen by Algorithm 4 from a grid of \(47\times 47\) available horizontal positions with available 5 depths each, though only the lowest (bottom) and upmost (top) layers were chosen. The underlying plot shows cuts through the full-order solution \(x_{\theta}(\mathbf{u})\) at \(\theta=\theta_{\text{ref}}\). 
Left: \(\Xi_{\text{train}}\)-training with the RB surrogate model on a training set \(\Xi_{\text{train}}\subset\mathcal{P}\) with 10,000 random configurations; runtime 14.19 s for 10 sensors. Right: \(\theta_{\text{ref}}\)-training with full-order model at reference parameter; runtime 15.85 s for 10 sensors. because the lower layer is closer to the uncertain Neumann boundary condition and therefore yields larger measurement values. * **Pairing** Each sensor on the lowest layer has a counterpart on the upmost layer that has almost the same position on the horizontal plane. This pairing targets noise sensitivity: With the prescribed error covariance function, the noise in two measurements is increasingly correlated the closer the measurements lie horizontally, independent of their depth coordinate. Choosing a reference measurement near the zero-Dirichlet boundary at the surface helps filter out noise terms in the lower measurement. * **Organization** On each layer, the sensors are spread out evenly and approximately aligned in 3 rows and 3 columns. The alignment helps distinguish between the constant, linear, and quadratic parts of the uncertain Neumann flux function in north-south and east-west directions. Figure 10 (left side) shows the increase in the observability coefficients \(\tilde{\beta}_{G}(\theta)\) (for \(\Xi_{\text{train}}\)-training) and \(\beta_{G}(\theta_{\text{ref}})\) (for \(\theta_{\text{ref}}\)-training) over the number of chosen sensors. We again observe a strong initial incline followed by stagnation for the \(\Xi_{\text{train}}\)-trained sensors, whereas the curve for \(\theta_{\text{ref}}\)-training already starts at a large value to remain then almost constant. The latter is explained by the positions of the first 5 sensors in Figure 9 (right), as they are already spaced apart in both directions for the identification of quadratic polynomials. In contrast, for \(\Xi_{\text{train}}\)-training, the "3 rows, 3 columns" structure is only completed after the sixth sensor (c.f. Figure 9, left). With 6 sensors, the observability coefficients in both training schemes have already surpassed the final observability coefficients with 8 sensors in the previous training on the smaller library \(\mathcal{L}_{5\times 5}\). The final observability coefficients at the reference parameter \(\theta_{\text{ref}}\) are \(\beta_{G}(\theta_{\text{ref}})=0.4042\) for \(\theta_{\text{ref}}\)-training, and \(\beta_{G}(\theta_{\text{ref}})=0.3595\) for \(\Xi_{\text{train}}\)-training. As a final experiment, we compare the eigenvalues of the posterior covariance matrix \(\Sigma_{\text{post}}^{L_{\theta_{\text{ref}}}}\) for the \(\Xi_{\text{train}}\)- and \(\theta_{\text{ref}}\)-trained sensors against 50,000 sets of 10 random sensors each. We confirm that all 50,000 sensor combinations comply with the combinatorial restrictions. Boxplots of the eigenvalues are provided in Figure 10 (right side). The eigenvalues of the posterior covariance matrix with sensors chosen by Algorithm 4 are smaller10 than all posterior eigenvalues for the random sensor combinations. Footnote 10: Here we compare the largest eigenvalue of one matrix to the largest eigenvalue of another, the second largest to the second largest, and so on. ## 6 Conclusion In this work, we analyzed the connection between the observation operator and the eigenvalues of the posterior covariance matrix in the inference of an uncertain parameter via Bayesian inversion for a linear, hyper-parameterized forward model. 
We identified an observability coefficient whose maximization decreases the uncertainty in the posterior probability distribution for all hyper-parameters. To this end, we proposed a sensor selection algorithm that expands an observation operator iteratively to guarantee a uniformly large observability coefficient for all hyper-parameters. Computational feasibility is retained through a reduced-order model in the greedy step and an OMP search for the next sensor that only requires a single full-order model evaluation. The validity of the approach was demonstrated on a large-scale heat conduction problem over a section of the Perth Basin in Western Australia. Future extensions of this work are planned to address 1) high-dimensional parameter spaces through parameter reduction techniques, 2) the combination with the PBDW _inf-sup_-criterion to inform sensors by functionalanalytic means in addition to the noise covariance, and 3) the expansion to non-linear models through a Laplace approximation. ## Acknowledgments We would like to thank Tan Bui-Thanh, Youssef Marzouk, Francesco Silva, Andrew Stuart, Dariusz Ucinski, and Keyi Wu for very helpful discussions, and Florian Wellmann at the Institute for Computational Geoscience, Geothermics and Reservoir Geophysics at RWTH Aachen University for providing the Perth Basin Model. This work was supported by the Excellence Initiative of the German federal and state governments and the German Research Foundation through Grants GSC 111 and 33849990/GRK2379 (IRTG Modern Inverse Problems). This project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n\({}^{\circ}\) 818473), the US Department of Energy (grant DE-SC0021239), and the US Air Force Office of Scientific Research (grant FA9550-21-1-0084). Peng Chen is partially supported by the NSF grant DMS #2245674. Figure 10: Left: Observability coefficients during sensor selection with \(\Xi_{\text{train}}\)- and \(\theta_{\text{ref}}\)-training for a library with 11,045 measurement positions and combinatorial restrictions. Shown are 1) the minimum and mean surrogate observability coefficient \(\bar{\beta}_{G}(\theta)\) over a training set with 10,000 random configurations with final values \(\Xi_{\text{train}}\bar{\beta}_{G}(\theta)=0.4160\) and \(\Xi_{\text{train}}\bar{\beta}_{G}(\theta)=0.6488\), and 2) the full-order observability coefficient \(\beta_{G}(\theta_{\text{ref}})\) when training on the reference parameter \(\theta_{\text{ref}}\) alone (final value \(\beta_{G}(\theta_{\text{ref}})=0.4042\)). Right: Boxplots for the 5 eigenvalues of the posterior covariance matrix \(\Sigma_{\text{post}}^{L,\theta}\) over 50,000 sets of 10 sensors chosen uniformly from a \(5\times 47\times 47\) grid with imposed combinatorial restrictions. The eigenvalues are compared according to their order from largest to smallest. Indicated are also the eigenvalues for the \(\Xi_{\text{train}}\)-trained (purple, “x”-marker) and \(\theta_{\text{ref}}\)-trained (turquoise, “+”-marker) sensors from Figure 9.
2305.16950
Implementation-Efficient Finite Alphabet Decoding of Polar Codes
An implementation-efficient finite alphabet decoder for polar codes relying on coarsely quantized messages and low-complexity operations is proposed. Typically, finite alphabet decoding performs concatenated compression operations on the received channel messages to aggregate compact reliability information for error correction. These compression operations or mappings can be considered as lookup tables. For polar codes, the finite alphabet decoder design boils down to constructing lookup tables for the upper and lower branches of the building blocks within the code structure. A key challenge is to realize a hardware-friendly implementation of the lookup tables. This work uses the min-sum implementation for the upper branch lookup table and, as a novelty, a computational domain implementation for the lower branch lookup table. The computational domain approach drastically reduces the number of implementation parameters. Furthermore, a restriction to uniform quantization in the lower branch allows a very hardware-friendly compression via clipping and bit-shifting. Its behavior is close to the optimal non-uniform quantization, whose implementation would require multiple high-resolution threshold comparisons. Simulation results confirm excellent performance for the developed decoder. Unlike conventional fixed-point decoders, the proposed method involves an offline design that explicitly maximizes the preserved mutual information under coarse quantization.
Philipp Mohr, Syed Aizaz Ali Shah, Gerhard Bauch
2023-05-26T14:01:30Z
http://arxiv.org/abs/2305.16950v1
# Implementation-Efficient ###### Abstract An implementation-efficient finite alphabet decoder for polar codes relying on coarsely quantized messages and low-complexity operations is proposed. Typically, finite alphabet decoding performs concatenated compression operations on the received channel messages to aggregate compact reliability information for error correction. These compression operations or mappings can be considered as lookup tables. For polar codes, the finite alphabet decoder design boils down to constructing lookup tables for the upper and lower branches of the building blocks within the code structure. A key challenge is to realize a hardware-friendly implementation of the lookup tables. This work uses the min-sum implementation for the upper branch lookup table and, as a novelty, a computational domain implementation for the lower branch lookup table. The computational domain approach drastically reduces the number of implementation parameters. Furthermore, a restriction to uniform quantization in the lower branch allows a very hardware-friendly compression via clipping and bit-shifting. Its behavior is close to the optimal non-uniform quantization, whose implementation would require multiple high-resolution threshold comparisons. Simulation results confirm excellent performance for the developed decoder. Unlike conventional fixed-point decoders, the proposed method involves an offline design that explicitly maximizes the preserved mutual information under coarse quantization. ## I Introduction Polar codes are the first class of linear block codes that have been shown to asymptotically achieve the capacity of binary-input discrete memory-less channels through successive cancellation (SC) decoding [1]. While the SC decoding does not reach capacity for practical code word lengths, the introduction of successive cancellation list (SCL) decoding with cyclic redundancy check (CRC) [2] made polar codes very competitive in the short-block length regime. Further advances eventually evolved polar codes to be standardized for the uplink and downlink control channels in 5G [3], making them widely used nowadays. In a communication system, forward error correction requires a high proportion of the total energy and hardware resources for the baseband processing. In particular, the bit-width \(w\) of the messages, which represent reliability information in the decoding process, should be chosen as small as possible to achieve the required performance with minimal space complexity. This led to the paradigm of finite alphabet decoding where \(w\)-bit integer-valued messages communicate reliability levels among lower and upper branch operations in the decoding graph of a polar code. Inherently, each multiple-input operation must involve a compression to maintain small bit widths in the output messages. Recently, the information bottleneck (IB) method has been introduced for designing mutual information maximizing decoding operations implemented via lookup tables [4, 5, 6, 7]. However for a size \(N\) code, \(2N-2\) individual lookup tables with size of up to \(2^{2w+1}\) are required [5]. In this paper, we propose to use another implementation variant for decoding polar codes by using a so-called computational domain that avoids the multi-input lookup tables. The technique is inspired by the computational domain used in mutual information maximizing decoding of low-density parity check (LDPC) codes [8]. 
For each update, two messages are translated to representation levels and merged using an addition whose result is compressed via threshold comparisons. When symmetric representation levels are enforced, the number of implementation parameters is reduced to \(2^{w}+2^{w-1}-1\) for the two translations and the compression. It can be shown that non-uniformly placed thresholds can preserve the same amount of mutual information as the lookup table approach [8]. In [9] a simplified computational domain approach for LDPC decoding was proposed. We adopt the idea in the lower branch update of a polar decoder: A restriction to uniformly placed thresholds is exploited in order to effectively avoid the threshold comparisons, reducing the number of calculation operations from \(w\) to \(1\) for each of the \(N_{L}\frac{1}{2}N\log_{2}(N)\) lower branch updates where \(N_{L}\) is the list size. The uniform quantization is done with a very simple clipping and bit-shifting operation combined with properly scaled translated messages. For the upper branch, the idea from [6] is kept, i.e., the upper branch updates are designed using the min-sum rule. The overall result is a highly implementation-efficient finite alphabet decoder, specified by only \(2^{w}+1\) instead of \(2^{2w+1}\) parameters per lower branch update compared to an IB decoder [4, 5]. The contributions can be summarized as follows: * A computational domain approach, known from LDPC decoding, is adopted for the decoding of polar codes. Its behavior is potentially equivalent to a mutual-information-maximizing lookup table based IB decoder but reduces the number of design parameters drastically. * A simplified computational domain update is proposed that avoids costly threshold comparisons in each lower branch update at close-to-optimal performance. * Simulation results confirm that the proposed simplified decoder involves a loss of only 0.04-0.07 dB for code rates ranging between 0.75 to 0.25. The rest of the paper is organized as follows: First section II briefly explains polar codes and their conventional LLR-based decoding. Then, section III introduces the principle of finite alphabet decoding. In section IV a new finite alphabet decoder variant with low complexity is described. Finally, section V evaluates the performance with block error rate simulations. ## II Polar Codes A polar code with length \(N=2^{n}\), where \(n=1,2,\ldots\), is described by its \(N\times N\) generator matrix \[\mathbf{G}=\mathbf{F}^{\otimes n}\mathbf{B}, \tag{1}\] where matrix \(\mathbf{F}{=}\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\) and \(\mathbf{B}\) is the _bit reversal_ permutation matrix [1]. For a code rate of \(R{=}K/N\), \(N{-}K\) bits in \(\textbf{u}=[u_{0},\ldots u_{N-1}]^{T}\) are set to fixed values, e.g., \(u_{i}{=}0\), and referred to as the _frozen_ bits. The values and locations of the frozen bits are known to the decoder. The remaining \(K\) positions, specified in the _information_ set \(\mathcal{A}\), in **u** carry the information bits. The process of determining the information set is referred to as the code construction. The encoding follows as **x**=**G**u**. The matrix \(\mathbf{F}\), as depicted in the factor graph of Fig. 1, serves as the building block of polar codes. It encodes the bits \(\textbf{u}{=}[u_{0},u_{1}]^{T}\) into the codeword \(\textbf{x}{=}[x_{0},x_{1}]^{T}\) which is transmitted over a channel with transition probabilities \(p(y_{i}|x_{i})\). The received codeword is \(\textbf{y}{=}[y_{0},y_{1}]^{T}\). 
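A minimal sketch of the encoding in (1) is given below; the kernel \(\mathbf{F}\) and the convention \(\mathbf{x}=\mathbf{G}\mathbf{u}\) follow the formulas above, and all arithmetic is over GF(2).

```
import numpy as np

def bit_reversal_permutation(n):
    """Permutation mapping each index to its bit-reversed value over n bits."""
    return np.array([int(format(i, f'0{n}b')[::-1], 2) for i in range(1 << n)])

def polar_generator(n):
    """G = F^{(x)n} B with the kernel F and the bit-reversal matrix B, cf. (1)."""
    F = np.array([[1, 1],
                  [0, 1]], dtype=int)
    Fn = np.array([[1]], dtype=int)
    for _ in range(n):
        Fn = np.kron(Fn, F)
    B = np.eye(1 << n, dtype=int)[bit_reversal_permutation(n)]
    return (Fn @ B) % 2

def encode(u, n):
    """x = G u over GF(2); frozen positions of u are assumed already set to 0."""
    return (polar_generator(n) @ np.asarray(u, dtype=int)) % 2
```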
Two virtual bit channels are created over the building block: The first bit channel treats \(u_{0}\) as input and \(\textbf{y}_{0}=\textbf{y}\) as output, where \(u_{1}\) is considered a hidden variable observed via \(y_{1}\). The second bit channel treats \(u_{1}\) as input and \(\textbf{y}_{1}=[\textbf{y},u_{0}]^{T}\) as output, assuming the true knowledge of \(u_{0}\). The individual bit channels of a polar code of length \(N{>}2\) are synthesized using a recursive application of the building block [1]. For instance, Fig. 2 shows the factor graph of a polar code for \(N{=}4\), where the transmission channel is implicitly included, i.e., the right most variable nodes correspond to the (quantized) channel outputs \(y_{i}\). The code structure is composed of columns of building blocks referred to as _levels_, \(j\) (dashed rectangles). Every node is labelled \(v_{i,j}\) with row indices \(i{=}0,1,\ldots,N{-}1\), referred to as _stage_, and column indices \(j{=}0,\ldots,n\). Then \(v_{i,n}\) are the encoder inputs \(u_{i}\) and \(v_{i,0}\) are the channel outputs \(y_{i}\). From encoding perspective, Fig. 2 shows the flow of bits from left to right. From decoding perspective, LLRs flow from right to left in the code structure.

### _Successive Cancellation Decoding_

The successive cancellation decoder [1] exploits the bit channels created in the code structure. For a codeword length \(N\), the SC decoder estimates the input of the \(i\)th bit channel, i.e., \(\hat{u}_{i}\), in a sequential manner from \(i=0\) to \(N-1\). With the bit channel output \(\textbf{y}_{i}=[\textbf{y},u_{0},\ldots,u_{i-1}]^{T}\), \(\hat{u}_{i}\) is estimated at each decoding stage \(i\in\mathcal{A}\) as \[\hat{u}_{i}=\begin{cases}0&L_{u_{i}}(\textbf{y}_{i})\geq 0\\ 1&\text{otherwise.}\end{cases} \tag{2}\] The LLR \(L_{u_{i}}(\textbf{y}_{i})=\log\frac{p(\textbf{y}_{i}|u_{i}=0)}{p(\textbf{y}_{i}|u_{i}=1)}\) is computed in recursive steps that can be illustrated on the building block. In Fig. 1, \[L_{u_{0}}(\textbf{y})=L_{x_{0}}(y_{0})\boxplus L_{x_{1}}(y_{1}), \tag{3}\] for \(i=0\), where \(L_{x_{0}}(y_{0})\) and \(L_{x_{1}}(y_{1})\) are channel level LLRs and the box-plus operation between two LLR values \(L_{0}\) and \(L_{1}\) is defined as \(L_{0}\boxplus L_{1}=\log\frac{1+e^{L_{0}}e^{L_{1}}}{e^{L_{0}}+e^{L_{1}}}\). For \(i=1\), \[L_{u_{1}}(\textbf{y},\hat{u}_{0})=(-1)^{\hat{u}_{0}}L_{x_{0}}(y_{0})+L_{x_{1}}(y_{1}), \tag{4}\] with the bit value \(\hat{u}_{0}\) available from the previous decoding stage. For example, the LLR \(L_{u_{2}}(\textbf{y}_{2})\) in Fig. 2 is computed using (3) from the intermediate LLRs \(L_{v_{1,1}}(.)\) and \(L_{v_{3,1}}(.)\). The LLR \(L_{v_{1,1}}(.)\) is in turn computed according to (4) from the channel level LLRs \(L_{x_{0}}(y_{0})\) and \(L_{x_{1}}(y_{1})\) as well as the bit estimate \(\hat{v}_{0,1}\). The LLR \(L_{v_{3,1}}(.)\) is computed in a similar fashion from \(L_{x_{2}}(y_{2})\), \(L_{x_{3}}(y_{3})\) and \(\hat{v}_{2,1}\).

### _Successive Cancellation List Decoding_

The SCL decoder [2] can be seen as multiple SC decoders working in parallel. Every time an estimate \(\hat{u}_{i}\) for \(i{\in}\mathcal{A}\) has to be made, the decoder proceeds as an SC decoder for both possible decisions of \(\hat{u}_{i}\) instead of using (2). The number of decoding paths doubles at each decoding stage \(i{\in}\mathcal{A}\).
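For reference, the two building-block LLR updates (3)-(4) and the hard decision (2) can be sketched as follows; the min-sum variant is the widely used approximation of the box-plus that reappears later in the decoder design.

```
import numpy as np

def f_update(L0, L1):
    """Upper-branch update (3): exact box-plus of two LLRs."""
    return np.log((1.0 + np.exp(L0 + L1)) / (np.exp(L0) + np.exp(L1)))

def f_update_minsum(L0, L1):
    """Common min-sum approximation of the box-plus."""
    return np.sign(L0) * np.sign(L1) * np.minimum(np.abs(L0), np.abs(L1))

def g_update(L0, L1, u0_hat):
    """Lower-branch update (4), using the already decided bit u0_hat."""
    return (-1) ** u0_hat * L0 + L1

def hard_decision(L):
    """Bit estimate (2): decide 0 for non-negative LLRs."""
    return 0 if L >= 0 else 1
```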
If the number of decoding paths in the list exceeds \(N_{L}\) at any stage, the decoder retains only the \(N_{L}\) most likely decoding paths, dropping the rest. The likelihood of the correctness of a path \(l\in\{0,1\ldots N_{L}-1\}\) in the list at stage \(i\) is conveyed by the path metric \(M_{i,l}\)[10] \[M_{i,l}=M_{i-1,l}+\log(1+e^{-(1-2\hat{u}_{i,l})L_{u_{i}}(\textbf{y}_{i,l})}), \tag{5}\] where \(M_{i-1,l}\) is the path metric of the \(l\)th path at decoding stage \(i-1\), \(\hat{u}_{i,l}\) is the bit value with which the path is being extended, and \(L_{u_{i}}(\textbf{y}_{i,l})\) is the LLR value for the \(l\)th path according to (3) or (4). After the last decoding stage \(i{=}N{-}1\), the most likely decoding path from the list, i.e., the one having the smallest path metric, is selected as the decoder output. In the CRC-aided settings, a CRC checksum of \(N_{\text{CRC}}\) bits is appended to the \(K\) information bits and the \(K+N_{\text{CRC}}\) bits are encoded into an \(N\) bit codeword using (1). The decoder output is then Fig. 1: Factor graph of the building block (dashed rectangle) of a polar code and the transmission channel. Fig. 2: Structure of a polar code with length \(N=4\). the most likely decoding path in the final list that passes the CRC check. If no path passes the CRC test, the most likely path in the list is selected as the decoder output. ## III Finite Alphabet Decoding Finite alphabet decoders are a family of quantized decoders that replace LLRs with integer valued messages in order to achieve a reduced space complexity. Instead of exchanging exact or approximated LLRs, \(w\)-bit messages \(t\) from a finite alphabet \(\mathcal{T}\) of size \(|\mathcal{T}|{=}2^{w}\) are used to convey the reliability information w.r.t. a certain bit \(x\). Thus, each message \(t\) corresponds to an LLR level \(L_{x}(t)\). A general choice for the finite alphabet \(\mathcal{T}\) is unsigned integers \(\{0,1,\ldots,|\mathcal{T}|-1\}\), e.g., as in [4, 11]. However, this work uses a symmetric finite alphabet \(\mathcal{T}{=}\{-2^{w-1},\ldots,-1,+1\ldots,+2^{w-1}\}\) that is convenient to describe the proposed simplified hardware implementation [9, 12]. The alphabet \(\mathcal{T}\) is typically chosen such that it is sorted w.r.t. the underlying LLRs, i.e., \(L_{x}(t{=}{-}2^{w-1})<\ldots<L_{x}(t{=}{+}2^{w-1})\). In the design of the decoder, the LLRs \(L_{x}(t)\) are enforced to exhibit odd symmetry as \[|L_{x}(t)|=|L_{x}(-t)|\,\forall\,t\in\mathcal{T}. \tag{6}\] The first half of such an alphabet translates to negative LLRs while the second half translates to positive LLR values. The LLR computations in finite alphabet decoders are replaced with compression operations with some input \(y{\in}\mathcal{Y}\) and output \(t{\in}\mathcal{T}\) where \(|\mathcal{T}|{<}|\mathcal{Y}|\). In order to minimize the loss in error correction performance of the decoder under the constrained resolution \(w\), a mutual information maximizing decoder design aims at \(\max_{p(t|y)}I(X;T)\) when designing the operations. This kind of situation is classified as an information bottleneck setup where \(X\) is the relevant, \(Y\) is the observed and \(T\) is the compressed variable [13]. The information bottleneck framework provides algorithms for determining the mapping \(p(t|y)\) as well as the output joint distribution \(p(x,t)\) from an input joint distribution \(p(x,y)\). 
The mapping \(p(t|y)\) is designed by placing \(|\mathcal{T}|{-}1\) boundaries in the sorted observed alphabet \(\mathcal{Y}\) and optimizing them to maximize \(I(X;T)\). The distribution \(p(x,t)\) is used to obtain the LLRs \(L_{x}(t)\) and the distribution \(p(t)\) of the compressed messages. The deterministic mapping \(p(t|y)\) represents the compression operation in the form of a lookup table. ### _Mutual Information Maximizing Polar Decoders_ In finite alphabet polar decoders the information bottleneck method can be used to construct lookup tables which replace (3) and (4) [4, 5]. This process is recapped here for the building block of Fig. 1 where the underlying channel \(p(y_{i}|x_{i})\) is a quantized binary input AWGN channel. Construction of the decoding lookup table begins by designing a mutual information maximizing channel quantizer such that \(y_{i}{\in}\mathcal{T}\)[11]. With the quantized channel outputs \(\mathbf{y}{=}[y_{0},y_{1}]^{T}\) at hand, the lookup table \(p(t_{0}|\mathbf{y})\) is designed for the upper branch update with \(t_{0}{\in}\mathcal{T}\) which compresses the input alphabet of size \(2^{2w}\) to an output alphabet of size \(2^{w}\). Similarly, the lookup table \(p(t_{1}|\mathbf{y},\hat{u}_{0})\) is designed for the lower branch update with \(t_{1}{\in}\mathcal{T}\) which compresses the input alphabet of size \(2^{2w+1}\) to an output alphabet of size \(2^{w}\). Both \(t_{0}\) and \(t_{1}\) can be translated to LLR values \(L_{u_{0}}(t_{0})\) and \(L_{u_{1}}(t_{1})\), respectively. The mappings \(p(t_{0}|\mathbf{y})\) and \(p(t_{1}|\mathbf{y},\hat{u}_{0})\) define a non uniform quantization of the underlying LLR space of thier inputs. For a polar code of length \(N\), there are \(N-1\) distinct decoding tables for upper branch updates as well as \(N-1\) distinct tables for the lower branch updates [4, 5]. For instance, the decoder for Fig. 2 requires \(2N{-}2{=}6\) distinct decoding tables: A common decoding table for both the upper branch updates at level \(j{=}0\) and an individual decoding table for each upper branch update at the level \(j{=}1\). Similarly, a single decoding table for both the lower branch updates at level \(j{=}0\) and a decoding table for each lower branch update at level \(j{=}1\). Each upper branch decoding table has a size of \(2^{2w}\) while each lower branch decoding table is of size \(2^{2w+1}\). For further details, the reader is referred to [4, 5, 6]. ## IV Proposed Efficient Decoder Implementation A key challenge in finite alphabet decoders is the efficient implementation of the mutual information maximizing lookup tables. In that regard, the computational domain implementation of the lookup tables in [8] offers an elegant solution for LDPC decoders which is adopted for polar decoders here. Recall that (3) and (4) deliver the result of the upper and lower branch update as \(L_{u_{0}}(\mathbf{y})\) and \(L_{u_{1}}(\mathbf{y},\hat{u}_{0})\), respectively. For avoiding expensive propagation of the high resolution message to the building blocks of the next level in the code structure, quantization of the two LLRs is indispensable. Consider an observed variable \(Y,y{\in}\mathcal{Y}\) that models a high resolution LLR related to a relevant binary variable \(X,x{\in}\mathcal{X}\). It can be shown that threshold quantization of \(Y\) to a compressed variable \(T,t{\in}\mathcal{T}\) using a set of thresholds \(\tau\) can maximize the preserved mutual information \(\max_{\tau}I(X;T)\)[14]. 
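As an illustration of such a design, the sketch below places \(|\mathcal{T}|-1\) boundaries in an LLR-sorted, discretized observed alphabet by brute force so as to maximize \(I(X;T)\). The information bottleneck algorithms referenced above are considerably more elaborate; this sketch is only meant to convey the principle, and all names are illustrative.

```
import numpy as np
from itertools import combinations

def mutual_information(p_xt):
    """I(X;T) in bits for a joint distribution given as a 2D array."""
    p_xt = p_xt / p_xt.sum()
    px = p_xt.sum(axis=1, keepdims=True)
    pt = p_xt.sum(axis=0, keepdims=True)
    mask = p_xt > 0
    return float(np.sum(p_xt[mask] * np.log2(p_xt[mask] / (px @ pt)[mask])))

def design_quantizer(p_xy, n_out):
    """Exhaustive search for n_out-1 boundaries in the LLR-sorted observed
    alphabet (columns of p_xy) that maximize the preserved I(X;T)."""
    n_in = p_xy.shape[1]
    best_I, best_edges = -1.0, None
    for bounds in combinations(range(1, n_in), n_out - 1):
        edges = (0,) + bounds + (n_in,)
        p_xt = np.stack([p_xy[:, a:b].sum(axis=1)
                         for a, b in zip(edges, edges[1:])], axis=1)
        I = mutual_information(p_xt)
        if I > best_I:
            best_I, best_edges = I, edges
    return best_edges, best_I
```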
While the decoders designed in [4, 11] with the information bottleneck method use the result of such a threshold quantization in the form of a lookup table, [8] uses these thresholds for performing the quantization in a computational domain. In other words, the boundaries or thresholds determined during the lookup table design are used for implementing compression operations. Such a threshold quantization is henceforth represented as \(t=Q(y)\). In order to simplify the implementation, symmetric quantization is considered where the sign is preserved and the magnitude is clustered using thresholds \(\tau=\{\tau_{0},\ldots,\tau_{2^{w-1}-2}\}\) in the following non-uniform quantization [9]: \[Q(y)=\mathrm{sgn}(y)\begin{cases}1&|y|\leq\tau_{0}\\ i&\tau_{i-2}<|y|\leq\tau_{i-1},\;2\leq i\leq 2^{w-1}-1\\ 2^{w-1}&|y|>\tau_{2^{w-1}-2}\end{cases} \tag{7}\] For the building block of polar codes in Fig. 1, we have \(\mathcal{X}{=}\mathcal{U}_{0}\) and \(\mathcal{Y}{=}\{L_{u_{0}}(\mathbf{y}):\mathbf{y}{\in}\mathcal{Y}_{0}{\times}\mathcal{Y}_{1}\}\) for the upper branch. For the lower branch we have \(\mathcal{X}{=}\mathcal{U}_{1}\) and \(\mathcal{Y}{=}\{L_{u_{1}}(\mathbf{y},u_{0}):\mathbf{y}{\in}\mathcal{Y}_{0}{\times}\mathcal{Y}_{1}\ \&\ u_{0}{\in}\{0,1\}\}\).

### _Upper Branch Update_

Following [6], the upper branch update is implemented with the min-sum approximation. Since its result already lies in the symmetric alphabet \(\mathcal{T}\), no translation to LLRs is required and it naturally preserves the desired \(w\)-bit message resolution: \[t_{0}=\mathrm{sgn}(y_{0})\,\mathrm{sgn}(y_{1})\min(|y_{0}|,|y_{1}|) \tag{8}\] The approximation causes only minor performance degradation as shown in [6] and is therefore the recommended choice for the upper branch update.

### _Lower Branch Update_

The mutual information maximizing update for the lower branch leads to \(t_{1}{=}Q(L_{u_{1}}(\mathbf{y},\hat{u}_{0}))\). It can be implemented as a lookup table like in [4, 5, 6] or alternatively as a computation with threshold quantization (so far only done for LDPC decoders) as in [8, 9, 15]. The lookup table implementation suffers from its large size to cover all the \(2^{2w+1}\) input combinations. This aspect is significantly improved when using the computation according to (4). Up to this point, the operation's internal computations have been considered with real valued numbers. For a hardware implementation this is not acceptable. To reduce the internal resolution, one option is to scale the real valued LLRs to an integer range from \(-\iota\) to \(+\iota\), with \(\iota=2^{w^{\prime}-1}{-}1\), as follows: \[\phi_{s}(t)=\left\lfloor sL(t)\right\rceil=\mathrm{sgn}(L(t))\min\left(\left\lfloor s|L(t)|+0.5\right\rfloor,\iota\right) \tag{9}\] where the scaling \(s\in\mathbb{R}\) controls the LLR resolution \(\Delta=1/s\) in the integer domain. Then, the integer computation yields \[t_{1}=Q\left((-1)^{\hat{u}_{0}}\phi_{s}(y_{0})+\phi_{s}(y_{1})\right)\approx Q(L_{u_{1}}(\mathbf{y},\hat{u}_{0})). \tag{10}\] Fig. 3a depicts a corresponding hardware schematic where the translations are assumed to be implemented with two \((w{-}1)\)-bit lookup tables. The adder is assumed to work in a binary two's complement format such that subtraction and addition can be performed with the same hardware module. The quantization expects a sign-magnitude format. Therefore, two conversions from sign magnitude into the 2's complement format \(b{=}\vartheta_{2\text{'s}}(a)\) and vice versa \(a{=}\vartheta_{\text{SM}}(b)\) must be part of the hardware.
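A behavioral sketch of the integer lower branch update (9)-(10) with the non-uniform quantization (7) is given below. The LLR table \(L(t)\), the scaling \(s\) and the thresholds are assumed to come from the offline design; the handling of a zero-valued sum is an illustrative choice, since the symmetric alphabet contains no zero message.

```
import numpy as np

def translate(t, llr_table, s, iota):
    """phi_s in (9): map a w-bit message t to a scaled, clipped integer LLR."""
    L = llr_table[t]                      # LLR value L(t) from the offline design
    return int(np.sign(L)) * min(int(abs(s * L) + 0.5), iota)

def quantize_nonuniform(y, thresholds, w):
    """Q in (7): sign-preserving non-uniform magnitude quantization."""
    if y == 0:
        return 1                          # sketch: map a zero sum to +1
    mag = int(np.searchsorted(thresholds, abs(y))) + 1   # 1 ... 2^(w-1)
    return int(np.sign(y)) * min(mag, 2 ** (w - 1))

def lower_branch_update(y0, y1, u0_hat, llr_table, s, thresholds, w, w_int):
    """Integer lower-branch update (10): translate, add, re-quantize."""
    iota = 2 ** (w_int - 1) - 1
    y = (-1) ** u0_hat * translate(y0, llr_table, s, iota) \
        + translate(y1, llr_table, s, iota)
    return quantize_nonuniform(y, thresholds, w)
```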
Table I describes an accurate conversion \[b =\vartheta_{2\text{'s}}(a)=[a_{0},(|a|\oplus a_{0})+(+12_{\text{'s }}\wedge a_{0})]\text{ and } \tag{11a}\] \[a =\vartheta_{\text{SM}}(b)=[b_{0},(|b|+(-1_{\text{'s }}\wedge b_{0}))\oplus b_{0}] \tag{11b}\] where \(a_{0}\) (\(b_{0}\)) refers to sign bit of a number \(a\) (\(b\)), \(+\) is binary addition with carry propagation and, eventually, \(\wedge\) and \(\oplus\) are bitwise logic AND and XOR operations. In particular the \(+\) operation causes significant complexity. Therefore, an approximated conversion is proposed according to \[b =\vartheta_{2\text{'s}}(a)=[a_{0},|a|\oplus a_{0}]\text{ and } \tag{12a}\] \[a =\vartheta_{\text{SM}}(b)=[b_{0},|b|\oplus b_{0}] \tag{12b}\] which is illustrated in Table II. The technique involves a slight bias, since e.g. \(\vartheta(+1_{2^{\prime}s}){=}{+}1_{\text{SM}}\) but \(\vartheta(-1_{2^{\prime}s}){=}{-}2_{\text{SM}}\). To distribute the bias fairly, one option is to let every second lower branch update invert the sign for inputs and output. In our simulations we used the accurate variant (11) but we expect only insignificant performance loss from the much simpler conversion (12). Another bottleneck is the computation of (7). It requires \(w{-}1\) threshold comparisons when being implemented in a binary search manner, as depicted in Fig. (b)b. As proposed in [9] for LDPC codes, a restriction to uniform thresholds enables a much simpler implementation of the quantization operation, which is shown in Fig. (c)c. In that approach the quantization is achieved by a clipping and bit shifting operation defined as \[Q(y)=\mathrm{sgn}(y)\min(\lfloor|y|/2^{r}\rfloor+1,2^{w-1}) \tag{13}\] where \(r\) denotes the number of right-shifted bit positions. By modifying \(r\) and the scaling factor \(s=1/\Delta\) for the translation tables, any uniform threshold spacing, \((\tau_{i+1}-\tau_{i})=\Delta 2^{r}\), can be achieved. The optimal uniform quantization is obtained with a grid based search aiming for \(\max_{s,r}I(U_{1};T_{1})\). ### _Complexity Analysis_ For the upper branch processing the lowest complexity is observed with the min-sum update which requires only a single exclusive-or gate and a \((w{-}1)\)-bit comparison (see (8)). A comparison of the complexity for the discussed lower branch updates is provided in Table III. The total number of \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \(a\) & + 0 000 & + 0 001 & + 2 010 & +s 011 & -s 111 & -2 110 & -1 101 & -0 100 \\ \(\frac{\partial_{2\text{'s}}(a)}{\partial}\) & + 0 000 & + 0 001 & + 2 010 & +s 011 & +s 011 & -1 101 & -1 111 & -0 100 \\ \(\frac{\partial_{2\text{'s}}(a)}{\partial}\) & + 0 000 & + 0 001 & + 2 010 & +s 011 & +s 011 & - & -2 110 & -1 110 & -1 111 \\ \(\frac{\partial_{2\text{'s}}(b)}{\partial}\) & + 0 00 & + 0 001 & + 2 010 & +s 011 & - & - & -2 110 & -1 101 & -1 100 \\ \end{tabular} \end{table} TABLE II: Simplified binary conversions \(w^{\prime}=3\). \begin{table} \begin{tabular}{c|c|c|c c c c} variant & \multicolumn{2}{c|}{\begin{tabular}{c} additions/ \\ comparisons \\ \end{tabular} } & \multicolumn{2}{c|}{\begin{tabular}{c} memory usage in bit (e.g. \(w^{\prime}{=}6\)) \\ \(w{-}1\)- \\ \end{tabular} } & \multicolumn{2}{c|}{ \begin{tabular}{c} 9 \\ \end{tabular} } \\ \hline IB-LUT & 0 & \(w\cdot 2^{2w+1}\) & 2048 & 384 & 64 \\ CD (non-uni.) 
& \(w\) & \((2(w^{\prime}{-}1)+w^{\prime})2^{w-1}\) & 128 & 64 & 32 \\ CD (uniform) & \(1\) & \(2(w^{\prime}{-}1)2^{w-1}\) & 80 & 40 & 20 \\ \end{tabular} \end{table} TABLE III: Complexity of lower branch updates. Fig. 3: (a) shows a hardware schematic for the lower branch processing that can be used with (b) non-uniform quantization or (c) uniform quantization [9]. For (b) and (c) we have \(w^{\prime}{=}9\)-bit internal resolution and \(w{=}4\)-bit message resolution. potentially different parameterized updates is \(N{-}1\) for the complete decoder. The example memory usage is evaluated for an internal resolution of \(w^{\prime}{=}6\,\)bits which sacrifices only minor performance in the simulations. Clearly, the computational domain solution with uniform quantization yields the lowest complexity. It only requires memory for the two translations to \(w^{\prime}{\)-bit LLR magnitudes of and one addition operation. The non-uniform computational domain variant requires additional complexity to perform the non-uniform threshold quantization with \(w{-}1\) comparisons tested against the total \(2^{w-1}\) different \(w^{\prime}{\)-bit thresholds in a binary search fashion. In case of \(w{=}4\,\)bits the lookup table solution requires more than \(25\) times the number of memory bits to specify the decoder. A conventional fixed point SC decoder from [10] calls for about \(w{=}6\,\)bits to achieve similar performance as the proposed \(w{=}4\)-bit decoder. ## V Performance Analysis This section presents the simulation results showing the error correction performance of the proposed quantized decoders. The proposed decoding scheme is compared with double-precision floating-point LLR-based decoding as well as finite alphabet decoders designed using the information bottleneck method [4, 5]. The LLR-based decoding represents the unquantized decoders. The SCL decoding is used here in the CRC-aided setting with list size of \(N_{L}=32\) and CRC size of \(N_{\mathrm{CRC}}=16\). For the construction of polar codes, the method adopted in 5G NR [3] is used. Finally, all the simulations are performed for a codeword length of \(N=1024\) over an AWGN channel using BPSK modulation. The SC decoding of polar codes mainly performs two types of computations, i.e., upper or lower branch on a building block in the polar code structure. Hence, the finite alphabet decoders in this work are labelled according to the design method used for upper and lower branch updates. The decoders from [4, 5] where both upper and lower branch updates are designed using the information bottleneck (IB) method are labelled IB-IB. The finite alphabet decoders of [6] that deploys min-sum (MS) and the information bottleneck for designing upper and lower branch updates, respectively, are labelled MS-IB. The proposed decoders which use min-sum rule for upper branch and computational-domain uniform quantization method for lower branch updates are labelled MS-CD. The finite alphabet quantized decoders are constructed offline for a selected \(w{=}\log_{2}(|\mathcal{T}|)\)-bit resolution. Each \(w\)-bit quantized decoder deploys a \(w\)-bit mutual information maximizing channel quantizer designed using the information bottleneck method [4, 5, 11]. The channel quantizer and, in turn, the quantized decoder are constructed for a specific \(E_{b}/N_{0}\), which is referred to as the design \(E_{b}/N_{0}\) of the decoder. 
For a given code rate \(R\) and resolution \(w\), the design \(E_{b}/N_{0}\) for the IB-IB decoder is selected as the one which achieves a block error rate of \(10^{-3}\) at the smallest channel \(E_{b}/N_{0}\). The same design \(E_{b}/N_{0}\) is then used to generate MS-IB and MS-CD decoders for the same \(R\) and \(w\).

### _Successive Cancellation Decoding_

Fig. 4 shows the block error rates under successive cancellation decoding for a code rate \(R{=}0.5\) and resolutions of \(w{=}4\) and \(2\) bits. The three finite alphabet decoders in the figure for \(w{=}4\)-bit resolution were designed for \(E_{b}/N_{0}{=}0.5\,\)dB. The \(w{=}2\)-bit decoders were designed for \(E_{b}/N_{0}{=}3.5\,\)dB. Compared to the floating-point LLR-based decoder, the 4-bit decoders show a degradation of around 0.2 dB, while the 2-bit quantized decoders exhibit a significant performance loss of approximately 2.4 dB. Most importantly, the IB-IB, MS-IB and the proposed MS-CD decoders have practically the same error rate performance. Thus, the implementation-friendly MS-CD approximation costs nothing in terms of performance loss.

### _Successive Cancellation List Decoding_

Fig. 5 presents the block error rates for CRC-aided SCL decoding of the 4-bit quantized decoders for multiple code rates. For the low code rate \(R{=}0.25\), the IB-IB [4] decoder exhibits a loss of \(0.2\,\)dB w.r.t. the double-precision LLR decoder, while the proposed MS-CD decoder shows approx. \(0.08\,\)dB of additional degradation. Both the IB-IB and the MS-CD decoders for \(R{=}0.25\) are constructed for a design \(E_{b}/N_{0}{=}0.5\,\)dB.

Fig. 4: Block error rate comparison under SC decoding.

Fig. 5: Block error rate comparison under CRC-aided SCL decoding with various code rates for \(w{=}4\).

The additional performance loss of MS-CD w.r.t. the IB-IB decoder shrinks to approximately 0.05 dB at the code rate \(R{=}0.5\). For the code rate \(R{=}0.5\), Fig. 5 also includes the block error rate of the MS-IB [6] decoder. The three finite alphabet decoders are constructed for a design \(E_{b}/N_{0}{=}0.5\) dB. The error rate curve of the MS-IB decoder lies in between the error rate curves of the IB-IB and MS-CD decoders. This is expected behaviour, since the MS-IB decoder design principle deploys an approximate, i.e., min-sum, design rule only for the upper branch while keeping the information bottleneck design rule for lower branch operations. The proposed MS-CD uses approximate design rules for both the upper and the lower branch operations. The quantized decoders in Fig. 5 for the code rate \(R{=}0.75\) are designed for \(E_{b}/N_{0}{=}1.75\) dB. It can be seen that the performance degradation shown by the MS-CD decoder w.r.t. the IB-IB decoder reduces further at this high code rate. Similar trends have been noticed for LDPC decoders [9]. Fig. 6 compares the block error rates of the 4-bit IB-IB and MS-CD decoders of Fig. 5 at code rate \(0.5\) with their respective 3- and 2-bit variants. The 3-bit IB-IB and MS-CD decoders are designed for \(E_{b}/N_{0}{=}0.5\) dB, while the 2-bit decoders are designed for \(E_{b}/N_{0}{=}3.0\) dB. It can be seen that by decreasing the decoder resolution from 4 to 3 bits, the gap between IB-IB and MS-CD widens to 0.59 dB. Varying resolutions within a decoder and extended design techniques could reduce the observed degradation under coarse quantization, as shown in [12] for LDPC decoding.
Another observation is the difference in the performance under SC and CRC-aided SCL decoding of the IB-IB and MS-CD decoders constructed for the same design \(E_{b}/N_{0}\). For \(w{=}4\) bits, there is no difference in the error correction performance of the IB-IB and MS-CD decoders, as seen in Fig. 4. However, a small difference can be seen in Fig. 5 when the same decoder is used for SCL decoding. It is not completely clear what leads to this performance difference between IB-IB and MS-CD under SC and SCL decoding. A major reason could be the fact that the decoders are constructed using quantized density evolution that assumes successive cancellation decoding. In other words, the decoder design framework is not aware of the list and the outer CRC used in the SCL decoding.

## VI Conclusions

In this paper, finite alphabet decoders are designed for polar codes. This class of decoders replaces LLR-based computations with mutual information maximizing table lookup operations. The main contribution is the use of a computational domain with uniform quantization instead of a lookup table for a significant complexity reduction in the lower branch update. In the case of 4-bit message resolution, we estimate only 1/25 of the memory consumption compared to a pure lookup table implementation. The uniform quantization requires only 1/4 of the computational cost compared to the optimal non-uniform quantization. The min-sum operation is chosen for the upper branch processing. It is shown that at 4-bit resolution, the performance degradation due to the used hardware-friendly approximations remains below 0.08 dB compared to the information-optimal lookup table design.
2310.03655
Strategic Evaluation: Subjects, Evaluators, and Society
A broad current application of algorithms is in formal and quantitative measures of murky concepts -- like merit -- to make decisions. When people strategically respond to these sorts of evaluations in order to gain favorable decision outcomes, their behavior can be subjected to moral judgments. They may be described as 'gaming the system' or 'cheating,' or (in other cases) investing 'honest effort' or 'improving.' Machine learning literature on strategic behavior has tried to describe these dynamics by emphasizing the efforts expended by decision subjects hoping to obtain a more favorable assessment -- some works offer ways to preempt or prevent such manipulations, some differentiate 'gaming' from 'improvement' behavior, while others aim to measure the effort burden or disparate effects of classification systems. We begin from a different starting point: that the design of an evaluation itself can be understood as furthering goals held by the evaluator which may be misaligned with broader societal goals. To develop the idea that evaluation represents a strategic interaction in which both the evaluator and the subject of their evaluation are operating out of self-interest, we put forward a model that represents the process of evaluation using three interacting agents: a decision subject, an evaluator, and society, representing a bundle of values and oversight mechanisms. We highlight our model's applicability to a number of social systems where one or two players strategically undermine the others' interests to advance their own. Treating evaluators as themselves strategic allows us to re-cast the scrutiny directed at decision subjects, towards the incentives that underpin institutional designs of evaluations. The moral standing of strategic behaviors often depends on the moral standing of the evaluations and incentives that provoke such behaviors.
Benjamin Laufer, Jon Kleinberg, Karen Levy, Helen Nissenbaum
2023-10-05T16:33:08Z
http://arxiv.org/abs/2310.03655v1
# Strategic Evaluation: Subjects, Evaluators, and Society

###### Abstract.

A broad current application of algorithms is in formal and quantitative measures of murky concepts -- like merit -- to make decisions. When people strategically respond to these sorts of evaluations in order to gain favorable decision outcomes, their behavior can be subjected to moral judgments. They may be described as 'gaming the system' or 'cheating,' or (in other cases) investing 'honest effort' or 'improving.' Machine learning literature on strategic behavior has tried to describe these dynamics by emphasizing the efforts expended by decision subjects hoping to obtain a more favorable assessment -- some works offer ways to preempt or prevent such manipulations, some differentiate 'gaming' from 'improvement' behavior, while others aim to measure the effort burden or disparate effects of classification systems. We begin from a different starting point: that the design of an evaluation _itself_ can be understood as furthering goals held by the evaluator which may be misaligned with broader societal goals. To develop the idea that evaluation represents a strategic interaction in which both the evaluator and the subject of their evaluation are operating out of self-interest, we put forward a model that represents the process of evaluation using three interacting agents: a decision subject, an evaluator, and _society_, representing a bundle of values and oversight mechanisms. We highlight our model's applicability to a number of social systems where one or two players strategically undermine the others' interests to advance their own. Treating evaluators as themselves strategic allows us to re-cast the scrutiny directed at decision subjects, towards the incentives that underpin institutional designs of evaluations. In practice, the moral standing of strategic behaviors often depends on the moral standing of the evaluations and incentives that provoke such behaviors. We apply our framework to a variety of extended examples and discuss ethical implications.

Strategic behavior, Evaluation, Measurement

Or, alternatively, should we think of the hiring process itself as deficient in some way? A model that treats the evaluation merely as an elicitation device for an applicant's attributes will struggle to identify the deeper normative concerns at play in such an example. Consider a second example: when a university assigns grades to its students, we typically describe this grading process as a way of measuring student performance. But in this narrow view, we may miss some of the other interests at play in a grading scenario, together with their normative implications. A university that is engaging in grade inflation, for instance, might find that its instructors receive higher teaching evaluations and its development program receives larger alumni donations in a regime with higher grades. Students may also view themselves as benefitting from such a regime. If both parties appear to be advantaged by the decision to assign uniformly high grades, how do we pinpoint what is intuitively undesirable about grade inflation? These and many other examples suggest that a fuller understanding of strategic evaluation requires that we include an additional player in the model.
We draw on sociological theory and empirical evidence about how evaluating institutions function in society: while evaluating institutions are frequently tasked, explicitly or implicitly, with implementing broad societal goals and values, they also operate out of self-interest, which may be more or less aligned with these societal aims. From this starting point, we observe that in a richer model of strategic evaluation, it is not only the decision subjects who operate out of self-interest: the design of an evaluation should _itself_ be understood as a self-interested behavior by the institution, which aims to achieve its own goals under various social, legal, and organizational constraints. The institution is in turn held to account by a third player, which we can think of as playing the role of society--in the form of laws, regulations, norms, or individual authorities tasked with oversight. Our introductory examples suggest that many of the central considerations in the design of evaluations are better understood as clashes within this three-player structure, between the strategic actions of the individuals being evaluated, the institution performing the evaluation, and society's expectations for what the evaluation should be achieving. The paper is organized as follows. Section 2 provides our three-player model, aiming to capture the various ways that an evaluation outcome can diverge from societal goals. Section 3 discusses three extended examples: hiring practices, grade inflation, and sports. In Section 4, we use the model to enumerate the set of possible scenarios where the interests of the three players either align or diverge. Section 5 discusses the ethical implications of our model, aiming to recast ethical scrutiny in light of evaluators' strategic aims. We discuss further related work in Section 6. ## 2. Overview We suppose that society has determined some desirable property of interest, and it would like to find the people who exhibit this property. But since society lacks the ability to perform this assessment itself, it delegates the task to an _evaluator_. The evaluator constructs a test that it gives to _subjects_, and those who pass the evaluation are deemed to satisfy the property. ("Society," of course, is not a monolithic actor with a single set of goals. As our model will illustrate, we intentionally conceive of society capaciously--encompassing both situations in which a governing body is imbued with some regulatory authority to oversee the activities of evaluators, as well as situations in which societal interests are manifested as the expression of public values, but without explicit organizational oversight.) There are several sources of'slippage' that are possible in this setting--that is, cases in which an assessment fails to meet its mandate or achieve broader societal goals. We would like a model that is capable of considering these discrepancies in a unified manner. The first source of slippage is the gap between someone's performance on a test and their underlying ability. A subject might have slept badly the night before a math test, leading to a score that does not reflect their skills. Or, a strategic test-taker might have chosen to invest in skills that boost their score on the assessment without improving underlying properties (i.e., some form of 'gaming'). A second source of slippage is the gap between the aim of the evaluator and the design of the evaluation. No evaluation perfectly measures its intended quantity. 
Myriad factors constrain the evaluator and limit the signal that an evaluation can capture, either by introducing noise or bias to the measurement. This type of misalignment has received significant attention in work on the role of _proxies_ in classification: practical evaluations generally need to rely on measurable properties that stand in for the property of interest, but which do not precisely coincide with it. The third source of slippage is the gap between the evaluator's aim and the true property of interest to society. Once we appreciate that the evaluator, like the subject, is also a strategic actor with their own self-interest, then we can see that the misalignment can also arise from forms of gaming or cheating by the evaluator: the evaluator might care about a property other than the one society is interested in assessing, and they might have correspondingly created a test that is better at measuring their property than it is at measuring the one of interest to society. There are thus multiple issues that we need to keep in focus for the purposes of our analysis: a given circumstance inhabited by the subject might or might not be sufficient to pass a test; a given way of passing the test might or might not correspond to what the evaluator is trying to measure; and what the evaluator is trying to measure might or might not correspond to the underlying societal values that motivated the test in the first place. To keep track of these issues we therefore introduce a formal model that includes all of these facets. The model cannot by itself resolve the underlying ethical questions about the behavior of the subject and the evaluator in any given situation, but it can provide precision about the nature of these situations as a starting point for ethical analysis. ## The Model In order to formalize the notion that the test is imperfectly measuring an underlying property of interest, we need to represent the idea that the true state of the subject is hidden and only partially observable. Therefore, the fundamental ingredient in our model is a set \(S\) of abstract _states_, where each state serves as a possible description of the subject. In general, we view the set of states as enormous, since the states need to be able to recognize fine-grained distinctions between subjects: if two subjects differ in a way that might be relevant to some form of evaluation, this should imply that these two subjects reside at different states. For example, if one is evaluating a student's ability to multiply numbers, then the state should be expressive enough to describe the student's aptitude at multiplication and how they came to acquire this aptitude. Similarly, if one is evaluating an athlete via a 100-meter sprint, then the state should describe the athlete's sprinting abilities, including information about their training up to this point, as well as the conditions under which the race is run. The fact that states are expressive enough to capture fine-grained differences between subjects means that for any particular subject, it will not in general be possible to learn their precise state from any limited amount of interaction with them. Any property can be described by the set of states at which it holds, and our discussion thus far has implicitly been concerned with four properties that can in general all be different from one another: * Society's initial property of interest--the concept with which we began--can be viewed as a set of states \(I_{\text{s}}\subseteq S\). 
* The evaluator might have motivations that are distinct from simply assessing the property \(I_{\text{s}}\); thus, we assume that the evaluator is interested in identifying subjects who belong to some possibly different set of states \(I_{e}\subseteq S\). * Since the state set is enormous, and states might differ in hard-to-discern ways, it is generally not possible to perfectly evaluate a particular property; as a result, the set of states that correspond to passing the evaluator's test is a set \(P\subseteq S\) that might be different from both \(I_{\text{s}}\) and \(I_{e}\). * Finally, the subject begins at some initial state \(s_{0}\in S\) and has a budget of effort that they can spend to move to a new state, \(s_{1}\in S\), with the goal of reaching a state that passes the test. Let \(R\subseteq S\) be the set of all states that the subject can reach using this budget of effort; thus, it is possible for the subject to pass the test if and only if there is a state in \(R\cap P\) -- both reachable and among the passing states. Now, for any collection of subsets \(C_{1},C_{2},\ldots,C_{k}\subseteq S\), let's say that two states \(s\) and \(s^{\prime}\) are _indistinguishable_ with respect to \(C_{1},C_{2},\ldots,C_{k}\) if for each \(C_{j}\), the state \(s\) belongs to \(C_{j}\) if and only if the state \(s^{\prime}\) does. Notice that indistinguishability is an equivalence relation, and so it divides the state space into equivalence classes. Since a given state can either belong or not belong to each of \(C_{1},C_{2},\ldots,C_{k}\), there are \(2^{k}\) possible equivalence classes, though some of them may be empty. Using the four subsets of the state space we have defined--\(I_{\text{s}}\), \(I_{e}\), \(P\), and \(R\)--the different scenarios of interest to us can be categorized by whether a given state belongs (or does not belong) to each of \(I_{\text{s}}\), \(I_{\text{e}}\), \(P\), and \(R\); that is, there is a different scenario for each of the \(2^{4}=16\) equivalence classes of indistinguishable states with respect to \(I_{\text{s}}\), \(I_{\text{e}}\), \(P\), and \(R\). These categorizations are illustrated in Figure 1. As discussed above, a state's membership in a given equivalence class does not convey normative information on its own, but this decomposition into equivalence classes provides a starting point for ethical analysis by systematically mapping the scenarios that can arise according to these four underlying dimensions. ## 3. Extended Examples In order to make the utility of our three-player model more concrete and to demonstrate how the interests of subjects, evaluators, and society can be variably aligned or misaligned, we discuss three example cases: hiring, grade inflation, and sports. ### Hiring In hiring, a variety of evaluations may be employed to assess candidates' fitness for a position. We can think of hiring as a multi-stage process in which an initial pool of candidates is winnowed down into progressively smaller sets; evaluations are conducted at each stage in order to select the candidates who will progress through the pipeline (Bartos et al., 2016). Recently, a good deal of research has focused in particular on initial screening steps in the hiring process. 
These could include algorithmic tools to analyze resumes; personality quizzes, games, and analysis of video interviews to predict a candidate's likelihood of job success; or the degree to which candidates exhibit certain qualities, like resourcefulness or grit (Kal well-resourced backgrounds do better across all stages of the hiring process due to a combination of economic advantages, social connections, and cultural resources that signal their social position to gatekeepers (i.e., hiring managers) (see also Bourdieu [(9)]). Consider the following hypothetical example. In some law schools, student services staff sponsor golf instruction for law students who are unfamiliar with the game. Though golf lessons may seem orthogonal to the ostensible substantive goals of legal education, the lessons are designed to equip students with the _cultural_ toolkit for job success. Golf is, traditionally, a sport strongly associated with economic privilege, and one historically off-limits to women and non-white players; as such, privileged white men are more likely to know how to play golf. Given the reality that many elite law firms are also disproportionately composed of privileged white men, and that those firms may seek to hire "the kinds of people" who know how to play golf (i.e., golf-playing is an indirect proxy for whiteness, maleness, and socioeconomic privilege), golf instruction in law schools can be understood as a means of trying to assist law students in signaling cultural fit. Indeed, law school golf programs are often explicitly directed toward women, and place emphasis on basic golf etiquette and literacy as well as skills. (For instance, participants in a golf program for women students at Arizona State University's Sandra Day O'Connor College of Law acknowledged that they were learning to play because "golf is an access issue" and that they "didn't want to be left out" of the networking opportunities that knowing how to play golf could yield [(3)]). Understood through the lens of our model, we can envision a law school graduate, seeking a job at an elite firm, who is competent in the practice of law (thereby belonging to a state in \(I_{\text{s}}\), society's property of interest). Imagine that our hypothetical job seeker does not come from an elite socioeconomic background, and has not played golf before. By taking golf lessons and developing a cultural facility with golf, she is able to--and does--pass the evaluator's test (that is, successfully interview for the job) in a hiring process; familiarity with golf has enabled her to reach a state that belongs to both \(P\) and \(R\). (Even if the hiring process doesn't include an explicit golfing component, we could imagine several ways in which this cultural knowledge might surface in an interview--for example, through discussion of a recent PGA tournament, or conversation with the hiring manager about local courses.) However, if we imagine that for the elite law firm, golf skills are valued because of their traditional correlation with a particular class background, and have served as a "cultural fit" proxy for reproducing the current demographics of the firm among new hires, then our job seeker has _not_ attained the evaluator's property of interest, \(I_{e}\); that property is out of alignment with the others in our model. 
As we've described, by considering alignments and misalignments among these states, our three-player model provides a mechanism for directing our attention to ethical implications of strategic behavior with more nuance. In a two-player model, we might simply view our job-seeker's golf lessons as strategic behavior against the evaluator's goals, and might seek ways to limit or discount its influence on the hiring process. The three-player model shows how the evaluator's interest \(I_{e}\) is itself out of step with both society's interest \(I_{s}\) and the interest of the job-seeker, who wishes to reach a state in \(P\). Instead, golf lessons may be recast as an effort to push back against an unjust exclusionary criterion, which serves to align the subject's strategic behavior with societal objectives. As such, it may be judged as ethically acceptable. Figure 1. Diagram representing relevant states in the three-party model of evaluation. Each position for the decision subject represents a state which may or may not be described by any of the following four properties: Attainable for the decision subject (\(R\)), valuable for society (\(I_{s}\)), desirable for the evaluator (\(I_{e}\)), and passing the evaluation (\(P\)). Not all Boolean combinations are depicted. Further, this example demonstrates an additional benefit of the three-player model. Much contemporary scholarship on fairness in hiring processes focuses exclusively on screening stages, when algorithmic tools are used to winnow down a set of candidates to a set to be "called back" for an interview. (Most social science audit studies of these processes also focus exclusively on these early hiring stages, as demonstrated by Quillian et al. (Quillian et al., 2018), for both pragmatic reasons and based on research ethics considerations.) But a good deal of biased and exclusionary hiring practice--that is, misalignment of an evaluator's objective and societal objectives--is likely to occur in _interview_ stages, which are often excluded from scrutiny by researchers studying the ethical dimensions of AI-driven tools or the fairness of hiring processes (Srivastava et al., 2017; Quillian et al., 2018). A broader conceptualization of strategic behavior offers a more inclusive "end-to-end" view of the hiring process and the ethical dimensions thereof. ### Grade Inflation _Grade inflation_ is a phenomenon in which the grades students are assigned for coursework tend to "inflate" (i.e., increase) over time. Empirical data demonstrate the existence of the phenomenon across colleges and universities: in one study of 200 U.S. schools (Srivastava et al., 2017), "A" grades comprised 43 percent of all letter grades issued in 2009, as compared to only 15 percent in 1960. Grade inflation is particularly pronounced among private colleges and universities, even controlling for student selectivity (Srivastava et al., 2017). How might we understand grade inflation through the lens of strategic evaluation? We can conceive of the evaluation as the issuance of a grade (or a set of grades) to a student based on course performance. If we think of \(P\) as representing the set of states resulting in a high grade, then the student has an interest in finding a state in \(P\) and \(R\): a reachable state in which they receive a high grade. The student presumably benefits from high grades (e.g., as a credential for a future job search). 
The educational institution, as evaluator, also has interests in issuing high grades: they have a reputational interest in ensuring that graduates are successful on the job market, and high grade point averages can help to set students up for such success. When grade inflation is particularly widespread, schools may find it difficult to "deflate" due to concern about harming students' career opportunities or becoming less competitive in recruiting new students (Srivastava et al., 2017). Schools may also be interested in keeping students satisfied via high grades for purposes of maintaining positive alumni relations, which bear reputational and economic dividends (e.g., they might result in greater donations to the school in the future). Accordingly, the institution has incentives to make sure that the set of states \(I_{e}\) it approves of contain many states in \(P\) and \(R\): reachable states conferring high grades. Instructors, whom we can think of as acting as agents of the educational institution, have their own set of interests, which might support grade inflation. Instructors may enjoy better interpersonal relationships with students when they assign them high grades, and empirical evidence suggests that instructors who issue high grades receive better evaluations from students, which may factor into faculty's own tenure and promotion evaluations (Quillian et al., 2018). (We could also, of course, conceive of instructors and educational institutions as separate "players" in our model that are potentially _misaligned_ in their interests regarding grading; we collapse them here for the sake of simplicity in illustration, since for purposes of our discussion they both have incentives for \(I_{e}\) to contain many states in \(P\) and \(R\)). Thus far, then, both the subject and the evaluator may be aligned in their interests, supporting an inflated grade regime. But society's interest--represented as \(I_{s}\)--may be misaligned. Indeed, we can think of the "inflation" of the grades as an enlargement of the set \(I_{e}\) relative to the set \(I_{s}\): the institution is willing to view many more states as deserving of high grades, whereas society might want \(I_{s}\) to be a smaller set that has fewer states in \(R\) (corresponding to fewer states achievable by students). There are many potential arguments for why societal interests might be poorly served by inflated grades. Grade inflation might be problematic to the extent that grades are useful tools for distinguishing among the performance of different students; if everyone gets an "A," it's not as clear which students truly excelled (Quillian et al., 2018). Similarly, some argue that grade inflation may diminish students' motivation to excel in coursework, because attaining a top grade takes relatively less work, thereby reducing students' capacity to reach their full learning potential. The ethical and social implications of grade inflation are strongly debated, both in the academic literature on the topic (Srivastava et al., 2017; Goyal et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017) and within institutions facing public pressures to rein in inflation. In situations like these, interests \(P\) and abilities \(R\) of the subject are aligned with the interests of the evaluator \(I_{e}\), but misaligned with societal interest \(I_{s}\), as illustrated by our three-party model. ### Sports The sports world is also a useful site for illuminating these dynamics, owing to its intentionally competitive design. 
In sports, acceptable means of reaching a particular objective (scoring, or winning a race) are generally made explicit via a set of detailed rules promulgated by the sport's association or governing body, and subsequently enforced by referees or other officials. Sports, therefore, are a natural setting for studying strategic behavior. A traditional analytic structure would posit that the organizer of the sporting event (the evaluator) creates a set of rules designed to measure athletes' (subjects') abilities, while athletes look for ways to gain an "edge" within the constraints of the rules. In particular, to prepare for an event, athletes train to improve speed, strength, and agility; they strategize about and prepare for likely competitive scenarios; they gather information about the strengths and weaknesses of the competition. A number of sports scandals and armchair debates involve the normative bounds of these behaviors, when such strategic efforts cross a line from gamesmanship into territory disallowed by the organizers--including doping, illegal sign-stealing, and other forms of (what is commonly perceived as) "cheating."

Consider, for example, a championship-level track and field meet. We often think of such events as having some of the most straightforward specifications--to run a certain distance as fast as possible, or to jump as far as possible--but of course they are also controlled by rules concerning allowable equipment, racing conditions, and substances (e.g. drugs) that an athlete is or isn't allowed to ingest while training or competing. We intuitively think of the "organizers" of the event as the enforcers of the rules, where the organizers represent some amalgam of the local operations of the meet and the international governing bodies for track and field. The two-party analysis of this setting would take this collection of organizers to be the evaluator, formulating and enforcing rules that apply to the athlete as subject.

As with the other domains we've considered, this two-party interaction between the evaluator and the subject misses a number of the central issues that arise in the process of governing a sport like track and field. A salient example is the design of the track itself: at the 2021 Tokyo Olympics, a great deal of technology and money went into the creation of a "springy" track surface (i.e. rubber granules for better shock absorption) to enable the runners to increase their speed and increase their chances of breaking records (Steintein, 2017). This type of technology could be pushed much further than it was at the Olympic Games; what determined the limit of the track's springiness was not technology, but a sense that going too far would risk the athletes' safety, and cross a notional line that separates the act of running on a track from the act of running across a 400-meter trampoline. This notional line was therefore enforced by material-science specifications for allowable track surfaces defined by World Athletics, which governs track and field events (Bradley et al., 2017). In one sense, this example reveals a familiar kind of strategic interaction: the hosts of the track meet spend money to commission a track whose surface pushes up against the allowable specifications, and the governing body enforces rules designed to preserve the underlying intent of the activity. But in another sense, this strategic tension is happening _within_ the set of parties that a simpler analysis might have grouped together as a single "evaluator."
This is precisely the richer view that a multi-party analysis makes possible: athletes would like to win races, and they work strategically within the rules enforced by the event organizer and the governing body to achieve this; event organizers would like to host track meets where world records are set, and they work strategically within the rules enforced by the governing body to achieve this. The 2021 Olympics is far from the only recent high-profile example of these issues in track and field; another recent instance is the attempt (with help from Nike as part of its _Breaking2_ campaign) to create approved conditions under which it would be possible for an athlete to run a marathon in under two hours (Bradley et al., 2021). It is worth noting that once we move from a two-party view to a multi-party view, there is no reason we need to stop at three parties; for example, even the governing body of track and field is motivated to create conditions in which dramatic events happen in their sport in order to attract publicity and attention. In doing so, they operate strategically: for example, choosing how to set standards within informal constraints set by further parties, including the opinion of the public and the sports media about what constitutes a reasonable format for the event.

Similar considerations arise in many other sports. A point worth highlighting is the contrast between technical restrictions on an allowable track surface and technical restrictions on allowable equipment, such as running shoes (which, like the track, are also made of rubber and designed to be springy). Though they appear similar, a key contrast is that strategic innovations in equipment are made by parties who are helping athletes; in contrast, strategic innovations in track surfaces are made by parties whom we typically think of as maintaining the integrity of the event--but whom, according to our model, we can also view as strategic actors motivated in part by their own aims.

Since there are multiple scenarios in these settings that differ in subtle ways, our formalism in terms of subsets \(P\), \(R\), \(I_{e}\), and \(I_{s}\) can help clarify the distinctions among them. In applying the formalism to our examples here, we think of the underlying states as representing not only qualities about a competitor, but also different track meet scenarios and their outcomes. We focus on some definition of success -- such as whether a particular world record has been broken. \(P\) is then the set of states where this success outcome occurs; \(R\) is the set of states achievable by the athlete; and \(I_{e}\) and \(I_{s}\) are the states that are acceptable to the local organizer of the event and the global governing body, respectively. In this way, we can distinguish among the interpretation of states based on whether or not they belong to each of the four sets:

* We start with the most straightforward case, in which an athlete breaks a record under conditions that are acceptable to both the event organizer and the governing body. This is simply a state in all four of \(P\), \(R\), \(I_{e}\), and \(I_{s}\).
* Now consider the following hypothetical scenario, inspired by our discussion: an event organizer commissions a highly springy track, resulting in a new world record; but the track is later found to violate the allowable material-science specification for track surfaces. This would correspond to a state in \(P\), \(R\), and \(I_{e}\), but not \(I_{s}\).
* A state corresponding to an officially sanctioned sub-two-hour marathon -- i.e., an approved outcome that breaks two hours -- would lie in \(P\), \(I_{e}\), and \(I_{s}\); but because human runners are not able to attain this state, it is not in the set \(R\) of states reachable by the subject. We can think of it as an open question whether -- for this activity, with \(P\) corresponding to marathon times under two hours -- there in fact exists a state in all four of the sets \(P\), \(R\), \(I_{e}\), and \(I_{s}\) (Stein, 2017).
* There are other scenarios that follow almost mechanically; for example, if an athlete breaks a world record, is subsequently disqualified by the local organizers, but has their time reinstated after a successful appeal to the governing body, this corresponds to a state in \(P\), \(R\), and \(I_{s}\), but not \(I_{e}\).

## 4. A Mechanical Understanding

In the previous section, we discussed a number of evaluation scenarios in which social dynamics lead to strategies that may or may not serve the interests of evaluators, decision subjects, and society. We now argue that our model (described in Section 2) doesn't just capture these individual scenarios, but extends to a wide range of candidate and evaluator behaviors in various domains. Where the previous parts of this section focused mainly on identifying particularly evocative examples of behaviors and strategies, we suggest here that the model can also be useful in an _enumerative_ role: by varying its parameters, our model is able to portray all the possible stories we set out to describe. Given its structure as a three-party game, we can use the model to enumerate the scope of possible scenarios -- both mundane and nuanced -- where the interests of a subject, an evaluator, and society either align or diverge.

Consider a particular domain where the subject of an evaluation (e.g., a job candidate or athletic competitor) is described by some state at the time she is evaluated. Recall we defined this state as \(s_{1}\in S\), the state a subject might occupy after exerting strategic effort. There are three qualities of this state which, we believe, capture much of the important social context for reaching descriptive or ethical conclusions about the evaluation. The first important quality, and perhaps the most straightforward, is whether the state 'passes' the evaluation (or otherwise performs favorably). This is true if \(s_{1}\in P\), where the group of passing states \(P\) is defined by criteria that are decided on, strategically, by the evaluator. The second important quality of state \(s_{1}\) is whether it genuinely represents an example of the quality that is desired by broader social interests. Does the subject actually excel at the activity that is supposedly being tested for? If so, we would say \(s_{1}\in I_{\text{s}}\). Finally, the third important quality is whether the candidate's state \(s_{1}\) serves the strategic interests of the evaluator. If passing this subject would be strategically beneficial for the evaluator, then \(s_{1}\in I_{\text{e}}\). Taken together, these qualities can help us characterize and make sense of the outcome of an evaluation. For ease of exposition in this section, we leave out the (fourth) question of whether \(s_{1}\) is a feasible state for the decision subject to reach (i.e. whether \(s_{1}\in R\)). It would not be difficult to extend our discussion to include a distinction between whether \(s_{1}\) belongs to \(R\) or not, but for now we focus our analysis on states in \(S\) without including this distinction.
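Before walking through the taxonomy, a minimal sketch (ours, not the authors') makes the enumeration concrete: it lists the \(2^{3}=8\) combinations of the three remaining predicates, and reinstating \(R\) would double this to the \(2^{4}=16\) equivalence classes described in Section 2.

```python
from itertools import product

# Minimal sketch: enumerate the 2^3 = 8 ways a post-effort state s1 can relate
# to the three predicates retained here (membership in I_s, I_e, and P).
# Adding the reachability set R back in would yield 2^4 = 16 equivalence classes.
predicates = ("s1 in I_s (passing would serve societal values)",
              "s1 in I_e (passing would serve the evaluator's interests)",
              "s1 in P   (the state passes the evaluation)")

for combo in product((True, False), repeat=len(predicates)):
    print("  |  ".join(f"{'Y' if member else 'N'}: {name}"
                       for member, name in zip(combo, predicates)))
```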
We therefore have a taxonomy of states based on distinguishing whether a state \(s_{1}\) belongs to \(P\), whether it belongs to \(I_{\text{s}}\), and whether it belongs to \(I_{\text{e}}\). For each combination of these three different qualities, we provide an example scenario that intuitively satisfies the given combination. To show how all the possible scenarios can arise in a single unified setting, we situate all of them in a stylized story of a law student applying for a job at a prestigious law firm. For pedagogical purposes, we imagine that the law firm has an overtly (and in our telling, somewhat cartoonishly) biased hiring process that grows out of the discussion in Section 3.1; specifically, the firm seeks candidates who grew up in affluent circumstances, and they attempt to discern this by inviting job applicants to play a round of golf during their on-site interview visit. Our formulation is therefore deliberately extreme so that it can make clear the distinctions among different scenarios; in more nuanced situations, we would have the same formal structure but potentially a more challenging interpretive task in distinguishing among different scenarios.

Because our model contains membership in the sets \(P\), \(I_{\text{s}}\), and \(I_{\text{e}}\) as yes/no predicates, we do not need to be inventive to list a range of different scenarios in which a prospective lawyer applies to such a firm; rather, we can literally build a table that mechanically enumerates all possible outcomes for these predicates. We do this in Table 1.

| Would passing serve societal values? \(s_{1}\in I_{\text{s}}\) | Would passing serve evaluator's interests? \(s_{1}\in I_{\text{e}}\) | Passes? \(s_{1}\in P\) | Example states and scenarios |
| --- | --- | --- | --- |
| Y | Y | Y | Candidate has a strong record and grew up playing golf. |
| Y | Y | N | Candidate has a strong record and grew up playing golf. |
| Y | N | Y | Candidate has a strong record and hadn't played golf. |
| Y | N | N | Candidate has a strong record and hadn't played golf. She fails because she had few opportunities to network. |
| N | Y | Y | Candidate has a weak record and grew up playing golf. |
| N | Y | N | Candidate has a weak record and grew up playing golf. |
| N | N | Y | Candidate has a weak record and hadn't played golf. |
| N | N | N | Candidate has a weak record and hadn't played golf. |

Table 1. A mechanistic example of different states. In this stylized story, a hiring decision is heavily dependent on an interview process in which playing golf -- and an upbringing that involved time spent around golf courses -- sometimes plays a significant role. We assume that this hypothetical evaluator is interested in golf-playing candidates in order to highlight the ways that an evaluator's interests can diverge from societal goals and values. The example scenarios are individual instances within a broader set of states.

For example, consider the hypothetical scenario where a fantastic and resourceful law student from a low-income background takes advantage of golf lessons offered through a university.
If she receives a job offer in part because she was able to network with a partner over golf, her scenario represents a particular entry in our table. Since she is an otherwise fantastic lawyer, her candidacy for the job serves society's interests, \(s_{1}\in I_{\mathsf{s}}\). However, the partner's preference for networking over golf suggests a latent prejudice, in which the candidate's background does not serve the internal interests and preferences of the firm, \(s_{1}\notin I_{\mathsf{e}}\). However, the candidate does pass, \(s_{1}\in P\). This scenario is depicted in the row of Table 1 corresponding to \(s\in P\), \(s\in I_{\mathsf{s}}\), \(s\notin I_{\mathsf{e}}\). Now imagine a similar scenario, equivalent in every way, except that the candidate does not network over golf, and instead passes the evaluation because of her strong track record and depth of legal knowledge. This more straightforward case -- where the candidate does not face prejudice in the evaluative process, and the firm simply hires somebody based on their strong record -- corresponds to row 1 in Table 1.

There are a few possible ways to interpret the taxonomy. Notice first that disparities between the evaluator's interests and society's (i.e. places where a state is in one of \(I_{\mathsf{e}}\) or \(I_{\mathsf{s}}\) but not the other) often suggest a place where the evaluator is applying a bias that is not societally beneficial. Next, we observe that absent any regulatory intervention on society's behalf, disparities between the evaluator's interests and the outcome of the test represent _noise_ or _error_ in the evaluation. This is because if an evaluator has full reign over the criteria and standards composing the assessment, and still a candidate faces an outcome that is out-of-step with the evaluator's interests, then this simply suggests a noisy, or error-prone, evaluation. A final observation is that, generally, a good evaluation is one where all probable and feasible states \(s_{1}\) pass (\(s_{1}\in P\)) if and only if they serve societal interests \(s_{1}\in I_{\mathsf{s}}\). In other words, a desirable evaluation is one where the values of these two predicates should match.

## 5. Taking an Ethical Perspective

The framework developed so far, and expanded through the examples in the previous sections, provides several useful perspectives on the process of quantitative evaluation. First, it allows us to appreciate that evaluators can act strategically in their own interests: in other words, that gaming and strategic behavior are not only carried out by the subject of the evaluation, but by the parties designing the evaluation as well. In this way, it helps to recast normative judgments about a subject's behavior, interpreting deviations not simply as departures from a fixed point but, instead, in light of the evaluator's own aims, which may themselves warrant scrutiny. The framework sheds light on the disparities that may arise between the interests of the evaluator and societal interests (i.e. social welfare) more broadly, whether expressed through collections of norms or through explicit regulation. By making these disparities a central focus of our framework, we can distinguish between cases in which the evaluator makes these disparities explicit as part of the evaluation itself, and cases in which these disparities remain covert and require additional scrutiny to unearth.
This distinction is crucial for work that brings AI and machine learning to bear on quantitative evaluation: current work in this space has tended to shine a spotlight on the explicit, quantitative components of decision-making processes, giving less attention to the parts of the decision-making pipeline where an evaluator's aims might be unstated, under-specified, and difficult to discern. Our model, in which the evaluator is cast as a strategic actor, opens up the possibility of turning new modeling attention to these more implicit (and possibly covert) parts of the decision process (see Barabas et al. (2016)). Our model therefore also broadens the notion of governance and regulation for an algorithmic system that makes and enforces rules. Rather than conceiving of rule-making and rule-enforcement processes as undertaken by a single monolithic entity, as much of the literature on strategic behavior has tended to do, it becomes more tractable to analyze the internal tensions that exist within this process, between multiple rule-making parties with potentially divergent interests. Attempts to address strategic behavior in algorithmic systems, via either technical or policy mechanisms, are well-served by recognizing these complexities in real-world evaluations. Finally, our perspective has important implications for the moral scrutiny to which strategic behaviors are often subjected. A student found cheating on a math test, or an applicant embellishing a resume, might reasonably draw moral disapproval for those actions, and on its grounds warrant rejection, penalty or down-weighting. The language we use to describe such actions, e.g. "cheating," "gaming the system," or "honest effort," may convey ethical meaning, prejudging a case even when the underlying behaviors might be ambiguous. A distinctive virtue of our framing is that by conceiving of evaluators as potentially strategic actors, too, it allows the same moral scrutiny to be directed to them and not merely to those being evaluated. _Judgments about subjects depend on judgments about evaluators._ Normally, the outcome or score yielded by an evaluation mechanism is taken as legitimate grounds for choosing one candidate over another, or declaring one competitor victorious over another. If an evaluation mechanism is considered sound, that is, is considered to be an effective or reliable measure of a target quality, candidates who strategically alter their features to "beat" the mechanism in ways unforeseen, or unaccounted for by the evaluator are, in the first place, presumed to be behaving unethically. Some have pointed to differences among such workarounds that justify classifying them in different ways. Miller (2015), for example, classifies actions taken by a decision subject which are causally linked to the intended outcome as "improvement," otherwise, as "gaming," where the latter suggests unjustified success on a given performance metric. Greater subtlety in assessing subjects' behaviors in ethical terms seems important, but even here the focus of moral scrutiny has not shifted away from the subject of evaluation. Although the _quality_ of a given mechanism may be called into question for failing effectively to measure a target, the target of a measurement itself, typically, is taken as given, a fixed point, which is assumed to align with societal values _a priori_. Our framing treats target states as variable. 
In so doing, it insists that the goals and methods of an evaluator are relevant factors in assessing the moral standing of decision subjects' strategic behaviors. Examples of morally questionable measurements include discriminatory, elitist, or overly demanding hiring procedures; even including some which purport to serve values like diversity, meritocracy and fairness. Tests can play insidious gate-keeping roles, and sports standards can be mired in corruption. Acknowledging that evaluators' strategies potentially diverge from societal mandates means that it may be unjust to pin responsibility for strategic behaviors solely on a decision subject. These dynamics suggest a need for a more nuanced, contextual assessment of the behaviors of decision subjects that takes into account the legitimacy of an evaluator's aims and methods. As the vast literature on lying suggests, absolutism aside, an act of lying may range in moral standing depending on morally relevant contextual considerations (Bradner et al., 2017).

We can apply these arguments to our earlier example of law schools offering golf instruction. Such a behavior is a strategic response to an existing norm among law firms--namely, that a significant amount of communication and fraternization occurs over golf. Such a norm might introduce bias into hiring and promotion processes, because people comfortable and experienced on golf courses tend to be whiter and wealthier. As such, spending time in law school learning golf, though not traditionally thought of as causally linked to successful law practice, might be justifiable given the contextual norms and evaluation criteria. Passing negative moral judgments (or scoffing) at people learning to play golf misses the broader social forces inducing such behaviors, and can work to further exclude people not socially positioned to learn golf at a young age.

_Evaluators can behave deceptively._ As we noted in Section 2, there are sources of slippage between performance on a test and a true property of societal interest. These can be described succinctly by the differences between \(I_{s}\), \(I_{e}\) and \(P\). When an evaluator's goal states \(I_{e}\) are transparent and accessible, rifts between \(I_{e}\) and \(I_{s}\) may be more easily identified. Observed discrepancies might require new forms of oversight and standards-setting so that evaluations do not skew towards states favorable to the evaluator that diverge from societal aims and values. These discrepancies show up in our example of Nike's _Breaking2_ campaign, in which the company sponsored a race that aimed to enable athletes, sporting Nike sneakers, to complete a marathon in under two hours. This goal diverges from traditional marathon goals which aim towards consistency among races. Nike's highly publicized goal drew attention, by design, to the ways that it set up the race conditions to help its runners, e.g. providing pacers and positioning them to reduce wind resistance. Even though Nike's sponsored runners outpaced the standing marathon world record, the explicit highlighting of changes to these conditions helps make clear the ways in which the improved time would not have met the governing body's requirements for an official world record. When an evaluator's goals are unclear, misleading or deceptive, however, it can be difficult to draw normative conclusions or assign responsibility.
Differences between passing states \(P\) and societal goals \(I_{s}\) might arise because of benign and necessary practical limitations, such as measurement error, or they might arise because of blameworthy and pernicious strategic behaviors on the part of the evaluator. Undoubtedly, it is not always clear whether an evaluator is acting perniciously or in good faith. For example, academic departments, lacking diversity, could point to larger systemic issues that create barriers for under-represented minorities and claim that their interests and intentions are aligned with societal efforts to increase diversity. As non-diverse hiring persists, however, we may have cause to question whether these arguments are given in good faith. When an institution claims to have noble goals but outcomes diverge from \(I_{s}\), a possible explanation is that it is behaving strategically according to _covert_ interests in \(I_{e}\). By citing universal and unavoidable issues around measurement instrumentation, evaluators may be afforded some _wiggle room_ to dodge normative scrutiny even when they are acting in ways that are counter to society's interests.1 Examples include college admissions, where the underlying concept of societal interest -- college aptitude -- is fundamentally contested. In light of the broad disagreement over appropriate norms and standards, colleges employ concepts like 'holistic review' which are notably opaque. These strategies successfully avoid the fundamentally value-laden questions about what college aptitude is. They also afford some potentially self-dealing behavior, like earmarking applications from friends of trustees. Footnote 1: Furthering covert goals of the sort we describe might constitute manipulation, defined as _hidden influence_(Shen et al., 2017; Shen et al., 2017). The main point of this section is to emphasize that strategic gaming might be tolerated or possibly even encouraged when evaluators, through their evaluation mechanisms, reward capacities that are not in alignment with societal ends and values. Furthermore, strategic gaming may be ethically justifiable to achieve a favorable outcome in competitions where the rules have not been designed in good faith. As such, it is important to remain astute to efforts by evaluators to obfuscate evaluation mechanisms that are misaligned with societal values, which reduce the capacity to identify relevant excusing conditions and also the capacity to reliably assign moral responsibility within such systems, overall. ## 6. Further Related Work Here, we connect our work to the formidable literatures on both strategic algorithmic systems and theories of evaluation. We do not aim to provide an exhaustive review, but rather to situate our contribution and highlight relevant work. Our takeaway is that evaluation scenarios necessarily invoke _societal values_. These encoded values and commitments clarify the moral standing of strategic behaviors. ### Strategic Behavior A growing body of theoretical and applied work in machine learning focuses on dealing with strategic behavior and distribution shifts in response to algorithmic decisions. By this view, algorithms and metrics influence decisions that have an effect on the people and systems being measured, who therefore behave strategically to attain a desired outcome. In literature on _strategic classification_(K his classification outcome. 
The evaluator's goal is strong performance on a metric like classification accuracy in light of potential distribution shifts caused by strategic responses, which impede the evaluator's ability to observe the decision subject's "true" underlying labels. A variety of related models have been put forward for achieving a similar goal in other statistical settings, like regression (Kleinberg and Raghavan, 2010; Raghavan, 2011; Raghavan, 2012), ranking (Kleinberg and Raghavan, 2010), and in repeated games (Kleinberg and Raghavan, 2010; Raghavan, 2011; Raghavan, 2012). _The influence of mechanism design._ Strategic models of ML tasks inherit a core assumption from behavioral economics and mechanism design: that social systems can be modeled as interactions among agents who behave according to rational self-interest. The tools of mechanism design have proven useful in designing systems with multiple agents to achieve certain desirable outcomes (Kleinberg and Raghavan, 2010). As a result, approaches drawing from the mechanism design literature tend to focus on a certain set of goals that the evaluator might have related to the integrity and effectiveness of the assessment--for example, preventing strategic behavior from occurring in the first place, or salvaging "true" signal from manipulated features.2 These approaches typically conceive of the evaluator as solely interested in achieving a set of goals vis-a-vis the evaluated party (the decision subject). Footnote 2: It has been observed that applications of mechanism design can fall out of step with societal goals (Kleinberg and Raghavan, 2010; Raghavan, 2011). Meanwhile, attempts to align mechanism design with broader social interests are burgeoning. See, e.g., Abebe and Goldner (2010), Finocchiaro et al. (2017). _Social costs, disparate effects._ In response, a chorus of literature invokes the "social costs" involved in algorithmic evaluation. These papers point out that assessments often involve powerful institutions setting the terms for distributing welfare and directing life outcomes--for example, through credit scoring, the provision of standardized tests in education, or the use of automated assessments in hiring. In defining societal considerations, these works tend to highlight the impact of an assessment on decision subjects. Kleinberg and Raghavan (2010) consider certain forms of strategic effort as utility-improving for both evaluator and decision subject. Milli et al. (2013) consider decision subject effort as a social cost that should be minimized. Hu et al. (2013) consider fairness in this context, finding that classification can exacerbate inequalities if decision subjects are afforded different budgets to strategically alter their features. By explicitly considering the impact of an evaluation on its subjects, these papers exemplify some of the social and ethical dimensions of evaluative settings where an evaluator's interests are misaligned with subjects'. Writing on the regulation of algorithmic decision-making and legal requirements aimed at transparency, Cofone and Strandburg (2012) find that algorithmic decisions tend to involve strategic behavior both from decision-makers and subjects. Decision-makers, who often cite undesirable 'gaming' from subjects as a reason to keep algorithms opaque, implicitly presume that their interests align with society's. 
The paper makes the observation that the goals and behaviors of either player can be, plausibly, out of step with societal interests, so gaming may or may not be desirable. There is existing empirical work, too, on the contested and nuanced ethical boundaries of behaviors described by some as 'gaming the system,' especially in the context of internet platforms, where content creators use search engine optimization (Raghavan, 2011) and other practices geared towards courting viewers (Raghavan, 2011). In our hiring example, the practice of golf lessons arises not as malicious or manipulative efforts among decision subjects, but as a response to a practice among the evaluating law firm(s). Our model attempts to describe the fact that _both_ the evaluator and the subject engage in strategic behaviors to achieve their own interests (which may, or may not, diverge from societal goals). Thus, we believe, a fundamental question that must be asked in evaluative contexts is: what are the appropriate societal goals underpinning an evaluation? Answering this question may enable society to institute mechanisms that make sure evaluations serve these goals, especially in cases when institutional interests diverge from society's. To that end, our work re-visits models of evaluation and draws conclusions not just about the appropriateness of responses, but about the appropriateness of evaluation measurements themselves. ### Evaluation _Evaluations_ use measurements and observations to make a judgment of merit, worth or value (Raghavan, 2011; Raghavan, 2011). Although evaluations always involve empirical observation, not all forms of empirical observation constitute evaluation. Counting the number of yellow cars that pass on a highway is empirical measurement but is not an evaluation per se, because it does not help make a conclusion of merit, worth or significance. Notice that many real-world settings with strategic behavior involve an evaluation. Grades measure educational aptitude. Sports measure athletic excellence. Job interviews measure skills, experience and fit. Settings involving high-stakes social decisions frequently draw from measures to assess constructs that are, at times, murky. Qualities like deservingness, or promise, or good business instincts - these can be very difficult to ascertain. Evaluations may provide institutions with a seemingly less arbitrary way of making high-stakes decisions or allocating social welfare. When an institution is tasked with conducting an evaluation, there is typically some societal _interest_, or _value_, that a set of people wish to measure. That value can be highly contested--as with intelligence--or comparatively less contested--as with sprinting speed. The key ingredient that constitutes an evaluation and not any other sort of description is its use as a stand-in for (or operationalization of) a _value_. A long line of scholars, especially in education and policy contexts, have developed theories and professional standards concerned with evaluating programs (Baker and Raghavan, 2010; Raghavan, 2010; Raghavan, 2011; Raghavan, 2011; Raghavan, 2011; Raghavan, 2011). A shared emphasis seems to be that evaluations should not unquestioningly adopt the goals of program facilitators but instead take a broader societal view. Such emphasis can be found, for example, in pushes for _stakeholder-based_ approaches to evaluation (Raghavan, 2011). 
A similar theme from literature on program evaluation and auditing is the need for _third-party_ or external oversight in high-stakes decision-making systems (Kleinberg and Raghavan, 2010; Raghavan, 2011; Raghavan, 2011). These works make clear that internal auditing and self-evaluation often fall short in settings where firms behave in ways counter to society's interests. We use the word evaluation to highlight that the appropriateness of strategic behaviors is tied to questions of value: Only by understanding the values underlying a particular measurement can we conclude that a (strategic) behavioral response is appropriate or inappropriate. ## 7. Conclusion As machine learning and mechanism design expand into high-stakes social domains, they increasingly play a role in the creation of decision rules that affect people's lives. In cases where individuals respond strategically to these rules, it can be tempting to categorize these behaviors as gaming or cheating. This paper puts forward an expanded view of evaluative systems, where the decision subject isn't the only strategic actor who deserves social or ethical scrutiny. In hiring settings, for example, it is often the strategic methods and norms used to evaluate candidates that explain behavioral responses from decision subjects. In settings with grade inflation, schools and students align their interests and strategically behave in a way that undermines a broader societal measure of interest. Taking a normative perspective, we find that the moral standing of strategic behaviors often depends on the ways those behaviors are evaluated and motivated. We argue that questions about whether decision subjects are seen as 'gaming the system' need to be viewed in light of a parallel set of questions about the interests of the evaluating institution. Our expanded (three-player) model of evaluation is able to shed greater light on a variety of scenarios where certain strategies are either in line with or at odds with the interests of others. There are a number of promising directions for future work. Though we use a three-player model to illustrate external social considerations in evaluation systems, there is no reason to stop at three. Many systems of evaluation have a recursive structure, where each evaluator might need its own third-party oversight mechanism, creating a larger vertical hierarchy of participants. In addition to this type of vertical expansion of our model, we would welcome work aimed at disentangling the bundle of values we describe as 'societal interests'--that is, a horizontal expansion in the sets of values that are juxtaposed against the actions of the evaluator. Delineating the norms and standards behind evaluations is a complex, context-dependent, and political undertaking with the potential to affect life-prospects in significant ways; failing to wrestle with this complexity may result in unfairly and illegitimately placing a thumb on the scale in favor of one or more of the stakeholders. ###### Acknowledgements. The authors would like to thank the members of the AI, Policy and Practice group (AIPP) at Cornell University and the Digital Life Initiative (DLI) at Cornell Tech for their feedback. In particular, we thank Solon Barocas, Smitha Milli, Kenny Peng, and Malte Ziewitz for illuminating conversations, suggestions and remarks. The work is supported in part by a grant from the John D. and Catherine T. MacArthur Foundation. 
Ben Laufer is additionally supported by a LinkedIn-Bowers CIS PhD Fellowship, a doctoral fellowship from DLI, and a SaTC NSF grant CNS-1704527. Jon Kleinberg is additionally supported by a Vannevar Bush Faculty Fellowship and a grant from the Simons Foundation. Helen Nissenbaum is also supported by a SaTC NSF grant CNS-1801501.
2305.07685
Synthetic data generation for a longitudinal cohort study -- Evaluation, method extension and reproduction of published data analysis results
Access to individual-level health data is essential for gaining new insights and advancing science. In particular, modern methods based on artificial intelligence rely on the availability of and access to large datasets. In the health sector, access to individual-level data is often challenging due to privacy concerns. A promising alternative is the generation of fully synthetic data, i.e. data generated through a randomised process that have similar statistical properties as the original data, but do not have a one-to-one correspondence with the original individual-level records. In this study, we use a state-of-the-art synthetic data generation method and perform in-depth quality analyses of the generated data for a specific use case in the field of nutrition. We demonstrate the need for careful analyses of synthetic data that go beyond descriptive statistics and provide valuable insights into how to realise the full potential of synthetic datasets. By extending the methods, but also by thoroughly analysing the effects of sampling from a trained model, we are able to largely reproduce significant real-world analysis results in the chosen use case.
Lisa Kühnel, Julian Schneider, Ines Perrar, Tim Adams, Fabian Prasser, Ute Nöthlings, Holger Fröhlich, Juliane Fluck
2023-05-12T13:13:55Z
http://arxiv.org/abs/2305.07685v1
Synthetic data generation for a longitudinal cohort study - Evaluation, method extension and reproduction of published data analysis results ###### Abstract Access to individual-level health data is essential for gaining new insights and advancing science. In particular, modern methods based on artificial intelligence rely on the availability of and access to large datasets. In the health sector, access to individual-level data is often challenging due to privacy concerns. A promising alternative is the generation of fully synthetic data, i.e. data generated through a randomised process that have similar statistical properties as the original data, but do not have a one-to-one correspondence with the original individual-level records. In this study, we use a state-of-the-art synthetic data generation method and perform in-depth quality analyses of the generated data for a specific use case in the field of nutrition. We demonstrate the need for careful analyses of synthetic data that go beyond descriptive statistics and provide valuable insights into how to realise the full potential of synthetic datasets. By extending the methods, but also by thoroughly analysing the effects of sampling from a trained model, we are able to largely reproduce significant real-world analysis results in the chosen use case. Synthetic Health Data Nutritional Studies Epidemiological Data Machine Learning ## 1 Introduction In biomedical research, scientific progress is often limited by the availability and quality of data. The results of any study can only be as good as the data on which the statistical analysis was based. Additionally, machine learning methods - including deep learning - are completely dependent on the quality and amount of the available training data. For some application fields it is extremely hard to obtain enough data to be able to draw any conclusions, for example in the case of rare diseases. In particular, to realise the full potential of deep learning methods, the amount of data should be larger than for more traditional machine learning approaches [1]. To increase availability of medical data, a legally compliant mechanism for sharing it between different institutions is needed. Furthermore, being able to share the data used in a publication is an important step in making its results reproducible in terms of the FAIR principles [2]. However, the storage, processing, and sharing of individual-level health data is tightly regulated and restricted by law, as health-related data are generally considered to be highly sensitive (e.g. [3] Art. 9). Sharing personal health data in compliance with laws and regulations, such as the European Union (EU) General Data Protection Regulation (GDPR), usually requires informed consent. However, this is often not feasible, for example if data is to be analysed retrospectively at a large scale. As an alternative, data can be anonymised in such a way that it cannot be traced back to specific individuals anymore. This typically requires significant modifications, e.g. by removing direct identifiers (such as names), and by coarsening indirect identifiers (such as age or geographic region). And this approach inevitably requires balancing the reduction of risks achieved through the removal of information with associated reductions in the utility of the data [4]. This also means that a complete anonymisation might not always be possible to achieve for many types of data, with genetic information being a common example. 
An alternative way to share data could be to use synthetic data generation methods that have been proven efficient in the past, e.g. by [5, 6, 7, 8, 9]. In this process, instead of modifying the original data to make them harder to re-identify, a completely new dataset is created, ideally with similar statistical properties as the real life data. In this study, we apply and adapt state-of-the-art algorithms to generate synthetic data for a defined use case, for which several downstream analyses have already been performed on the real data and their results have been published. In addition to analysing summary statistics of the dataset, this gives us the opportunity to gain deep insights into the possibilities and limitations of synthetic data in the specific use case. The original data that are used are parts of the data collected within the _Dortmund nutritional and anthropometric longitudinally designed (DONALD) study_, which is an ongoing nutritional cohort study capturing information about the diet and health of children in Dortmund, Germany since 1985 [10]. Participants are recruited as newborns, and then accompanied until young adulthood to get a holistic picture of their developing health and the dietary factors that influence it. The subset used during this study includes dietary data with focus on sugar intake based on all dietary records of participants aged three through 18 years between 1985 and 2016. The resulting dataset consists of structured health data, where the same 33 variables have been recorded over fixed yearly intervals. The data collected by the DONALD study have already been used for several nutritional studies, such as the recent analysis of sugar intake time trends by Perrar _et al._[11, 12], which was performed on the same subset used for this work. Hence, this subset, that will from now on only be called DONALD data, has the following properties that need to be taken into account when searching an appropriate synthetic data generation method: It is longitudinal, because its data have been collected over a series of 16 visits. Additionally, it contains static variables that are only collected during the first visit. It is also heterogeneous, since its columns consist of various different data types. Finally, it is incomplete in terms of longitudinality, as not all participants attended each annual visit. In the literature, a variety of different machine learning-based methods have been proposed to generate synthetic data, and we will discuss the following three classes that are commonly used (or combinations of them): probabilistic models, variational autoencoders, and generative adversarial networks (GANs). Especially GANs, originally developed by Goodfellow _et al._[13], are currently used in a variety of generative tasks and have proven successful in the generation of realistic fake images and in producing natural text data (e.g. [14, 15, 16, 17]). However, GANs have been proposed for the generation of continuous data [13]. Choi _et al._[18] address this by combining a GAN with an autoencoder in their architecture, which has been pre-trained to compress and reconstruct the original data. The resulting model, MedGAN, learns to generate EHRs. It provides an attractive solution to the generation of convincing electronic health records with continuous, binary and count features. Unfortunately, it cannot be used for the generation of DONALD data, since it is not able to generate longitudinal data. 
An alternative for longitudinal data has been developed by Esteban _et al._, who propose two different models (RGAN and RCGAN), both based on recurrent neural networks, to produce real-valued time series data [19]. Even though the model is able to learn longitudinality, it cannot cope with heterogeneous data types, as well as a mixture of static and longitudinal variables that are typical in clinical studies. A further alternative is timeGAN, another GAN-based method that learns an embedding to represent longitudinal data in a lower dimensional space [20]. However, timeGAN is unable to generate static covariates and has not been applied to the medical domain yet - hence, data incompleteness would potentially cause problems that require further work. Another promising method is called Variational Autoencoder Modular Bayesian Network (VAMBN), which has been, indeed, designed to generate fully synthetic data for mixed static and longitudinal datasets containing heterogeneous features with missing values [21]. To achieve this, a Heterogeneous-Incomplete Variational Autoencoder (HI-VAE) is combined with a Bayesian Network (BN). This works by splitting the data into modules, training a HI-VAE for each of them to produce encodings, and fitting the BN over all encodings of the modules. Longitudinally is handled by assigning the data of different visits to different modules. Due to its capabilities, VAMBN serves as a baseline algorithm for the present study. Because of the generative nature of the task - in contrast to discriminative tasks - the evaluation of the quality and usefulness of the data is not trivial, especially because it is performed on complex, heterogeneous health data. Different measures are used in different studies, probably due to different features having large variations in type and importance, which are highly dependent on the use case. Additionally, there is no standard terminology used by related studies. In this paper, we will follow the terminology from Georges-Filteau and Cirillo, who classify the metrics broadly into quantitative and qualitative metrics [22]. Here, qualitative analysis is based on visual inspection of the results done by field experts. For example, in _preference judgement_, given a pair of two data points - one real and one synthetic - the aim is to choose the most realistic one [18]. A similar method of this category is called _discrimination task_, where the expert is shown one data point at a time and needs to decide whether it is realistic or not. However, according to Borji, qualitative methods that are based on visual inspections are weak indicators and quantitative measures offer more convincing proofs of data quality at the dataset or sample level [23]. Georges-Filteau and Cirillo [22] further classify three subcategories of quantitative measures: comparisons between real and synthetic data on dataset level, comparisons of individual feature distributions, and utility metrics, which indicate whether the synthetic data can be used for real world analyses that were planned or already performed on the real dataset. Assessing the privacy of the data may be an even more difficult task, and is also handled differently in related works. While in some studies the risk of re-identification is assessed or controlled using empirical analyses (e.g. [18; 24]), another established method known as _differential privacy_[25] can provide theoretical probabilistic bounds for various types of privacy risks. 
This method has been adapted to the area of deep learning by adding a specified amount of noise during training [26]. Differential privacy can be integrated into all algorithms presented in this study. While acknowledging the importance of a more careful analysis of the privacy risks associated with synthetic data, we would like to point out that this is in practice a rather complex task, which we see out of the scope of this paper. In the present study, we make the following contributions: We apply the state-of-the-art algorithm VAMBN to generate synthetic data based on the DONALD dataset - a dataset from a nutritional cohort study. We extend VAMBN with a long short-term memory (LSTM) layer [27] to more effectively encode longitudinal parts of the data and show a significant increase in the ability to reproduce direct dependencies across time points. We evaluate our generated synthetic data on four different levels and show that while descriptive summary statistics and individual variable distributions can be efficiently reproduced with all chosen methods, direct dependencies can only be reproduced by our proposed extension. With this, we apply real-world experiments together with domain experts and gain valuable insights on how to exploit the potential of fully synthetic datasets. ## 2 Material and Methods ### DONALD Data The method of the used DONALD dataset has been described in detail by Perrar _et al._[11; 12]. The main content of the data for each record is the nutrient intake, e.g. fat or carbohydrate intake, as well as the intake of different types of sugars, measured as a percentage of the total daily energy intake in kilocalories per day (%E). Across all available records from the 1,312 participants, the dataset spans the ages three through 18, containing 36 variables. The only non-longitudinal variables are a personal number to identify each individual (pers_ID), an identification number for each family (fam_ID), and the sex of the participant. So a complete participant record contains 530 variables (3 static + [16 visits \(\times\) 33 longitudinal variables]). Note that some participants have not been part of the study for all 16 visits, for example because they are still under 18, and that some of the yearly visits may also have been skipped for unknown reasons. Figure 1 explores the amount of missingness per participant and visit. Apart from missed visits, there is almost no missingness in the data. The only exceptions are two variables that describe the overweight status and the education level of the mother of the participants (_m_ow_ and _m_schulab_), which have small amounts of missingness (1.25% and 0.17%, respectively). For missing values in the original analyses ([11; 12]), the respective median of the total sample was used (n = 38 for maternal overweightness, n = 5 for maternal educational status). For the application of VAMBN, the data need to be grouped into different modules. This has been initially done by experts and resulted in four modules for the DONALD data: Times (T), Nutrition (N), Anthropometric (A), and Socioeconomic (S). Over the study course, varying settings have been tested. An overview can be found in the Appendix in Table 1. Moreover, for each setting, there are a few static variables - also called covariates - that are not grouped (i.e., the family number and the sex of the participant). #### 2.1.1 Pre-processing The data are stored in a tabular format with one row per visit per participant. 
As an identifier, the personal number is given so that each row can be assigned to a specific participant. As can be seen in Fig. 1, the participants do not necessarily attend all 16 visits. To be able to apply the synthetic data generation methods to the DONALD data, the single rows of the dataset need to be mapped to the different visits based on the age of the participant. Thereby, we define 16 visits from zero to 15, where visit zero happens at the age of three and visit 15 at the age of 18, respectively. This results in a tabular format where each row corresponds to one single participant, including all different visits, so there are 530 columns. In this format, the degree of missingness becomes visible for every row.

Figure 1: Missingness in the DONALD dataset. In (a), the amount of visits that have been attended by the participants in total can be seen. (b) depicts the number of participants that have attended a specific visit in blue and, in red, the number of participants who entered the study at this specific visit is shown.

#### 2.1.2 Post-processing To perform the subsequent expert analysis, we first need to convert the data back to the original format. This is done by mapping the data back to 16 rows per participant, i.e. one row per visit. Note that the synthetic data do not contain any random missingness or missed visits. When analysing the effect of sample size, we apply the following further post-processing steps to ensure that the datasets are as close as possible to the original data:

* The synthetic dataset must contain the same number of participants.
* The number of items, i.e. visits, needs to correspond to the original data.
* The fraction of sexes in the synthetic and real cohorts should be similar.

For the evaluation, we distinguish between raw and post-processed output to fully evaluate the algorithms (see Section 2.3 for details). Note that for both output formats the first step - i.e. mapping the data back to 16 rows per participant - has been applied. ### Model As baseline model, we use a _Variational Autoencoder Modular Bayesian Network_, short VAMBN [21], which has been designed to generate fully synthetic data for longitudinal datasets containing heterogeneous features and missing values. To achieve this, a Heterogeneous-Incomplete Variational Autoencoder (HI-VAE) [28], which is able to handle incomplete and heterogeneous data, is combined with a conditional Gaussian Bayesian Network (BN) [29]. To apply VAMBN, the dataset is first split into modules, which are encoded by individually trained HI-VAE modules. Thereby, different variables of the dataset are grouped together based on context, preferably together with domain experts. For each module at each time point (i.e. visit), an HI-VAE is trained, learning a low dimensional Gaussian mixture model of the input data. The resulting embeddings (consisting of discrete as well as Gaussian variables) are then used as input for a _Modular Bayesian Network_ (MBN), learning dependencies between these modules. At this point, auxiliary variables are introduced as missingness indicators for entire visits. For structure learning, so-called black and white lists are employed that prevent or enforce certain edges, respectively, in order to constrain the space of admissible graph structures as much as possible. From this MBN, synthetic embeddings can be drawn, and subsequently decoded by their respective HI-VAE modules to produce the final synthetic data. 
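To make this data flow concrete, the following is a minimal structural sketch of the generate-from-embeddings pipeline in Python. The interfaces (`HIVAEModule`, `ModularBN`, `generate_synthetic`) are hypothetical names chosen for illustration and are not the actual VAMBN API; the per-visit module split and the auxiliary missingness indicators are omitted for brevity.

```python
from typing import Dict, Protocol
import numpy as np


class HIVAEModule(Protocol):
    """Hypothetical interface for one trained HI-VAE (one variable module)."""
    def encode(self, x: np.ndarray) -> np.ndarray: ...
    def decode(self, z: np.ndarray) -> np.ndarray: ...


class ModularBN(Protocol):
    """Hypothetical interface for the Bayesian network over module embeddings."""
    def fit(self, embeddings: Dict[str, np.ndarray]) -> None: ...
    def sample(self, n: int) -> Dict[str, np.ndarray]: ...


def generate_synthetic(modules: Dict[str, HIVAEModule],
                       data: Dict[str, np.ndarray],
                       bn: ModularBN,
                       n_samples: int) -> Dict[str, np.ndarray]:
    # 1) Encode each module's (possibly incomplete) real data into low-dimensional embeddings.
    codes = {name: m.encode(data[name]) for name, m in modules.items()}
    # 2) Learn dependencies between the module embeddings (missingness indicators omitted here).
    bn.fit(codes)
    # 3) Draw synthetic embeddings from the BN and decode them back to the original feature space.
    synthetic_codes = bn.sample(n_samples)
    return {name: modules[name].decode(synthetic_codes[name]) for name in modules}
```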
We refer to [21] for a more detailed and mathematically precise explanation. Our extension has primarily been developed to improve VAMBN's ability to reconstruct direct mathematical dependencies between variables from different time points due to the longitudinal nature of our dataset. General ideaA simple approach to enable VAMBN to learn correlations is placing the correlated variables in the same module, because then the HI-VAE can account for their dependencies on its own. But this is limited to correlations at one specific visit, as the same variable observed at different time points is split into separate HI-VAE modules. The resulting embeddings are subsequently modelled via the BN. However, depending on the quality of the HI-VAE embeddings, this separation may result in a weakening of the temporal correlation structure after data synthesis. This is why in our proposed extension, all visits of each variable group are instead simultaneously encoded by one HI-VAE, which is extended by an LSTM encoder. As a result, longitudinal variables share one embedding, and thus the BN is only used to learn dependencies between the embeddings of all the different variable groups, standalone variables, and missingness indicators. ArchitectureTo be able to encode all \(v\) visits of all \(d\) variables from a module with a single HI-VAE, some changes to the pipeline have to be made. Firstly, the _Evidence Lower BOund (ELBO)_ - that is used as an optimisation criterion during training of the autoencoder - is now calculated as a sum over the contributions of all \(v\cdot d\) values, to be able to account for their heterogeneous data types and varying missingness over time. Secondly, to be able to learn dependencies over the time steps better, a Recurrent Neural Network (RNN) is inserted before the HI-VAE's recognition model (i.e., the encoder). Depending on the amount of visits in the dataset, the RNN may need to be able to represent long-term dependencies in its output. Hence, we chose a Long Short-Term Memory (LSTM) layer, due to its ability to prevent the vanishing gradients problem even when being trained on many time steps. The dimensionality of its output, which gets passed into the recognition model, is configurable. Our chosen setting can be found in the Appendix in Table A3. This leads to further changes that need to be done to the generative model (i.e., the decoder): The intermediate representation vector \(Y\) is now also \(v\) times larger, to account for the increased number of data points encoded by the embedding \(z\). Here, \(Y\) is a single homogeneous intermediate representation vector, produced by a Deep Neural Network (DNN) \(g(z)\) (introduced by Nazabal _et al._[28] in order to cope with statistical dependencies across heterogeneous data types). Note that here \(g\) learns any dependencies between variables across different time points, before \(Y\) is consumed by the DNNs that each parameterise one of the \(v\cdot d\) distributions of attributes at specific visits. Separate parameterisations for different visits are required, since the same variable may be distributed very differently at varying time points. The Bayesian Network still works the same as in the original VAMBN approach, although it is smaller due to the merging of all visits into one module (i.e. node of the BN). To analyse the effect of the two described changes in the architecture, we compare the following VAMBN variants: 1. Original VAMBN implementation 2. 
VAMBN - Memorised Time Points (VAMBN-MT): as described above, all \(v\) visits of a module's \(d\) variables are encoded in one HI-VAE model, its ELBO is calculated as a sum over all \(v\cdot d\) contributions, and the LSTM is inserted before the encoder. 3. VAMBN - Flattened Time Points (VAMBN-FT): to judge the added value of the LSTM, we replace it with a standard feedforward network. An overview of the applied architectures can be seen in Fig. 2.

Figure 2: Overview of the developed architecture. For all settings, the real-world data are pre-processed and then embeddings are learned by a variational autoencoder (HI-VAE). Afterwards, the embeddings of the different modules are fed into a modular Bayesian Network, from which we can sample new data (which need to be decoded again by the autoencoder to be human-readable). The settings shown differ in the neural network structure of the autoencoder (specifically, the encoder). In VAMBN-FT, the structure of the feedforward network is changed in such a way that all visits per module are encoded together, i.e. learned in one model, which also reduces the complexity of the Bayesian Network. In VAMBN-MT, the default feedforward network is additionally replaced by an LSTM layer in order to better cope with longitudinal dependencies.

### Evaluation Metrics We analyse the synthetic data on different levels in order to be able to judge its quality. The methods can be divided into the following four categories - thereof, the two latter methods are dataset-specific, since our focus lies on the utility of the generated data. We analyse the **individual variable distributions** to ensure that they are correctly distributed across the entire synthetic population. Therefore, we provide summary statistics and density plots. Moreover, we quantify the differences between real and synthetic data distributions using the Jensen-Shannon (JS) divergence that measures the relative distance between two probability vectors [30]. The output ranges from 0 to 1, with 0 indicating equal distributions. Data are binned in order to get probability vectors. We use _numpy's_ method _histogram_bin_edges_ to determine the optimal bin size per variable by choosing the maximum of _Sturges' rule_ [31] and the _Freedman-Diaconis estimator_ [32]. Since correct distributions alone are not a sufficient indicator of convincing data, however, we furthermore evaluate the reproduction of **correlations between the variables** generated by the generative model. To account for this, the Pearson correlation coefficients for all pairs of variables are calculated, including correlations across time points, and are visualised in a heatmap. For quantification, we determine the relative error \(\epsilon\) of the correlation matrices as can be seen in Eq. 1: the Frobenius norm of the difference of the \(real\) and \(virtual\) correlation matrices is divided by the Frobenius norm of the \(real\) data correlation matrix. Hence, the value can range between 0 and infinity, with 0 indicating a perfect reproduction of the correlations. \[\epsilon=\frac{||real-virtual||}{||real||} \tag{1}\] Even if the variable correlations are good, this still does not guarantee that the result is convincing. Variables in the original data may not only correlate, but even have direct mathematical relationships that need to be met for the synthetic data to be realistic and plausible. The existence of such **direct dependencies** is unique to the given dataset, so this work will mention them for the DONALD dataset and analyse to what degree they are met by the synthetic data. Finally, the direct practical utility of the synthetic data can be tested by running **real-world analyses** on both the original and synthetic datasets, comparing their results. For the DONALD data, this means conducting the same time and age trend analyses of _added sugar_ intake following the methods by Perrar _et al._ [11; 12], to investigate whether we can reproduce the results. In this analysis, polynomial mixed-effects regression models have been determined. We use their _unadjusted models_ for time and age trends. Note that we re-built the polynomial mixed-effects models in \(R\) (they were originally coded in _SAS_), thus the values for the original data can deviate slightly from the values determined by Perrar _et al._ [12]. Additionally, since our aim was not to investigate intake trends, but to generate a direct comparison between original and synthetic data, we have simplified the presentation of the trend analyses, i.e. separate presentation of age and time trends. ### Experimental Setup An overview of our experimental setup can be seen in Table 1. For the evaluation of individual variable distributions, correlations between variables, and direct dependencies, we compare our two developed methods (VAMBN-FT and VAMBN-MT) with the baseline approach (VAMBN). Because we judge their effectiveness relative to the real data, we choose the same sample size (\(N=1,312\)). For the subsequent real-world analyses, we choose VAMBN-MT for all experiments, but additionally investigate the influence of varying module selections - dependent on the research question. The module selections can be found in the Appendix in Table A2. For each selection, we sample 1,312 and 10,000 participants, respectively. Since we want the smaller dataset to resemble the original data as much as possible, we apply the previously described post-processing (such as inserting the same percentage of missingness). In addition, we investigate the difference when using a much larger dataset without any post-processing. For each experiment and sample size, we sample 100 synthetic datasets (from the same model). To prevent influencing the time trend with outliers, we omit time points that are larger than the maximum value observed in the original data (i.e. lie in the future). For each time point, to represent all 100 trend functions, we plot their means along with their 2.5% and 97.5% quantiles. For all performed experiments, we use the same black- and white lists and hyperparameter settings that can be found in the Appendix (see Table A3). ## 3 Results We evaluated the generated synthetic data on four different levels (described in Section 2.3). The individual variable distributions are presented in Section 3.1, followed by the investigation of correlations between different variables in Section 3.2. Moreover, use-case specific direct dependencies are evaluated in Section 3.3. Finally, in Section 3.4, the results of the real-world analyses can be found. ### Individual Variable Distributions In order to compare the distributions of the real and synthetic data, individual distributions were plotted as a first step. Examples of these distributions can be found in Table 2, Figure 3, and Figure 4, respectively. Overall, it can be seen that the summary statistics are very similar for all the different generated datasets and match the distributions of the real variables. 
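For reference, the two quantitative scores described in Section 2.3 can be computed along the following lines in Python. This is a minimal sketch under stated assumptions: the paper does not specify whether a common binning grid is used for real and synthetic samples, nor the logarithm base of the JS divergence, so both choices below are assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon


def js_divergence(real: np.ndarray, synthetic: np.ndarray) -> float:
    # Bin both samples on a common grid derived from the real data; numpy's "auto"
    # rule is the maximum of Sturges' rule and the Freedman-Diaconis estimator.
    # Synthetic values outside the grid are ignored in this sketch.
    edges = np.histogram_bin_edges(real, bins="auto")
    p, _ = np.histogram(real, bins=edges)
    q, _ = np.histogram(synthetic, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    # scipy returns the JS *distance* (square root of the divergence); with base 2,
    # squaring yields a divergence in [0, 1], matching the range quoted in the text.
    return jensenshannon(p, q, base=2) ** 2


def relative_correlation_error(real: np.ndarray, synthetic: np.ndarray) -> float:
    # Eq. (1): Frobenius norm of the difference of the Pearson correlation matrices,
    # normalised by the norm of the real-data correlation matrix.
    c_real = np.corrcoef(real, rowvar=False)
    c_syn = np.corrcoef(synthetic, rowvar=False)
    return np.linalg.norm(c_real - c_syn) / np.linalg.norm(c_real)
```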
For example, for the total sugar intake (\(ZUCK\_p\)), the mean of the real data amounts to 26.96, whereas the means of the synthetic data amount to 26.12, 26.1 and 26.09 for the three methods. As an example, the distribution for the third visit is visualised in Fig. 3, where slight differences between the three methods are visible and VAMBN-MT best matches the original distribution. This is also indicated by the determined JS-divergences of 0.15, 0.11 and 0.09 for the three methods, respectively. Discrete variables mostly get reconstructed correctly by VAMBN and its extensions as well. Three examples can be seen in Figure 4. In Fig. 4c, a clear improvement can be seen for \begin{table} \begin{tabular}{l l c c c} Evaluation Method & Method & Sample Size & Modules* & Post-processed \\ \hline Distributions, Correlations, & VAMBN & & & \\ and Direct Dependencies & VAMBN-FT & 1,312 & (i) & No \\ & VAMBN-MT & & & \\ \hline & & 1,312 & (i) & Yes \\ & & 10,000 & & No \\ Real World Analysis & VAMBN-MT & 1,312 & (ii) & Yes \\ & & 10,000 & & No \\ & & 1,312 & (iii) & Yes \\ & & 10,000 & & No \\ \hline \multicolumn{4}{l}{*See Table A2 for module settings.} \\ \end{tabular} \end{table} Table 1: Overview of the experimental setup VAMBN-MT. The averaged JS-divergence over all variables (both discrete and continuous) and across all visits results in \(0.15\pm 0.11\) for VAMBN, in \(0.12\pm 0.08\) for VAMBN-FT and \(0.12\pm 0.09\) for VAMBN-MT. ### Correlations between Variables We use correlations between variables as an indicator of the extent to which the properties and dependencies of the real data are represented in the synthetic data. We checked for dependencies both within the same visit and across different visits. In the heatmap representation of the Pearson correlation matrices between all variables of all visits (Figure 5), it can be seen that absolute correlation values are generally higher in the original data (Fig. 5a). With VAMBN, we obtain significantly smaller pair-wise correlations than in the real data and get a relative error of 0.86. With the VAMBN-FT version, the error is reduced to 0.74 (see Fig. 5c) and the best result is achieved with VAMBN-MT with an error of 0.70 (see Fig. 5d). In general, also in the original data, correlations are higher between variables of the same module than between variables of different modules, which indicates a good module selection. ### Direct Dependencies The chosen DONALD dataset contains several interesting direct dependencies, representing case-specific expert knowledge, that can be used to further judge the quality of the synthetic data by investigating whether these dependencies can be correctly reproduced. Therefore, we chose two of these: Firstly, the boolean variable \(m\_schulab\), indicating \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline \multirow{2}{*}{Variable} & Dataset & Mean & SD & Median & 25\% & 75\% & JS-div. 
\\ \hline \multirow{3}{*}{Fett\_p} & Real Data & 34.35 & 5.66 & 34.34 & 30.64 & 38.09 & - \\ & VAMBN & 34.46 & 5.98 & 33.96 & 30.34 & 38.08 & \(0.11\pm 0.02\) \\ & VAMBN-FT & 34.48 & 6.07 & 33.99 & 30.23 & 38.26 & \(0.10\pm 0.01\) \\ & VAMBN-MT & 34.48 & 6.08 & 34.03 & 30.26 & 38.18 & \(0.11\pm 0.01\) \\ \hline \multirow{3}{*}{EW\_p} & Real Data & 12.99 & 2.20 & 12.85 & 11.50 & 14.27 & - \\ & VAMBN & 13.14 & 2.18 & 12.96 & 11.63 & 14.47 & \(0.09\pm 0.02\) \\ & VAMBN-FT & 13.19 & 2.23 & 13.0 & 11.64 & 14.54 & \(0.09\pm 0.02\) \\ & VAMBN-MT & 13.23 & 2.3 & 13.04 & 11.6 & 14.64 & \(0.09\pm 0.01\) \\ \hline \multirow{3}{*}{ZUCK\_p} & Real Data & 26.96 & 6.72 & 26.75 & 22.37 & 31.31 & - \\ & VAMBN & 26.12 & 5.88 & 25.69 & 22.26 & 29.28 & \(0.15\pm 0.02\) \\ & VAMBN-FT & 26.1 & 6.86 & 25.51 & 21.28 & 30.27 & \(0.11\pm 0.02\) \\ & VAMBN-MT & 26.09 & 7.25 & 25.63 & 20.95 & 30.55 & \(0.09\pm 0.02\) \\ \hline \multirow{3}{*}{ZUZU\_p} & Real Data & 13.12 & 5.68 & 12.48 & 9.09 & 16.28 & - \\ & VAMBN & 12.63 & 5.13 & 11.85 & 9.31 & 14.96 & \(0.15\pm 0.03\) \\ & VAMBN-FT & 12.92 & 6.01 & 11.94 & 8.73 & 15.92 & \(0.10\pm 0.01\) \\ & VAMBN-MT & 12.98 & 6.34 & 11.86 & 8.53 & 16.22 & \(0.10\pm 0.02\) \\ \hline \multirow{3}{*}{age} & Real Data & 9.27 & 4.43 & 8.98 & 5.14 & 12.98 & - \\ & VAMBN & 10.56 & 4.62 & 10.57 & 6.62 & 14.57 & \(0.21\pm 0.02\) \\ \cline{1-1} & VAMBN-FT & 10.56 & 4.63 & 10.55 & 6.69 & 14.44 & \(0.21\pm 0.02\) \\ \cline{1-1} & VAMBN-MT & 10.56 & 4.63 & 10.53 & 6.59 & 14.49 & \(0.21\pm 0.01\) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary statistics for individual variables across all visits, covering mean, standard deviation (SD), the upper and lower quartiles (i.e. 25% and 75%, respectively), and the Jensen-Shannon divergence (JS-div.) Figure 3: Distribution of the total sugar intake (\(ZUCK\_p\)) at visit three. whether the participant's mother has had 12 years of school education. In the original dataset, this variable mostly stays the same, and might increase from 0 to 1 if a participant's mother receives further education during the study. Logically, the indicator never changes from 1 to 0, however. In Figure 6, we show the value of the graduation status over all 16 visits for 100 randomly chosen samples for each of the three experiments, respectively. It gets evident that the original VAMBN approach is not able to correctly reproduce dependencies across time, because the graduation status of the mother often changes back and forth which is not plausible (see Fig. 5(a)). We see an improvement in the VAMBN-FT experiment (Fig. 5(b)), which makes sense because here, the HI-VAE module receives all visits as one input, containing all variables involved in this dependency. But still, the result is not satisfactory. In contrast, only a few errors are left when using VAMBN-MT, as can be seen in Fig. 5(c). Figure 4: Distribution of different discrete variables at different visits. Figure 5: Heatmaps of the Pearson correlation matrices of all visits for every variable for the real data (a) and the synthetic data produced by the three different approaches (b-d). Note, that we inserted the same amount of missingness to the synthetic data to get comparable results. Their columns are sorted by variable groups (standalone > time > anthropometric > socioeconomic > nutrition, within groups alphabetical by variable name) first, indicated by the letters below (a). Within each variable, columns are sorted by visit ascending. 
Thus, the visible squares formed in the heatmap each correspond to the correlation of two variables over their 16 visits. As a second direct dependency, we investigated the relationship between the variables _alter_ and _time_, which describe the exact age of the child and the study, respectively. They are both given as positive real-numbered values and form the variable group times. Again, for all three experiments, their summary statistics seemed plausible for all visits: The age of synthetic participants was around the expected age, with some random variance. And the age of the study was commonly anywhere between 0 and 30 years - which is also plausible, since the study had always included children of any age over the course of its existence. When looking at an individual child over multiple visits however, the age of the study kept fluctuating, often even getting younger. Whereas in the original data, if the age has increased by \(\Delta_{alter}\) then the increase in time \(\Delta_{time}\) is exactly the same. For the virtual participants produced by the three experiments, the error \(\Delta_{time}\) - \(\Delta_{alter}\) has been calculated for any two consecutive visits, so 15 times per participant. Figure 6(a) shows the results across all visits of all participants for all three experiments. Here again, we see the worst result for the baseline approach, and the best result for VAMBN-MT. The error in passing time is clearly smaller than in the other two VAMBN versions. Hence, we can conclude that our LSTM adaption significantly improves the reconstruction of direct dependencies between variables within and across time points. However, these observed improvements are limited to variables across different visits with direct mathematical relationships. Similar relationships existing within a single visit are mostly unaffected by the extension, as Figure 6(b) shows. ### Real-World Analyses To finally judge the usefulness of synthetic data, we investigate and compare their performance on real-world analyses. Because VAMBN and VAMBN-FT show a lower performance in reproducing correlations between variables and direct Figure 6: Examples of reconstructions of the boolean variable \(m\_schulab\) across time. This variable indicates whether the mother of the participant has attended 12 years of school education. Whenever it was set to _True_ (light blue) at some point, it can logically not change back to _False_ (dark blue) in a later visit. The different subfigures represent the results for the three different approaches, respectively. In each case, 100 samples have been drawn randomly. Figure 7: Reproduction of direct mathematical dependencies. In (a), the error in passing time between the child’s age and the study time is shown. It is calculated between any two successive visits, where \(\Delta_{alter}\) is the time passed for the child, and \(\Delta_{time}\) is the time passed for the study. In (b), the error in reconstructing a direct mathematical relationship within a single visit is shown, i.e. the sum of proteins (EW_p), fats (Fett_p) and carbohydrates (KH_p). A perfect reconstruction of the three variables should always sum to 100. dependencies of the dataset compared to VAMBN-MT, only VAMBN-MT's performance on real-world analyses is investigated. In Figure 8, the age and time trends predicted by the polynomial mixed-effects regression models for the _added sugar_ intake can be seen. The determined trends, including significance levels, can be found in the Appendix in Table A4. 
The age trend can be reproduced very well by the synthetic data and we see the same progression: the added sugar intake increases from the beginning up to the age of 10 and slowly decreases again afterwards. It can be seen that the variance (indicated as confidence intervals by the error bar) gets higher with increasing age. This may be due to the fact that the data basis is more incomplete for older participants. Whereas the same trend can be seen for both module selection settings (i) and (ii), only selection (ii) is able to reproduce approximately the same values (i.e. in terms of amplitude). In this selection, the two dependent variables _age_ and _added sugar_ are learned together in one module. Nevertheless, we achieve the same trend with module setting (i), presumably because the varying age of the participant is implicitly available to the network by the number of the visit for each module. The time trend can be approximated with the _VAMBN-MT (ii) model_ (visualised in black in Fig. 8b). In this setting, the variables _time_ and _added sugar_ (\(zuzu_{p}\)) are learned together in one module. In contrast, with the model trained with setting _(i)_, where the two mentioned variables are encoded by two different autoencoders, the overall descending trend cannot be reproduced - i.e., there is no change in added sugar consumption over the years. **Effects of sampling:** From each trained model, an arbitrary number of datasets can be sampled. Whereas resampling the data led to no visible changes in the previously shown analyses (i.e. variable distributions, correlations between variables, and direct dependencies), an effect can be seen for the real-world analyses. Here, we realised that the sample size plays an important role for the magnitude of this effect. For \(N=10,000\) (without any post-processing), we see the same overall trend for all analyses, with only small variances in amplitude. However, for \(N=1,312\), which corresponds to the number of samples in the original data, and including missingness, we sometimes even experience differences in the trend. In the Appendix in Figures A1a and A1b, examples of good and bad results can be seen for both sample sizes, respectively. This finding is in line with the observation that we get much higher variances in terms of confidence intervals for the smaller sample size as compared to the larger one, as can be seen in Fig. A2. Additionally, the sample size has an effect on the significance level. For the sample size of 10,000, all of the 100 sampled datasets show statistically significant results for both age and time trends - i.e. they have p-values below 0.05 for all terms (linear, quadratic and cubic). In contrast, for the sample size of 1,312, only 70.71% and 22.22% of the samples produce statistically significant age and time trends, respectively. **Effects of real data quality:** The significance levels found in the age and time trends of the real data influence the ability to reproduce the results with synthetic data. In addition to the _added sugar_, Perrar _et al._ also investigated age and time trends for the _total sugar_ consumption. Whereas the age trends are all statistically significant (i.e. \(p<0.05\)), this is not the case for the time trend, where the linear and cubic terms have p-values of 0.8951 and 0.1620, respectively. This is reflected in our analyses, as we are able to reproduce the age trend, but not the time trend. This is summarised in Fig. A3. 
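A hedged sketch of this resampling procedure is given below. The `generator.sample()` call is a hypothetical interface standing in for drawing one synthetic dataset from a trained model, and the significance check simply counts datasets in which all fixed-effect terms of the cubic age model fall below the 0.05 threshold.

```python
import statsmodels.formula.api as smf


def fraction_significant(generator, n_datasets: int = 100,
                         n_participants: int = 1312, alpha: float = 0.05) -> float:
    """Share of synthetic datasets whose cubic age trend is statistically significant."""
    hits = 0
    for seed in range(n_datasets):
        # Hypothetical call: draw one synthetic dataset (long format) from the trained model.
        df = generator.sample(n_participants, seed=seed)
        fit = smf.mixedlm("ZUZU_p ~ age + I(age**2) + I(age**3)",
                          data=df, groups=df["pers_ID"]).fit()
        # Keep only the fixed-effect terms involving age and test them against alpha.
        age_pvalues = fit.pvalues.filter(like="age")
        hits += int((age_pvalues < alpha).all())
    return hits / n_datasets
```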
## 4 Discussion Due to legitimate privacy concerns, access to personal health data is strictly regulated [3], even for scientific purposes. However, the availability of data is fundamentally necessary for scientific progress. For deep learning methods to be successfully applied, it is also crucial that the available training datasets are sufficiently large [1]. To satisfy both privacy and usability demands, a classic approach is to anonymise datasets, which reduces disclosure risks and may allow sharing of the data even without informed consent. But anonymisation can severely limit the utility of medium to high-dimensional data for statistical analyses [4], and there are still recent results proclaiming to be able to re-identify individuals from anonymised data [33; 34]. For these reasons, fully synthetic data generation methods are explored as an alternative to anonymisation. Their idea is that generating and sharing synthetic data may offer a better compromise between privacy of the individuals who appeared in the original data set, and the usability of the shared data [35]. A lot of research has been already performed in the area of synthetic data generation methods. Common applications are based on image, text or tabular data generation from various domains (e.g. [14; 15; 16; 17]). However, the heterogeneity, the incompleteness, and the longitudinality of the DONALD dataset complicate the construction of generative models that produce useful synthetic data, making the dataset particularly interesting for the development of new methods and their evaluation. Due to the challenging properties of the dataset, the majority of published methods are not suitable for the task at hand. Therefore, we chose VAMBN [21] as a baseline method that is indeed designed to handle longitudinal, heterogeneous, and incomplete datasets. It does so by combining HI-VAE [28], a generative model that can encode and reconstruct heterogeneous incomplete data, with a Bayesian Network. In our study, we showed that VAMBN is able to reproduce summary statistics and individual variable distributions effectively. However, pair-wise correlations between different variables in the synthetic data are often not captured well (see Fig. 5). Moreover, VAMBN fails to learn use case specific direct dependencies over time, such as the graduation status of the mother of the participants or the linear correlation between the participant's age and the time point of the study (see Figures 6 and 7a). Therefore, we proposed an extension of VAMBN, called _VAMBN-Memorised Time points_ (MT), that incorporates an LSTM network, which was designed to have a long short-term memory, i.e. to better cope with time dependencies within the data. In our extension, several visits of the same variable were modelled within one module instead of splitting them into different modules that are connected via the BN. Hence, this extension firstly requires each HI-VAE module to map the data for all visits of its variable group to a single encoding, which is a process assisted by the newly introduced LSTM layer. And secondly, the extension must be able to reconstruct all data points of all visits at once from the singular encoding, which is done in a manner that ensures that all dependencies can be learned. 
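To make the architectural change concrete, the snippet below is a minimal PyTorch sketch of the idea of summarising all visits of one variable module with an LSTM before producing a single embedding. It is not the authors' implementation: the real HI-VAE recognition model parameterises a Gaussian mixture and handles heterogeneous likelihoods and missingness, all of which are omitted here, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn


class LongitudinalEncoder(nn.Module):
    """Simplified VAMBN-MT-style encoder: an LSTM reads all visits of one module
    and its final hidden state parameterises a single (Gaussian-only) embedding."""

    def __init__(self, n_features: int, hidden_dim: int = 16, z_dim: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, z_dim)
        self.logvar = nn.Linear(hidden_dim, z_dim)

    def forward(self, x: torch.Tensor):
        # x: (batch, visits, features); missing values are assumed imputed or masked upstream.
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden_dim), last hidden state
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)  # parameters of the shared longitudinal embedding


# Toy usage: 4 participants, 16 visits, 5 variables in the module.
encoder = LongitudinalEncoder(n_features=5)
mu, logvar = encoder(torch.randn(4, 16, 5))
```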
In order to judge the improvement from including the LSTM layer, we compared both the baseline and VAMBN-MT to _VAMBN-Flattened Time points_ (FT), which also reframes the visits so that they are learned together, but only uses the default feedforward network in the HI-VAE encoder (see Figure 2). In terms of pair-wise correlations between variables, VAMBN-MT clearly outperforms the two other approaches (Fig. 5). Also the direct dependencies across visits (i.e. the education status of the mother and the linear relationship between _time_ and _age_) are greatly improved with VAMBN-MT. An essential part of the quality analysis of synthetic data is to test whether they can be used for real-world analyses. Therefore, we use VAMBN-MT to reproduce time and age trends for added sugar intake, as done by Perrar _et al._ [12]. Here, we could successfully reproduce the age trend and approximate the time trend very well (see Fig. 8). Because we realised that the real-world analyses can vary between different datasets that were sampled from the same model (which is not the case for the previous analyses), we systematically determined the effect of sampling by drawing new samples from each model 100 times and varying the sample size by generating datasets of size 1,312 (= size of the original data) and 10,000 (with and without post-processing, respectively). From this, we draw the following conclusions: Firstly, a larger dataset leads to more stable age and time trends that only differ in amplitude but not in the progression of the trend itself (compare Fig. A1). Larger datasets also show less variance (in terms of confidence intervals) (Fig. A2). Moreover, whereas analyses with a small dataset lead to statistically significant results only in 22%-70% of cases, we get 100% significance for the large datasets. In addition, we could show that the selection of the modules needs to be made depending on the research question, because correlations are lost if variables are trained with separate autoencoders. Finally, we investigated the total sugar intake, which does not show statistically significant results in the real data for linear and quadratic time trends, and conclude that a non-significant trend cannot be reproduced with the synthetic data (see Figure A3).

Figure 8: Comparison of age and time trends for added sugar intake predicted by polynomial mixed-effects regression models. The age and time trends of added sugar intake are shown for the real data (red) and the synthetic data (blue: setting (i), black: setting (ii)) in subfigures a) and b), respectively. Whereas setting (i) corresponds to the original module selections proposed by the experts, in setting (ii) the dependent variables _time_, _age_ and _added sugar_ are in one module, hence learned by the same autoencoder model. The error bar for the synthetic data indicates the confidence intervals calculated across 100 independently sampled datasets. For all synthetic datasets, the sample size is \(N=10,000\).

Altogether, we showed the importance of use case-specific evaluations incorporating expert knowledge, especially in terms of direct dependencies. With the real-world analyses, we got a demonstration of the usefulness of synthetic data and gained valuable insights into the effects of resampling and the chosen sample size, the selection of variables that need to be learned together, and the importance of the significance level of the real-world experiments. 
## 5 Conclusion and Outlook In this study, we generated synthetic data for a longitudinal study from the nutritional domain and performed an in-depth quality analysis, going beyond summary statistics and individual variable distributions. As we realised restrictions in the reproduction of use case specific direct dependencies across time points of current state-of-the-art methods, we developed VAMBN-MT, an LSTM-based extension of VAMBN that outperforms the original approach and is even able to reproduce real-world analyses. We highlighted the drastic increase in synthetic data quality achieved by incorporating expert domain knowledge and choosing a sufficiently large sample size. We showed that the resulting synthetic data can be a valuable source for real-world analyses. As a next step in our research, we plan to investigate the privacy risks associated with the data generated by our model to gain further insights into the risk-utility trade-off provided and to ultimately unlock the potential benefits of using synthetic data. ## Availability We provide the source code for the adapted algorithm under [https://github.com/nfdi4health/vambn-extensions-evaluations/](https://github.com/nfdi4health/vambn-extensions-evaluations/). ## Acknowledgements This work was done as part of the NFDI4Health Consortium (www.nfdi4health.de). We gratefully acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- project number 442326535. The DONALD Study is financially supported by the Ministry of Science and Research of North Rhine-Westphalia, Germany. Trend analyses of the original data were part of a project funded by the German Federal Ministry of Food and Agriculture (BMEL) through the Federal Office for Agriculture and Food (BLE), grant 2816HS024. ## References * [1] Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamma Nasrin, Mahmudul Hasan, Brian C. Van Essen, Abdul A. S. Awwal, and Vijayan K. Asari. A state-of-the-art survey on deep learning theory and architectures. _Electronics_, 8(3):292, 2019. * [2] Mark D. Wilkinson, Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E. Bourne, Jildau Bouwman, Anthony J. Brookes, Tim Clark, Merce Crosas, Ingrid Dillo, Olivier Dumon, Scott Edmunds, Chris T. Evelo, Richard Finkers, Alejandra Gonzalez-Beltran, Alasadar J. G. Gray, Paul Groh, Carole Goble, Jeffrey S. Grethe, Jaap Heringa, Peter A. C. 't Hoen, Rob Hooft, Tobias Kuhn, Ruben Kok, Joost Kok, Scott J. Lusher, Maryann E. Martone, Albert Mons, Abel L. Packer, Bengt Persson, Philippe Rocca-Serra, Marco Roos, Rene van Schaik, Susanna-Assunta Sansone, Erik Schultes, Thierry Sengstag, Ted Slater, George Strawn, Morris A. Swertz, Mark Thompson, Johan van der Lei, Erik van Mulligen, Jan Velterop, Andra Waagmeester, Peter Wittenburg, Katherine Wolstencroft, Jun Zhao, and Barend Mons. The FAIR guiding principles for scientific data management and stewardship. 3(1):160018. Number: 1 Publisher: Nature Publishing Group. * [3] Regulation (eu) 2016/679 of the european parliament and of the council of 27 april 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/ec (general data protection regulation). _OJ_, 2016. * [4] Charu C. Aggarwal. On k-anonymity and the curse of dimensionality. 
In _Proceedings of the 31st international conference on Very large data bases_, VLDB '05, pages 901-909. VLDB Endowment. * [5] Yang Lei, Joseph Harms, Tonghe Wang, Yingzi Liu, Hui-Kuo Shu, Ashesh B. Jani, Walter J. Curran, Hui Mao, Tian Liu, and Xiaofeng Yang. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. 46(8):3565-3581 _eprint: [https://onlinelibrary.wiley.com/doi/pdf/10.1002/mp.13617](https://onlinelibrary.wiley.com/doi/pdf/10.1002/mp.13617). * [6] Philipp Wendland, Colin Birkenbihl, Marc Gomez-Freixa, Meemansa Sood, Maik Kschischo, and Holger Frohlich. Generation of realistic synthetic data using multimodal neural ordinary differential equations. 5(1):1-10. Number: 1 Publisher: Nature Publishing Group. * [7] Meemansa Sood, Akrishta Sahay, Reagon Karki, Mohammad Asif Emon, Henri Vrooman, Martin Hofmann-Apitius, and Holger Frohlich. Realistic simulation of virtual multi-scale, multi-modal patient trajectories using bayesian networks and sparse auto-encoders. 10(1):10971. Number: 1 Publisher: Nature Publishing Group. * [8] Andre Goncalves, Priyadip Ray, Braden Soper, Jennifer Stevens, Linda Coyle, and Ana Paula Sales. Generation and evaluation of synthetic patient data. 20(1):108. * [9] Debbie Rankin, Michaela Black, Raymond Bond, Jonathan Wallace, Maurice Mulvenna, and Gorka Epelde. Reliability of supervised machine learning using synthetic data in health care: Model to preserve privacy for data sharing. 8(7):e18910. * [10] AE Buyken, U Alexy, M Kersting, and T Remer. Die DONALD Kohorte. _Bundesgesundheitsblatt-Gesundheitsforschung-Gesundheitsschutz_, 55(6):875-884, 2012. * [11] Ines Perrar, Alena M. Schadow, Sarah Schmitting, Anette E. Buyken, and Ute Alexy. Time and age trends in free sugar intake from food groups among children and adolescents between 1985 and 2016. 12(1):E20. * [12] Ines Perrar, Sarah Schmitting, Karen W Della Corte, Anette E Buyken, and Ute Alexy. Age and time trends in sugar intake among children and adolescents: results from the DONALD study. _European journal of nutrition_, 59(3):1043-1054, 2020. * [13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. _Advances in neural information processing systems_, 27, 2014. * [14] Ming-Yu Liu, Xun Huang, Jiahui Yu, Ting-Chun Wang, and Arun Mallya. Generative adversarial networks for image and video synthesis: Algorithms and applications. 109(5):839-862. Conference Name: Proceedings of the IEEE. * [15] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. pages 8110-8119. * [16] Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, and Xiang Ren. Generating natural language adversarial examples on a large scale with generative models. * [17] Sandeep Subramanian, Sai Rajeswar, Francis Dutil, Chris Pal, and Aaron Courville. Adversarial generation of natural language. In _Proceedings of the 2nd Workshop on Representation Learning for NLP_, pages 241-251. Association for Computational Linguistics. * [18] Edward Choi, Siddharth Biswal, Bradley Malin, Jon Duke, Walter F Stewart, and Jimeng Sun. Generating multi-label discrete patient records using generative adversarial networks. In _Machine learning for healthcare conference_, pages 286-305. PMLR, 2017. * [19] Cristobal Esteban, Stephanie L. Hyland, and Gunnar Ratsch. 
Real-valued (medical) time series generation with recurrent conditional GANs. * [20] Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time-series generative adversarial networks. _Advances in neural information processing systems_, 32, 2019. * [21] Luise Gootjes-Dreesbach, Meemansa Sood, Akrishta Sahay, Martin Hofmann-Apitius, and Holger Frohlich. Variational autoencoder modular bayesian networks for simulation of heterogeneous clinical study data. _Frontiers in big Data_, 3:16, 2020. * [22] Jeremy Georges-Filteau and Elisa Cirillo. Synthetic observational health data with GANs: from slow adoption to a boom in medical research and ultimately digital twins? * [23] Ali Borji. Pros and cons of GAN evaluation measures. * [24] Andre Goncalves, Priyadip Ray, Braden Soper, Jennifer Stevens, Linda Coyle, and Ana Paula Sales. Generation and evaluation of synthetic patient data. _BMC medical research methodology_, 20(1):1-40, 2020. * [25] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Shai Halevi and Tal Rabin, editors, _Theory of Cryptography_, Lecture Notes in Computer Science, pages 265-284. Springer. * [26] Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_, CCS '16, pages 308-318. Association for Computing Machinery. * [27] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. _Neural computation_, 9(8):1735-1780, 1997. * [28] Alfredo Nazabal, Pablo M Olmos, Zoubin Ghahramani, and Isabel Valera. Handling incomplete heterogeneous data using vaes. _Pattern Recognition_, 107:107501, 2020. * [29] David Heckerman and Dan Geiger. Learning bayesian networks: a unification for discrete and gaussian domains. In _Proceedings of the Eleventh conference on Uncertainty in artificial intelligence_, UAI'95, pages 274-284. Morgan Kaufmann Publishers Inc. * [30] Frank Nielsen. On the jensen-shannon symmetrization of distances relying on abstract means. _Entropy_, 21(5), 2019. * [31] Herbert A. Sturges. The choice of a class interval. 21(153):65-66. Publisher: New York. * [32] David Freedman and Persi Diaconis. On the histogram as a density estimator:l 2 theory. 57(4):453-476. * [33] Aloni Cohen. Attacks on deidentification's defenses. In _31st USENIX Security Symposium (USENIX Security 22)_, pages 1469-1486, Boston, MA, August 2022. USENIX Association. * [34] Luc Rocher, Julien M. Hendrickx, and Yves-Alexandre de Montioye. Estimating the success of re-identifications in incomplete datasets using generative models. 10(1):3069. Number: 1 Publisher: Nature Publishing Group. * [35] Khaled El Emam, Lucy Mosquera, and Richard Hoptroff. _Practical synthetic data generation: balancing privacy and the broad availability of data_. O'Reilly Media, 2020. * [36] W. N. Schofield. Predicting basal metabolic rate, new standards and review of previous work. * [37] W. Sichert-Hellert, M. Kersting, and G. Schoch. Underreporting of energy intake in 1 to 18 year old german children and adolescents. 37(3):242-251. ## Appendix ### DONALD Data In Table 1, metadata for the DONALD dataset can be seen that are described in the following. Each participant gets a personal number - a randomly generated unique identifier. In addition, each participating family gets a randomly generated family number. 
A family consists of the mother, the father, the biological, the adoptive, and the foster children. The sex of each participant is noted as either male or female. For every three-day weighed dietary record, the age of the participant is noted together with the date of the dietary record. Thereby, a specific time variable is created for time trend analyses, where the first included record in this evaluation was considered the baseline time, i.e., time = 0. Therefore, time ranged between 0 and 31 years. Most variables describe the nutrient intake as a percentage of the total energy intake per day (%E/d). This total energy intake is also given, in kilocalories per day (kcal/d). In addition, a numerical value for the basal metabolic rate (BMR) is reported, which is the amount of calories that are burnt during rest [36]. Another variable introduced is the Boolean variable of underreporting. Thereby, the plausibility of the reported values is determined based on a comparison to standard value ranges according to Sichert-Hellert _et al._ [37]. Two further measures are reported for each visit concerning the weight of the participant. The first one is the determination of obesity according to Cole _et al._ (2000) and the other one is the well-known body mass index (BMI). Finally, socioeconomic factors are taken into account, consisting of the BMI of the mother of the participant, whether she is currently employed, and whether she completed her A-level (in German _Fachabitur_ or _Abitur_, corresponding to 12 years of school education). For every three-day weighed dietary record, the number of weekdays or weekend days was recorded as well (_wo_tage_). ### VAMBN Settings Table 2 summarises the different module selections that need to be decided before the use of VAMBN. Our black- and whitelists, which are needed for the Bayesian Network learning, contain the basic constraints that an edge can never go back in time, and that the standalone variables cannot depend on any other variable. Moreover, the used hyperparameters for training can be found in Table 3.
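For illustration, the two edge constraints described above could be encoded as a blacklist of forbidden edges roughly as follows. The module names, the number of visits and the choice of standalone variables are assumptions for this sketch and do not reproduce our exact configuration files.

```python
# Illustrative generation of a Bayesian-network blacklist implementing:
# (a) no edge may point backwards in time, and
# (b) standalone variables may not depend on any other node.
from itertools import product

modules = ["times", "nutrition", "anthropometric", "socioeconomic"]
n_visits = 10                      # assumed number of modelled visits
standalone = ["sex"]               # assumed standalone (time-constant) variables

nodes = [f"{m}_vis{v}" for m, v in product(modules, range(n_visits))] + standalone

blacklist = []
for src, dst in product(nodes, nodes):
    if src == dst:
        continue
    src_visit = int(src.rsplit("_vis", 1)[1]) if "_vis" in src else None
    dst_visit = int(dst.rsplit("_vis", 1)[1]) if "_vis" in dst else None
    if src_visit is not None and dst_visit is not None and src_visit > dst_visit:
        blacklist.append((src, dst))   # forbid edges going backwards in time
    if dst in standalone:
        blacklist.append((src, dst))   # forbid incoming edges into standalone variables

print(len(blacklist), "forbidden edges")
```

The resulting list of (source, target) pairs is what the structure-learning step consumes as a blacklist.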
\begin{table} \begin{tabular}{l l l l} Variable Description & \multicolumn{1}{c}{Variable name} & \multicolumn{1}{c}{Variable type} & Unit \\ \hline Personal number & pers\_ID & numerical & - \\ Family number & fam\_ID & numerical & - \\ Gender & sex & categorical & - \\ \hline Age on first day of protocol & age & numerical & years \\ Time variable & time & numerical & years \\ \hline Daily energy intake & e\_cal & numerical & kcal/d \\ Protein intake & EW\_p & numerical & \%E/d \\ Fat intake & Fett\_p & numerical & \%E/d \\ Carbohydrate intake & KH\_p & numerical & \%E/d \\ Glucose intake & Gluc\_p & numerical & \%E/d \\ Fructose intake & Fruc\_p & numerical & \%E/d \\ Galactose intake & Galac\_p & numerical & \%E/d \\ Monosaccharide intake & MS\_Sacch\_p & numerical & \%E/d \\ Saccharomyces intake & Sacch\_p & numerical & \%E/d \\ Maltose intake & MALT\_p & numerical & \%E/d \\ Lactose intake & LACT\_p & numerical & \%E/d \\ Disaccharide intake & DISACCH\_p & numerical & \%E/d \\ Total sugar intake & ZUCK\_p & numerical & \%E/d \\ Added sugar & ZUZU\_p & numerical & \%E/d \\ Free sugar & free\_s\_p & numerical & \%E/d \\ Free sugar from juice & fs\_saft\_p & numerical & \%E/d \\ Free sugar from fruits and vegetables & fs\_obge\_p & numerical & \%E/d \\ Free sugar from sugar and sweets & fs\_sp\_p & numerical & \%E/d \\ Free sugar from bread and cake & fs\_bc\_p & numerical & \%E/d \\ Free sugar from other sources & fs\_oth\_p & numerical & \%E/d \\ Free sugar from dairy products & fs\_dai\_p & numerical & \%E/d \\ Free sugar from Sugar Sweetened Beverages (SSB) & fs\_ssb\_p & numerical & \%E/d \\ Number of weekdays & wo\_tage & categorical & - \\ \hline Basal metabolic rate (BMR) & bmr & numerical & - \\ Underreporting & underrep & categorical & - \\ Overweight status & ovw & categorical & - \\ Body Mass Index (BMI) & bmi & numerical & kg/m2 \\ \hline Overweight status of the mother & m\_ovw & categorical & - \\ Current employment of the mother & m\_employ & categorical & - \\ High maternal educational status of the mother & m\_schulab & categorical & - \\ \hline \end{tabular} \end{table} Table 1: Overview of the variables contained within the DONALD dataset \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Learning rate} & Times & \\ & Nutrition & \\ & Anthropometric & \\ & Socioeconomic & \\ \hline \multirow{3}{*}{Batch Size} & Times & \\ & Nutrition & \\ & Anthropometric & \\ & Socioeconomic & \\ \hline \multirow{3}{*}{Y-Dimensionality} & Times & \\ & Nutrition & \\ & Anthropometric & \\ & Socioeconomic & \\ \hline \multirow{3}{*}{S-Dimensionality} & Times & 1 \\ & Nutrition & \\ & Anthropometric & 2 \\ \hline \multirow{3}{*}{LSTM-Dimensionality} & Times & \\ & Nutrition & \\ & Anthropometric & 20 \\ \hline \hline \end{tabular} \end{table} Table 16: Modelue selections for different VAMBN-based models*.
2302.03444
Random Walk on a Rough Surface: Renormalization Group Analysis of a Simple Model
The field theoretic renormalization group is applied to a simple model of random walk on a rough fluctuating surface. We consider the Fokker--Planck equation for a particle in a uniform gravitational field. The surface is modelled by the generalized Edwards--Wilkinson linear stochastic equation for the height field. The full stochastic model is reformulated as a multiplicatively renormalizable field theory, which allows for application of the standard renormalization theory. The renormalization group equations have several fixed points that correspond to possible scaling regimes in the infrared range (long times, large distances); all the critical dimensions are found exactly. As an example, the spreading law for particle's cloud is derived. It has the form $R^2(t)\simeq t^{2/\Delta_{\omega}}$ with the exactly known critical dimension of frequency $\Delta_{\omega}$ and, in general, differs from the standard expression $R^2(t)\simeq t$ for ordinary random walk.
N. V. Antonov, N. M. Gulitskiy, P. I. Kakin, D. A. Kerbitskiy
2023-02-07T12:58:56Z
http://arxiv.org/abs/2302.03444v1
# Random Walk on a Rough Surface: Renormalization Group Analysis of a Simple Model ###### Abstract The field theoretic renormalization group is applied to a simple model of random walk on a rough fluctuating surface. We consider the Fokker-Planck equation for a particle in a uniform gravitational field. The surface is modelled by the generalized Edwards-Wilkinson linear stochastic equation for the height field. The full stochastic model is reformulated as a multiplicatively renormalizable field theory, which allows for application of the standard renormalization theory. The renormalization group equations have several fixed points that correspond to possible scaling regimes in the infrared range (long times, large distances); all the critical dimensions are found exactly. As an example, the spreading law for particle's cloud is derived. It has the form \(R^{2}(t)\simeq t^{2/\Delta_{\omega}}\) with the exactly known critical dimension of frequency \(\Delta_{\omega}\) and, in general, differs from the standard expression \(R^{2}(t)\simeq t\) for ordinary random walk. stochastic growth, kinetic roughening, random walk, renormalization group ## 1 Introduction Over decades, stochastic growth processes, kinetic roughening phenomena and fluctuating surfaces or interfaces have been attracting constant attention. The most prominent examples include deposition of a substance on a surface and the growth of the corresponding phase boundary; propagation of flame, smoke, and solidification fronts; growth of vicinal surfaces and bacterial colonies; erosion of landscapes and seabed profiles; molecular beam epitaxy and many others; see [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] and references therein. Another vast area of research is that of diffusion and random walks in random environment such as disordered, inhomogeneous, porous or turbulent media; see, e.g. [14; 15; 16; 17]. In this paper, we study a simple model of a random walk on a rough fluctuating surface. We consider the Fokker-Planck equation for a particle in a uniform gravitational field. The surface is modelled by the generalized Edwards-Wilkinson linear stochastic equation for the height field [1]. The generalized model involves two arbitrary exponents: \(\varepsilon\) and \(\eta\), related to the spectrum and the dispersion law of the height field, respectively. Detailed description of the model and its relation to various special cases is given in Sec. 2. Using the general Martin-Siggia-Rose-De Dominicis-Janssen theorem, the original stochastic problem is reformulated as a certain field theoretic model. This allows one to apply the well-developed formalism of Feynman diagrammatic techniques, renormalization theory and renormalization group (RG). The model is shown to be multiplicatively renormalizable, so that the RG equation can be derived in an standard way. The corresponding renormalization constants and the RG functions (anomalous dimensions and \(\beta\) functions) are explicitly calculated in the leading one-loop order of the RG perturbation theory. These issues are discussed in Secs. 3 and 4. The RG equations have two Gaussian (free) fixed points and two nontrivial ones. Those points are infrared (IR) attractive depending on the values of the parameters \(\varepsilon\) and \(\eta\), which implies the existence of scaling (self-similar) asymptotic regimes in the IR range (long times and large distances) for various response and correlation functions of the model (Sec. 4). 
The critical dimensions for those regimes are found exactly as functions of \(\varepsilon\) and \(\eta\). As an indicative application, the time dependence of the mean-square radius of a cloud of randomly walking particles is obtained (Sec. 5). It is described by a power law with the exponent that depends on the fixed point, is known exactly as a function of \(\varepsilon\) and \(\eta\) and, for nontrivial points, differs from the ordinary random walk: \(R^{2}(t)\simeq t\). Some implications and possible generalizations are discussed in Sec. 6. ## 2 Description of the model We consider the following stochastic problem for a random walk: \[\partial_{t}x_{i}=F_{i}({\bf x})+\zeta_{i},\quad\langle\zeta_{i}(t)\zeta_{j}( t^{\prime})\rangle_{\zeta}=2v_{0}\delta(t-t^{\prime}). \tag{1}\] Here \({\bf x}(t)=\{x_{i}(t)\}\) is the coordinate of the particle, \(i=1\ldots d\), where \(d\) is an arbitrary (for generality) dimension of the \({\bf x}\) space, \(\zeta_{i}=\zeta_{i}(t)\) is a Gaussian random noise with zero mean and a given pair correlation function, \(v_{0}>0\) is the diffusion coefficient, and \(F\) is an external "drift" force.1 The probability distribution function \(P(t,{\bf x})\) satisfies the (deterministic) Fokker-Planck equation Footnote 1: Here and below, the subscript \(0\) refers to bare parameters which will be renormalized in the following. \[\left\{\partial_{t}+\partial_{i}(F_{i}-v_{0}\partial_{i})\right\}\;P(t,{\bf x} )=0 \tag{2}\] (here and below, summation over repeated indices is implied). For a particle in a constant gravitational field one has \[F_{i}=-\lambda_{0}\partial_{i}h, \tag{3}\] where \(\lambda_{0}=mg\), \(g\) is the gravitational acceleration, \(m\) is the particle's mass, and \(h\) is the height of its location. The simplest model of a surface roughening, proposed within the context of landscape erosion, is the one due to Edwards and Wilkinson [1]. In the continuous formulation, it is described by the diffusion-type stochastic equation for the height field \(h=h(t,{\bf x})\): \[\left\{\partial_{t}-\kappa_{0}\partial^{2}\right\}h(t,{\bf x})=f(t,{\bf x}), \tag{4}\] where \(\kappa_{0}>0\) is (a kind of) surface tension coefficient, \(\partial^{2}=\partial_{i}\partial_{i}\) is the Laplace operator and \(f\) is a Gaussian random noise with zero mean and a given pair correlation function. The most popular choices are the white noise \[\langle f(t,{\bf x})f(t^{\prime},{\bf x}^{\prime})\rangle_{f}=D_{0}\delta(t-t ^{\prime})\delta({\bf x}-{\bf x}^{\prime}) \tag{5}\] with the positive amplitude \(D_{0}>0\), and the quenched noise; the simplified version of the latter is \[\langle f(t,{\bf x})f(t^{\prime},{\bf x}^{\prime})\rangle_{f}=D_{0}\delta({\bf x }-{\bf x}^{\prime}). \tag{6}\] In this paper, we consider a generalized equation \[\left\{\partial_{t}+\kappa_{0}k^{2-\eta}\right\}h(t,{\bf x})=f(t,{\bf x}), \tag{7}\] written here in the symbolic notation with \(k\) being the wave number, 2 while the correlation function is taken in a power-like form: Footnote 2: Detailed discussion of fractional derivatives can be found in [15]. \[\langle f(t,\mathbf{x})f(t^{\prime},\mathbf{x}^{\prime})\rangle_{f}=D_{0}\delta(t -t^{\prime})\,\int\frac{d\mathbf{k}}{(2\pi)^{d}}\,k^{2-d-y}\,\exp\{i\mathbf{k} (\mathbf{x}-\mathbf{x}^{\prime})\}. \tag{8}\] Here \(\eta\) and \(y\) are arbitrary exponents and \(d\) is the dimension of space. Clearly, the choice \(\eta=0\), \(2-d-y=0\) corresponds to the model (4), (5); as we will see, the model (4), (6) can also be obtained from (7), (8). 
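A quick consistency check, which is an elementary observation and not part of the derivation below, may help to fix intuition: for a frozen, time-independent surface \(h({\bf x})\), the Fokker-Planck equation (2) with the force (3) admits a Boltzmann-like stationary solution,

\[P_{\rm st}({\bf x})\propto\exp\{-\lambda_{0}h({\bf x})/v_{0}\},\qquad \partial_{i}P_{\rm st}=-\frac{\lambda_{0}}{v_{0}}\,(\partial_{i}h)\,P_{\rm st} \;\Longrightarrow\; F_{i}P_{\rm st}-v_{0}\,\partial_{i}P_{\rm st}=0,\]

so the probability current vanishes identically. It is the coupling of this drift to a fluctuating, scale-free height field that makes the large-distance behaviour nontrivial and motivates the RG analysis below.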
The choice \(\eta\neq 0\) can be justified by the ideas of self-organized criticality (SOC) according to which the evolution of a sandpile surface is not an ordinary diffusion-type process but involves several discrete steps: expectation period, reaching a threshold and avalanche; see, e.g. [18]. For a linear stochastic equation with a Gaussian additive random noise, the field \(h\) is also a Gaussian field defined by its pair correlation function. For the model (7), (8), the latter has the following form in the Fourier (\(\omega\)-\(\mathbf{k}\)) representation \[D_{h}(\omega,k)=\frac{D_{0}\,k^{2-d-y}}{\omega^{2}+[\kappa_{0}k^{2-\eta}]^{2} }=\frac{g_{0}u_{0}v_{0}^{3}\,k^{2-d-\eta-\epsilon}}{\omega^{2}+[u_{0}v_{0}k^{ 2-\eta}]^{2}}. \tag{9}\] In the second relation we introduced the new variables: the exponent \(\epsilon\) and the amplitudes \(g_{0}\), \(u_{0}\), defined by the relations \[\epsilon=y-\eta,\quad\kappa_{0}=u_{0}v_{0},\quad D_{0}=g_{0}u_{0}v_{0}^{3}. \tag{10}\] They are convenient, in particular, because the equal-time correlation function \[D_{h}(k)=\int\frac{d\omega}{2\pi}D(\omega,k)\propto g_{0}\,v_{0}^{2}\,k^{-d-\epsilon} \tag{11}\] involves the parameters \(g_{0}\), \(\epsilon\), while the dispersion law \(\omega(k)\propto u_{0}v_{0}k^{2-\eta}\) is expressed only via \(u_{0}\), \(\eta\). The model (9) includes two special cases interesting on their own. In the limit \(u_{0}\to\infty\) and \(g_{0}^{\prime}=g_{0}/u_{0}\) fixed, the function \(D(\omega,k)\) becomes independent of the frequency \(\omega\), and the field \(h(t,\mathbf{x})\) becomes white in time. Indeed, one obtains in the (\(t\)-\(\mathbf{k}\)) representation \[D(t-t^{\prime},k)=\delta(t-t^{\prime})\,g_{0}^{\prime}\,v_{0}^{2}\,k^{-2-d- \epsilon+\eta}. \tag{12}\] In the limit \(u_{0}\to 0\) and \(g_{0}\) fixed, the function \(D_{h}(k)\) in (11) remains finite, so that (9) tends to \[D_{h}(\omega,k)=\pi\delta(\omega)\,g_{0}\,v_{0}^{2}\,k^{-d-\epsilon}, \tag{13}\] which corresponds to the time-independent (quenched or frozen) field \(h\). Surprisingly enough, for \(\epsilon=4-d\), this reproduces the model (4), (6) where one has \(D_{h}\propto\delta(\omega)/k^{4}\). Substituting the gravitational force (3) with the random height field from (7), (8) into the Fokker-Planck equation (2) turns the latter into a stochastic equation in its own right. This completes formulation of the problem. **3. Field theoretic formulation and renormalization of the model** According to the general theorem (see, e.g. Sec. 5.3 in monograph [19]), the full stochastic problem (2), (3), (7), (8) is equivalent to the field theoretic model for the doubled set of fields \(\Phi=\{\theta^{\prime},h^{\prime},\theta,h\}\) with the De Dominicis-Janssen action functional: \[\mathcal{S}(\Phi) =\theta^{\prime}\left[-\partial_{t}\theta+\nu_{0}\partial^{2} \theta+\lambda_{0}\partial_{i}(\theta\partial_{i}h)\right]+\mathcal{S}_{h}(h^ {\prime},h), \tag{14}\] \[\mathcal{S}_{h}(h^{\prime},h) =\frac{1}{2}h^{\prime}D_{f}h^{\prime}+h^{\prime}\left[-\partial_ {t}+\kappa_{0}k^{2-\eta}\right]h. \tag{15}\] Here \(D_{f}\) is the correlator (8), \(\theta\) is the density field, \(h\) is the height field and \(\theta^{\prime}\), \(h^{\prime}\) are the corresponding Martin-Siggia-Rose response fields; all the needed integrations over their arguments \(x=\{t,\mathbf{x}\}\) and summations over repeated indices are implied. 
The field theoretic formulation means that various correlation and response functions of the original stochastic problem are represented by functional averages with the weight \(\exp\mathcal{S}(\Phi)\). The field \(h^{\prime}\) can easily be removed by Gaussian integration, then \(\mathcal{S}_{h}(h^{\prime},h)\) would be replaced with \(\mathcal{S}_{h}(h)=-hD_{h}^{-1}h/2\) with \(D_{h}\) from (9), but the expanded representation (15) is more convenient for the renormalization purposes. The constant \(\lambda_{0}\) can be removed by rescaling of the fields \(h,h^{\prime}\) and other parameters. Thus, in the following, with no loss of generality, we set \(\lambda_{0}=1\). The model (14), (15) corresponds to Feynman diagrammatic technique with bare propagators \(\langle\theta^{\prime}\theta\rangle_{0}\), \(\langle hh\rangle_{0}\), \(\langle h^{\prime}h\rangle_{0}\) (the latter does not enter into relevant diagrams) and the only vertex \(\theta^{\prime}\partial_{i}(\theta\partial_{i}h)\). It is well known that analysis of ultraviolet (UV) divergences is based on analysis of canonical dimensions, see, e.g. [19] (Secs. 1.15, 1.16). In contrast to conventional static models, dynamic ones have two independent scales: a time scale \([T]\) and a spatial scale \([L]\); see [19] (Secs. 1.17, 5.14). Thus, the canonical dimension of any quantity \(F\) (a field or a parameter) is determined by two numbers: the frequency dimension \(d_{F}^{\omega}\) and the momentum dimension \(d_{F}^{k}\): \[[F]\sim[T]^{-d_{F}^{\omega}}\left[L\right]^{-d_{F}^{k}}.\] The dimensions are found from obvious normalization conditions \[d_{\mathbf{k}}^{k}=-d_{\mathbf{x}}^{k}=1,\quad d_{\mathbf{k}}^{\omega}=d_{ \mathbf{x}}^{\omega}=0,\quad d_{\omega}^{k}=d_{t}^{k}=0,\quad d_{\omega}^{ \omega}=-d_{t}^{\omega}=1\] and from the requirement that all terms in the action functional be dimensionless with respect to both the canonical dimensions separately. The total canonical dimension is defined as \(d_{F}=d_{F}^{k}+2d_{F}^{\omega}\) (the coefficient 2 follows from the relation \(\partial_{t}\propto\partial^{2}\) in the free theory). In the renormalization procedure, \(d_{F}\) plays the same role as the conventional (momentum) dimension does in static models; see Sec. 5.14 in [19]. Canonical dimensions of all the fields and parameters of our model are given in Table 1. It also involves renormalized parameters (without subscript "0") and the reference mass \(\mu\), an additional parameter of the renormalized theory; they all will appear later on. Note that for the fields \(\theta^{\prime}\), \(\theta\) all these dimensions can be unambiguously defined only for the product \(\theta^{\prime}\theta\). Formally, this follows from the invariance of the action functional (14) under the dilatation \(\theta^{\prime}\to\lambda\theta^{\prime}\), \(\theta\to\lambda^{-1}\theta\). As can be seen from Table 1, the model becomes logarithmic (both coupling constants \(g_{0}\), \(u_{0}\) become dimensionless) for \(\eta=y=0\) (or equivalently for \(\varepsilon=y=0\)) and arbitrary \(d\).3 According to general strategy of renormalization, the exponents \(\eta\), \(y\) or \(\varepsilon\) that "measure" deviation from logarithmicity, should be treated as formal small parameters of the same order. The UV divergences manifest themselves as singularities at \(y\to 0\), _etc._ in the correlation functions; in the one-loop approximation, they have the form of simple poles. 
Footnote 3: Although \(u_{0}\) is not an expansion parameter in perturbation theory, its renormalized counterpart is dimensionless, enters into renormalization constants and RG functions and should be treated on equal footing with \(g_{0}\). We also recall that \(\lambda_{0}=1\). The total canonical dimension of a 1-irreducible Green's function is given by \[d_{\Gamma}=(d+2)-\sum_{\Phi}d_{\Phi}N_{\Phi}, \tag{16}\] where \(N_{\Phi}\) are the numbers of the fields \(\Phi=\{\theta^{\prime},h^{\prime},\theta,h\}\) entering the Green's function and \(d_{\Phi}\) are their total canonical dimensions. The formal index of divergence \(\delta_{\Gamma}\) is the total dimension of the Green's function in the logarithmic theory (\(y=\eta=0\)), that is, \(\delta_{\Gamma}=d_{\Gamma}|_{y=\eta=0}\). Superficial UV divergences, whose removal requires introducing counterterms, can be present in the Green's function \(\Gamma\) if \(\delta_{\Gamma}\) is a non-negative integer. When analyzing the divergences in the model (14), (15), the following additional considerations should be taken into account; see, e.g. [19] (Sec. 5.15) and [20] (Sec. 1.4). (i) For any dynamic model of this type, all the 1-irreducible functions without the response fields contain closed circuits of retarded propagators \(\langle\theta\theta^{\prime}\rangle_{0}\) and vanish. Thus, it is sufficient to consider the functions with \(N_{\theta^{\prime}}+N_{h^{\prime}}\geq 1\). (ii) For all non-vanishing functions, \(N_{\theta^{\prime}}=N_{\theta}\) (otherwise no diagrams can be constructed). Formally, this is a consequence of the invariance of the action functional (14) with respect to dilatation \(\theta^{\prime}\to\lambda\theta^{\prime}\), \(\theta\to\lambda^{-1}\theta\). (iii) Using integration by parts, one derivative in the vertex can be moved onto the field \(\theta^{\prime}\), i.e. \(\theta^{\prime}\partial_{i}(\theta\partial_{i}h)\simeq-(\partial_{i}\theta^{ \prime})(\partial_{i}h)\theta\). Thus, in any 1-irreducible diagram, each external field \(\theta^{\prime}\) or \(h\) "releases" an external momentum, and the real index of divergence decreases by the corresponding number of units, i.e. \(\delta^{\prime}=\delta-N_{\theta^{\prime}}-N_{h}\). Furthermore, these fields enter the counterterms only in the form of spatial gradients. This observation excludes the counterterms \(\theta^{\prime}\partial_{i}\theta\) and \((\theta^{\prime}\theta)^{2}\), the latter allowed by the formal index for \(d\leq 2\). (iv) It is clear that the fields \(\theta^{\prime}\), \(\theta\) do not affect the statistics of the field \(h\). In the field theoretic terms, this "passivity" means that any 1-irreducible Green's function with \(N_{\theta^{\prime}}=0\), \(N_{\theta}>0\) and \(N_{h}+N_{h^{\prime}}>0\) vanishes: no corresponding diagrams can be constructed.
Taking into account these considerations one obtains: \[\delta=(d+2)-d(N_{\theta}+N_{h^{\prime}}),\quad\delta^{\prime}=(d+2)-(d+1)N_{ \theta}-N_{h}-dN_{h^{\prime}} \tag{17}\] \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \(F\) & \(\theta^{\prime}\theta\) & \(h^{\prime}\) & \(h\) & \(\nu_{0}\), \(\nu\) & \(g_{0}\) & \(u_{0}\) & \(g\), \(u\) & \(\mu\),\(m\) \\ \hline \hline \(d_{F}^{k}\) & \(d\) & \(d+2\) & \(-2\) & \(-2\) & \(\varepsilon\) & \(\eta\) & 0 & 1 \\ \hline \(d_{F}^{\omega}\) & 0 & \(-1\) & 1 & 1 & 0 & 0 & 0 & 0 \\ \hline \(d_{F}\) & \(d\) & \(d\) & 0 & 0 & \(\varepsilon\) & \(\eta\) & 0 & 1 \\ \hline \end{tabular} \end{table} Table 1: Canonical dimensions for the action functional (14), (15). (we recall that \(N_{\theta^{\prime}}=N_{\theta}\), so that only \(N_{\theta}\) is indicated). Then the straightforward analysis shows that the superficial divergences in our model are present only in the 1-irreducible functions \(\langle\theta^{\prime}\theta\rangle\) and \(\langle\theta^{\prime}\theta h\rangle\), and the corresponding counterterms necessarily contract to the forms \(\theta^{\prime}\partial^{2}\theta\) (\(\delta=2\), \(\delta^{\prime}=1\)) and \((\partial_{i}\theta^{\prime})(\partial_{i}h)\theta\) (\(\delta=2\), \(\delta^{\prime}=0\)). Such terms are already present in the action (14), which means that our model (14), (15) is multiplicatively renormalizable with only two independent renormalization constants \(Z_{1}\) and \(Z_{2}\). The renormalized action has the form \[{\cal S}_{R}(\Phi)=\theta^{\prime}\left[-\partial_{t}\theta+Z_{1}\nu\partial^ {2}\theta+Z_{2}\partial_{i}(\theta\partial_{i}h)\right]+{\cal S}_{hR}(h^{ \prime},h), \tag{18}\] which is naturally reproduced as renormalization of the field \(h\) and the coefficient \(\nu_{0}\); no renormalization of the product \(\theta\theta^{\prime}\) is needed: \[\nu_{0}=\nu Z_{\nu},\quad Z_{\nu}=Z_{1},\quad Z_{h}=Z_{2},\quad Z_{\theta \theta^{\prime}}=1. \tag{19}\] The functional (15) is not renormalized, \({\cal S}_{hR}(h^{\prime},h)={\cal S}_{h}(h^{\prime},h)\), but it should be expressed in renormalized variables taking into account Eqs. (8) and (9): \[g_{0}=g\mu^{\varepsilon}Z_{g},\quad u_{0}=u\mu^{\eta}Z_{u},\quad\kappa_{0}=\kappa Z_{\kappa}, \tag{20}\] where the renormalization mass \(\mu\) is introduced so that the renormalized couplings \(g\) and \(u\) are completely dimensionless. Then it follows from the absence of renormalization of \({\cal S}_{h}\) that \[Z_{h}Z_{h^{\prime}}=1,\quad Z_{h^{\prime}}^{2}Z_{g}Z_{u}Z_{v}^{3}=1,\quad Z_{u }Z_{v}=Z_{\kappa}=1. \tag{21}\] Along with (19) this finally gives the following relations: \[Z_{g}=Z_{2}^{2}Z_{1}^{-2},\quad Z_{u}=Z_{1}^{-1},\quad Z_{v}=Z_{1}. \tag{22}\] We calculated the renormalization constants \(Z_{1}\) and \(Z_{2}\) in the leading one-loop approximation (the first order of the perturbative expansion in \(g\)). It is sufficient to find them for \(\eta=0\), because the anomalous dimensions in the minimal subtraction (MS) renormalization scheme are independent of the parameters like \(\eta\) and \(y\), while the exponent \(y\) alone provides UV regularization. Then one obtains: \[Z_{1}=1-\frac{g}{y}\,\frac{C_{d}}{2d}\,\frac{(u-1)}{(u+1)^{2}}\,,\quad Z_{2}=1+ \frac{g}{y}\,\frac{C_{d}}{2d}\,\frac{1}{(u+1)^{2}}, \tag{23}\] with the higher-order corrections in \(g\). Here \(C_{d}=S_{d}/(2\pi)^{d}\), \(S_{d}=2\pi^{d/2}/\Gamma(d/2)\) is the surface area of the unit sphere in \(d\)-dimensional space.
It is convenient to absorb overall factors into the coupling constant \(g\), which gives \[Z_{1}=1-\frac{g}{y}\,\frac{(u-1)}{(u+1)^{2}}\,,\quad Z_{2}=1+\frac{g}{y}\, \frac{1}{(u+1)^{2}}. \tag{24}\] For \(\eta\neq 0\), the expressions (23), (24) would be infinite sums; see, e.g. [21; 22]. **4. RG equations, RG functions, and fixed points** Since our model is multiplicatively renormalizable, the corresponding RG equations are derived in a standard fashion. In particular, for a certain renormalized (full or connected) Green's function \(W^{R}\) the RG equation reads: \[\left\{{\cal D}_{\mu}+\beta_{g}\partial_{g}+\beta_{u}\partial_{u}-\gamma_{\nu}{ \cal D}_{\nu}-\sum_{\Phi}N_{\Phi}\gamma_{\Phi}\right\}\,W^{R}(g,u,\nu,\mu;\dots )=0. \tag{25}\] Here the ellipsis stands for other variables (times and coordinates or frequencies and momenta), \(\partial_{x}=\partial/\partial x\), \({\cal D}_{x}=x\partial_{x}\) for any variable \(x\) and the sum runs over all fields \(\Phi=\{\theta^{\prime},h^{\prime},\theta,h\}\). The coefficients in the RG differential operator (25) - the anomalous dimensions \(\gamma\) and the \(\beta\) functions - are defined as: \[\gamma_{\alpha}=\widetilde{\cal D}_{\mu}\,\ln Z_{\alpha}\quad\mbox{for any}\,\,\alpha,\quad\beta_{g}=\widetilde{\cal D}_{\mu}\,g,\quad \beta_{u}=\widetilde{\cal D}_{\mu}\,u, \tag{26}\] where \(\widetilde{\cal D}_{\mu}\) is the differential operation \({\cal D}_{\mu}\) at fixed bare (unrenormalized) parameters; see, e.g. Secs. 1.24, 1.25 in the monograph [19]. From (19)-(22) and definitions (26) it follows that \[\gamma_{\theta\theta^{\prime}}=0,\quad\gamma_{h}=-\gamma_{h^{ \prime}}=\gamma_{2},\quad\gamma_{g}=2\gamma_{2}-2\gamma_{1},\quad\gamma_{u}=- \gamma_{v}=-\gamma_{1}, \tag{27}\] \[\beta_{g}=g[-\varepsilon-\gamma_{g}],\quad\beta_{u}=u[-\eta- \gamma_{u}]. \tag{28}\] From (26)-(28) and the one-loop result (24) one obtains: \[\gamma_{1}=g\,\frac{u-1}{(u+1)^{2}},\quad\gamma_{2}=-g\,\frac{1}{(u+1)^{2}}\,, \tag{29}\] \[\beta_{g}=g\left[-\varepsilon+g\,\frac{2u}{(u+1)^{2}}\right]\,,\qquad\beta_{u }=u\left[-\eta+g\,\frac{u-1}{(u+1)^{2}}\right]\,, \tag{30}\] with the higher-order corrections in \(g\). The IR asymptotic behaviour of the Green's functions is determined by IR attractive fixed points of the corresponding RG equations. The coordinates of fixed points \(g^{*}\), \(u^{*}\) are found from the requirement that all the \(\beta\) functions vanish simultaneously: \[\beta_{g}(g^{*},u^{*})=\beta_{u}(g^{*},u^{*})=0. \tag{31}\] The type of a fixed point is determined by the matrix of derivatives \(\Omega_{ij}=\partial_{i}\beta_{j}(g^{*})\) at the given point \(g_{i}=\{g,u\}\): for an IR attractive point all the eigenvalues should have positive real parts. Analysis of the expressions (30) reveals four fixed points: (i) Gaussian (free) fixed point: \[g^{*}=0\,,\quad u^{*}=0; \tag{32}\] (ii) nontrivial fixed point: \[g^{*}=\frac{2(\varepsilon-\eta)^{2}}{\varepsilon-2\eta}\,,\quad u^{*}=\frac{ \varepsilon}{\varepsilon-2\eta}. \tag{33}\] The point (i) is IR attractive for \(\varepsilon<0\), \(\eta<0\), while the point (ii) is IR attractive for \(\varepsilon>0\), \(\eta<\varepsilon/2\). Two more points are found in the following way. In order to explore the limiting case \(u\to\infty\) with \(g/u\) fixed, we have to pass to new variables: \(g^{\prime}\equiv g/u\) and \(w\equiv 1/u\).
For this case we obtain \[\beta_{g^{\prime}}=g^{\prime}\left[\eta-\varepsilon+\frac{g^{\prime}}{w+1} \right]\,,\qquad\beta_{w}=w\left[\eta+g^{\prime}\,\frac{w-1}{(w+1)^{2}}\right]\,. \tag{34}\] Finding the zeros of the \(\beta\) functions, we find two additional fixed points: (iii) Gaussian (free) fixed point: \[g^{\prime*}=0\,,\quad w^{*}=0; \tag{35}\] (iv) nontrivial fixed point: \[g^{\prime*}=\varepsilon-\eta\,,\quad w^{*}=0. \tag{36}\] The point (iii) is IR attractive if \(\varepsilon>0\,,\varepsilon/2<\eta<\varepsilon\), and the point (iv) is IR attractive if \(\varepsilon<0\,,\eta>0\) or \(\varepsilon>0\,,\eta>\varepsilon\). The general stability pattern of the fixed points in the \(\varepsilon\)-\(\eta\) plane is shown in Fig. 1. In the one-loop approximation, the regions of IR stability for all the points are given by sectors that cover the full plane without gaps or overlaps between them.

Figure 1: Regions of stability of the fixed points (i)–(iv).

Some remarks are in order. Clearly, the Gaussian points correspond to cases in which the dynamics of the field \(\theta\) is not affected by the statistics of the height field \(h\) (only in the leading order of the IR asymptotic behaviour!). In these cases, we deal with an ordinary random walk. The point (iv) corresponds to the limiting case (12) when the field \(h\), in comparison with \(\theta\), behaves as if it were \(\delta\)-correlated in time. However, we did not find a nontrivial point that would correspond to the frozen limit (13). This follows from the fact that the function \(\beta_{g}\) in (30) becomes trivial for \(u\to 0\): \(\beta_{g}=-\varepsilon g\). A similar triviality was observed earlier in models of diffusion in time-independent potential vector fields where it was shown to be exact in all orders of perturbation theory [23; 24]. Since those models have a close formal resemblance with the limit (13) of our model and its special case (4), (6), we believe that in the latter cases \(\beta_{g}\) is also trivial exactly. **5. Critical dimensions and scaling behaviour** Existence of IR attractive fixed points of the RG equations implies existence of the scaling behaviour of the correlation functions in the IR range. In dynamical models, the critical dimension of any quantity \(F\) (a field or a parameter) is given by the expression (see, e.g. Secs. 5.16 and 6.7 in [19] and Sec. 2.1 in [20]): \[\Delta_{F}=d^{k}_{F}+\Delta_{\omega}d^{\omega}_{F}+\gamma^{*}_{F},\quad\Delta _{\omega}=2-\gamma^{*}_{\nu} \tag{37}\] (with the standard normalization convention that \(\Delta_{\mathbf{k}}=-\Delta_{\mathbf{x}}=1\)). Here and below \(\gamma^{*}\) denotes the value of the anomalous dimension \(\gamma\) at a fixed point. For the Gaussian points (i) and (iii), one has \[\Delta_{\theta^{\prime}\theta}=d,\quad\Delta_{\omega}=2. \tag{38}\] For the fixed point (ii), one obtains the exact results from the relations (27) and definitions (28): \[\Delta_{\theta^{\prime}\theta}=d,\quad\Delta_{\omega}=2-\eta. \tag{39}\] As already mentioned, the point (iv) corresponds to the limit (12), where the propagator \(\langle hh\rangle_{0}\) becomes \(\delta\)-correlated in time. As a result, closed circuits of retarded propagators \(\langle\theta\theta^{\prime}\rangle_{0}\) appear in almost all diagrams relevant for the renormalization procedure and they therefore vanish. The only exception is the one-loop diagram contributing to \(Z_{1}\). Thus, one has \(Z_{2}=1\) identically, while \(Z_{1}\) is given exactly by the one-loop expression, cf.
the discussion of Kraichnan's rapid-change model of passive scalar advection [25]. Then one readily derives exact expressions for the critical dimensions: \[\Delta_{\theta^{\prime}\theta}=d,\quad\Delta_{\omega}=2-\varepsilon+\eta. \tag{40}\] As an illustrative application, consider the mean-square distance of a random walker on a rough surface. For a particle that started moving at \(t=0\) from the origin \(\mathbf{x}=0\), it is given by: \[R^{2}(t)=\int d\mathbf{x}\,x^{2}\langle\theta(t,\mathbf{x})\theta^{\prime}(0, \mathbf{0})\rangle, \tag{41}\] where \(t>0\) is a later time and \(\mathbf{x}\) is the corresponding current position. Substituting the scaling representation for the linear response function \[\langle\theta(t,\mathbf{x})\theta^{\prime}(0,\mathbf{0})\rangle\simeq r^{- \Delta_{\theta\theta^{\prime}}}\,F(tr^{-\Delta_{\omega}}) \tag{42}\] gives: \[R^{2}(t)\propto t^{(d+2-\Delta_{\theta\theta^{\prime}})/\Delta_{\omega}}. \tag{43}\] Taking into account the exact relation \(\Delta_{\theta^{\prime}\theta}=d\), valid for all fixed points (i)-(iv), one arrives at the spreading law \[R^{2}(t)\propto t^{2/\Delta_{\omega}}, \tag{44}\] with the exact expressions \(\Delta_{\omega}=2\) for the points (i), (iii), \(\Delta_{\omega}=2-\eta\) for (ii) and \(\Delta_{\omega}=2-\varepsilon+\eta\) for (iv). ## 6 Conclusion We studied a model of a random walk of a particle on a rough fluctuating surface described by the Fokker-Planck equation for a particle in a constant gravitational field, while the surface was modelled by the (generalized) Edwards-Wilkinson model. The full stochastic problem (2), (3), (7), (8) is mapped onto a multiplicatively renormalizable field theoretic model (14), (15). The corresponding RG equations reveal two Gaussian (free) and two nontrivial fixed points, which means that the system exhibits various types of IR scaling behaviour (long times, large distances). Although the practical calculation is confined within the leading one-loop approximation, the main critical dimensions are found exactly. As an illustrative example we considered the mean-square displacement of a walking particle (in another interpretation, the radius of the particles' cloud). It shows that the particle is not trapped in a finite area but travels all across the system with a spreading law similar to the ordinary random walk but, in general, with different exponents; see (44) and the text below it. As one can see, even a comparatively simple model demonstrates interesting types of IR behaviour. Thus, it is interesting to study more involved situations. There are several directions of possible generalization. Linear stochastic equations like (4), (7) (corresponding to Gaussian statistics for the height field) can be replaced by nonlinear models like the Kardar-Parisi-Zhang [2] or Pavlik's [5,8] ones. On some occasions, motion of a particle is not an ordinary random walk (1) but is described, e.g., by Levy flights; see, e.g. [15]. This possibility is supported by the ideas of self-organized criticality that the underlying surface evolves via avalanches [18], while the particle can "glide" upon the surface. If so, it is natural to replace the Laplace operator in the Fokker-Planck equation (2) by a fractional derivative: \(-\partial^{2}\sim k^{2}\to k^{2-\eta^{\prime}}\) with a certain new exponent \(\eta^{\prime}\). It is especially interesting to include anisotropy (as a consequence of an overall tilt of the surface).
This can be done by describing the field \(h\) by the Pastor-Satorras-Rothman model for eroding landscape [9,10] or the Hwa-Kardar model of a running sandpile [26,27]. This work remains for the future and is partly in progress. ## Acknowledgments The Authors are indebted to M.A. Reiter for discussion. The work of P.I.K. was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS", project 22-1-3-33-1 and by the Ministry of Science and Higher Education of the Russian Federation, agreement 075-15-2022-287.
2310.17620
Radar-Only Off-Road Local Navigation
Off-road robotics have traditionally utilized lidar for local navigation due to its accuracy and high resolution. However, the limitations of lidar, such as reduced performance in harsh environmental conditions and limited range, have prompted the exploration of alternative sensing technologies. This paper investigates the potential of radar for off-road local navigation, as it offers the advantages of a longer range and the ability to penetrate dust and light vegetation. We adapt existing lidar-based methods for radar and evaluate the performance in comparison to lidar under various off-road conditions. We show that radar can provide a significant range advantage over lidar while maintaining accuracy for both ground plane estimation and obstacle detection. And finally, we demonstrate successful autonomous navigation at a speed of 2.5 m/s over a path length of 350 m using only radar for ground plane estimation and obstacle detection.
Timothy Overbye, Srikanth Saripalli
2023-10-26T17:39:57Z
http://arxiv.org/abs/2310.17620v1
# Radar-Only Off-Road Local Navigation ###### Abstract Off-road robotics have traditionally utilized lidar for local navigation due to its accuracy and high resolution. However, the limitations of lidar, such as reduced performance in harsh environmental conditions and limited range, have prompted the exploration of alternative sensing technologies. This paper investigates the potential of radar for off-road local navigation, as it offers the advantages of a longer range and the ability to penetrate dust and light vegetation. We adapt existing lidar-based methods for radar and evaluate the performance in comparison to lidar under various off-road conditions. We show that radar can provide a significant range advantage over lidar while maintaining accuracy for both ground plane estimation and obstacle detection. And finally, we demonstrate successful autonomous navigation at a speed of 2.5 m/s over a path length of 350 m using only radar for ground plane estimation and obstacle detection. ## I Introduction Off-road robotics have emerged as a significant area of research in recent years. Perception of the off-road environment is one of the greatest challenges to successful autonomous operation due to its unstructured nature and complexity. Additionally, real-world operation must also account for environmental effects such as dust and for weather such as rain and fog. For safe and effective operation the perception system should be able to identify terrain features such as slope and roughness, and possible obstacles such as trees, other vehicles, dense vegetation, etc. Traditionally, lidar and vision have been the primary sensors used in off-road robotics for local navigation. Lidar offers high-resolution spatial data that enables robots to perceive and navigate their surroundings, and vision is capable of providing dense semantic information. However, both lidar and vision have inherent limitations that hinder their effectiveness in certain off-road scenarios. For instance, the performance of both lidar and vision can be significantly affected by adverse environmental conditions such as dust, fog, and rain. Lidar also cannot provide the same resolution as vision, while vision is still unable to provide accurate depth information. As a result, there is a growing interest in exploring alternative sensing technologies that can complement existing sensors in off-road robotics. In this paper we look at radar sensors, which offer potential advantages over lidar in specific situations. Radar can provide longer range sensing capabilities, making it particularly useful for higher speed operation where extended perception range is crucial. Furthermore, radar has the ability to penetrate dust and light vegetation, which can be beneficial in off-road environments. In this paper, we focus on off-road local navigation and assess the performance of radar in comparison to lidar. By adapting methods used for lidar we show that off-road navigation is possible with radar alone. ## II Related Work There has been a lot of work on radar SLAM, on using radar to detect hidden obstacles, and on the performance of radar in difficult environments such as smoke, rain, and snow. There has also been some work looking at learning-based approaches to obstacle identification using radar. Radar has also been used to supplement various other sensors. However, there has not been significant work using radar as the only sensor for autonomous navigation.
Fig. 1: Our vehicle navigating an off-road trail based on a radar map.

Much of the recent work with radar for autonomous systems has been on the use of radar for Simultaneous Localization and Mapping (SLAM). The work by [1] presents two distinct approaches for radar-based SLAM. One approach employs the Fourier-Mellin transform for registering radar images in a sequence, while the other capitalizes on movement distortions in data collected from a rotating range sensor to achieve localization and mapping. The authors demonstrate the effectiveness of both methods on real-world data. Many of these studies have also looked at a comparison of radar and lidar for SLAM. The work by [2] presents a principled comparison of the accuracy of a novel radar sensor against that of a Velodyne lidar, for localization and mapping. The study by [3] presents a comparative evaluation of millimeter-wave radar and two-dimensional scanning lasers in dusty and rainy conditions, assessing sensor performance and their effects on terrain mapping and localization. [4] explored the fusion of lidar and radar data for SLAM in harsh environments. They show that while radar alone can achieve good results in fog, lidar alone struggles, although better results are achieved with a fusion of radar and lidar. A more recent study by [5] extends the radar SLAM topic by comparing three localization systems: radar-only, lidar-only, and a cross-modal radar-to-lidar system across varying seasonal and weather conditions. Their comparison shows that, with modern algorithms, lidar localization may not perform as poorly in bad weather as other studies have shown. However, they do find that radar localization can achieve competitive accuracy to lidar with a much smaller map. Recent work has also produced good results by taking existing high resolution lidar maps and using radar to localize the vehicle on these maps [6][7]. These methods, while primarily of benefit to on-road use cases, allow the use of preexisting lidar maps rather than having to create new radar maps for radar localization. Also of interest is recent work on obstacle detection using radar. This is of particular interest for off-road robotics due to radar's ability to penetrate some types of vegetation as well as dust and rain. Earlier work [8] presents the use of radar as a redundant method of sensing, especially under dusty conditions where lidar performs poorly. This work is continued in [9] where radar is used to identify the ground in low-visibility conditions such as dust or looking into the sun at dusk or dawn. Their methods produce good results in situations where lidar or vision might be blinded but are limited to short range sensing. The work in [10] presents initial experimental results of radar's ability to detect obstacles obscured by vegetation, enhancing the perception system's capabilities. Another work [11] details two strategies for terrain traversability assessment: one using stereo data and the other using an integrated radar-stereo system to detect and characterize obstacles, showing the usefulness of radar in identifying and assessing obstacles in outdoor environments. A similar problem is presented in [11] where radar and stereo vision are used to both estimate the ground plane and detect obstacles. In this work radar is primarily used for obstacle detection with the stereo camera using a machine learning algorithm to estimate the ground plane.
The work in [12] compares the quality of lidar and radar for obstacle detection in a dusty off-road environment. Here the radar is proposed as a backup system, with the lidar handling most of the work. They find that the lidar produces better results but is blocked by dust; when this happens, the radar can take over. Finally, the work in [13] demonstrates the benefits associated with lidar and radar sensor fusion, particularly in detecting objects partially obscured by light to medium vegetation. In this paper we take a geometric approach based on our previous work G-VOM [14], an open source lidar mapping package, to use radar alone for off-road navigation. Despite the differences in the sensor types, this can still achieve good results. We also discuss some of the advantages that our method provides over lidar-only navigation. ## III Approach ### _Sensors_ Before we discuss the differences between radar and lidar it is worth taking a brief look at how they both work and the differences in the returned data. Both technologies rely on time-of-flight measurements of electromagnetic radiation. Lidar typically functions within the infrared section of the spectrum, while radar operates at longer wavelengths, within the radio band of the spectrum at 77-81 GHz. This attribute allows radar to penetrate some objects that are opaque to lidar, such as environmental effects including dust and rain, and low-density obstacles such as vegetation. However, more important for this work is the difference in the shape of the emitted beam and how the returns are processed into data. Lidar beams can be thought of as rays with very little width; although they exhibit a slight divergence, it is minimal, just 0.18\({}^{\circ}\) in our case. Radar beams, by contrast, spread into much wider cones (Fig. 2). This also means that, at long ranges or with small objects, where lidar might not receive any returns, radar will still see the object, although the return intensity will be lower for smaller objects. Additionally, due to the ability of radar waves to "see through" objects, multiple returns will be picked up for each emitted beam. Fig. 2 shows how this works for both sensors. On the left is a radar looking at the blue circle; five beams are shown, and each beam is broken up into five range bins. For each range bin the radar reports the intensity of the reflected beam. The two beams on the right that don't intersect the object all have low returns. The six bins that intersect the object provide a higher return. And finally, the three bins behind the object provide a lower return, effectively the shadow of the object. The top shows this with larger objects where lidar receives multiple returns and the object covers multiple radar bins. The bottom shows a small object where lidar receives no returns but the object still occupies part of a radar bin.

Fig. 2: A figure showing the difference between a radar cone (left) and a lidar beam (right) and how each sees objects (blue circle). Shown with a large object on top and a small object on bottom.

This does mean that, in every radar scan, every range bin of every azimuth returns intensity data between 0.0 and 1.0. However, most of this data will be nothing but background noise. So before any mapping is done a threshold is applied to the radar data. For this work we used a threshold of 0.26 for autonomous operation, with any returns below the threshold discarded. This threshold does allow some false positive returns; however, we also provide some analysis with a threshold of 0.31. This higher threshold reduces the false positive values but also produces some false negatives.
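As a minimal sketch of this thresholding step (not the exact code used on the vehicle), a single intensity scan can be converted into 2D points as follows. The array shape, helper names and the random example scan are illustrative; the azimuth count and range-bin size match the Navtech sensor described in the next section.

```python
# Illustrative conversion of one thresholded radar scan into 2D points.
import numpy as np

def radar_scan_to_points(scan, bin_size=0.044, threshold=0.26):
    # scan: (n_azimuths, n_bins) array of return intensities in [0, 1].
    n_azimuths, n_bins = scan.shape
    azimuths = np.linspace(0.0, 2.0 * np.pi, n_azimuths, endpoint=False)
    ranges = (np.arange(n_bins) + 0.5) * bin_size
    az_idx, bin_idx = np.nonzero(scan > threshold)   # keep only returns above the threshold
    r = ranges[bin_idx]
    x = r * np.cos(azimuths[az_idx])
    y = r * np.sin(azimuths[az_idx])
    intensity = scan[az_idx, bin_idx]
    return np.stack([x, y, intensity], axis=1)       # (N, 3): x, y, intensity

# Example with a random 400-azimuth scan; real scans come from the radar driver.
points = radar_scan_to_points(np.random.rand(400, 2000))
```

In the full pipeline, the retained points are then motion-compensated with the vehicle odometry before being accumulated into the voxel map, as described in the next section.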
Our system uses an Ouster OS1-128 lidar sensor, with a horizontal resolution of 1024 azimuths and a scanning rate of 20 Hz, and a Navtech CIR-DEV-X radar sensor, with a horizontal resolution of 400 azimuths, a scanning rate of 4 Hz, and a range bin size of 0.044 m, mounted on a Clearpath Robotics Warthog (Fig. 5). Note that while the Ouster lidar provides a 3D scan, the Navtech radar only provides a 2D scan. No pre-processing was applied to the lidar data. For the radar data, we applied a manually set threshold to filter out noise and used odometry information to compensate for the vehicle's motion. ### _Mapping_ We adapted the GPU-accelerated Voxel Local Mapping (G-VOM) system [14], originally developed for lidar-based mapping, to work with radar data. The primary modifications to G-VOM relate to the additional importance of intensity information and to ground plane estimation. First, while G-VOM already stores the number of hits per voxel, in our modification it also stores the average radar intensity for each voxel. This intensity is used to determine whether a voxel is solid or not. Voxels with an average intensity above a set threshold are assumed to be solid, while voxels below the threshold are considered passable. Fig. 3 shows an example of the average intensity map. Next is the ground plane estimation: for each vertical column in the voxel map, a weighted average of voxel heights is taken. The weighting for each voxel is the product of the average intensity and the number of hits for that voxel. This approach accounts for the unique characteristics of radar data and enables more accurate ground estimation. Finally, we need to determine where obstacles are. Solid voxels above the ground by more than an obstacle height threshold are classified as obstacles to be avoided during navigation. Fig. 4 shows both the resulting elevation map and the obstacle voxels derived from the map in Fig. 3. Once all these terrain metrics have been computed, they are used to generate several output maps: first, a validity map that indicates whether there is valid data in each cell; next, a slope map for valid cells; and finally, an obstacle map. Each of these maps is assigned a cost and a weight, and the final cost map is obtained by summing the weighted maps. The generation of the output maps and the cost map is discussed in more detail in [14]. ## IV Results ### _System Overview_ We implemented our system on a Clearpath Robotics Warthog, shown in Fig. 5, with an Ouster OS1-128 lidar and a Navtech CIR-DEV-X radar sensor. The lidar was mounted level with the vehicle and the radar was mounted tilted down at a 2.5\({}^{\circ}\) angle. Since the radar only provides a 2D scan of the environment, the motion of the vehicle was used to create the 3D map. The testing environment was the Texas A&M University RELLIS Campus, depicted in Fig. 6. We employed Direct Lidar Odometry (DLO) [15] to supply odometry data. Our adaptation of G-VOM was configured with a voxel resolution of 0.4 m and a map size of 256x256x64 voxels. The voxel map was created using only data from the radar, with a return intensity threshold of 0.26. Finally, we incorporated the planning methods described in our previous work [16],[17], alongside the controller presented in [18]. Fig. 4: An example of the voxel elevation map and obstacle map. Black voxels are obstacles; colored voxels are the elevation map, with red showing low slope and yellow showing high slope. Fig. 3: An example of the voxel return density map. Each voxel is colored with the average radar return intensity for that voxel: red is low intensity, green is medium, and blue is high. Voxels are also displayed with 90% transparency.
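To make the radar adaptation concrete, the per-column ground estimate and obstacle test described in the Mapping subsection can be sketched as follows. This is a minimal illustration rather than the actual G-VOM implementation: the 0.4 m voxel size and the weighting by average intensity times hit count come from the text above, while the array layout, solidity threshold, and obstacle height threshold are assumed values.

```python
import numpy as np

VOXEL_SIZE = 0.4         # m, voxel resolution used in our configuration
SOLID_INTENSITY = 0.5    # assumed average-intensity threshold for a solid voxel
OBSTACLE_HEIGHT = 0.6    # assumed height above ground that marks an obstacle (m)

def radar_voxel_terrain(hits, intensity_sum):
    """hits, intensity_sum: (X, Y, Z) arrays accumulated over thresholded radar scans."""
    avg_intensity = np.divide(intensity_sum, hits,
                              out=np.zeros_like(intensity_sum), where=hits > 0)
    solid = avg_intensity > SOLID_INTENSITY

    # Ground estimate per (x, y) column: weighted average of voxel centre heights,
    # weighted by (average intensity * number of hits), as described above.
    z_centers = (np.arange(hits.shape[2]) + 0.5) * VOXEL_SIZE
    weights = avg_intensity * hits
    weight_sum = weights.sum(axis=2)
    ground = np.divide((weights * z_centers).sum(axis=2), weight_sum,
                       out=np.zeros(hits.shape[:2]), where=weight_sum > 0)

    # Obstacles: solid voxels sufficiently far above the local ground estimate.
    height_above_ground = z_centers[None, None, :] - ground[:, :, None]
    obstacles = solid & (height_above_ground > OBSTACLE_HEIGHT)
    return ground, obstacles

# Illustrative usage on a small random grid:
hits = np.random.randint(0, 5, size=(16, 16, 8)).astype(float)
ground, obstacles = radar_voxel_terrain(hits, np.random.rand(16, 16, 8) * hits)
```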
Fig. 2: A figure showing the difference between a radar cone (left) and a lidar beam (right) and how each sees objects (blue circle). Shown with a large object on top and a small object on bottom. ### _Autonomous Navigation_ The data presented in the following sections were collected during autonomous operation of the vehicle. We utilized prerecorded GPS waypoints, spaced 20 m apart, to provide the global path. The vehicle traversed this path at a speed of 2.5 m/s, resulting in a total driven path length of 350 m. Fig. 7 displays the actual path driven by the vehicle in relation to the given GPS waypoints. The terrain (Fig. 6) was a mix of grass and gravel with some dense vegetation and solid obstacles, both natural and artificial. The vehicle completed the entire path successfully without requiring any manual interventions. ### _Comparison of Lidar vs Radar Sensor Data_ In the previous section we showed that it is possible to navigate off-road terrain using only radar data. Now we will discuss the difference between the raw data produced by each sensor and some of the advantages radar provides. The following figures (8, 9, 10) show histograms displaying the percentage of lidar and radar points returned at different ranges. Range bins of 1 m are used for this analysis. As mentioned earlier, the radar data must be thresholded before returns can be interpreted as points. We present histograms at the threshold used for autonomous operation, 0.26, and at a higher threshold of 0.31. The higher threshold is presented because the lower threshold does lead to some false positive points. The higher threshold doesn't have these false positives, but it was found to work less well during autonomous operation. Looking at these figures, we immediately see that both radar and lidar show a large number of points near the vehicle, with an approximately exponential decay in the number of points as range increases. It is also clear that the radar has a significantly longer range than the lidar, with lidar points having a maximum range of approximately 50 m while the radar still has points beyond 250 m. Figures 11, 12, and 13 show this better by zooming into the tails of each histogram, showing an upper bin height of 0.25%. We can see that the lidar drops to a bin height of 0.25% at approximately 25 m (Fig. 11), while the lower radar threshold reaches the same bin height at 55 m (Fig. 12), more than twice the range of the lidar. Even at the higher threshold we still see the radar having a 0.25% range of over 50 m (Fig. 13). Looking at the maximum range, the difference is even greater, with just above 50 m for the lidar and over 250 m for the radar at both threshold values. That means the radar has a maximum range over five times that of the lidar. But how does this compare when looking at a real object? Fig. 5: The vehicle, a Clearpath Robotics Warthog, showing the mounting of the lidar, an Ouster OS1-128, and the radar, a Navtech CIR sensor. Fig. 8: A histogram showing the percentage of lidar points returned in each range bin. Fig. 6: The vehicle at the test site at the beginning of the path. Fig. 7: The final driven path overlaid with the GPS waypoints. The red circle is the start of the path and the red X is the end. Terrain features have been annotated.
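The range histograms and the 0.25% cutoff analysis above can be reproduced with a short script; the following is a minimal sketch, assuming the thresholded lidar and radar returns are available as arrays of ranges in metres (the synthetic data below merely stands in for the real scans).

```python
import numpy as np

def range_histogram(ranges_m, bin_width=1.0):
    """Percentage of points falling in each 1 m range bin."""
    edges = np.arange(0.0, np.ceil(ranges_m.max()) + bin_width, bin_width)
    counts, _ = np.histogram(ranges_m, bins=edges)
    return edges[:-1], 100.0 * counts / counts.sum()

def effective_range(ranges_m, cutoff_pct=0.25):
    """Farthest range bin whose height is still at or above the cutoff percentage."""
    bins, pct = range_histogram(ranges_m)
    above = np.nonzero(pct >= cutoff_pct)[0]
    return bins[above[-1]] if above.size else 0.0

# Illustrative usage with synthetic, exponentially decaying ranges:
lidar_ranges = np.random.exponential(scale=8.0, size=100_000)
radar_ranges = np.random.exponential(scale=20.0, size=100_000)
print(effective_range(lidar_ranges), effective_range(radar_ranges))
```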
Fig. 14 shows both a radar and a lidar view from where the vehicle is in Fig. 6. This is at the maximum range at which the vehicle is able to detect the barrier with lidar. We found that the barrier could be easily seen by the radar at a range of 80 m, whereas it was not visible to the lidar until the vehicle was within 40 m. This matches well with our 0.25% range analysis, showing the radar with twice the effective range of the lidar. ### _Comparison of Lidar vs Radar Maps_ In the previous section we showed that the radar can see significantly farther than the lidar. Now we compare the radar-based height map with the more traditional lidar-based map, using the lidar map as the ground truth. Fig. 15 shows a top-down view of the difference between the radar and lidar height maps. Note that not only does the resulting radar map have a noticeably larger range, but the error, calculated as the absolute difference between the radar and lidar height maps, is also very low, with only small regions showing an error greater than 1 m. However, this is only one frame of data; Fig. 16 shows the mean error and the standard deviation of the error over the entire path length. We can see that the maximum mean error over the entire path is only 0.4 m, with the majority of the error being less than 0.3 m. The standard deviation follows a similar trend, with the maximum also being 0.4 m and most values being under 0.3 m. Fig. 11: A histogram showing the percentage of lidar points returned in each range bin. Zoomed to show a bin height of 0.25% and below. Fig. 12: A histogram showing the percentage of radar points returned in each range bin with a threshold of 0.26. Zoomed to show a bin height of 0.25% and below. Fig. 10: A histogram showing the percentage of radar points returned in each range bin with a threshold of 0.31. Fig. 9: A histogram showing the percentage of radar points returned in each range bin with a threshold of 0.26. Looking again at Fig. 15, we can see that this large standard deviation is easily explained by large regions of the height map near the vehicle having a lower error, while smaller regions far from the vehicle have a much larger error. We found that, since the elevation mapping relies on the average of many scans, the resulting elevation map becomes more accurate the closer it is to the vehicle. That is good for elevation, but we are also interested in obstacle detection. It is harder to quantitatively define the quality of obstacle maps. However, as with the raw sensor data, the radar appears to detect positive obstacles at a range of at least 50 m, as shown in Fig. 17. This is right at the edge of the map, given that the map has a side length of 102.4 m and the vehicle is at its center. Qualitatively, successful autonomous operation without collisions over a path length of 350 m, as described above, demonstrates that the obstacle detection is sufficient. ## V Conclusions In this paper, we have presented an evaluation of radar alone as a potential alternative to lidar for off-road local navigation. By adapting existing lidar-based methods for use with radar, we have demonstrated the feasibility of radar-only navigation in off-road conditions. We show the successful navigation of an off-road ground vehicle using only radar for obstacle detection and terrain estimation. Additionally, we show that radar has a significantly longer range than lidar, offering more than twice the effective detection range. This increased range was apparent in both the histogram analysis and real-world object detection. Even with radar's lower resolution, the maps generated from radar data exhibited a level of accuracy similar to those created from lidar data.
Finally, our findings support the notion that radar can serve as a viable alternative or complement to traditional lidar-based approaches, ultimately enhancing the robustness and reliability of navigation systems in off-road environments. Fig. 16: A figure showing the mean error and standard deviation over time between the lidar and radar height maps. Fig. 17: A figure showing the distance (50 m) at which an object is seen in the radar map. Fig. 14: A top-down view of lidar data (black) and radar data (colored) showing detection of the barrier in radar at 80 m. Note that the farthest lidar points are only at 40 m. Fig. 15: A figure showing the difference between the lidar and radar height maps. The red region is where there is radar data but no lidar data. Pink regions are greater than 1 m error. Green through blue is 0 m to 1 m error.
2309.02428
Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework
The burgeoning growth of public domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. This paper is motivated by the work of (Helal, 2023) and aims to present a comprehensive overview of tensorization. This transformative approach bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. This paper explores the steps involved in tensorization, multidimensional data sources, various multiway analysis methods employed, and the benefits of these approaches. A small example of Blind Source Separation (BSS) is presented comparing 2-dimensional algorithms and a multiway algorithm in Python. Results indicate that multiway analysis is more expressive. Contrary to the intuition of the dimensionality curse, utilising multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveal a profound capacity to capture intricate interrelationships among various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of the multiway analysis methods and their integration with various Deep Neural Network models is presented using case studies in different application domains.
Manal Helal
2023-09-05T17:56:22Z
http://arxiv.org/abs/2309.02428v3
# Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework ###### Abstract The burgeoning growth of public domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. This paper is motivated by the work of (Helal, 2023) and aims to present a comprehensive overview of tensorization. This transformative approach bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. This paper explores the steps involved in tensorization, multidimensional data sources, various multiway analysis methods employed, and the benefits of these approaches. A small example of Blind Source Separation (BSS) is presented comparing 2-dimensional algorithms and a multiway algorithm in Python. Results indicate that multiway analysis is more expressive. Contrary to the intuition of the dimensionality curse, utilising multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveal a profound capacity to capture intricate interrelationships among various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of the multi-away analysis methods and integration with various Deep Neural Networks models is presented using case studies in different application domains. ## 1 Introduction The motivation to write the book (Helal, 2023) and summarise the tensorisation step in this paper is the increased size of the public domain data and the increased size of deep learning models' architecture. The data are intuitively multidimensional but simplified as a 2-dimension matrix form for simpler representation and application of linear algebra algorithms. Using the multidimensional dataset in their given form and applying multiway analysis methods using multilinear algebra provide better expressive models of the multiway interactions of the different dimensions, and surprisingly, lead to fewer parameters and faster processing, not the expected dimensionality curse from the increased dimensions. Various papers and projects following different standards present the current advances in multiway analysis. The main objective is to explain in order all the required theoretical background to understand these methods, provide a survey on the existing high dimensional datasets, tensorisation steps of 2-dimensional datasets, and the available methods for multiway analysis and compressed deep learning models. This paper systematically navigates the essential theoretical foundations required to comprehend these methods, offering insight into the intricacies of high-dimensional datasets, tensorization procedures for transforming 2-dimensional data, and an extensive inventory of multiway analysis techniques and their synergy with compressed deep learning models. The second section provides an in-depth literature review, while the third section introduces a tensorization framework tailored to enhance the expressive power of multidimensional data. Subsequently, the fourth section elucidates the tangible benefits of adopting these approaches through illustrative case studies drawn from tensorized machine learning and deep learning literature. 
Then, the conclusion section summarises key concepts, delineates existing challenges, and outlines promising avenues for future research in the evolving landscape of tensorization, multiway analysis, and their integration with deep learning models. ## 2 Literature Review This section reviews the literature on multiway analysis methods, multiway dataset sources, and the tensorisation methods of 2-dimensional datasets. The tensorisation of data given in 2-dimensional format can be as simple as reshaping the data into an n-d array of n being equal to any higher order value to represent each mode independently. For example, a dataset of student grades in all school grades, all cohorts over many years, all subjects, and all exams per subject are usually given in the 2-D format as shown in Table 1. The most intuitive tensorisation of this dataset is a reshape such that mode 1 is the academic year (the cohort), mode 2 is the school grade, mode 3 is the student, and mode 4 is the subject. We can also include each assessment in isolation, but we can take the average or the final assessment as the aggregate for this 4-mode 4-D array. This mapping is from the 2-dimensional coordinate space to the 4-d coordinate space. Coordinates are defined by their basis vector, which defines the unit step into the coordinate. For instance, the academic year is expected to have unit bases of one year per basis. This might be all required for this kind of data from the year the school was established to the current year. However, the student ID might not be incremented by 1 for every new student and might be randomised for each cohort or re-used across the cohorts. For the student ID, hashing into sequential values might be required for a given dataset. Some datasets have minimum and maximum values of acceptable range, and a transformation with respect to the rate of change will be required. For example, for ten academic years, nine school grades, 100 unique students as they level up from a school grade to the next, ten subjects, this will create a Tensor \(\mathcal{X}\in\mathbb{R}^{10\times 9\times 100\times 10}\). We can retrieve a specific student record in Python Numpy structure, but identifying the student index i, and retrieve all other modes as \(\mathcal{X}[:,:,i,:]\). Other deterministic and stochastic approaches are discussed in the literature and surveyed (Debals and De Lathauwer, 2015). These will be further explained below. ### Multiway Analysis Methods. The multiway analysis methods take as input a tensor of n-mode and apply various factorisation, regression, clustering, or completion of missing values algorithms. The application of these algorithms is themselves Machine Learning (ML) algorithms that provide insight into the dataset, not just data pre-processing steps. However, they can also be used as a data pre-processing step to better represent \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Academic Year & School Grade & Student ID & Subject & Assessment & Assessment &... \\ \hline \hline 1 & & & & & & \\ \hline 2021 & 1 & 12345 & Math & 50 & 60 & \\ \hline 2022 & 1 & 14734 & Science &... &... & \\ \hline \end{tabular} \end{table} Table 1: MATRIX FORM OF HIGH DIMENSIONAL DATASET the dataset before applying an ML or Deep Learning (DL) model that might be tensorised itself or not. Chapter three of (Helal, 2023) presents the foundation of Multilinear analysis that can be summarised in Table 2. 
\begin{table} \begin{tabular}{|p{56.9pt}|p{70pt}|p{70pt}|p{70pt}|p{70pt}|p{70pt}|} \hline & Manifolds & Hilbert Space & Curves & Riemannian Geometry & Differential Geometry on Manifolds \\ \hline Geometric Space & A collection of points (not vectors) on a locally Euclidean space, forming smooth curved spaces that capture the intrinsic geometry of the data & The Hilbert space \(\mathcal{H}\), a generalisation of Euclidean space to higher (possibly infinite) dimensions & Non-Euclidean spaces such as elliptic geometry & Riemannian manifolds, mapping the data to manifolds whose curvature is described by discrete curvatures & Mapping the data to manifolds with curvature by a change of basis using the Jacobian matrix determinant \\ \hline Applications & Dimensionality reduction, visualization & Maximise separation and orthogonality, for applications such as quantum mechanics and function approximation & Capture curve shape, for applications such as data signature recognition & Preserve intrinsic geometry, for applications such as motion tracking and robotics & Capture local geometry, for applications such as shape analysis, medical imaging and computer vision \\ \hline Similarity or distance measure & Various metrics such as geodesic distance, intrinsic metrics, and even Euclidean distance quantified by Root Mean Square Error (RMSE) & Inner product using a kernel function \(K(x,x^{\prime})=\langle\phi(x),\phi(x^{\prime})\rangle\) & Arc length, Fréchet distance & Geodesic distance, Riemannian metric & Intrinsic metrics and curvature information, which can be obtained through differential forms (tensors) \\ \hline First mathematical reference & (Riemann, 1868) & (Hilbert, 1898) & (Fréchet, 1906) & (Riemann, 1868) & (Gauss, 1828) \\ \hline Earlier machine learning adoption & (Tenenbaum, Silva and Langford, 2000) & (Cover and Hart, 1967) & (Gavrila, 1999) & (Fletcher _et al._, 2004) & (Pennec and Thirion, 1995) \\ \hline \end{tabular} \end{table} Table 2: Multilinear Algebra \& Tensors The first column describes the mathematical structure of the Manifold Learning algorithms that remain in Euclidean space and focus on finding a lower dimensional representation of a given dataset. The second column is the Kernel Trick, which uses the dot product as the similarity measure on two vectors, operating in the infinite-dimensional Hilbert Space \(\mathcal{H}\) through a kernel function that does not perform the actual mapping to the higher dimension. This is useful when the dataset is not linearly separable in the lower dimension but can be linearly separable in the higher dimension. The separating linear hyperplane in the higher dimension will be non-linear when projected back to the lower dimension. The third column employs curved coordinates, using polynomial functions of quadratic degree or higher to fit a dataset. This method increases the number of features to study the interactions by learning the weights of the different polynomial combinations of the given features. This can be as simple as using the Python sklearn.preprocessing.PolynomialFeatures function to apply the regression on the transformed features. Global polynomial optimisation is achieved in Tensor form using tensor decomposition approaches (Marmin, Castella and Pesquet, 2020).
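Before surveying these methods, the reshaping-based tensorisation of the student-grades example given at the start of this section can be sketched in NumPy; this is a minimal illustration, and the index mappings and values below are made up for the example.

```python
import numpy as np

# Hypothetical grades tensor: 10 academic years x 9 school grades x 100 students x 10 subjects,
# matching the example dimensions above. Real data would be filled from the 2-D table.
years, school_grades, students, subjects = 10, 9, 100, 10
X = np.full((years, school_grades, students, subjects), np.nan)

# Map one 2-D row (academic year 2021, grade 1, student 12345, Math, final mark 60)
# to tensor coordinates via illustrative lookup tables (student IDs hashed to
# sequential indices, as discussed above).
year_index = {2021: 8}          # e.g. cohorts 2013..2022 -> indices 0..9
student_index = {12345: 0}
subject_index = {"Math": 0}
X[year_index[2021], 0, student_index[12345], subject_index["Math"]] = 60

# Retrieve every record for a given student i across all other modes:
i = student_index[12345]
student_record = X[:, :, i, :]   # shape (10, 9, 10)
```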
This is useful for various function approximation applications for multilinear functions, non-linear functions with polynomials, matrix-matrix multiplication, and systems of polynomial equations. The fourth column expands on the curvatures to map the dataset onto curved coordinates in the higher dimensions introduced by the German mathematician Bernhard Riemann. The Dual space mapping enables applying differential geometry on a coordinate-free approach. The latter uses the Jacobian matrices of the mapping functions to transform the coefficients in one coordinate system/basis to another, and the Hessian matrices, which is the Laplacian of a function to measure the function's curvature as the divergence of its gradient invariant of change of basis. The last three columns use tensors that can benefit from the tensorisation step explained in this paper, and then the multiway analysis methods to provide further insights. The following is an overview of prior surveys and approaches, focusing on the seminal work by (Kolda and Bader, 2009). #### 2.1.1 Tensor Decompositions Matrices are factorised using various methods to reduce their dimensionality and identify dominant factors. Projective Methods such as PCA, SVD and others are commonly used, but they lose the non-linear structure of the data. Embedding learning methods keep the non-linear structure while mapping a dataset to a lower dimensional embedding without learning the manifold, such as Multidimensional scaling (MDS). Other methods also learn the manifold, such as Isometric Feature map (Isomap), Locally Linear Embedding and Spectral Clustering. These methods do not work on higher dimensional datasets. Chapter two (Helal, 2023) summarises the algorithmic details of these methods with Python examples. The linear algebra foundations of these algorithms are summarised in Chapter One. The following multidimensional factorisation/decomposition methods are used in high-dimensional datasets, which are explained in chapter 4 of the same book, with the mathematical foundations in chapter three. #### 2.1.1.1 Candecomp/parafac (Cp) Decomposition CP decomposition is the multiway extension of SVD and is usually implemented in various constraints and approaches. SVD of dataset X is computed as \(X=USV^{T}=\sigma_{1}u_{1}v_{1}^{T}+\sigma_{2}u_{2}v_{2}^{T}+\cdots+\sigma_{r}u _{r}v_{r}^{T}\). This can be expressed as the summation of outer products of the vectors of the most dominating columns in U and most dominating rows in V, in the order of the singular values \(\sigma\) from the highest \(\sigma_{1}\) to the lowest given rank \(\sigma_{r}\), which are diagonalised in S. This enables the approximate reconstruction of matrix \(\chi\) as \(\chi\) from its dominant components: \(\hat{\chi}\ =\sum_{j=1}^{r}\sigma_{j}u_{j}v_{j}^{T}\). The N dimensions generalisation is defined as \(\chi\in\mathbb{R}^{I_{1},I_{2},...,I_{N}}\approx\left[\lambda;A^{(1)},\ A^{(2)},...\,,A^{(N)}\right]=\sum_{r=1}^{R}\lambda_{r}\ a_{r}^{(1)}\circ a_{r}^{(2)} \circ...\ a_{r}^{(N)}=\Lambda\times_{1}A^{(1)}\times_{2}A^{(2)}\,...\,\times_{N }A^{(N)}\)., where \(\Lambda\in\mathbb{R}^{r,r,...,r}\) is a diagonal core tensor of rank \(r\) such that \(\lambda_{r}=\Lambda_{r,r,...,r}\). Figure 1 illustrates 2D and 3D SVD. #### 2.1.1.2 Tucker Decomposition The Tucker decomposition is the most cited and is considered a higher-order (or multiway) PCA. 
It decomposes a tensor into a core tensor (not a diagonal core tensor of weights as in CP decomposition) multiplied by a factor matrix along each mode. PCA for a dataset \(x\) can be calculated using a direct projection matrix such that \(y=U^{T}x\), where \(y\in\mathbb{R}^{p}\) is the projected data, \(U\in\mathbb{R}^{m\times p}\) is the projection matrix containing the \(p\) Eigenvectors, and \(x\in\mathbb{R}^{m}\) is the centred m-dimensional data point \((x-\bar{x})\), standardised to zero mean. PCA can also be calculated using an iterative constrained optimisation method, using a scatter matrix \(S_{T}=XX^{T}\). The first component is calculated as \(\widehat{u_{1}}=u_{1}^{T}S_{T}u_{1}-\lambda(u_{1}^{T}u_{1}-1)\), where the Lagrange multiplier \(\lambda\) accounts for the normalisation constraint that the principal component should be a unit vector. Then, we optimise by differentiating with respect to \(u_{1}\) and setting the derivative to \(0\): \(\frac{\delta\widehat{u_{1}}}{\delta u_{1}}=S_{T}u_{1}-\lambda u_{1}=(S_{T}-\lambda I)u_{1}=0\), where \(\lambda\) and \(u_{1}\) are an Eigenvalue and its corresponding Eigenvector of \(S_{T}\). We iterate through the required components, adding for each new component the constraint that it is perpendicular/orthogonal to all previous components, and repeat the differentiation step. For example, the second component would be: \(\widehat{u_{2}}=u_{2}^{T}S_{T}u_{2}-\lambda(u_{2}^{T}u_{2}-1)-\mu(u_{2}^{T}u_{1})\). The projection matrix is assembled from the \(u_{i}\) components as its columns, and the projected data are obtained by the simple multiplication \(y=U^{T}x\) (Burges, 2009). Tucker decomposition for a given tensor dataset \(\chi\in\mathbb{R}^{I_{1},I_{2},...,I_{N}}\) is computed as \(\chi\approx\left[\mathcal{G};A^{(1)},\ A^{(2)},...\,,A^{(N)}\right]=\mathcal{G}\times_{1}A^{(1)}\times_{2}A^{(2)}\,...\,\times_{N}A^{(N)}\), where \(\mathcal{G}\in\mathbb{R}^{R_{1},R_{2},...,R_{N}}\) is the (generally dense) core tensor and \(A^{(n)}\in\mathbb{R}^{I_{n}\times R_{n}}\) are the factor matrices along each mode. Tensor networks generalise these decompositions by factorising a large-scale tensorial dataset into a set of smaller, interconnected core tensors. We can work with networks of tensors, such that each tensor represents a multiway dataset; particular indices/features in a tensor connect to indices/features in another multiway dataset tensor representation as summation indices, enabling contraction, or are left as free indices in the final tensor shape. The final tensor shape is the dataset a machine learning or deep learning algorithm should use, identifying some indices/features as predictors and others as target/outcome variables. There are various approaches to producing tensor networks, such as Matrix Product State (MPS) / Tensor Train (TT), Tensor Ring (TR), Matrix Product Operator (MPO), Tree Tensor Network / Hierarchical Tucker, Projected Entangled Pair States (PEPS), and Multi-scale Entanglement Renormalization Ansatz (MERA).
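Before turning to these tensor-network formats, the CP and Tucker decompositions above can be computed directly with the TensorLy library; the following is a minimal sketch, assuming a recent TensorLy version and using an illustrative random tensor with arbitrarily chosen ranks.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

# Illustrative 3rd-order tensor standing in for a real multiway dataset.
X = tl.tensor(np.random.rand(10, 9, 10))

# CP decomposition: weights lambda and one factor matrix per mode (sum of rank-one terms).
cp_factors = parafac(X, rank=3)
X_cp = tl.cp_to_tensor(cp_factors)

# Tucker decomposition: a dense core tensor multiplied by a factor matrix along each mode.
core, factors = tucker(X, rank=[3, 3, 3])
X_tucker = tl.tucker_to_tensor((core, factors))

# Relative reconstruction error of each approximation.
print(tl.norm(X - X_cp) / tl.norm(X))
print(tl.norm(X - X_tucker) / tl.norm(X))
```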
For example, among these tensor-network formats, the TT decomposition is: \(\chi=\mathcal{A}_{1}\times_{3,1}\mathcal{A}_{2}\times_{3,1}\cdots\times_{3,1}\mathcal{A}_{N}\), where \(\times_{3,1}\) contracts the third mode of the left core with the first mode of the right core, \(\mathcal{A}_{n}\in\mathbb{R}^{R_{n-1},I_{n},R_{n}}\), \(R_{0}=R_{N}=1\), \(n=1,2,\ldots,N\), such that \(\mathcal{A}_{1}\) and \(\mathcal{A}_{N}\) are of lesser rank than the internal core tensors. Another example is the TR decomposition, computed as \(\chi=\mathfrak{R}(\mathcal{A}_{1},\mathcal{A}_{2},\ldots,\mathcal{A}_{N})=\sum_{\alpha_{1},\alpha_{2},\ldots,\alpha_{N}=1}^{R_{1},R_{2},\ldots,R_{N}}a_{1}\big{(}\alpha_{1},\alpha_{2}\big{)}\circ a_{2}\big{(}\alpha_{2},\alpha_{3}\big{)}\circ\cdots\circ a_{N}\big{(}\alpha_{N},\alpha_{1}\big{)}\), where the TR rank is defined for each mode as \((R_{1},R_{2},\ldots,R_{N})\), the core tensors are \(\mathcal{A}_{n}\in\mathbb{R}^{R_{n},I_{n},R_{n+1}}\), \(n=1,2,\ldots,N\), and the last one, \(\mathcal{A}_{N}\in\mathbb{R}^{R_{N},I_{N},R_{1}}\), completes the circular connection with the first, such that \(R_{1}=R_{N+1}\). These are illustrated in Figure 3. Other tensor decomposition methods include Non-negative Tensor Factorization (NTF), INdividual Differences in SCALing (INDSCAL), CANonical Decomposition with LINEar Constraints (CANDELINC), PARAFAC2, DEDICOM, PARATUCK2, Hierarchical Tucker (HT), and Tree Tensor Network States (TTNS), among others in the literature. All these methods require estimating the rank. Finding the tensor rank is an NP-hard problem, and methods like Alternating Least Squares (ALS) are used. The method is based on fixing A and B to solve for C, then fixing A and C to solve for B, then fixing B and C to solve for A, and repeating until convergence, which is defined as the point when the error is no longer significantly decreasing. ALS reduces a non-convex optimisation problem to convex subproblems. #### 2.1.2 Tensor Completion Tensor completion is an extension of the matrix completion class of problems, which aims to interpolate missing values in a dataset from the given values. A 3-way association tensor completion application example is presented in (Huang et al., 2021). The dataset used in this paper is collected from the HMDD (the Human microRNA Disease Database), a database that curates biological lab experiment-supported evidence for human microRNA (miRNA) and disease associations. Auxiliary data are the disease descriptors collected from Medical Subject Headings (MeSH), a comprehensive controlled vocabulary thesaurus about life science, used to calculate disease semantic similarity. Previous work focused on predicting whether a miRNA-disease association exists or not (binary classification/prediction). Instead of building a binary graph association, this paper presented the dataset in a 3-way structure of miRNA-disease-type triplets as a tensor. It introduced Tensor Decomposition methods to solve the prediction task, such that the type explains the roles of miRNAs in disease development or identification. The paper proposed a novel method, Tensor Decomposition with Relational Constraints (TDRC), incorporating biological features (miRNA-miRNA similarity and disease-disease similarity) as relational constraints to improve upon existing tensor decomposition methods. Figure 3: (a) TT decomposition, (b) TR decomposition. Formulating the data as a set of miRNAs \(\mathcal{E}=\{e_{1},e_{2},...,e_{m}\}\), a set of diseases \(D=\{d_{1},d_{2},...,d_{n}\}\), and a set of association types \(\mathcal{R}=\{r_{1},r_{2},...,r_{t}\}\), the authors constructed a multi-relation bipartite graph \(\mathcal{G}\).
A triple \((e_{i},d_{j},r_{t})\) as a link in the graph \(\mathcal{G}\) denoting an association between the miRNA \(e_{i}\) and the disease \(d_{j}\) with the type \(r_{t}\cdot\mathcal{G}\) is a binary three-way tensor \(\mathcal{X}\in\{0,1\}^{m\times n\times t}\) with miRNA mode, disease mode, and type mode, where each slice is the adjacency matrix with regard to a type of miRNA-disease association. The tensor \(X\) is extremely sparse, with many unknown entries, and thus, reaching the goal only by using known links is challenging. The auxiliary data, such as miRNA functional similarity matrix and disease-disease semantic similarity matrix, were used to constrain the tensor completion performed by the CP tensor decomposition method. This is an example of integrating multiple datasets that would form a sparse large tensor and constrain the size with known information from other datasets. #### 2.1.3 Tensor Regression Regression is a supervised ML algorithm that fits a dataset to a mapping function from the features \(\mathbf{x}\) to the target \(\mathbf{y}\) by learning the weights of the features that reduce the residual error. The 2-D common linear regression equation is \(y=\epsilon+\sum_{i=0}^{N}w_{i}x_{i}\), such that the \(\mathbf{w}\) is the estimated regression parameters, including \(w_{0}\) as the bias and \(x_{0}=1\). It can be non-linear using polynomial feature scaling, as mentioned above. It can also use any other non-linear parameters such as exponential, trigonometric, and power functions. Other variants include a multivariate regression model, which is when \(\mathbf{x}\) is multiple predictors and \(\mathbf{y}\) can be multiple responses. There are also non-parametric regression models, such as Gaussian Processes (GP), Artificial Neural Networks (ANN), Decision Trees, and Support Vector Regression (SVR). These methods usually require much more sample data than parametric methods (Hou, 2017). A simple tensor regression model is defined as: given an Nth-order tensor \(\chi\in\mathbb{R}^{I_{1},I_{2},I_{3},...,I_{N}}\), and the output \(\mathcal{Y}\) could be a tensor of any order required to represent the dependent variable(s); the regression equation is: \(\mathcal{Y}=f(\mathcal{X})+\epsilon\) The f function for linear regression can be the dot product defined in the generalised linear tensor regression model as: \(\mathcal{Y}=\left\langle\mathcal{X},\mathcal{B}\right\rangle+\epsilon\) Such that the dot product of the predictor and \(\beta\) as the coefficient tensor in the same size as the predictor \(\chi\), capturing its tensor covariate, and is added to \(\epsilon\) as the tensor representation error or bias. Similar non-linear functions in the higher order can be employed. Prediction or reconstruction/interpolation can occur based on a dataset of M samples as follows: \(\hat{x}_{I_{1},I_{2},...,I_{N}}=\sum_{k=1}^{M}\left\langle x_{k},\beta_{k} \right\rangle.\) Solving \(\left\langle\mathcal{X},\mathcal{B}\right\rangle\) by vectorising, both tensors will produce a huge number of parameters. For example, an MRI dataset \(\mathcal{X}\in\mathbb{R}^{128\times 128\times 128}\) will require 2,097,152 + and five usual covariates parameters to estimate, which is intractable. Using the unsupervised PCA produces the most dominating principal components that are irrelevant to the input, lose the multiway structural relationship, and are difficult to interpret. 
The CP Tensor Regression defines the \(\mathcal{B}\) tensor in terms of its rank-R CP decomposition, a \(\left[B_{1},B_{2},...,B_{N}\right]\) with \(B_{n}=\left[b_{1}^{(n)},...,b_{R}^{(n)}\right]\in\mathbb{R}^{I_{n}R}\), such that \(y=\left\langle\mathcal{X},\mathcal{B}\right\rangle+\epsilon\)=\(\left\langle\mathcal{X},\mathcal{\Sigma}_{r=1}^{R}\ b_{r}^{(1)}\circ b_{r}^{(2)}\circ...\ b_{r}^{(N)}\right\rangle+\epsilon\) where \(\mathbf{y}\) is a scalar output. This reduces the number of parameters from O(\(\mathbb{I}^{N}\)) to the scale of O(NIR) while also producing reasonable reconstruction accuracy. For example, the previous MRI example parameters can be reduced to \(389=5+128\times 3\) for a rank-1 model and to \(1\), \(157=5+3\times 128\times 3\) for a rank-3 model. Tucker decomposition is more flexible than CP and accurately captures the multiway structural relationships in the core tensors. Tucker Tensor Regression defines the \(\mathcal{B}\) tensor in terms of its Tucker decomposition as: \[\sum_{r_{1}=1}^{R_{1}}...\sum_{r_{n}=1}^{R_{N}}...g_{r_{1}..r_{n}}b_{r_{1}}^{(1) }\circ b_{r_{2}}^{(2)}\circ...b_{r_{n}}^{(N)}\] such that \(y=\langle\mathcal{X},\mathcal{B}\rangle+\epsilon\) where \(\mathcal{G}\in\mathbb{R}^{R_{1},R_{2},...,R_{N}}\) with entries \(\left\{g_{r_{1}..r_{n}}\right\}_{r_{1}=1,...,r_{n}=1}^{R_{1},...,R_{N}}\). The factor matrices are defined as \(B_{n}\in\mathbb{R}^{R_{n}R_{n}}\) along different modes. This reduces the number of parameters from O(\(\mathbb{I}^{N}\)) to the scale of O(\(\mathbb{N}\)Ir + \(\mathbb{I}^{N}\)), which is higher than the CP regression parameters of O(NIR) but more parsimonious modelling of the input data when R\(\ll N\). An example application presented in (Li, Zhou and Li, 2013) shows that for a tensorial dataset representing neuroimaging data as a 3D signal \(\mathcal{X}\in\mathbb{R}^{16\times 16\times 16}\) using a Tucker model with multilinear rank = (2, 2, 5), the number of parameters is 131, while using a 5-component CP regression model yields 230 parameters. #### 2.1.4 Tensor Clustering Clustering is an unsupervised machine learning approach such that given an unlabelled data matrix X, it can be represented as X=AB\({}^{\intercal}\), such that each row in A (the canonical basis vector) selects a row in B, which contains the clustering vectors. Estimating A and B (two unknowns) from X enables multiway clustering. Dictionary Learning Algorithms and Source Separation Algorithms, such as Independent Component Analysis, attempt to estimate two matrices for the given data matrix, among others, are examples of unsupervised ML algorithms. All these algorithms are expanded to multiway higher dimension formulation (Acar and Yener, 2009). ### Multiway (Tensorised) dataset sources This section surveys the datasets to use for multiway analysis. The dataset can be tensorised from vector or matrix forms. Tensors can be formed as well by integrating multiple datasets using various data fusion algorithms. However, many data sources are available in tensor form as well. #### 2.2.1 Traditional Datasets Kaggle and UCI, among other public domains, are repositories containing massive amounts of datasets on various application domains and ready to use for various ML algorithms. The vast majority of these datasets are 2-dimensional in nature and are ready for the applications of linear algebra-based ML algorithms. 
Data fusion techniques can combine the relevant datasets into a multi-modal ML model to create a high dimensional tensor of the various modes to be the input of a multiway multi-modal ML algorithm (BaltruSaitis, Ahuja and Morency, 2017). The process of tensorisation requires understanding the different modes in one 2-D dataset or collected from multiple 2-D datasets. This analysis of the modes helps to decide how to integrate the modes from different datasets or divide a mode from one dataset into different modes as the application requires. Dividing a mode into more modes is called segmentation (Debals and De Lathauwer, 2015). An example of integration is hospitals' trial statistics, which might have a common mode between the different hospital datasets to integrate on. An example of dividing a column from one dataset into two modes is division by year from a date column and having a different mode for each year, gender, age groups or others. #### Graphs or Networks Datasets Sensor networks collect high-dimensional datasets naturally. For example, (Almomani, Al-Kasasbeh and AL-Akhras, 2016) proposed a simulated dataset for Wireless Sensor Networks (WSN) that contains sensors in different locations that gather various information and send them in messages collaboratively to Base stations. These sensors usually have limited energy and memory and need to optimise the communication of messages using clustering algorithms such as the Low Energy Aware Cluster Hierarchy (LEACH), which is a routing protocol that optimises the energy consumption by organising sensor nodes into clusters to distribute the energy among all nodes in the network. This dataset can form a Tensor \(\mathcal{X}^{N,B,K,M}\) for N sensors, B Base Stations, K Clusters of sensors, and M Messages between sensors and other sensors, Base stations, or cluster heads. Knowledge Graphs are also naturally high-dimensional datasets. These graphs usually employ RDF triplets of two entities and a relationship. The entities can be of various types, creating a mode per type, and similarly, the relationship can be of various types, and create further modes for the different types. A bibliography network can contain authors as the main entity type and co-authoring as the relationship. This would create a Tensor \(\mathcal{X}^{\text{n}\times\text{n}\times\text{m}}\), of n entities x n entities x m relationships. Various datasets are considered knowledge graphs, such as: * The Citation network in the Cora dataset introduced by (McCallum et al., 2000), * Kinships in Australian Tribes (Dousset et al., 2010), * Nations clusters of countries, clusters of interactions between countries, and clusters of country features such as indicators selected from the (The World Bank, 2023). * UMLS (Unified Medical Language System) (Bodenreider, 2004). * Semantic Web's Linked Open Data (LOD) contains millions of entities, hundreds of relations and billions of known facts (Khusro, 2014). * YAGO 2 ontology contains 4.3x1014 possible triplets as a knowledge base about people, cities, countries, movies, and organisations (Hoffart et al., 2013). * Dbpedia ontology is an open knowledge graph crowd-sourced from the web contents to create structured knowledge such as the Dbpedia-Person dataset (Mendes et al., 2011). * Multi-modal datasets include the Visual Question Answering (VQA) dataset (Goyal et al., 2017) and Multi-modal sentiment analysis (Das and Singh, 2023), among others. 
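As an illustration of the knowledge-graph tensors described above, the sketch below builds a sparse entity \(\times\) entity \(\times\) relation adjacency tensor from RDF-style triples; the triples and the dictionary-of-keys layout are illustrative (a similar sparse n-dimensional structure is discussed later in the tensorisation methods section).

```python
# Build a sparse adjacency tensor X (n entities x n entities x m relations) from triples.
triples = [
    ("paper_A", "cites", "paper_B"),       # illustrative RDF-style (subject, relation, object)
    ("author_1", "wrote", "paper_A"),
    ("author_1", "co_authored_with", "author_2"),
]

entities = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
relations = sorted({r for _, r, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: k for k, r in enumerate(relations)}

# Dictionary-of-keys sparse representation: only the non-zero entries are stored.
X_sparse = {(e_idx[s], e_idx[o], r_idx[r]): 1 for s, r, o in triples}

shape = (len(entities), len(entities), len(relations))
density = len(X_sparse) / (shape[0] * shape[1] * shape[2])
print(shape, f"density={density:.3f}")
```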
#### Image and Video Datasets The famous example of the MNIST dataset of handwritten digits contains images of 28\(\times\)28 pixels, usually flattened as 784 pixels in one-row values per image, representing grey shade as integers between 0 and 255. If kept in a 2D form, the covariance matrix will be smaller. This is the reason why 2D Convolutional Neural Networks (CNN) are much faster and more accurate than Dense Layers for spatial datasets like images. Data in the tensor form can represent a coloured image with pixel values for red, green, and blue as three different values in the (RGB) frames stacked into a 3rd-order tensor. Similarly, a video dataset can include the 3rd-order coloured image frames extended with the time dimension in a 4th-order tensor. For a video example illustrated in Figure 4, the first two dimensions are spatial rows and columns of 128 x 88 dimensionality and a time third dimension of 20 frames. A Linear Subspace Learning (LSI) vectorisation in (a) performed by the product of the number of dimensions in each mode results in a large covariance matrix of 189 GB memory fingerprint and the resulting processing time. On the other hand, a Multi-linear Subspace Learning (MSL) tensor-based analysis performing the sum of three smaller covariance matrices results in 95.8KB of memory fingerprint and reduced processing time. Possible video datasets that can create higher tensors by adding other modes for other ML tasks include the Human limb motion dataset. These datasets collect videos about persons doing specific movements, using video cameras detecting the 3D position of infrared markers placed on each person's legs, arms or anywhere. A similar dataset was collected by (Della Santina _et al._, 2017). Tensors can be created for the regular 4-way video recordings, adding modes for a specific person, specific sensor, specific motion, or any other scene analysis objectives. Another example is (Soliman _et al._, 2019) dataset that contains 1000 violence and 1000 non-violence videos collected from YouTube videos. The violent videos have many real street fight situations in several environments and conditions. The non-violence videos are gathered from various human actions like sports, eating, walking, and so forth. #### 2.2.4 Health and Biomedical Datasets Brain-computer interface (BCI) based on EEG signals are naturally multi-mode due to the data recording mechanism. For example, signals are recorded by multiple sensors (electrodes) in multiple trials and epochs for multiple subjects and with different tasks, conditions..., and so forth. This dataset can be represented in rank n tensors to enable multi-way, multi-block data analysis techniques. Magnetic resonance imaging (MRI), functional MRI, PET, and MEG datasets are also naturally multi-mode. For example, A NIFTI file for a typical MRI scan stores the voxel values in an array of numbers. The coordinates for a single voxel within a NIFTI image volume can be specified as a 3-dimensional index (x, y, z) or a 4-dimensional index (x, y, z, t) for time. Then the subject is another mode, then the aim of the experiment is another, the resolution and so forth. Similar datasets can be found at OpenNeuro (Markiewicz _et al._, 2021). ### 2.3 Tensorisation Methods. Tensorization is the process of creating multidimensional datasets from existing 2-dimensional datasets or generally transforming a lower dimensional array into a higher dimension array by merging multiple datasets and segmenting one dataset. 
This can be simple reshaping and can be using various Figure 4: LSL vs MSL using tensorised data compression (Lu, Plataniotis and Venetsanopoulos, 2011) deterministic and stochastic methods. This step needs to be carefully considered if data collection is done from scratch. #### 2.3.1 Reshaping Given a dataset in matrix form, tensorisation can be simple multiple indexing, pivot table transformation, or actual transformation into the higher space. The accompanying source code shows examples of public domain datasets tensorised using different methods. The first data set is Global Temperature for 100 cities records from 01/11/1743 to 01/09/2013 (Rohde and Hausfather, 2020). There are 239177 rows, such that the number of records per city is different. Only the date and city can be coordinates as they contain different information. Country, latitude, and longitude columns are unique for each city, and converting them to new coordinates is not reasonable. The second dataset (_Gender Pay Gap Dataset,_ no date) tensorizes wages per gender, region, age, degree, and occupation up to rank-5 tensors. Table 3 summarises the different data structures and estimates the memory size and sparsity. #### Sparse Structures: This case-by-case tensorization showed the critical need for sparse arrays. scipy.sparse Python package offers data structures that work for 2-dimensional arrays only. The source code showed possible attempts to create sparse arrays using scipy.sparse data structures if they advance their implementation to nd-arrays. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Data & Pandas indexing & Pivot Table & Tensor \\ \hline Order-one tensor, temperature per city & 239177 rows x 2 columns & 100 rows for 1 index column & (100) - 100\% dense, no need to sparsify, quantise over cities is not useful \\ \hline Order-two tensor, temperature per city and latitude & 239177 rows x 3 columns & 100 rows for 2 indices columns & (100, 99) = 9,900 \\ & = 717,531 & Only 2 values are zero, this is dense, and sparsifying is not useful, and quantisation of cities and dependent latitude is not useful. \\ \hline Order-three tensor, temperature per city, latitude and longitude & 239177 rows x 4 columns & 100 rows for 3 indices columns & (100, 99, 272) = 2,692,800 \\ & = 956,708 & & 1\% dense and 99\% sparse. Using sparse arrays saved 94\% of the memory of nd-array and \(>\) 99\% of pandas frames memory. \\ \hline Order-four tensor, temperature per city, latitude, longitude and date – all the dataset & 239177 rows x 5 columns & 228175 rows for 4 indices columns & (100, 99, 272, 3239) = 8,721,979,200 values \\ & = 1,195,885 & 4 indices columns & \\ \hline & & Sparse as only 239177 non-zero values are found in the dataset. We can also quantise for the 271 years only by averaging the temperature for each year for every city, or for each month for seasonality \\ \hline Order-two tensor, wage per gender & 33398 rows x 3 columns & 2 rows for 1 index column & 2 values \\ \hline Order-three tensor, wage per gender and region & 33398 rows x 4 columns & 10 rows for 2 indices columns & (2, 5) = 14 values \\ \hline Order-three tensor, wage per gender, region and age & 33398 rows x 4 columns & 323 rows for 3 indices columns & (2, 5, 40) = 400 values with 323 non zero \\ \hline Order-four tensor, wage per gender, region, age, degree, and occupation & 33398 rows x 5 columns & 957 rows x for 4 indices columns & (2, 5, 40, 3, 975) = 1170000 values with 21845 non-zero. 
This is 99.4\% sparse to use only 4.44\% memory columns \\ \hline \end{tabular} \end{table} Table 3: Different Data structures compared with tensorized data Meanwhile, a dictionary data structure of n-dimensional indices tuple as key and aggregated value pairs are created for sparse n-dimensional arrays. The problem of custom data structures will also require rebuilding custom linear algebra operations and ML algorithms. Current 2-dimensional ML algorithms advise against sparse datasets and use dimensionality reduction to remove the sparsity. However, it can be very useful to show the multiway interactions between the different features. **Quantisation:** It is also important to quantise while doing the coordinate change as a safe model reduction method. Choosing the most suitable basis will affect the performance of the ML algorithm in memory, speed and accuracy metrics. **Parallelisation:** The work of (Helal _et al._, 2008, 2009; Helal, 2009) implemented Tensor Partitioning on a cluster of computing nodes as an example of wavefront processing of N-D arrays applied to the Multiple Sequence Alignment problem. The ND arrays were expressed dynamically using a C data structure containing Ndim, shape, and data as parameters and creating a linear array in memory. The N-dimensional index is then parameterised to access a specific location in the array or update it. The dense partitions are only created in the memory of the computing processor from a specific index up to a partition size to be processed independently on different processors. For example, shape (100, 100, 100, 100, 100) and partition size (10, 10, 10, 10, 10) will have the first wave computing on one processor the first partition from (0, 0, 0, 0, 0) to (9, 9, 9, 9, 9). Wave 2 will divide between the available processors the available partitions with starting indices that sum up to 10, such as (10, 0,0,0,0), (8, 1,1,0,0), (7, 1,1,1,0),... etc., and all permutations and number combinations. The processor ID identifies its share of the nd-indices in the current wave and the dependent partitions from previous waves of computation to receive its scores for direct communication. This model was done in C using MPI that can run over multiple cores or clusters of computing nodes over any network. This can easily be done in Python using GPUs or other parallel processing platforms. Numpy, MatLab, and Mathematica enable building arrays of any dimensionality and shape. It is very important not to fix the array dimensionality and shape in the input of ML stacks such as SKlearn, TensorFlow and others. This limits the ability to dynamically create these arrays from the different datasets, particularly in the tensoriation step, as shown in the tensorisation by adding coordinates Python notebook. #### 2.3.2 Deterministic Tensorization Hankelization and Lownerization are all deterministic tensorisation methods that systematically project any dataset to a higher dimension. Deterministic methods can be detensorised to 2-D or 1-D, or any lower dimension by applying the reverse process (Debals and De Lathauwer, 2015). Hankelisation creates a skew-diagonal higher order of the original data elements. The data is assumed to be exponential, sinusoidal, or polynomial functions that can be mapped to a tensor shape of a precise rank. This is useful in harmonic retrieval, direction-of-arrival estimation, and sinusoidal carriers in telecommunication for applications such as Blink Source Separation (BSS). 
This is useful in estimating the subspace in which a high-dimensional dataset may reside. The higher-order parameterised implementation of the Hankelisation procedure is found in (Vervliet _et al._, 2016) as Matlab functions in the TensorLab package. An attempt to rebuild it in Python is in the accompanying source code, with enough details to assess the BSS example. Comparing the Hankelised higher-order tensor of the mixed signal that is decomposed in the CP algorithm with PCA and ICA performance, the residual error is higher in the multi-way solution, then ICA, then PCA as illustrated in Figure 5. However, the multi-way reconstructed signals are closer in shape to the original signals than both ICA and PCA reconstructed signals as illustrated in Figure 6. lownerization is another deterministic approach that maps a given dataset to a higher dimension, assuming the data are of Rational function basis. This is also implemented in (Vervliet _et al._, 2016) as Matlab functions and its reverse process. Other tensorisation functions implemented in TensorLab include segmentation and decimation and their reverse processes. They are parameterised differently to enable various applications to find a suitable subspace that represents the dataset being analysed. #### 2.3.3 Statistical Tensorization Figure 5: BSS residual error from ICA, PCA and multi-way decomposition of the hankelised tensor Figure 6: BSS reconstructed signals from ICA, PCA, and multiway compared to the mixed signal and the original sources. Second-order statistics, such as the covariance matrix, can be used to tensorise a dataset along particular modes. The covariance matrix is usually a matrix of the variance of each feature (variable or column) with each other, as estimated from N rows or observations. In the higher order, this will be repeated for the higher dimension. This is implemented as well in (Vervliet _et al._, 2016) as d'cov Matlab function generalising the cov function, which identifies the modes of the observation and features and generates the higher order covariance of the features with the remaining modes. Also, lagged second-order statistics are implemented in the same TensorLab package using scov that returns shifted covariance matrices stacked along the third mode. This is parameterised in the lags argument. For non-gaussian datasets with statistically independent latent variables, the higher-order statistics such as cumulants and moments describe the data distribution. The p-norm of a vector is a positive-definite scalar function defined as \(\left\|\nu\right\|_{p}=\left(\sum_{i=1}^{N}\left|\nu_{i}\right|^{p}\right)^{ \frac{1}{p}}\geq 0,\forall p\geq 1\), where \(\left|\nu_{i}\right|\) is the absolute value of each element \(\nu_{i}\).This means that 1-norm is the sum of the absolute values of the elements. The 2-norm is the magnitude of the vector \(\nu\in\mathbb{R}^{N}\), which is its length (Frobenius norm) and is denoted \(\left\|\nu\right\|_{2}\) or \(\left\|\nu\right\|_{F}\). It is the Euclidean distance from the origin to the point reached by the vector and calculated as follows \(=\sqrt{\sum_{i=1}^{N}\nu_{i}^{2}}\). The infinity-norm is defined as the case where \(p\rightarrow\infty\), as \(\left\|\nu\right\|_{\infty}=\lim_{p\rightarrow\infty}(\sum_{i=1}^{N}\left| \nu_{i}\right|^{p})^{\frac{1}{p}}=\max\left(\left|\nu_{i}\right|\right)\). 
For example, given \(\nu=\begin{bmatrix}10\\ 2\\ -6\end{bmatrix}\), then \(\left\|\nu\right\|_{1}=18\), \(\left\|\nu\right\|_{2}=11.83\), \(\left\|\nu\right\|_{\infty}=10\). The central moments describe a distribution by its mean of a sample or the Expected value of the population as the weighted average \(E(X)=\sum_{i=1}^{N}p_{i}x_{i}\) where \(p\) is the probability/frequency/weight of the value \(x_{i}\) as the first central moment, the second central moment is the variance, the third is the skewness, and the fourth is the kurtosis. The \(n^{\text{th}}\) central moment is \(\mu_{n}:=E[(X-E[X])^{n}]=\int_{-\infty}^{\infty}(x-\mu)^{n}f(x)dx\). The cumulant of a random variable is calculated in the form of a cumulant generating function, which is the logarithm of the moment generating function as \(K(t)=\log E[e^{tX}]=\sum_{n=1}^{\infty}k_{n}\frac{t^{n}}{n!}=k_{1}\frac{t^{1} }{1!}+k_{2}\frac{t^{2}}{2!}+\cdots=\mu_{t}+\sigma^{2}\frac{t^{2}}{2}+\cdots\). The first, second and third cumulants are identical to the first, second and third central moments. They differ at the beginning of the fourth cumulant. Cumulants are easier to compute for their excellent mathematical properties, such as going to zero when variables are independent, and they describe the connectivity of the data, while the central moments do not. Cumulants are also implemented in the same TensorLab package as cum3, cum4, xcum4 and stcum4 Matlab functions. These are also useful in describing non-Gaussian datasets and are often used in BSS using a quadrilinear mapping of the fourth cumulant. #### Domain-transform Tensorisation and other methods Domain-transform methods can be used in Tensorisation to represent some signal datasets better. The values stored in a given index vector capture interaction between the dimensions/modes. This means symmetric or partially symmetric tensors might be sufficient to capture the inter-mode interactions, ignoring the values in a permuted index (same modes in a different order) in which the value might be redundant. For example, in the EEG dataset, we have F frequency measures collected over T time samples from S channels, forming a 3rd-order tensor. Transforming the domain of this 3rd-order tensor to get the time-frequency decomposition can be achieved using a short-time Fourier transform (STFT) that uses a fixed window size or wavelet transform (WT) that uses variable window sizes inversely proportional to the frequency resolution (high or low). Other transformations can represent data at multi-scale and orientation levels, such as the Gabor, contourlet, or pyramid steerable transformations. More details about the change of basis and representation learning are discussed in chapter five of (Heli, 2023). Furthermore, another higher-order statistic to describe a dataset is the partial derivatives of the observations' Generalised Characteristic Functions (GCF). Using a suitable tensor format, such as the lower-rank core tensors presented in section 2.1.1, enables all the above-discussed transformations and others while keeping the number of parameters smaller. The monograph by (Cichocki _et al._, 2016, 2017) provides detailed discussions with examples of various forms of tensorisation that prepare a dataset for compressed tensorised deep neural network models. Tensorising datasets, for example, by using tensor network representations, often allows for super-compression of datasets as large as \(10^{50}\) entries down to the affordable levels of \(10^{7}\) or even less. 
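Before moving on, the norm and moment quantities defined above can be checked numerically; the following is a small sketch assuming only NumPy and SciPy, using the same example vector \(\nu=[10,2,-6]\).

```python
import numpy as np
from scipy import stats

v = np.array([10, 2, -6])
print(np.linalg.norm(v, 1))       # 18.0    (1-norm: sum of absolute values)
print(np.linalg.norm(v, 2))       # 11.832  (2-norm / Frobenius norm)
print(np.linalg.norm(v, np.inf))  # 10.0    (infinity-norm: max absolute value)

# Moments of a sample: mean, second central moment (variance), and the
# standardised third/fourth moments (skewness and kurtosis).
x = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=10_000)
mean = x.mean()
var = np.mean((x - mean) ** 2)
skew = stats.skew(x)
kurt = stats.kurtosis(x, fisher=False)   # ~3 for Gaussian data
print(mean, var, skew, kurt)

# Unbiased estimates of the first four cumulants (k-statistics); the fourth
# cumulant is approximately zero here because the data is Gaussian.
print([stats.kstat(x, n) for n in (1, 2, 3, 4)])
```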
## 3 Proposed Tensorisation Framework The previous section explained various methods that can be used to tensorise a dataset or a collection of datasets in one tensor form. The tensorisation is the first step in applying tensor computing or multi-way analysis. The complete framework is illustrated in Figure 7. The Tensorisation step can be itself a representation learning step. By studying the dataset, Tensorisation might take into consideration the most useful basis functions for the required number of coordinates. However, the interactions of the variables might not be clear enough, and another step of representation learning might be done manually or by dimensionality reduction algorithms such as PCA and SVD for 2-way analysis or Tucker and CPD as the equivalent multi-way analysis. Also, mapping the dataset to the most useful representation might be all the analysis required to perform various ML tasks, such as tensor completion, clustering and classification. It might be a data pre-processing step for another ML algorithm, such as various Deep Learning models. Various Tensorisation occurs at the various steps in building an artificial neural network (ANN) model. ## 4 Application of Tensorization in Deep Learning Models In the first ANN building block, activation and loss function choice can be tensorised, as shown in Figure 8. The standard weighted sum of dot products of input vectors with weight vectors and then aggregating them is suitable for vector and matrix form datasets. A tensorised activation function for a tree data structure can start from the node assigned to a neuron to recursively compute the weighted sum of its children with weight sharing between neurons to reduce the model complexity. This model, called a recursive neuron, is first modelled for binary trees and leads to a higher-order Figure 7: Tensor Computing Framework generalised n-ary tree using tensorised aggregation. CP decomposition or Tensor Train (TT) decomposition can further decompose the full format tensor aggregation. Chapter six will introduce more details about tensorised activation functions (Bacciu and Mandic, 2020). The loss function can use the decomposed tensor cores of the weights tensor. In the second ANN building block, the choice of the number of hidden layers and the number of neurons per layer can also benefit from the compressive nature of tensor decomposition algorithms. Tensor Networks can compress the whole Deep Neural Network (DNN) using a suitable tensor decomposition algorithm and then map back to the uncompressed form. Another approach is to update only the final fully connected layer (or specific layers of interest) of a model with a tensor decomposition layer, such as TT. Usually, compressed models benefit from wider, fewer layers (shallower networks). This new arrangement of the tensorised data will require an alignment of the data slicing into the different epochs. These blocks of tensorised data need to represent the multi-way structures in the dataset identifying latent variables so that the learning iterations can reduce the error. ### Tensorising ML and DNN Case Studies and Experiments The following reviews various experiments with Tensor computing in different domains. More details about these experiments detailing their approaches can be found in (Heli, 2023). #### Signal Processing: An example of the BSS problem using tensor decomposition is presented in (Bottcher _et al._, 2018). 
The authors built a Python package, "Decompose", that generalises the PCA, ICA, and NMF solutions to the BSS problem. These methods are built on statistical assumptions that may or may not hold in a given dataset, and each of them would produce different sources when applied to the same dataset. Expert knowledge is usually needed to identify the correct statistical assumptions of a given dataset in an application domain. The authors built a probabilistic BSS model that estimates the priors of every source, can be extended to new prior distributions, scales well to large datasets, assumes each source has a different sparsity level, and efficiently estimates the posterior adapted to the dataset.

Figure 8: Tensorising Neural Networks (Novikov _et al._, 2015)

#### Data Warehousing and Business Intelligence:

Data cubes are multidimensional data analysis methods provided by various data warehousing providers. They are based on dividing any dataset into Dimensions and Measures. The dimensions are the coordinates/order of the tensor, and the measures are the values it stores. Any dataset in matrix form can be analysed by transforming it into data cubes, which are hypercubes in the higher dimension. Various cube algebra operations can be applied, such as slice and dice for partitioning, drill up and down for hierarchical dimensions, aggregation for compression, and pivoting for navigating the different views. Online Analytical Processing (OLAP) aims to pre-compute all required operations for faster response times. This remained computationally prohibitive as the dimensionality increased, and the literature has a plethora of approaches to reduce the complexity by compression, approximation, and parallelisation. The work of (Peiris, 2017) hypothesises from experimental results that the CUBE operator of Spark's DataFrame API performs better in distributed cube materialisation than MapReduce. The work of (Spelta, 2017) applied CP tensor decomposition to a 3D tensor \(D\in\mathbb{R}^{N\times N\times Z}\) in which the first and second dimensions each index one stock out of N stocks, and the third dimension is the time series of length Z. The tensor value \(d_{klz}\) represents the distance between stock k and stock l at time z. The CP decomposition produces the outer product of three vectors, \(D\cong\beta\, v\circ v\circ u\), such that \(v\in\mathbb{R}^{N}\) appears in the first two modes and captures the total spatial dissimilarity between stocks, the vector \(u\in\mathbb{R}^{Z}\) shows the temporal profile of the dissimilarities, and \(\beta\cong\|v\|\;\|v\|\;\|u\|\). The predicted vectors compose the predicted tensor \(D\) as the generalisation of the distance matrix that connects each stock asset's intensity with the other assets over time. This similarity is computed by summing the distances between each pair of assets, producing a centrality score \(\hat{F}_{k}=\frac{1}{N}\sum_{l}\hat{d}_{kl}\). A predicted asset stock price change is computed as \(\hat{d}_{k}^{t+1}=\hat{F}_{k}^{t}-\hat{F}_{k}^{t-1}\). They applied this method to the S&P 500 dataset of 388 stock assets with time series prices over 3827 work days and another two datasets: FTSE MIB with 59 and Euronext Paris with 156 stocks, both with prices over almost three years. Their results revealed the increasing trend and three recessions: the dot-com bubble in 2002-03, the financial crisis of 2008-09, and a recession around the Greek parliamentary elections.
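As a rough illustration of this kind of analysis, the sketch below builds a synthetic stock-distance tensor and applies a rank-1 CP decomposition with TensorLy to obtain spatial and temporal profiles and a centrality score; the data, sizes and variable names are invented stand-ins for the market datasets used in the cited work.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)
N, Z = 20, 60                          # 20 synthetic stocks, 60 time steps
prices = np.cumsum(rng.normal(size=(N, Z)), axis=1)

# N x N x Z distance tensor: D[k, l, z] = |price_k(z) - price_l(z)|
D = np.abs(prices[:, None, :] - prices[None, :, :])

# Rank-1 CP decomposition D ~ beta * v o v o u (v: spatial, u: temporal profile)
weights, (v1, v2, u_t) = parafac(tl.tensor(D), rank=1, normalize_factors=True)
beta, v, u = float(weights[0]), v1[:, 0], u_t[:, 0]

# Reconstruct the predicted distance tensor and the centrality score F_k
D_hat = beta * np.einsum('k,l,z->klz', v, v2[:, 0], u)
F = D_hat.mean(axis=1)                 # F[k, z] = (1/N) * sum_l d_hat[k, l, z]
price_change_signal = F[:, -1] - F[:, -2]   # analogue of F_k^t - F_k^{t-1}
print(price_change_signal[:5])
```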
Prior methods used correlation networks that aggregate the data to pair-wise distance matrices, causing loss of crucial information, while the tensor decomposition method relied on the distance evolution over time. ### Computational Science: A 3-way association tensor completion application example is presented in (Huang _et al._, 2021). The dataset used in this paper is collected from the HMDD (the Human microRNA Disease Database), which is a database that curates biological lab experiment-supported evidence for human microRNA (miRNA) and disease associations. Auxiliary data are the disease descriptors collected from Medical Subject Heading (MeSH), a comprehensive controlled vocabulary thesaurus about life science, to calculate disease semantic similarity. Previous work focused on predicting whether a miRNA-disease association exists or not (binary classification/prediction). Instead of building a binary graph association, this paper presented the dataset in a 3-way structure of miRNA-disease-type triplets as a tensor. It introduced Tensor Decomposition methods to solve the prediction task, such that the type explains the roles of miRNAs in disease development or identification. The authors formulated the multi-type miRNA-disease association prediction as a tensor completion task. Their goal was to complete the tensor for exploring the unobserved triple associations using Tensor Decomposition Methods. The authors concluded that TDRC could produce better performance while being more efficient. The reconstructed tensor's low-rank property may help further improve the performance. This application illustrates a method that can be generalised to tensorize any relational dataset such that several types of relationships are involved. #### Machine Learning: The first application of tensors in data mining to review is contributed by Acar et al. (Acar et al., 2005), (Acar et al., 2006), who applied different tensor decompositions to the problem of discussion disentanglement in online public chatrooms on the Internet Relay Chat (IRC), and how social networks evolve. The dataset contains text messages with timestamps, nicknames of identities of sender and receiver, and timestamps of nicknames quit/leave or kick. The nicknames could belong to the same person, pretending to be of any age and gender. The topics' keywords were semantically analysed, and data distributions were estimated. The dataset is multidimensional and noisy. The authors constructed a tensor T of order three, capturing users, keywords, and time in each mode, respectively, such that where T\({}_{ijk}\) is user i, sent several of keyword j during time slot k. Then Tucker1 and Tucker3 were applied to identify user groups, which are set of users sharing a maximal keyword set in a given time period. This is achieved by an indeterministic c-means clustering algorithm running 100 times, which returns multiple memberships for each data point. They compared to 2-way SVD clustering of user/keyword, and user/timestamps to show that the 3-way was more accurate. Then, they defined a 4-way Tensor to add the IRC server as the fourth mode to identify the computational efficiencies of tensor higher-order representations and decomposition methods compared to the corresponding pair-wise approaches. The work in (Bader et al., 2008) used CP for automatic conversation detection in the Enron Email dataset over time using an m term x n author x q month 3-way tensor \(X^{m,n,q}\). 
Based on previous literature on the analysis of this dataset, the authors selected an interesting subset that makes a tensor \(X^{69157,\ 197,\ 12}\) with 1,042,202 non-zero entries scaled to their weighted frequency. They applied the Parafac decomposition using ALS, minimising \(\|X-\sum_{l=1}^{r}a_{l}\circ b_{l}\circ c_{l}\|^{2}\), where \(a_{l}\), \(b_{l}\) and \(c_{l}\) are the \(l\)-th columns of the factor matrices. The decomposed factor matrices were \(A^{m\times r}\), containing the highest scores for terms/topics, \(B^{n\times r}\), containing the highest scores for authors, and \(C^{q\times r}\), containing the highest scores for these topics over time; they chose a 12-month duration. The rank r was chosen to retrieve a specific number of topics in the data, setting it to 25. Eight topics out of the 25 were interpretable in the context of other events happening in the same time duration, whereas the two-way (term-author) NMF method could not extract these discussions. The decomposition predicted discussion threads and produced charts of previously focused discussions over time.

The authors of (Nickel et al., 2011) proposed a relational learning approach, RESCAL, based on the DEDICOM tensor decomposition method with relaxed constraints. They exploited a three-way tensor \(X^{n,\,n,\,m}\), n entities x n entities x m relationships, such that \(X_{ijk}=1\) means that entity i has a relationship of type k with entity j. Domain data is given in the form of Resource Description Framework (RDF) triplets. A tensor decomposition of the form \(X_{k}\approx AR_{k}A^{T}\) is computed for each frontal slice \(X_{k}\), where A is an \(n\times r\) matrix containing the latent-component representation of the entities, which are shared between the first two modes, and \(R_{k}\) is an asymmetric \(r\times r\) matrix that models the interactions of the entities in the k-th predicate. The computed low-rank representation of the domain data is used in the prediction of a link as \(\hat{X}_{ijk}>\theta\), for some threshold \(\theta\). Collective classification can be performed by slicing the low-rank reconstructed tensor for a given class relationship, or by reconstructing only the relevant slice. Also, link-based clustering of entities can be performed using a similarity measure between entities based on their similarity across multiple relations. The authors compared the performance of RESCAL with standard tensor factorisations such as CP and DEDICOM, and with relational learning algorithms such as the statistical unit node-set (SUNS) and the aggregated SUNS+AG. They conducted various experiments on several datasets and showed that the results of RESCAL and DEDICOM outperform both CP and SUNS on all datasets.

Another application is the proposal of new ontology terms from the data to knowledge database engineers, as a decision support system to evolve an ontology by grouping/clustering instances. Out of the 87 predicate relations in YAGO 2 at the time, 38 were used to form a sparse tensor \(X^{3000417,3000417,38}\) with 41 million entries and a sparse attribute matrix \(D^{3000417,1138407}\) with 35.4 million entries. Of the \(4.3\times 10^{14}\) possible triplets in YAGO 2, only \(4\times 10^{7}\) non-zero entries were available. The authors compared RESCAL to other tensor factorisation and classical relational learning methods, showing that RESCAL is more efficient in predicting RDF triplets and other machine learning tasks (Nickel et al., 2012).
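A compact sketch of the CP-based discussion/topic extraction described above (in the spirit of the IRC and Enron analyses), using TensorLy's PARAFAC on a synthetic term x author x month count tensor; all sizes and the planted structure are illustrative assumptions, not the real datasets.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_terms, n_authors, n_months, n_topics = 200, 30, 12, 5

# Plant a few synthetic "discussions": each topic links some terms, some
# authors and a few active months, on top of sparse background noise.
X = rng.poisson(0.05, size=(n_terms, n_authors, n_months)).astype(float)
for _ in range(n_topics):
    terms = rng.choice(n_terms, 15, replace=False)
    authors = rng.choice(n_authors, 5, replace=False)
    months = rng.choice(n_months, 3, replace=False)
    X[np.ix_(terms, authors, months)] += rng.poisson(3.0, size=(15, 5, 3))

# CP/PARAFAC decomposition: each component is one candidate discussion topic
weights, (A, B, C) = parafac(tl.tensor(X), rank=n_topics, n_iter_max=200)

for topic in range(n_topics):
    top_terms = np.argsort(A[:, topic])[-5:][::-1]      # highest-scoring terms
    top_authors = np.argsort(B[:, topic])[-3:][::-1]    # highest-scoring authors
    active_months = np.argsort(C[:, topic])[-3:][::-1]  # months where the topic peaks
    print(topic, top_terms, top_authors, active_months)
```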
The authors of (Padia et al., 2016) further extended the RESCAL tensor decomposition method to the RESCAL+ approach by adding a similarity matrix to the minimisation algorithm to force slices of the relational tensor to decrease their differences from one another, achieving unequal contribution of the slices. They tested using the DBpedia-Person dataset, compared their link prediction performance against the original RESCAL method and its non-negative variant NN-RES, and showed that their method achieves higher AUC scores and diagonal confusion matrix scores.

The work of (Yang and Hospedales, 2017) presents a multi-task learning (MTL) representation learning approach using tensor factorisation (Tucker and TT) as a generalisation of matrix factorisation (such as PCA) to share knowledge across tasks in fully connected and convolutional DNN layers. They compare Single Task Learning (STL) against MTL, and user-defined representations against learned representations, on shallow and deep networks. The increased accuracy of learning the representation using tensor factorisation on deep layers is due to the end-to-end training of both the classifier and the feature extractor. There is also a generative model using Bayesian Tucker decomposition proposed by (Castellana and Bacciu, 2019), which is suitable for tree-structured data. The Markov model is an expressive model that grows in size and becomes intractable for practical problems; tensor factorisation enables the model to be a non-parametric Bayesian model as well. The authors of (Lacroix et al., 2020) applied CP decomposition to predict dynamic knowledge graph links. They created a 4-mode tensor of subject, predicate, object, and time, and proposed a new dataset for temporal knowledge graphs by parsing Wikipedia. They compared their model to other models on their proposed dataset and other datasets to show that their results are promising.

#### Image Processing:

Eigenfaces is an algorithm for face recognition that models several images of the same person as unfolded vectors stacked in a matrix, using X\(\times\)Y image matrices for P persons. Then, eigen decomposition using principal component analysis (PCA) is applied to reduce the dimensionality of the images and use only the uncorrelated variables. The result is the eigenface with the smallest Euclidean distance, i.e., the stored face that the query person resembles most (Turk and Pentland, 1991). The authors of (Vasilescu and Terzopoulos, 2002) pioneered the use of Tucker decompositions in computer vision to disentangle the multiple factors an image is composed of, such as scene structure, different facial geometries (people), expressions, head poses, lighting conditions, and imaging. They contributed TensorFaces, representing the tensor decomposition's cores as a set of facial components. They applied a HOSVD on a facial image dataset (512 x 352 images, decimated by a factor of 3 and cropped, yielding 7943 pixels) of 28 people x 5 poses x 3 illumination conditions x 3 facial expressions x 7943 pixels. This created a 5-mode tensor \(D^{28,5,3,3,7943}\) that is decomposed by HOSVD to \(D=Z\times_{1}U_{people}\times_{2}U_{views}\times_{3}U_{illumination}\times_{4}U_{expression}\times_{5}U_{pixels}\), such that \(U_{people}\in R^{28\times 28}\) spans the space of people's image parameters, \(U_{views}\in R^{5\times 5}\) spans the space of viewpoint parameters, and so forth for the remaining factor matrices.
Each factor matrix is computed as \(U_{n}=D_{(n)}V_{n}\Sigma^{+}\), where \(D_{(n)}\) is the mode-n flattening of \(D\); computing the SVD of \(D_{(n)}\) gives the right matrix \(V_{n}\) and the singular values in the diagonal matrix \(\Sigma\), with \(U_{n}\) being the left matrix of the SVD. This method generalises the eigenfaces method, as the factor matrix \(U_{pixels}\) contains the eigenimages. Solving for the core tensor \(Z\) captures the interactions of all modes considered in this experiment, as explained for the Tucker decomposition in section 2.1.1.2.

(Vasilescu, 2002) also applied the Tucker decomposition to human motion as a composite of multiple actions. The author had multiple aims: to extract a human movement signature as a subset of actions (analysis), to resynthesise new motions from the learned ones (synthesis), and to recognise a specific person or action (recognition). The author defined a tensor \(D^{N,M,T}\), where N is the number of people, M is the number of action classes, and T is the number of joint angle time samples, and applied the same N-mode SVD used in TensorFaces, \(D=Z\times_{1}P\times_{2}A\times_{3}J\). The people factor matrix P has N person-specific rows containing the human motion signatures. The action factor matrix A has M action-specific rows encoding action invariances across people. The last factor matrix J has T joint angle rows, the eigenmotions typically computed by PCA. The analysis is achieved by a change of basis: a person-associated motion basis is captured by \(C=Z\times_{1}P\times_{3}J\), and an action-associated motion basis by \(B=Z\times_{2}A\times_{3}J\). Synthesis is achieved through knowledge of the core tensor \(Z\) capturing all multi-way interactions, the factor matrix A generalising the actions, and the factor matrix J generalising the joint angles. Synthesising an action of a new person not seen before is a tensor completion or regression problem: predicting \(D_{p,a}\) of a new person p doing action a as \(D_{p,a}=B_{a}\times_{1}p^{T}\), where \(B_{a}=Z\times_{2}a_{a}^{T}\times_{3}J\) for the specific action a. If the aim is to synthesise a motion for a new individual, then \(p^{T}=d_{a}^{T}B_{a}^{-1}\), where \(d_{a}^{T}\) is the new motion data for the specific action a flattened along the people mode, and the complete set of motions for the new individual is \(D_{p}=B\times_{1}p^{T}\). If the aim is to synthesise a new action for a known person for whom we have other actions recorded, the new action is \(a^{T}=d_{p}^{T}C_{p}^{-1}\), and synthesising this action for all people in the database is \(D_{a}=C\times_{2}a^{T}\). This makes this tensor decomposition a generative model. The recognition task is achieved in this tensor decomposition by identifying a person from action parameters as the projection \(p=B_{a}^{-T}d\). Similarly, identifying a person's specific action is the projection \(a=C_{p}^{-T}d\). In both cases, the nearest neighbour algorithm returns the nearest person or action to the observed motion data d.

The previous multi-way analysis case studies either used the tensor decomposition technique for tensor completion and prediction, or as a pre-processing step for a traditional ML algorithm such as clustering; the data representation is done manually. Since the advances in ANNs from 2006 onwards, ANNs implicitly learn the representation.
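The N-mode SVD (HOSVD) used in the TensorFaces and motion-signature case studies above can be sketched in a few lines of NumPy; the tensor sizes below are tiny illustrative stand-ins for the real people x views x illuminations x expressions x pixels data.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n flattening D_(n): move axis `mode` to the front and reshape."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M, where M has shape (new_dim, T.shape[mode])."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def hosvd(D):
    """N-mode SVD: U_n are the left singular vectors of each unfolding,
    and the core is Z = D x_1 U_1^T x_2 U_2^T ... x_N U_N^T."""
    U = [np.linalg.svd(unfold(D, n), full_matrices=False)[0] for n in range(D.ndim)]
    Z = D
    for n, Un in enumerate(U):
        Z = mode_dot(Z, Un.T, n)
    return Z, U

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.normal(size=(6, 5, 3, 3, 50))   # people x views x illum. x expr. x pixels
    Z, U = hosvd(D)
    D_rec = Z
    for n, Un in enumerate(U):               # D = Z x_1 U_1 x_2 U_2 ... x_5 U_5
        D_rec = mode_dot(D_rec, Un, n)
    print(np.allclose(D, D_rec))             # True: exact with full mode ranks
```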
For example, traditionally images are converted using Fourier Transforms to the wave domain (change of basis), now, Deeper CNNs identify the different representation hierarchy based on the complexity of the images scene contents. This process studies the structural data interactions, forming an embedding representing the data that can be used for the given machine learning task at the output layer. The depth of the neural networks adds more parameters to estimate, causing the curse of dimensionality in big data analytics. It was observed that the weights within a layer in CNN can be estimated by a 5% subset of its parameters, indicating the DL models are over-parameterised (Denil et al., 2014). #### Deep Learning: There are three common network compression approaches that are useful for regularisation to prevent the network from overfitting. The first is adding pooling layers that are useful for regularisation as well. These layers apply some approximations through the layers by neglecting irrelevant contents by applying different pooling functions. Another approach is adding drop-out layers that randomly set some neurons to zero; this approach is called pruning. These are considered blind compression since they do this collectively on all neurons or randomly. Other methods work to learn which neurons drop out in a structured approach, such as (Fan, Grave and Joulin, 2019) and (Knodt, 2022). The third common ANN compression approach is Vector quantisation, which can be applied to the CNN parameters and storage requirements. These methods include binarisation (1-bit quantisation) by turning off neurons with negative values. Another Quantisation method uses lower precision, such as converting floating point types to integer types, compressing the network four times and speeding it up 2:4 times. Moreover, scalar quantisation uses k-means to cluster the weights and use representative neurons of each cluster. Also, Product quantisation divides the weights vector space into many disjoint subspaces and quantises them by raising them to different powers and, applying k-means on all and storing their cluster indices only. Finally, the residual quantisation performs clustering by k-means and then identifies the residuals to reapply the clustering on them (Gong _et al._, 2014). Other ANN compression approaches in the literature include HashedNets (Chen _et al._, 2015), which uses a hash function to group connection weights in hash buckets, and Layer Fusion (Graph Optimization) is another compression approach used by NVIDIA TensorRT compiler and ONNX Runtime cross-platform compiler. Various tensor factorisation algorithms have been applied to achieve NN compression. This approach requires tensorizing and decomposing the weight matrices into a series of low-rank tensors to reduce redundant weights by using sparse representations. The authors of (Novikov _et al._, 2015) replaced fully connected layers in CNN with a Tensor Train (TT-Layer) that they called TensorNet, which is compatible with the same training algorithm. TT Format is more immune to dimensionality curse and simpler for basic operations (addition and multiplication by a constant, summation and entry-wise tensor products, the sum of all elements and Frobenius norm) than Tucker and Hierarchical Tucker tensor decompositions. They proposed a mapping function between the weights metric and the TT-format and introduced the NN layer with weights stored in TT-format to be a TT-layer, which, when used in any NN, makes it TensorNet. 
A fully connected layer computes \(y=Wx+b\), while a TT-layer performs the same mapping with all quantities stored as tensors in TT-format, \(Y(i_{1},\ldots,i_{d})=\sum_{j_{1},\ldots,j_{d}}G_{1}(i_{1},j_{1})\cdots G_{d}(i_{d},j_{d})\,X(j_{1},\ldots,j_{d})+B(i_{1},\ldots,i_{d})\), where the \(G_{k}\) are the d core tensors of the TT-format of the original weight matrix, and \(Y\), \(X\) and \(B\) are the d-dimensional tensors formed from the corresponding vectors \(\mathsf{y}\) (dependent/target/outcome variable), \(\mathsf{x}\) (independent/predictor/feature variable(s)), and \(\mathsf{b}\) (bias), respectively. The paper details how the loss function can be computed in the TT-format. They achieved 200,000 times fewer parameters in the TT-layer and compressed the size of the whole network by a factor of 7 with a two-TT-layer TensorNet compared to a two-layer fully connected network on the MNIST and CIFAR-10 datasets, without compromising accuracy, with all TT-ranks equal to 8.

The work in (Calvi _et al._, 2020) introduced the Tucker Tensor Layer (TTL) as an alternative to the dense weight matrices of neural networks. They also showed how the number of parameters in the neural layer is reduced, while deriving forward- and back-propagation algorithms on tensors that preserve the physical interpretability of the Tucker decomposition and provide insight into the learning process of the layer. They achieved a 66.63% compression with 82.3% accuracy, compared to the 86.3% accuracy of the uncompressed model. The Hierarchical Tucker (HT) tensor decomposition method performs better in compressing the weight matrices in fully convolutional layers because HT prefers tensors with balanced dimension lengths, as shown in (Gabor and Zdunek, 2023). The authors experimented with medium-scale CNNs on the CIFAR-10 dataset and large-scale CNNs, such as VGG-16 and ResNet-50, on the ImageNet dataset. They compared HT-2 to other tensor factorisation and other NN compression approaches to show its competitiveness in the achieved compression without much drop in accuracy. A hybrid tensor decomposition combining TT and HT is proposed by (Wu _et al._, 2020). They compared the HT format to TT-LSTM (Yang, Krompass and Tresp, 2017) and TR-LSTM (Pan _et al._, 2019) to show that RNNs/LSTMs in the HT format achieve higher compression of the weight matrices than those in the TT format, but with worse accuracy than regular uncompressed RNNs, and that the TT format is more suitable for CNNs.

Earlier NLP work used bag-of-words representations; for example, (Socher _et al._, 2013) built a Recursive Neural Tensor Network (RNTN), a high-order neural network for structured data that leverages a full 3-way tensor for aggregating children's information in binary parse trees within a natural language processing application. The authors also explained the back-propagation algorithm for tensors and used AdaGrad for this non-convex optimisation. The authors in (Weber, Balasubramanian and Chambers, 2017) presented a natural language understanding application where tensors are used to capture multiplicative interactions combining predicate, object and subject, generating aggregated representations for event prediction tasks. The Predicate Tensor approach was better in one case at predicting words (which was always more accurate than predicting events), using hard similarity scores, i.e., the percentage of cases where the similar pair had higher cosine similarity than the dissimilar pair. The latest NLP advances are due to better encoding algorithms such as Word2Vec, GloVe and the Transformer-based models.
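The compression idea behind the TT-layer can be sketched with TensorLy by reshaping a dense fully connected weight matrix into a higher-order tensor and TT-decomposing it. The layer size, the simple reshaping (which ignores the row/column index pairing a real TT-matrix layer uses) and the TT-ranks are illustrative assumptions; in practice the cores are trained directly rather than obtained by decomposing random weights, which is why the reconstruction error printed here is large.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tensor_train

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024)) / 32.0       # dense FC weight matrix, ~1.05M params

# Reshape the matrix into a 4th-order tensor (1024 x 1024 = 32^4)
W_tensor = tl.tensor(W.reshape(32, 32, 32, 32))

# TT decomposition with modest ranks; the cores replace the dense matrix
tt = tensor_train(W_tensor, rank=[1, 16, 16, 16, 1])
tt_params = sum(core.size for core in tt.factors)
print('dense params:', W.size, 'TT params:', tt_params,
      'compression: %.1fx' % (W.size / tt_params))

# Forward pass with the (approximately) reconstructed weights
W_hat = tl.tt_to_tensor(tt).reshape(1024, 1024)
x = rng.normal(size=(1024,))
y = W_hat @ x                                  # y = W_hat x (bias omitted)
print('relative reconstruction error: %.3f'
      % (np.linalg.norm(W_hat - W) / np.linalg.norm(W)))
```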
The Transformers estimate a large number of parameters and can benefit from compression techniques such as parameter sharing across layers and low-rank approximations. These can be achieved by tensor decompositions methods such as Block-Term Tensor Decomposition (BTD), which is proposed by the authors of (Ma _et al._, 2019). BTD combines both CP decomposition and Tucker decomposition, such that a tensor is decomposed into P Tucker decomposition, each with its core tensor and d factor matrices, such that P is the CP rank. The authors first used Single-block attention based on the Tucker decomposition to use a linear function of a set of vectors. Then they built the multi-head attention using the BTD, enabling parameter sharing across multiple blocks, higher compression (8 times fewer parameters), and lower complexity. They tested using Penn Tree Bank (PTB), WikiText-103 and One-billion language modelling tasks, and English-German neural machine translation WMT-2016 to show that their method is more compressed and more accurate than Transformer, Transformer XL, TT-format tensor factorised Transformer model, and other models using RNN, LSTM, and others. Another generative DNN architecture model is the Restricted Boltzmann Machines (RBM) which estimates the probability distribution of various datasets. Mapping an RBM to the Tensor Networks States (TNS) has been successfully applied by (Chen _et al._, 2018). TNS has been applied to various problems in quantum-many-body physics. The physics communities refer to Tensor Chain (TC) decomposition as the Matrix Product State (MPS), which is a special case of the Hierarchical Tucker (HT) decomposition and the simplest TNS and equivalent to the TT format. They discussed other TNS models, adding more deep layers, and how the number of parameters does not increase while the model performance increases. ### Multi-Model and Graph Data: In multi-modal ANN models, a simple approach is concatenating the vectors or applying an element-wise sum or product between the different modalities. This will not capture complex interactions between the different modalities. Outer-Product methods are used to capture bilinear interactions between all elements of two vectors, such as an outer product q\(\otimes\)v between visual v and textual q embeddings. This will generate a massive number of parameters to estimate. Multi-modal Compact Bilinear pooling (MCB) uses FFT to compress further the outer product (Fukui _et al._, 2016). The authors in (Ben-younes _et al._, 2017) address the Visual Question Answering (VQA) task by using tensors to fuse visual and textual representations. They proposed a multi-modal tensor-based Tucker decomposition to capture the interactions between images and textual modalities with fewer parameters (compression) than other bilinear models. They compared the performance with other state-of-the-art models to show performance improvements. The work in (Li _et al._, 2020) created a multi-modal sentiment analysis (MSA) using the MOSI/CMU-MOSI dataset of the form (A, V, L), where A = {A\({}_{1}\),..., A\({}_{7}\)}, V = {V\({}_{1}\),..., V\({}_{7}\)} and L = {L\({}_{1}\),...,L\({}_{7}\)}, denote the time series of the length T w.r.t. the acoustic, visual and language data, respectively. They proposed Time Product Fusion Network (TPFN) that builds on the temporal tensor fusion network (T2FN). TPFN applies implicit outer product methods across sliding time windows to capture the model interaction across modalities in the data fusion phase. 
CP is the method for low-rank decomposition, and regularisation on the low-rank representation handles incomplete datasets. In (Hou _et al._, 2019), the authors addressed the Multi-modal sentiment analysis (MSA) problem by proposing a High-order polynomial tensor pooling (PTP). PTP concatenated features form a Tensor by tensor product operation of order P to represent all possible polynomial expansions up to order P. As P increases, so does the number of parameters to learn, but the higher polynomial interactions between tensors can be captured. Using CP decomposition, the weights' tensor is compressed. Then, a Hierarchical polynomial fusion network (HPFN) is formed, assuming a 2D feature map time series. HPFN recursively learn the local temporal modalities pattern by arranging PTP in multiple layers. This borrows many features from CNN, including receptive fields, sharing parameters, scanning window, and PTP 'fusion filters'. Graph Neural Networks (GNN) present a class of models that takes graph or network data structures as input, and it has been applied in DNN and using Tensor decomposition approaches. GNN is an active research topic and is almost reaching maturity, as presented by (Liu and Zhou, 2020). The authors (Kwon and Chung, 2022) proposed a recursive tensor decomposition method that is based on the CP decomposition by choosing orthogonal vectors in the SVD step, creating a decomposition tree. The work in (Hamdi and Angryk, 2019) presents tensor decomposition-based node embedding algorithms that learn node features from arbitrary types of graphs: undirected, directed, and/or weighted, without relying on computationally expensive eigen decomposition or requiring tuning of the word embedding-based hyperparameters as a result of representing the graph as a node sequence similar to the sentences in a document. The work in (Jermyn, 2019) presents tensor trees as efficient tensor computer representations based on both optimal brute force and greedy algorithm heuristic that performs well for higher-rank tensors tree decompositions. For graph transformation, graph-tensors proposed in (Malik _et al._, 2019) learn embeddings of time-varying graphs based on a tensor framework. There are also matrix networks proposed in (Sun _et al._, 2018) and graph tensor neural networks (Liu and Zhou, 2020). ### Python Packages Various Python packages implement multi-way Tensors and the algorithms based on them. Table 4 summarises the functions implemented by some famous packages in the literature. ## 5 Conclusion and Future Directions In this paper, we have conducted an extensive survey of multi-way analysis approaches, contrasting them with traditional linear algebra-based machine learning algorithms that rely on matrix-form pairwise relationships. The provided tensorization unveils the expressive power of multidimensional data for enhanced multiway analysis and its integration with deep neural networks. Building on the tutorial-style introduction presented by (Helal, 2023), we provided a comprehensive overview of tensor computing fundamentals. The accompanying Python code implementation served as a valuable resource for understanding the core algorithms, though it did not address advanced considerations such as vectorisation, parallelisation, and handling exceptional cases. One key finding from our experiments is the pressing need to generalise machine learning packages to accommodate datasets with varying dimensions, as dictated by specific application requirements. 
Currently, most ML frameworks are built around traditional matrix representations or 2D/3D tensors in the context of convolutional neural networks (CNNs), treating batches as an additional dimension when necessary. As such, existing multi-way analysis implementations suffer from limitations in this regard. Another key finding is the lack of standardisation for sparse tensors and the suitable adaptation required from ML and DL algorithms stacks. We explored various tensorization techniques, including slow-paced tensorization by adding coordinates, choosing basis functions, segmentation, and dataset merging, and surveyed both \begin{table} \begin{tabular}{|l|l|} \hline Package Name & Functionality \\ \hline Tensorly (Kossaifi _et al._, 2019) & Various Tensor decomposition, such as CP and Tucker, tensor regression, a Tensor Regression Layer (TRL), FC layer using TT format \\ \hline scikit-tt(Gel8, 2022) & Various Tensor decomposition, TT decomposition, tensor regression \\ \hline HOTTBOX(Kisil _et al._, 2021) & Various Tensor decomposition \\ \hline scikit-tensor(Nickel, 2013) & Tensor decomposition such as INdividual Differences in SCALing (INDSCAL) CP, Tucker, DEDICOM, and RESCAL \\ \hline trtecipes(Ballester-Ripoll and Paredes, 2022) & Tensor regression \\ \hline T3F(Novikov _et al._, 2020) & Tensor Completion, Tensor Train decomposition for neural networks (NN) \\ \hline TT\_RNN(Yang, Krompass and Tresp, 2017) & A Tensorial RNN, FC, Simple RNN, LSTM and GRU using PyTorch \\ \hline TensorNet-TF (Garipov et al., no date) & Tensorised FC layer and CNN layer \\ \hline Spektrl(Grattorola and Alippi, 2020) & Different GNN models \\ \hline Deep Graph Library (DGL) (Wang et al., 2019) & Different GNN models \\ \hline PyG (Fey and Lenssen, 2019) & PyTorch geometric is another GNN framework \\ \hline TensorD (Hao et al., 2018) & Python tensor library built on Tensorflow with basic tensor operations and decompositions supporting parallel computation (e.g. GPU). \\ \hline Tednet (Pan, Wang and Xu, 2022) & Various neural network layer types compressed using different tensor decompositions, such as compressing an RNN layer using TR decomposition (TR\_RNN). They support ResNet Layers, LSTM Layers, CNN, and Linear Layers, among others \\ \hline Tensortools (Williams et al., 2018). & Time-shifted CP decomposition \\ \hline \end{tabular} \end{table} Table 4: Python Packages implementing Tensor methods deterministic and statistical methods. Our examination of applications across diverse domains elaborated further in (Hetal, 2023) underscores that multi-modal deep neural networks (DNNs) and graph neural networks (GNNs) emerge as the primary beneficiaries of tensor-based approaches. The ongoing group theoretic research outcomes and its promising direction of generalising machine learning algorithms over various mathematical structures and algebras motivate further research and development of stacks of optimised implementations. The benefits of adopting multi-way analysis techniques are manifold: they facilitate enhanced representation of complex relationships, feature extraction from tensorized data, efficient data compression (parameter reduction), accurate prediction and classification, improved generalisation and robustness, interpretability and explainability, as well as scalability through efficient memory and processing utilisation. However, these advantages come with their own set of challenges. Data preprocessing to conform to the required tensor form remains a formidable task. 
Moreover, the lack of standardisation in tensor data structures across Python packages poses an obstacle to seamless integration. Historically, Basic Linear Algebra Subprograms (BLAS) libraries have played a vital role in optimising numerical recipes for solving equations and various linear algebra computations. BLAS started with vector operations only, then matrix operations, then matrix-matrix operations. Recently, Tensor-tensor operations BLAS for the fourth-order tensors were implemented (Liu and Wang, 2017), including tensor (Kronecker) product, KhatriRao product, Hadamard product, tensor contraction, t-product, or L-product. A similar optimisation level is needed for tractable tensor-tensor operations on variable-order tensors. This also needs to be interoperable between different deep-learning frameworks and parallel hardware platforms. Tensorised NN Frameworks can be built with all Tensorised Layer types, tensorised activation functions, and tensorised forward and backward propagation algorithms such as the SGD with DMRG algorithms, AutoDiff (Paszke et al., 2017) and DDSP (differentiable digital signal processing) (Engel et al., 2020) Looking ahead, the development of tensorized neural network frameworks should encompass a wide range of tensorized layer types, activation functions, and forward/backward propagation algorithms. Section 4 of this paper illustrated various case studies on tensorizing neural networks at different stages of the architecture, but ongoing research is required to determine the optimal number and type of tensorizations and their collective performance impact. The choice of tensor decomposition techniques, such as CP, Tucker, HT, TT, or others, should be further investigated to strike a balance between compression and performance enhancement, as well as interpretability across different models and applications. Establishing standardised benchmarks with clear metrics will facilitate the comparison of future proposals. Lastly, the generalisation of graph neural networks (GNNs) and dynamic graph networks (DGNs) to handle datasets with unconstrained topologies, such as hypergraphs with hierarchical connections, represents a promising research avenue. Systematic evaluations of expressiveness, achieved through tensorization, can elucidate the performance of newly proposed models (Errica, Bacciu and Micheli, 2020). Additionally, exploring time-evolving graphs, temporal-spatial learning, or online learning using tensorized models on graphs promises exciting directions for enhancing and evaluating tensorization's impact on compression, expressiveness, and accuracy trade-offs. In summary, this paper comprehensively explores tensorization and its potential to revolutionise deep learning models and multi-way analysis. Addressing the challenges and further investigating these methodologies will pave the way for more efficient, expressive, and adaptable machine learning systems as we move forward.
2303.00264
Distance-based Weight Transfer from Near-field to Far-field Speaker Verification
The scarcity of labeled far-field speech is a constraint for training superior far-field speaker verification systems. Fine-tuning the model pre-trained on large-scale near-field speech substantially outperforms training from scratch. However, the fine-tuning method suffers from two limitations--catastrophic forgetting and overfitting. In this paper, we propose a weight transfer regularization(WTR) loss to constrain the distance of the weights between the pre-trained model with large-scale near-field speech and the fine-tuned model through a small number of far-field speech. With the WTR loss, the fine-tuning process takes advantage of the previously acquired discriminative ability from the large-scale near-field speech without catastrophic forgetting. Meanwhile, we use the PAC-Bayes generalization theory to analyze the generalization bound of the fine-tuned model with the WTR loss. The analysis result indicates that the WTR term makes the fine-tuned model have a tighter generalization upper bound. Moreover, we explore three kinds of norm distance for weight transfer, which are L1-norm distance, L2-norm distance and Max-norm distance. Finally, we evaluate the effectiveness of the WTR loss on VoxCeleb (pre-trained dataset) and FFSVC (fine-tuned dataset) datasets.
Li Zhang, Qing Wang, Hongji Wang, Yue Li, Wei Rao, Yannan Wang, Lei Xie
2023-03-01T06:38:02Z
http://arxiv.org/abs/2303.00264v2
# Distance-based Weight Transfer for Fine-tuning from Near-Field to Far-Field Speaker Verification ###### Abstract The scarcity of labeled far-field speech is a constraint for training superior far-field speaker verification systems. In general, fine-tuning the model pre-trained on large-scale near-field speech through a small amount of far-field speech substantially outperforms training from scratch. However, the vanilla fine-tuning suffers from two limitations - _catastrophic forgetting_ and _overfitting_. In this paper, we propose a weight transfer regularization (WTR) loss to constrain the distance of the weights between the pre-trained model and the fine-tuned model. With the WTR loss, the fine-tuning process takes advantage of the previously acquired discriminative ability from the large-scale near-field speech and avoids catastrophic forgetting. Meanwhile, the analysis based on the PAC-Bayes generalization theory indicates that the WTR loss makes the fine-tuned model have a tighter generalization bound, thus mitigating the overfitting problem. Moreover, three different norm distances for weight transfer are explored, which are L1-norm distance, L2-norm distance, and Max-norm distance. We evaluate the effectiveness of the WTR loss on VoxCeleb (pre-trained) and FFSVC (fine-tuned) datasets. Experimental results show that the distance-based weight transfer fine-tuning strategy significantly outperforms vanilla fine-tuning and other competitive domain adaptation methods. Li Zhang\({}^{1}\), Qing Wang\({}^{1}\), Hongji Wang\({}^{2}\), Yue Li\({}^{1}\), Wei Rao\({}^{2}\), Yannan Wang\({}^{2}\), Lei Xie\({}^{1,*}\)+\({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University (NPU), Xi'an, China \({}^{2}\)Tencent Ethereal Audio Lab, Tencent Corporation, Shenzhen, China weight transfer, fine-tuning, far-field speaker verification Footnote †: Corresponding author. ## 1 Introduction Speaker verification (SV) is a task to authenticate a speaker's identity given a small amount of speech from that speaker [1]. In recent years, deep learning has shown remarkable success in SV tasks, but current methods often rely on a large amount of labeled training speech [2, 3]. The performance of SV systems degrades significantly in far-field conditions due to attenuated signals, noise interference as well as the rareness of far-field datasets [4, 5]. In general, far-field speech datasets are relatively small in size and insufficient to train decent SV models from scratch. Therefore, near-field datasets are generally leveraged for training to improve the discriminative ability of SV systems [6, 7]. However, there is a mismatch problem between near-field speech and far-field speech, so here transfer learning methods are necessary to transfer the SV model from near-field to far-field scenarios. In recent SV research, there are four kinds of transfer learning methods typically used to address domain mismatch problems. The first one is domain adversarial training generally formulated as a min-max problem, where adversarial strategies [8, 9] are used to confuse speaker encoders learning domain-invariant speaker representation. The second strategy is based on the back-end process of SV models. Unsupervised PLDA [10] and CORAL+ PLDA [11] are proposed to adapt the between-class and within-class covariance matrices of PLDA models [12] in in-domain datasets. 
The third one is feature distribution alignment, including aligning the distribution between source and target domains [13] and feature mapping with distance-based metric losses [14]. The last one is the simple fine-tuning strategy, which is a common and effective transfer learning method [6, 7]. In this paper, we mainly focus on the fine-tuning strategy, which leverages large-scale near-field speech to improve the performance of SV systems in far-field scenarios. Compared with training from scratch, fine-tuning a pre-trained neural network using a far-field dataset can significantly improve performance while reducing the far-field labeled speech requirements [6, 7]. However, vanilla fine-tuning just initializes the weights of the target model with those of the pre-trained model, without considering the catastrophic forgetting and overfitting problems. To solve the above problems, we propose a weight transfer regularization (WTR) loss to constrain the distance between the weights of the pre-trained model and those of the fine-tuned model. In addition, we analyze the generalization bound of fine-tuning with the WTR loss by PAC-Bayes theory [15], which is proved to correlate better with empirical performance compared with several popular generalization theories [16]. The analysis of the PAC-Bayes generalization bound shows that the bound is tighter with the WTR loss, which limits the distance between the weights of the pre-trained and fine-tuned models, than without any constraint. Furthermore, we explore three different norm distances in the WTR loss. Experimental results on the VoxCeleb and FFSVC datasets further demonstrate the effectiveness of the WTR loss in fine-tuning.

## 2 Weight Transfer for Fine-tuning

### Weight Transfer Regularization

The speaker verification framework mainly consists of two modules: the speaker embedding extractor \(f_{E}\) and the classifier \(f_{C}\). During fine-tuning, the learnable weights of the pre-trained speaker embedding extractor are used to initialize the soon-to-be-finetuned model in the first epoch. The classifier layer needs to be trained from scratch since different datasets contain different numbers of speakers. Suppose the learnable weights of the pre-trained and fine-tuned speaker embedding extractors are \(W^{s}=[W^{s}_{1},W^{s}_{2},W^{s}_{3},...,W^{s}_{l},...,W^{s}_{L}]\) and \(W^{t}=[W^{t}_{1},W^{t}_{2},W^{t}_{3},...,W^{t}_{l},...,W^{t}_{L}]\), where \(L\) is the number of layers in the speaker embedding extractor. The data space of the large-scale pre-training speech is \(D_{s}=\{(x^{s}_{i},y^{s}_{i})\}\sim\mathcal{P}(s)\), consisting of \(n_{s}\) labeled samples, and the fine-tuning far-field dataset is \(D_{t}=\{(x^{t}_{i},y^{t}_{i})\}\sim\mathcal{P}(t)\) with \(n_{t}\) samples. In the fine-tuning process, only \(D_{t}\) is available and \(D_{s}\) is unavailable. The speaker prediction error \(L^{(t)}\left(f_{C},f_{E}\right)\) is measured by the speaker classification loss AAMSoftmax [17] together with the WTR loss during fine-tuning. The speaker embedding extractor in fine-tuning is initialized by the weights of the pre-trained model.
The total loss for fine-tuning the SV system is formulated as:

\[\mathcal{L}^{(t)}\left(f_{C},f_{E}\right)=\mathop{\mathbb{E}}_{(x^{t}_{i},y^{t}_{i})\sim\mathcal{P}(t)}\left[f_{C}\left(f_{E}(x^{t}_{i}),y^{t}_{i}\right)\right]+\alpha\sum_{l=1}^{L}\left\|W^{t}_{j,l}-W^{s}_{l}\right\|_{\pi}, \tag{1}\]

where \(\pi\) represents different norm distances and \(\alpha\) is a trade-off hyper-parameter between the speaker classification loss and the WTR loss. \(W^{t}_{j,l}\) denotes the learnable weights of the \(l\)-th layer at the current \(j\)-th fine-tuning epoch, and \(W^{t}_{j}\) denotes all learnable weights at that epoch.

### Generalization Analysis of Weight Transfer

To prove that the WTR loss mitigates the overfitting of the fine-tuned model, we use the PAC-Bayes generalization theory [18] to show that limiting the distance between the weights of the pre-trained and fine-tuned models yields a tighter generalization upper bound. The PAC-Bayes framework [18] provides generalization guarantees for randomized inference predictors. Suppose the prior distribution of the pre-trained weights \(W^{s}\) is \(\mathcal{P}(s)\), which is independent of the far-field speech dataset. The posterior distribution of the fine-tuned weights \(W^{t}\) is \(\mathcal{P}(t)\), which depends on the far-field training dataset. The PAC-Bayes theorem states that with probability at least \(1-\delta\) (\(\delta\in(0,1)\)) over the training data, the expected error of speaker classification can be bounded as follows [18]:

\[\mathop{\mathbb{E}}_{f\sim\mathcal{P}(t)}[\mathcal{L}(f)]\leq\mathop{\mathbb{E}}_{f\sim\mathcal{P}(t)}\left[\mathcal{L}\left(f_{C}\left(f_{E}(x^{t}_{i}),y^{t}_{i}\right)\right)\right]+C\sqrt{\frac{\text{KL}\left(\mathcal{P}(t)\|\mathcal{P}(s)\right)+3\ln\frac{n}{\delta}+8}{n}}, \tag{2}\]

where \(C\) is the bound of the loss function and \(n\) is the number of far-field samples. Following the conclusions on the normal distribution of weights in convolutional neural networks from previous work [19], we set the prior distribution to \(\mathcal{P}(s)=N(W^{s},\sigma^{2}Id)\), where \(W^{s}\) are the weights of the pre-trained network. The posterior distribution \(\mathcal{P}(t)\) is centered at the fine-tuned model as \(N(W^{t},\sigma^{2}Id)\). We expand the KL divergence using the density of multivariate normal distributions as

\[\begin{split}\text{KL}\left(\mathcal{P}(t)\|\mathcal{P}(s)\right)&=\mathop{\mathbb{E}}_{W\sim\mathcal{P}(t)}\left[\log\left(\frac{\Pr\left(W\mid\mathcal{P}(t)\right)}{\Pr\left(W\mid\mathcal{P}(s)\right)}\right)\right]\\ &=\mathop{\mathbb{E}}_{W\sim\mathcal{P}(t)}\left[\log\frac{\exp\left(-\frac{1}{2\sigma^{2}}\left\|W-W^{t}\right\|^{2}\right)}{\exp\left(-\frac{1}{2\sigma^{2}}\left\|W-W^{s}\right\|^{2}\right)}\right]\\ &=\frac{1}{2\sigma^{2}}\mathop{\mathbb{E}}_{W\sim\mathcal{P}(t)}\left[\left\|W-W^{s}\right\|^{2}-\left\|W-W^{t}\right\|^{2}\right]\\ &=\frac{1}{2\sigma^{2}}\mathop{\mathbb{E}}_{W\sim\mathcal{P}(t)}\left[\left\langle W^{t}-W^{s},2W-W^{s}-W^{t}\right\rangle\right]\\ &=\frac{1}{2\sigma^{2}}\left\|W^{t}-W^{s}\right\|^{2},\end{split} \tag{3}\]

where \(W\) denotes weights drawn from the posterior \(\mathcal{P}(t)\). In Eq. 2, there are two variable terms that determine the generalization upper bound: the classification error and the KL divergence between \(\mathcal{P}(t)\) and \(\mathcal{P}(s)\). The classification error is supervised by the classification loss AAMSoftmax [17] and the speaker labels. By the proof of Eq.
3, the magnitude of the KL divergence is positively related to the difference between the weights of the pre-trained and fine-tuned models. From the above proof, we can conclude that fine-tuning with the WTR loss, which constrains the weight distance between the pre-trained model and the fine-tuned model, gives the fine-tuned model a tighter generalization upper bound. In other words, fine-tuning with the WTR loss mitigates the overfitting problem.

### Distance-based Weight Transfer

To limit the weight difference between the pre-trained model and the fine-tuned model, we further explore three kinds of norm distance in the WTR loss: the L1-norm distance, the L2-norm distance, and the Max-norm distance.

#### 2.3.1 L1-norm Distance-based WTR

The L1-norm distance is calculated as the sum of the absolute values of the differences between the weights of the pre-trained and fine-tuned models. The L1-norm-based WTR loss is formulated as:

\[||W_{j}^{t}-W^{s}||_{\pi}=||W_{j}^{t}-W^{s}||_{1}, \tag{4}\]

where \(W_{j}^{t}\) are the weights of the fine-tuned model at the \(j\)-th epoch and \(W^{s}\) are the weights of the pre-trained model.

#### 2.3.2 L2-norm Distance-based WTR

The L2 norm is the square root of the sum of the squared differences between the weights of the pre-trained and fine-tuned models; the WTR loss uses its square:

\[||W_{j}^{t}-W^{s}||_{\pi}=||W_{j}^{t}-W^{s}||_{2}^{2}, \tag{5}\]

where \(W_{j}^{t}\) are the weights of the fine-tuned model at the \(j\)-th epoch and \(W^{s}\) are the weights of the pre-trained model.

#### 2.3.3 Max-norm Distance-based WTR

The Max-norm distance is the largest of the absolute values of all elements in the difference matrix between the weights of the pre-trained and fine-tuned models. The Max-norm-based WTR loss is formulated as:

\[||W_{j}^{t}-W^{s}||_{\pi}=\|W_{j}^{t}-W^{s}\|_{\infty}=\max\left(\left|(W_{j}^{t}-W^{s})_{1}\right|,\ldots,\left|(W_{j}^{t}-W^{s})_{n}\right|\right), \tag{6}\]

where \(n\) denotes the number of elements in \(W_{j}^{t}-W^{s}\).

## 3 Experimental Setup

### Datasets

We conduct experiments on the VoxCeleb (1&2) [20] and FFSVC [21, 22] datasets. VoxCeleb (1&2) is the large-scale pre-training dataset, and FFSVC 2020 and FFSVC 2022 are the two in-domain far-field datasets. We test on two trial lists: the development trials of FFSVC 2022 and the development trials of task 2 of FFSVC 2020. Note that we only use single-channel test utterances (recorded by channel 2) in the FFSVC 2020 trials. The development trials of FFSVC 2022 contain utterances recorded by iPad and iPhone, so we select the iPhone- and iPad-recorded speech in FFSVC as the training set for the FFSVC 2022 trials. Meanwhile, we select the iPhone- and channel 2-recorded data as the training set for the FFSVC 2020 trials.

### Training Details

In this paper, the structure of the speaker verification model is ECAPA-TDNN (1024) [3]. The loss function is additive angular margin softmax (AAM-softmax) [17] with a margin of 0.2 and a scale of 30. The speaker embedding models are trained with 80-dimensional log Mel-filterbank features with a 25 ms window size and a 10 ms window shift. In the pre-training process, the weight decay is set to 2e-5. The Adam optimizer with a cyclical learning rate varying between 1e-8 and 1e-3 following the triangular policy [23] is used for pre-training, and the pre-trained model is trained for 12 epochs. Finally, all models are evaluated on the FFSVC 2020 and FFSVC 2022 test sets to find the best models.
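A minimal PyTorch-style sketch of the WTR term in Eqs. 4-6 combined with the classification loss as in Eq. 1; the model, AAM-softmax module, optimizer and data batch are assumed to exist and are named here only for illustration, not taken from the authors' implementation.

```python
import torch

def wtr_loss(model, pretrained_state, norm='l2'):
    """Weight transfer regularization: distance between the current weights W_j^t
    and the pre-trained weights W^s, accumulated over the shared layers."""
    total = 0.0
    for name, w_t in model.named_parameters():
        if name not in pretrained_state:       # e.g. the newly initialized classifier
            continue
        diff = w_t - pretrained_state[name].to(w_t.device)
        if norm == 'l1':
            total = total + diff.abs().sum()       # Eq. 4
        elif norm == 'l2':
            total = total + (diff ** 2).sum()      # Eq. 5 (squared L2)
        elif norm == 'max':
            total = total + diff.abs().max()       # Eq. 6
        else:
            raise ValueError(norm)
    return total

def fine_tune_step(model, aam_softmax, batch, pretrained_state,
                   optimizer, alpha=0.01, norm='l2'):
    """One hypothetical fine-tuning step implementing the total loss of Eq. 1."""
    feats, labels = batch
    embeddings = model(feats)                   # speaker embedding extractor f_E
    cls_loss = aam_softmax(embeddings, labels)  # classification loss (AAM-softmax) f_C
    loss = cls_loss + alpha * wtr_loss(model, pretrained_state, norm=norm)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# pretrained_state would be a frozen copy of the pre-trained extractor, e.g.:
# pretrained_state = {k: v.detach().clone() for k, v in pretrained_model.state_dict().items()}
```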
In the fine-tuning step, the neurons of the classification layer in the speaker verification model are modified to match the number of speakers in the far-field speech data, and the learning rate varies between 1e-8 and 1e-4. The other configurations are the same as in the pre-training process. In both the pre-training and fine-tuning steps, we adopt online data augmentation, which includes adding noise [24], adding reverberation [25] and SpecAug [26]. The hyperparameter \(\alpha\) in Eq. 1 is set to 0.01. ### Comparison Methods We compare the WTR fine-tuning method with several other competitive domain adaptation methods. They are listed in the following: * Wasserstein Domain Adversarial Training (Wasserstein DAT) [9]: The authors introduce an end-to-end domain adversarial method based on the Wasserstein distance to mitigate the language mismatch problem in the SV task. * Unsupervised PLDA [10]: Daniel et al. use the out-of-domain PLDA system to cluster unlabeled in-domain speech, and then use the in-domain data to adapt the parameters of the PLDA system. * CORAL+ PLDA [11]: Kong Aik et al. propose CORAL+ to compute the pseudo in-domain within-class and between-class covariance matrices to regularize the corresponding matrices of PLDA. * MMD Transfer Learning [13]: This work [13] introduces a DNN-based adaptation method using maximum mean discrepancy (MMD). When training the compared PLDA-based methods, we randomly select 1 million utterances from VoxCeleb (1&2) and the FFSVC dataset to train the initial PLDA, and then only use the training set of FFSVC 2020 or FFSVC 2022 to train the adapted PLDA. Moreover, we reproduce the rest of the above methods on the VoxCeleb (1&2), FFSVC 2020, and FFSVC 2022 datasets. ### Scoring Criterion In the test phase, we use cosine similarity as the scoring criterion. The performance metrics are the equal error rate (EER) and the minimum detection cost function (minDCF) [27], which is evaluated with \(P_{target}=0.01\), \(C_{miss}=C_{fa}=1\). ## 4 Experimental Results and Analysis ### Results of Different Distance-based WTR The experimental results of the different distance-based WTR losses on FFSVC 2020 and FFSVC 2022 are listed in Table 1. From Table 1, we can observe that the EER/minDCF of the pre-trained model on FFSVC 2020 and FFSVC 2022 are 9.817%/0.814 and 9.849%/0.731. After vanilla fine-tuning, the EER/minDCF are reduced by 2.382%/0.100 and 1.808%/0.028 respectively, which illustrates that fine-tuning transfers the near-field model to the far-field speech to a certain extent. With the help of the L2-norm distance-based WTR loss, the EER/minDCF are further reduced by 1.548%/0.041 and 1.339%/0.143 compared with the results of vanilla fine-tuning on FFSVC 2020 and FFSVC 2022. As shown in Table 1, fine-tuning with the L2-norm-based WTR loss obtains the lowest EER/minDCF on FFSVC 2020 and FFSVC 2022. Each norm distance for the WTR loss consistently outperforms vanilla fine-tuning, demonstrating that the distance constraint between the weights of the pre-trained and fine-tuned models not only keeps the transferability of vanilla fine-tuning but also alleviates the overfitting problem of the fine-tuned model.
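For reference, the cosine scoring and EER computation of Section 3.4 that underlie the numbers above can be sketched as below. This is a minimal NumPy sketch; the threshold sweep is a simple approximation and the toy trial lists are made up, so it is not the exact evaluation code used in this work.

```python
import numpy as np

def cosine_score(emb1, emb2):
    """Cosine similarity between two speaker embeddings."""
    emb1 = emb1 / np.linalg.norm(emb1)
    emb2 = emb2 / np.linalg.norm(emb2)
    return float(np.dot(emb1, emb2))

def equal_error_rate(scores, labels):
    """Approximate EER from trial scores and labels (1 = target, 0 = non-target)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]
    n_tar, n_non = labels.sum(), (labels == 0).sum()
    # threshold sweep: reject everything at or below scores[k]
    fnr = np.cumsum(labels) / n_tar                  # miss rate
    fpr = 1.0 - np.cumsum(labels == 0) / n_non       # false-alarm rate
    k = np.argmin(np.abs(fnr - fpr))
    return 0.5 * (fnr[k] + fpr[k])

# toy trials: higher scores for target pairs
scores = [0.82, 0.75, 0.31, 0.64, 0.22, 0.58]
labels = [1, 1, 0, 1, 0, 0]
print(equal_error_rate(scores, labels))
```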
### Analysis of Vanilla and WTR Fine-tuning To show that weight transfer can avoid overfitting, we illustrate the loss and EER values during vanilla fine-tuning and fine-tuning with the L2-norm-based WTR loss in Fig. 1. Fig. 1 (a) shows the trend of the loss and EER values over the vanilla fine-tuning epochs. Although the loss function is decreasing (the empirical error of the model is decreasing), the EER of the model is actually increasing (the generalization error keeps increasing). The results indicate that it is very easy to overfit during fine-tuning. In addition, the weights of the pre-trained speaker verification model are only used to initialize the fine-tuned model in the first epoch and there are no other constraints during fine-tuning, so the fine-tuned model is prone to forgetting the discriminability learned from the large-scale near-field datasets. Compared with Fig. 1 (a), Fig. 1 (b) shows the trend of the loss and EER values over the fine-tuning epochs with the help of the WTR loss. In Fig. 1 (b), the EER and the losses on the training and validation sets change with the same trend. Specifically, as the number of training epochs increases, the EER and loss decrease until they stabilize at the 20th epoch. Therefore, the analysis further shows that WTR mitigates the overfitting of the model during fine-tuning. ### Comparison Results with Other Competitive Methods We compare the performance of the WTR loss with other competitive domain adaptation methods in the SV task. The experimental results are shown in Table 2. We compare the WTR method with unsupervised PLDA adaptation, CORAL+ PLDA, Wasserstein DAT and MMD feature distribution alignment. As shown in Table 2, our proposed L2-norm distance-based WTR method outperforms all the compared domain adaptation methods. ## 5 Conclusion In this paper, we propose a weight transfer regularization (WTR) loss to address the catastrophic forgetting and overfitting problems in fine-tuning far-field speaker verification models. Specifically, the WTR term limits the weight distance between the pre-trained model and the fine-tuned model. We also explore three kinds of norm distance in the WTR loss, namely the L1-norm, L2-norm and Max-norm. Moreover, we justify the generalization capacity of the fine-tuned model under the WTR constraint with PAC-Bayes generalization theory. Experimental results and analysis on the FFSVC 2020 and FFSVC 2022 datasets demonstrate the effectiveness of the WTR term in alleviating the overfitting and catastrophic forgetting problems during model fine-tuning. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{FFSVC 2020} & \multicolumn{2}{c}{FFSVC 2022} \\ \cline{2-5} & EER (\%) & minDCF & EER (\%) & minDCF \\ \hline Pre-trained Model & 9.817 & 0.814 & 9.849 & 0.731 \\ \hline Vanilla Fine-tuning & 7.435 & 0.714 & 8.041 & 0.703 \\ \hline + WTR (L1-norm) & 7.234 & 0.698 & 7.122 & 0.598 \\ \hline + WTR (L2-norm) & **5.887** & **0.673** & **6.702** & **0.560** \\ \hline + WTR (Max-norm) & 6.478 & 0.698 & 7.088 & 0.615 \\ \hline \hline \end{tabular} \end{table} Table 1: EER/minDCF (p=0.01) of fine-tuning with different distance-based WTR losses. Figure 1: The trend of EER/loss values as the number of fine-tuning epochs increases. The red dashed/solid line is the training/validation loss on FFSVC 2020. The blue dashed/solid line is the training/validation loss on FFSVC 2022. (a) vanilla fine-tuning, (b) WTR fine-tuning.
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{FFSVC 2020} & \multicolumn{2}{c}{FFSVC 2022} \\ \cline{2-5} & EER (\%) & minDCF & EER (\%) & minDCF \\ \hline Unsupervised PLDA [10] & 8.763 & 0.744 & 8.211 & 0.742 \\ \hline CORAL+ PLDA [11] & 7.435 & 0.714 & 7.837 & 0.724 \\ \hline Wasserstein DAT [9] & 8.433 & 0.778 & 9.136 & 0.715 \\ \hline MMD [13] & 7.335 & 0.725 & 7.503 & 0.619 \\ \hline **WTR (L2-norm)** & **5.887** & **0.673** & **6.702** & **0.560** \\ \hline \hline \end{tabular} \end{table} Table 2: EER/minDCF (p=0.01) of other competitive methods.
2307.12397
Performance Comparison Between VoLTE and non-VoLTE Voice Calls During Mobility in Commercial Deployment: A Drive Test-Based Analysis
The optimization of network performance is vital for the delivery of services using standard cellular technologies for mobile communications. Call setup delay and User Equipment (UE) battery savings significantly influence network performance. Improving these factors is vital for ensuring optimal service delivery. In comparison to traditional circuit-switched voice calls, VoLTE (Voice over LTE) technology offers faster call setup durations and better battery-saving performance. To validate these claims, a drive test was carried out using the XCAL drive test tool to collect real-time network parameter details in VoLTE and non-VoLTE voice calls. The findings highlight the analysis of real-time network characteristics, such as the call setup delay calculation, battery-saving performance, and DRX mechanism. The study contributes to the understanding of network optimization strategies and provides insights for enhancing the quality of service (QoS) in mobile communication networks. Examining VoLTE and non-VoLTE operations, this research highlights the substantial energy savings obtained by VoLTE. Specifically, VoLTE saves approximately 60.76% of energy before the Service Request and approximately 38.97% of energy after the Service Request. Moreover, VoLTE to VoLTE calls have a 72.6% faster call setup delay than non-VoLTE-based LTE to LTE calls, because of fewer signaling messages required. Furthermore, as compared to non-VoLTE to non-VoLTE calls, VoLTE to non-VoLTE calls offer an 18.6% faster call setup delay. These results showcase the performance advantages of VoLTE and reinforce its potential for offering better services in wireless communication networks.
Rashed Hasan Ratul, Muhammad Iqbal, Jen-Yi Pan, Mohammad Mahadi Al Deen, Mohammad Tawhid Kawser, Mohammad Masum Billah
2023-07-23T18:22:51Z
http://arxiv.org/abs/2307.12397v1
Performance Comparison Between VoLTE and non-VoLTE Voice Calls During Mobility in Commercial Deployment: A Drive Test-Based Analysis ###### Abstract The optimization of network performance is vital for the delivery of services using standard cellular technologies for mobile communications. Call setup delay and User Equipment (UE) battery savings significantly influence network performance. Improving these factors is vital for ensuring optimal service delivery. In comparison to traditional circuit-switched voice calls, VoLTE (Voice over LTE) technology offers faster call setup durations and better battery-saving performance. To validate these claims, a drive test was carried out using the XCAL drive test tool to collect real-time network parameter details in VoLTE and non-VoLTE voice calls. The findings highlight the analysis of real-time network characteristics, such as the call setup delay calculation, battery-saving performance, and DRX mechanism. The study contributes to the understanding of network optimization strategies and provides insights for enhancing the quality of service (QoS) in mobile communication networks. Examining VoLTE and non-VoLTE operations, this research highlights the substantial energy savings obtained by VoLTE. Specifically, VoLTE saves approximately 60.76% of energy before the Service Request and approximately 38.97% of energy after the Service Request. Moreover, VoLTE to VoLTE calls have a 72.6% faster call setup delay than non-VoLTE-based LTE to LTE calls, because of fewer signaling messages required. Furthermore, as compared to non-VoLTE to non-VoLTE calls, VoLTE to non-VoLTE calls offer an 18.6% faster call setup delay. These results showcase the performance advantages of VoLTE and reinforce its potential for offering better services in wireless communication networks. VoLTE, LTE, XCAL, Drive Test, Call Setup Delay, DRX Cycle, Battery Saving, Cellular Communication. ## I Introduction In the rapidly evolving mobile communication sector, customers usually demand uninterrupted, efficient, and consistent service, especially with regard to call setup duration and battery savings. The need for constant and faster connectivity to operator services, including the internet, while moving has become increasingly important. The limitations of traditional circuit-switched phone calls can significantly impact the user experience in terms of call setup time and voice quality [1]. Moreover, during cell switching, the call setup delay often increases, leading to dissatisfied consumers. Therefore, the implementation of VoLTE service, which allows for faster call setup time and better utilization of battery savings, has become a crucial aspect of user demand [2]. The purpose of this research is to conduct a practical assessment of the network's performance in VoLTE enabled urban areas. The study aims to analyze the call setup delay performance and the coverage of the network using a standard field experiment using the XCAL drive test tool. By providing performance data, this research aims to assist network service providers, planners, and designers in making informed decisions regarding the integration, enhancement, and deployment of VoLTE service in their network infrastructure. With the widespread adoption of LTE by mobile network operators, VoLTE has emerged as the most favored technical standard for delivering faster and high-quality voice services over LTE networks [3]. 
By implementing VoLTE technology, mobile network operators no longer need to dedicate separate circuit-switched (CS) services for voice and data, thereby simplifying their network infrastructure and reducing operational costs [4]. This integration of voice and data services onto a single LTE network streamlines network management and delivers a more efficient, cost-effective, and seamless user experience. Mobile communication networks using standardized cellular technology must optimize their network performance to ensure optimal service delivery. Call setup time and DRX mechanism are two critical performance metrics that must be taken into consideration. Real-time data that captures a variety of network characteristics affecting performance is required to confirm the benefits of VoLTE technology. While considerable research has been done on this subject, there is insufficiently clear and thorough documentation on these aspects. The primary objective of this study is to present a test strategy for analyzing the performance of call setup delay and overall relative battery saving ability in VoLTE compared to non-VoLTE operation. This article delves into the critical issue of the time taken to initiate a voice call. The aim is to shed light on the potential challenges and opportunities for improvement in network performance by conducting a comprehensive analysis of this fallback time. Through the implementation of VoLTE technology, it has been discovered that the additional time taken to fall back to 3G from LTE service can be greatly minimized, resulting in faster call initiation and a more satisfactory user experience [5]. This research article also focuses on testing new features of VoLTE service for utilizing better power-saving ability and verifying end-to-end system functionality. It addresses the challenges that test engineers may encounter during power-saving performance testing and call setup time, proposing an assessment strategy for field environment testing. Precise testing of the battery saving mechanism and call setup delays are necessary for valid results and further analysis. This research advances network optimization knowledge and recommends methods for enhancing mobile communication network quality of service (QoS). With a focus on the effects of signaling messages and the DRX cycle, the research seeks to figure out the factors that contribute to the shorter call setup delay as well as better utilization of the power-saving mechanism in VoLTE compared to traditional voice services. ## II Literature Review Krasniqi et al. examined VoLTE performance in real time, including voice quality, call setup time, and call reliability. In order to acquire real-time network parameters, the authors performed a drive test [6]. Ayman et al. investigated and analyzed Circuit Switched Fallback (CSFB) and VoLTE systems' call setup delay, handover protocols, and KPIs. The article also discusses Single Radio Voice Call Continuity (SRVCC) and its evolution. The performance of CSFB and VoLTE in terms of call setup delay and in-call mobility performance is also assessed. Finally, the article provided strategies for minimizing call setup duration and enhancing eSRVCC handover success [7]. However, there is no validated analysis of drive tests for assessing real-time data in the article. Ayman et al. also investigated VoLTE in commercial 3GPP Release-10 LTE networks. The study also provided deployment guidelines. 
The assessment took place on active commercial LTE networks under typical conditions and in scenarios involving mobility at an average speed of 80 km/h. The study concluded that VoLTE voice calls with a speech rate of 12.65 kbps provided improved voice quality to OTT and CS voice calls [8]. Aleksander et al. undertook a study to assess the performance of VoLTE for railway-specific voice communication, with an emphasis on one-to-one operational calls and Railway Emergency Calls (REC). The performance of VoLTE was tested against rail industry standards using simulation scenarios. The findings proved that the call setup process for VoLTE was quicker than what was required by railways. According to the findings, VoLTE adequately addresses railway communication requirements. However, the paper's generalization to other railway systems or regions may be limited, and the simulations may not have reflected real-world operational conditions [9]. Bautista et al. examined CSFB's performance in optimized LTE Rel-8 networks and M2M scenarios on live commercial LTE networks. The authors compared mobile-originated (MO) and mobile-terminated (MT) call delays to landline calls. Their studies showed that MO UE observed delay was twice as long as landlines, whereas MT UE perceived delay was equivalent to landline MT calls. The report also included CSFB call setup failure analysis, including network optimization, NAS message handling, and Release 9 CSFB features. It should be noted, however, that the study is limited to CSFB and may not be applicable to other communication techniques [10]. Pastrav et al. examined Voice over Internet Protocol (VoIP) QoS and QoE in an LTE cell deployed in a 3D-modeled Romanian metropolitan area. The simulation findings assist in planning an efficient LTE network based on area parameters, user density, and VoIP performance. However, the study's simulation-based evaluation may not fully reflect real-world applications and it focused only on VoIP performance without considering other applications or services [11]. Aibichandani et al. examined coverage at pedestrian and vehicular speeds for various voice over LTE adaptive multi rate wideband codec mode-sets. The study included laboratory testing in an RF screen room with attenuated RF signals and outdoor testing located in rural Wisconsin on a level drive route at 90 km/h. The researchers tested numerous mode-sets using the XCAL-MPM4 box to record data from four mobile devices simultaneously. The goal was to assess the voice quality and coverage of various mode-sets in LTE networks. However, the study's restricted scope of testing in a controlled lab environment and a specific geographic region may not fully reflect practical network conditions [12]. The objective of Vehanen et al.'s thesis is to enhance the performance testing of Inter Radio Access Technology (I-RAT) handovers from LTE to 3G networks by developing a comprehensive test plan. The authors thoroughly explore the issues test engineers may face during I-RAT handover testing and offer a field environment test plan. The drive test was carried out with the support of XCAL [13]. Gadze et al. compared WiMAX network performance using simulation and field data. The study used GPS, Dongle XCAL-X, a laptop with XCAL-X software, and a van to assess distance and coverage in Accra's urban areas. 
The researchers employed varying downlink/uplink ratios to balance throughput during field experiments across the University of Ghana Campus to simulate urban outdoor wireless network challenges. The simulation and measurement results shed light on the theoretical and simulated performance of the MIMO configuration based on the network simulation parameters used at the test site [14]. One of the major challenges in studying and improving technological systems is the limited scope of research. Most studies tend to focus on a specific technology or scenario, which may not be directly applicable to other systems or regions. Another common issue in research is the lack of real-time evaluation, where some studies do not include real-time analysis of network parameters and instead rely on simulation scenarios or laboratory testing. This approach may not always reflect the real-world complexities of the system, and thus limit the applicability of the findings. Researchers must adopt comprehensive, dynamic approaches that encompass diverse scenarios and include real-time monitoring and analysis to overcome these limitations. The significance of our research lies in addressing the limitations of previous studies that have mainly focused on specific technologies or scenarios, with limited applicability to other systems or regions. Our research aims to bridge this gap by analyzing real-time signaling messages transmitted between UE and eNodeB, enabling us to accurately calculate the call setup delay and analyzing the power saving mechanism for VoLTE. This is a novel approach, as no previous research has explored the actual reason for the faster call setup time as well as the factor behind better power saving mechanisms together in VoLTE with proper validation and justification of real-time signaling messages. The quantitative analysis of this research is critical in providing high-level accurate measurements of call setup delay analysis and the reason behind enhanced power saving. Additionally, we have conducted two-way voice calls simultaneously in different real-life scenarios, providing insights into the variability of signaling messages that have not been explored in current accessible research. Overall, our research aims to enhance the technical understanding of call setup delay and power-saving techniques in VoLTE and contribute to the development of more efficient and reliable communication systems. ## III Methodology The evaluation of VoLTE service from an end-user experience perspective involves conducting mobility tests at driving speeds while collecting real-time network data. In this study, measurements were obtained through measurement operations conducted by a reputable mobile operator in Bangladesh, where VoLTE technology is extensively deployed. The measurements were performed in Gulshan, a large urban region encompassing connecting roads between sectors and adjacent smaller communities. The aim was to assess the technical effectiveness of the network, specifically focusing on end users equipped with VoLTE-capable smartphones. By gathering numerical measurements and analyzing their statistical distributions, valuable insights were obtained regarding the performance and quality of the VoLTE service in real-world mobility scenarios. ### _Drive Test Plan_ The tests are performed utilizing two separate mobile-to-mobile systems in order to call one another. The focus of this activity is to monitor the network messages and signal information that occurs during the call setup process. 
Additionally, each of these analyses is performed for VoLTE to VoLTE calls, VoLTE to regular LTE calls, and LTE to LTE calls, enabling a thorough comparison of call setup performance between various technologies. The strategy employed in this study comprises measuring mobility in a real-world network setting in order to get empirical information on VoLTE service performance. The drive test plan was meticulously designed to assess the performance of VoLTE service in a real-world urban setting. A specially equipped microbus was utilized for conducting the tests, ensuring mobility and coverage across Gulshan 1 and 2, bustling metropolitan areas in Bangladesh. The tests spanned an entire day, from 10.30 am to 4.30 pm, capturing network data under diverse environmental conditions and traffic scenarios. To capture and analyze network parameters, signaling messages, and technical insights, two mobile phones, namely Samsung Galaxy S10 and Samsung Galaxy S10+, were connected to a laptop equipped with the XCAL drive test software. This licensed software, widely recognized for its reliability in assessing mobile network performance, facilitated real-time monitoring and data collection during phone calls [15]. Through the XCAL drive test software, an extensive range of technical aspects was examined, including call setup delay, signal strength, handover performance, and voice quality metrics. This comprehensive analysis allowed for the evaluation of key performance indicators (KPIs) specific to VoLTE and non-VoLTE operations, such as the time required for call setup, and the efficiency of battery-saving techniques. By conducting tests in a dynamic urban environment and leveraging advanced measurement capabilities, this study provided valuable insights into the technical capabilities and performance of VoLTE. Both mobile phones were equipped with VoLTE-supported SIM cards from the same operator, which allowed for testing VoLTE service performance in a real-world network environment. ### _Dynamic Mobility Scenario Evaluation_ Each test conducted as part of this study involved a comprehensive evaluation of VoLTE service performance in dynamic mobility scenarios. The tests were carried out for approximately 30 minutes, encompassing a single drive loop with the microbus traveling at an average speed of 45 km/h. This speed was chosen to simulate typical vehicular speeds, ensuring that the tests captured the real-world performance of VoLTE service during on-the-go scenarios. By conducting the tests while the vehicle was in motion, the study aimed to assess the impact of mobility on VoLTE performance. To maintain consistency and comparability across measurement scenarios, a predetermined routing path was followed for all tests. This routing path, illustrated in Fig. 1, provided a standardized framework for assessing the performance of VoLTE service under varying network conditions and common geographic locations in most of the urban areas with VoLTE enabled operations. ### _XCAL Software Tool for Field Test_ The XCAL drive test software used in the study provided detailed signaling message notifications at the millisecond level, making it easier to accurately calculate call setup delays for each cellular communication technology [16]. The precise timing information captured by the software allowed for precise measurement of call setup delays for VoLTE and general circuit-switching non-VoLTE oriented operations, enabling a comprehensive analysis of the network behavior and performance during call setup. 
This level of granularity in the signaling message notifications provided by the XCAL software facilitated the accurate calculation of call setup delays for each case, ensuring reliable results for the drive test. Overall, the drive test plan aimed to gather empirical data on the performance of VoLTE service in a real-world network environment, using a van equipped with mobile phones and specialized drive test software. The collected data and observations from different scenarios provide valuable insights into the technical performance of VoLTE service, contributing to a comprehensive analysis of VoLTE service performance. The XCAL setup is shown in Fig. 2. Fig. 1: Routing path for the drive test. ## IV Overall Analysis This section presents a comprehensive analysis of key variables that play a crucial role in the performance of VoLTE service. By examining these variables in detail, valuable insights can be gained regarding the optimization of VoLTE network implementation and deployment. ### _Call Setup Delay_ Call setup delay refers to the time taken for a voice call to be established, from the initiation of the call to the point where the call is connected and the parties can communicate. Call setup delay is one of the key performance metrics that are measured and analyzed to assess the performance of VoLTE service [17]. Different scenarios are observed for calls made from one mobile to another mobile and vice versa, and for different types of calls, including VoLTE to VoLTE calls, VoLTE to traditional LTE calls, and LTE to LTE calls. The objective is to capture different call setup scenarios and analyze the network behavior during call setup. Fig. 3 illustrates the sample signaling messages after initiating a phone call as well as the call setup delay for VoLTE to VoLTE service. These statistical metrics help in understanding the typical call setup performance of the VoLTE network, including the time taken for call setup, the variability in call setup delay, and any potential issues or optimizations needed. The call setup delay data is also compared with traditional CSFB technologies to understand the differences in call setup performance between VoLTE and these technologies. Cellular communication technologies have gone through numerous stages to offer advanced features [18]. Hence, by understanding the call setup delay behavior, network operators and service providers can identify any performance issues, optimize network configurations, and ensure a satisfactory and advanced user experience for VoLTE service users [19]. Table 1 shows the simplified sample signaling messages after initiating a phone call for non-VoLTE operation. The call setup delay for non-VoLTE service has been calculated using the time gap between the Extended Service Request message and the ALERTING message. Four sets of signaling messages play a crucial role in VoLTE-to-VoLTE call setup, as shown in Table 2. The timely exchange of these messages determines the overall call setup delay, which is a critical performance metric in the VoLTE service. Therefore, to ensure optimal call setup time, it is essential that these four sets of signaling messages are delivered in a timely and efficient manner. Fig. 3: Observed signaling messages for VoLTE to VoLTE call. Fig. 2: XCAL drive test tool setup with cell phones. ### _DRX Cycle and Power Saving Mechanism Analysis_ The variation in the DRX cycle during VoLTE call setup signifies the adaptive power-saving operation of the system.
When the time difference between signaling messages is high, the UE can switch to a longer DRX cycle to reduce the number of times it needs to wake up and receive messages from the eNodeB, thus saving power. Conversely, when the time difference between signaling messages is low, the UE can switch to a shorter DRX cycle to ensure the timely receipt of messages and prevent call setup delays [20]. This adaptive mechanism allows VoLTE to efficiently manage battery usage during call setup and other communication operations. Fig. 4 shows the variation of the DRX cycles between VoLTE and non-VoLTE services. During a VoLTE call setup, there is a significant variation in the time difference between the transmission and reception of signaling messages between UE and eNodeB. This leads to a sharp rise and drop in the DRX cycle, which is an adaptive mechanism used by VoLTE to save power [21]. The sharp variation of the DRX cycle in VoLTE enabled voice calls is shown in Fig. 5. In contrast, non-VoLTE operation has a nearly steady DRX cycle and fixed time difference between signaling messages as shown in Fig. 4, resulting in continuous signaling messages and reduced probability of battery savings. The adaptive DRX cycle mechanism in VoLTE call setup enables efficient battery usage and improves the overall performance of the system [22]. Furthermore, our research also revealed that the power-saving performance of VoLTE is significantly better than traditional voice services. This is due to the adaptive DRX cycle mechanism employed by VoLTE, which adjusts the cycle duration based on the quality of the radio link and the overall situation of user activity before and after the initiation of any Service Request by the UE. This results in reduced power consumption during periods of inactivity, without compromising the call quality or setup time. ### _Paging Messages for VoLTE and non-VoLTE_ Paging messages are essential in communication networks for locating and notifying mobile devices of incoming calls or messages. They contain important information and are broadcasted to reach all devices within a coverage area [23]. Mobile devices respond to paging messages to confirm their presence and readiness for connection. Efficient handling of paging messages improves network performance and conserves battery life [24]. Advancements in LTE and 5G networks have introduced enhancements to paging mechanisms, optimizing efficiency and user experience. The time interval of paging messages in a communication network is closely related to the DRX mechanism and battery saving [25]. DRX is a power-saving technique used in cellular networks where the mobile device periodically enters sleep mode to conserve battery. During sleep mode, the device disables certain functionalities, including continuous monitoring of the paging channel. Instead, it wakes up periodically to check for paging messages within predefined intervals [26]. The time interval of paging messages directly affects the power consumption of mobile devices [27]. If the paging interval is short, the device needs to wake up more frequently to check for incoming calls or messages. This increased wake-up frequency results in higher power consumption and potentially reduces battery life. On the other hand, if the paging interval is long, the device can remain in sleep mode for extended periods, reducing power consumption and conserving battery life [28]. 
However, a longer paging interval may lead to delays in receiving incoming calls or messages, as the device checks for them less frequently. Thus, there exists a trade-off between battery saving and timely message reception. Adjusting the paging interval requires finding the right balance between power efficiency and the need for timely communication. Networks strive to optimize this balance by considering factors such as user preferences, network congestion, and the urgency of incoming communications. The research findings provide valuable insights into the characteristics of paging message time intervals in VoLTE and non-VoLTE operations. This study reveals that in VoLTE, the paging message time interval initially exhibits a higher duration prior to receiving the Service Request message at the eNodeB. However, upon receiving the Service Request from the UE, there is a significant reduction in the signaling time interval. This reduction in the paging message time gap after the Service Request message reflects an adaptive process specifically designed for VoLTE, aiming to minimize call setup delays. Fig. 4: Variation of DRX cycle during Call Setup Delay (in Red). Fig. 5: Time variation of signaling messages during call setup in VoLTE. The adaptive nature of the paging message time interval in VoLTE showcases an important optimization mechanism. It effectively reduces the occurrence of unnecessary signaling message transfers, thereby streamlining call setup procedures and improving overall efficiency. In addition, this adaptive strategy aids in reducing battery consumption. The observation of high time gaps initially suggests the presence of excessive signaling message transfers, which are effectively mitigated through the implementation of an adaptive paging message time interval in VoLTE. These techniques allow for longer sleep durations and more flexible paging strategies, reducing unnecessary wake-ups and extending battery life without significantly impacting the responsiveness of the device. Fig. 6 showcases the paging message time intervals in the VoLTE case. It visually demonstrates the initial higher interval before receiving the Service Request, followed by a significant decrease in the signaling time interval. This adaptive feature in VoLTE reduces call setup delays and enhances energy efficiency. Conversely, the analysis emphasizes that non-VoLTE operations lack this advanced feature of an adaptive paging message time interval both before and after the Service Request. As a result, the battery-saving mechanism in non-VoLTE operations is comparatively less user-friendly, potentially leading to increased power consumption and reduced battery efficiency. Fig. 7 displays the paging message time intervals in the non-VoLTE case. Unlike VoLTE, the graph shows a static interval without an adaptive mechanism. This absence of adaptivity in non-VoLTE systems may result in less optimized call setup delays and potential inefficiencies in battery usage. These research findings underscore the significant advantages associated with VoLTE's adaptive paging message time interval. By optimizing call setup delays and improving energy efficiency, VoLTE offers an enhanced user experience compared to non-VoLTE systems. The adaptive approach ensures a more streamlined and efficient signaling process, benefiting both call setup performance and battery consumption at the UE end. 
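To make the measurement procedure concrete, the following minimal sketch shows how the call setup delay (the gap between the Service Request and the ALERTING message, as used for the non-VoLTE case) and the average paging interval before and after the Service Request could be extracted from a timestamped signaling log. The log format, message names and timestamps are illustrative assumptions rather than actual XCAL output.

```python
from datetime import datetime

# Illustrative log: (timestamp, message name) pairs; all values are made up.
log = [
    ("10:31:02.120", "Paging"),
    ("10:31:03.400", "Paging"),
    ("10:31:04.690", "Extended Service Request"),
    ("10:31:04.930", "Paging"),
    ("10:31:05.150", "Paging"),
    ("10:31:06.010", "ALERTING"),
]

def t(s):
    return datetime.strptime(s, "%H:%M:%S.%f")

def call_setup_delay(log):
    """Gap between the (Extended) Service Request and the ALERTING message, in seconds."""
    start = next(ts for ts, msg in log if "Service Request" in msg)
    end = next(ts for ts, msg in log if msg == "ALERTING")
    return (t(end) - t(start)).total_seconds()

def mean_paging_interval(log, before=True):
    """Average gap between consecutive paging messages before/after the Service Request."""
    sr = t(next(ts for ts, msg in log if "Service Request" in msg))
    pages = [t(ts) for ts, msg in log if msg == "Paging"
             and ((t(ts) < sr) if before else (t(ts) > sr))]
    gaps = [(b - a).total_seconds() for a, b in zip(pages, pages[1:])]
    return sum(gaps) / len(gaps) if gaps else float("nan")

print(call_setup_delay(log))                    # 1.32 s
print(mean_paging_interval(log, before=True))   # 1.28 s
print(mean_paging_interval(log, before=False))  # 0.22 s
```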
The percentage of energy savings can be calculated using the following formula [29]: \[\text{Percentage energy savings}=\frac{ME_{\text{sleep}}+NE_{\text{awake}}}{(M+N)E_{\text{awake}}} \tag{1}\] In this case, the system considers a scenario where the UE operates in two distinct modes: the DRX mode and the normal operation mode. The DRX mode consists of \(M\) frames, during which the UE is in a power-saving state, while the normal operation mode comprises \(N\) frames, where the UE operates in its regular mode without power-saving mechanisms. Our proposed formula for calculating energy savings based on the time interval of signaling messages is as follows: \[\text{Energy savings}=\frac{M\,T_{\text{interval}}\,E_{\text{per unit}}+N\,T_{\text{awake}}\,E_{\text{per unit}}}{(M+N)\,T_{\text{awake}}\,E_{\text{per unit}}} \tag{2}\] Here, \(M\) and \(N\) denote the total number of signaling blocks before and after the initiation of the Service Request message, respectively. \(T_{\text{interval}}\) and \(T_{\text{awake}}\) represent the average time gap between two corresponding signaling messages before and after the initiation of the Service Request message, respectively. Lastly, \(E_{\text{per unit}}\) denotes the average energy consumed by the battery for the transmission of each signaling block. ## V Results Call setup for VoLTE-to-VoLTE calls is almost 72.6% faster than for LTE-to-LTE calls. This difference is primarily due to the reduced number of signaling messages involved in VoLTE-to-VoLTE calls. Moreover, the optimum delivery of the message blocks associated with the EPS bearer context request message during VoLTE service enables faster call setup in VoLTE-to-VoLTE calls. On the other hand, traditional LTE-to-LTE calls require additional message blocks while switching back to 3G, resulting in longer call setup times. Even when making calls from a VoLTE user to a non-VoLTE user, the average call setup delay is significantly shorter than for traditional circuit-switched voice calls. Our findings reveal that call setup for VoLTE to non-VoLTE calls is on average 18.6% faster than for non-VoLTE to non-VoLTE calls. In terms of energy efficiency, VoLTE exhibits a significant energy saving of approximately 60.76% compared to non-VoLTE before the Service Request initiation. Similarly, after the Service Request, VoLTE showcases a notable energy saving of approximately 38.97% compared to non-VoLTE, highlighting its efficiency in optimizing energy consumption during signaling and communication processes. Fig. 6: Paging message interval analysis in VoLTE. Fig. 7: Paging message interval analysis in non-VoLTE. ## VI Conclusion VoLTE, a highly optimized and advanced cellular technology, offers numerous advantages over LTE and other traditional cellular networks. It excels in terms of battery-saving techniques, making it a more efficient and user-friendly choice. VoLTE also outperforms traditional cellular networks when considering call setup delays. With its advanced battery-saving techniques and adaptive features, VoLTE ensures enhanced battery performance, resulting in extended device usage and improved user experience. In conclusion, VoLTE surpasses LTE and traditional cellular technologies with its optimized energy consumption, faster call setup technique, and advanced battery-saving procedures.
These advantages represent VoLTE as a more sustainable, efficient, and user-friendly choice for modern communication networks. ## Acknowledgment The drive tests were performed using the licensed XCAL software with sponsored logistical support from IUT, Bangladesh. This work is also supported in part by Industrial Technology Research Institute (ITRI), Taiwan.
2307.02184
Table inference for combinatorial origin-destination choices in agent-based population synthesis
A key challenge in agent-based mobility simulations is the synthesis of individual agent socioeconomic profiles. Such profiles include locations of agent activities, which dictate the quality of the simulated travel patterns. These locations are typically represented in origin-destination matrices that are sampled using coarse travel surveys. This is because fine-grained trip profiles are scarce and fragmented due to privacy and cost reasons. The discrepancy between data and sampling resolutions renders agent traits non-identifiable due to the combinatorial space of data-consistent individual attributes. This problem is pertinent to any agent-based inference setting where the latent state is discrete. Existing approaches have used continuous relaxations of the underlying location assignments and subsequent ad-hoc discretisation thereof. We propose a framework to efficiently navigate this space offering improved reconstruction and coverage as well as linear-time sampling of the ground truth origin-destination table. This allows us to avoid factorially growing rejection rates and poor summary statistic consistency inherent in discrete choice modelling. We achieve this by introducing joint sampling schemes for the continuous intensity and discrete table of agent trips, as well as Markov bases that can efficiently traverse this combinatorial space subject to summary statistic constraints. Our framework's benefits are demonstrated in multiple controlled experiments and a large-scale application to agent work trip reconstruction in Cambridge, UK.
Ioannis Zachos, Theodoros Damoulas, Mark Girolami
2023-07-05T10:24:16Z
http://arxiv.org/abs/2307.02184v2
# Table Inference for Combinatorial Origin-Destination Choices in Agent-Based Population Synthesis ###### Abstract A key challenge in agent-based mobility simulations is the synthesis of individual agent socioeconomic profiles. Such profiles include locations of agent activities, which dictate the quality of the simulated travel patterns. These locations are typically represented in origin-destination matrices that are sampled using coarse travel surveys. This is because fine-grained trip profiles are scarce and fragmented due to privacy and cost reasons. The discrepancy between data and sampling resolutions renders agent traits non-identifiable due to the combinatorial space of data-consistent individual attributes. This problem is pertinent to any agent-based inference setting where the latent state is discrete. Existing approaches have used continuous relaxations of the underlying location assignments and subsequent ad-hoc discretisation thereof. We propose a framework to efficiently navigate this space offering improved reconstruction and coverage as well as linear-time sampling of the ground truth origin-destination table. This allows us to avoid factorially growing rejection rates and poor summary statistic consistency inherent in discrete choice modelling. We achieve this by introducing joint sampling schemes for the continuous intensity and discrete table of agent trips, as well as Markov bases that can efficiently traverse this combinatorial space subject to summary statistic constraints. Our framework's benefits are demonstrated in multiple controlled experiments and a large-scale application to agent work trip reconstruction in Cambridge, UK. Combinatorial explosion Markov bases origin-destination matrix population synthesis spatial interaction models ## 1 Introduction Agent-based models (ABMs) are becoming increasingly popular policy-making tools in areas such as epidemic and transportation modelling (Bonabeau, 2002). The emergent structure arising from ABM simulations relies on the quality of the underlying agent population's demographic and socioeconomic attributes. In transportation ABMs, such as MATSim (Axhausen and ETH Zurich, 2016), simulated travel patterns are predominantly governed by the location where agent activities take place (e.g. working, shopping). The trips between activities are summarised in origin-destination matrices (ODMs), which are often either partially or not available a priori. Therefore, _population synthesis_ is performed to create artificial agents whose attributes (e.g. workplace location) have the same summary statistics as those described by population averages (e.g. regional job availability). Location choice synthesis translates to reconstructing integer-valued origin-destination matrices whose margins are summary statistics. To this end, coarse/aggregate agent activity surveys by geographical region and activity type are mainly leveraged (Fournier et al., 2021). This is because fine-grained individual/disaggregate profiles are scarce and fragmented due to privacy and/or data acquisition cost reasons. Therefore, a discrepancy arises between the spatial resolutions of the data and latent states. Inferring individual agent trips subject to population summary statistics necessitates the exploration of a combinatorial choice space. The size of this space induces identifiability issues since a unique set of agent location choices consistent with the data cannot be recovered. 
A downsampling approach of sampling individual choices is computationally infeasible for any real-world application. Assuming that there are \(M\) agents with \(L\) available location choices, then computing the likelihood of the aggregate data given individual model parameters requires summing over \(L^{M}\) possible location configurations, many of which are inconsistent with the data. Computational and identifiability issues can be alleviated by appropriately constraining the discrete latent space. The problem of exploring a constrained combinatorial agent state space is pertinent to any agent-based inference setting where the latent state is discrete. Although discrete choice models (Train, 2009) are popular candidates for disaggregating agent location choices, they cannot encode aggregate statistic constraints. Therefore, they either accrue errors when reconstructing ODMs or lead to factorially growing rejection rates (DeSalvo and Zhao, 2016) when forced to adhere to discrete constraints in a rejection-type scheme. A suite of greedy optimisation algorithms such as iterative proportional fitting (Bishop, Fienberg, and Holland, 2007) and combinatorial optimisation (Voas and Williamson, 2000) were employed to assimilate summary statistic constraints in continuous and discrete spaces, respectively. These methods suffer from poor convergence to local optima, yielding solutions heavily dependent on good initialisations. Moreover, operating in a continuous probability/intensity space requires an additional sampling step to discretise the ODM, such as stochastic rounding (Croci et al., 2022). This is an ad-hoc treatment of the problem and produces errors. The unidentifiable nature of disaggregating agent choices from aggregate data calls for uncertainty quantification in order to give practitioners the ability to interrogate and rank the sampled ODMs according to their probability. Probabilistic methods have overcome some of the aforementioned limitations (Farooq et al., 2013; Sun and Axhausen, 2016), but remain approximate since they operate in the continuous intensity/probability space. In the case of location choice synthesis, ODMs are equivalent to two-way contingency tables of two categorical variables (e.g. origin residential population and destination workforce population) and the joint distribution of the two variables is explored using Gibbs sampling. Table marginal probabilities are elicited by normalising the discrete summary statistics. This approximation incurs information loss and may cause marginal class imbalances in high dimensional tables (Fournier et al., 2021), meaning a growing divergence between ground truth and sampled marginal distributions. In addition, partially available data cannot be accommodated in a principled manner and unreasonable conditional independence assumptions are imposed. The work of (Carvalho, 2014) endeavoured to address these two problems by adopting a Bayesian paradigm that operates directly on the discrete table space. However, neither the most efficient proposal mechanism nor the available intensity structure were exploited. Instead, a Metropolis-Hastings (MH) scheme for sampling contingency tables was employed that proposes jumps of size at most one in \(\mathcal{O}(\#\text{ origins }\times\#\text{ destinations })\), causing poor mixing in high-dimensional tables. Furthermore, the author argued for a hierarchical construction that jointly learns the constrained discrete ODM and the underlying intensity function. 
In doing so, they attempted to leverage a family of log-non-linear intensity models known as _spatial interaction models_ (SIMs) (Wilson, 1971). SIMs incorporate summary statistic constraints directly in the continuous intensity space. Despite this effort, a log-linearity assumption was imposed on the SIM to simplify parameter inference. Also, the known dynamics of competition between destination locations (Dearden and Wilson, 2015) were ignored, effectively stripping SIMs of all their embedded structure. Moreover, additional data were required to calibrate the intensity function, such as seed matrices, which are seldom available, as opposed to regularly observed data on the economic utility of travelling to destination locations. The works of (Ellam et al., 2018) and (Gaskin, Pavliotis, and Girolami, 2023) alleviated this problem by constructing a physics-driven log-non-linear SIM intensity prior. However, both approaches operated strictly in the continuous intensity space and could not explore the discrete table space where population synthesis is performed. ### Contributions Our paper focuses on reconstructing origin-destination agent trip matrices summarising residence-to-workplace location choices. We offer an upsampling Bayesian approach that jointly samples from the discrete table (T) and continuous intensity (A) spaces for agent location choice synthesis. Our framework seamlessly assimilates any type of aggregate summary statistic as a constraint, which in turn regularises the space of admissible disaggregate/individual agent choices. We demonstrate an improved reconstruction and coverage of a partially observed origin-destination matrix summarising agent trips from residential to workplace locations in Cambridge, UK. \begin{table} \begin{tabular}{|c c c c c c|} \hline \(\mathcal{C}\) & Constrained ODM & This work & (Ellam) & (Gaskin) & \(\mathbb{P}(\mathbb{T}|\mathbb{A},\mathcal{C})\) \\ \hline \(\left\{\lambda_{++}\right\}\) & Totally & ✓ & ✓ & ✓ & - \\ \hline \(\left\{\Lambda_{+}\right\}\) & Singly & ✓ & ✓ & ✓ & - \\ \hline \(\left\{T_{++},\lambda_{++}\right\}\) & Totally & ✓ & \(\times\) & \(\times\) & \(\begin{array}{c}\text{Multinomial}\\ \left(T_{++},\lambda_{++}\right)\end{array}\) \\ \hline \(\left\{\mathbb{T}_{+},\Lambda_{+}\right\}\) or & Singly & ✓ & \(\times\) & \(\times\) & \(\begin{array}{c}\text{Product Multinomial}\\ \left(T_{+},\frac{\Lambda_{+}}{\lambda_{++}}\right)\end{array}\) \\ \hline \(\left\{\mathbb{T}_{-+},\mathbb{T}_{+,\lambda_{++}}\right\}\) & Doubly & ✓ & \(\times\) & \(\times\) & \(\begin{array}{c}\text{Fisher's non-central}\\ \left(T_{-+},\mathbb{T}_{+,\lambda_{++}}\right)\end{array}\) \\ \hline \(\left\{\mathbb{T}_{-+},\mathbb{T}_{+,\lambda_{++}}\right\}\) & \(\begin{array}{c}\text{Doubly and cell}\\ \left(\mathbb{T}_{+},\mathbb{T}_{+,\lambda_{++}},\frac{\lambda_{++}\lambda_{ +}}{\lambda_{++}}\right)\end{array}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of our method’s capabilities against previous works. Agent choices are described by a discrete table (T) or a continuous intensity (A). Subscripts define summary statistics: the row and column sums/margins are indexed by \((\cdot,+),(+,\cdot)\), respectively. The cell universe \(\mathcal{X}\supseteq\mathcal{X}^{\prime}\) contains table/intensity indices of an \(I\times J\) matrix. 
Contrary to the previous work, we perform a Gibbs step and sample tables in \(\mathcal{O}(\#\text{destinations})\), leveraging the Markov basis machinery in (Diaconis and Sturmfels, 1998) to design a Markov Chain Monte Carlo (MCMC) scheme with proposals that allow arbitrarily large jumps in table space without any accept/reject step. Hence, we bypass the problem of marginal distribution imbalances by respecting the exact margin frequencies rather than marginal distributions. We employ SIMs to understand the behavioural mechanism of aggregate location choice in continuous intensity space and relax previously adopted log-linearity assumptions on the intensity model. In the same fashion as (Ellam et al., 2018; Gaskin, Pavliotis, and Girolami, 2023), we account for the stochastic dynamics of competition between destinations governing agent location choices and enforce an interpretable structure in the SIM intensity prior. A summary of our framework's capabilities relative to the previous works is depicted in Table 1. ## 2 Problem setup Consider \(M\) agents that travel from \(I\) origins to \(J\) destinations to work. Let the expected number of trips (intensity) of agents between origin \(i\) and destination \(j\) be denoted by \(\Lambda_{ij}\). The residential population in each origin (row sums) is equal to \[\Lambda_{i+}=\sum_{j=1}^{J}\Lambda_{ij},\quad i=1,\ldots,I, \tag{1}\] while the working population at each destination (column sums) is \[\Lambda_{+j}=\sum_{i=1}^{I}\Lambda_{ij},\quad j=1,\ldots,J. \tag{2}\] We assume that the total origin and destination demand are both conserved: \[M=\Lambda_{++}=\sum_{i=1}^{I}\Lambda_{i+}=\sum_{j=1}^{J}\Lambda_{+j}. \tag{3}\] This construction defines a totally constrained SIM. The demand for destination zones depends on the destination's attractiveness denoted by \(\mathbf{w}\coloneqq(w_{1},\ldots,w_{J})\in\mathbb{R}_{>0}^{J}\). Let the log-attraction be \(\mathbf{x}\coloneqq\log(\mathbf{w})\). Between two destinations of similar attractiveness, agents are assumed to prefer nearby zones. Therefore, a cost matrix \(\mathbf{C}=(c_{i,j})_{i,j=1}^{I,J}\) is introduced to reflect travel impedance. These two assumptions are justified by economic arguments (Pooler, 1994). The maximum entropy distribution of agent trips subject to the total number of agents being conserved is derived by maximising \[\mathcal{E}(\mathbf{\Lambda})=\sum_{i=1}^{I}\sum_{j=1}^{J}-\Lambda_{ij}\log(\Lambda_{ij})+\zeta\left(\sum_{i,j}^{I,J}\Lambda_{ij}-M\right)+\alpha\sum_{i,j}^{I,J}x_{j}\Lambda_{ij}-\beta\sum_{i,j}^{I,J}c_{ij}\Lambda_{ij}, \tag{4}\] which yields a closed-form expression for the trip intensity: \[\Lambda_{ij}=\frac{\Lambda_{++}\exp(\alpha x_{j}-\beta c_{ij})}{\sum_{k,m}^{I,J}\exp(\alpha x_{m}-\beta c_{km})}, \tag{5}\] where \(\alpha,\beta\) control the two competing forces of attractiveness and deterrence. A higher \(\alpha\) relative to \(\beta\) characterises a preference over destinations with higher job availability, while the contrary indicates a predilection for closer workplaces.
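As a concrete illustration of Eq. 5, the following minimal sketch (with made-up dimensions, parameters and cost matrix) computes the totally constrained intensity and checks the conservation constraint of Eq. 3:

```python
import numpy as np

def sim_intensity(x, alpha, beta, cost, total):
    """Totally constrained SIM intensity of Eq. 5: Lambda_ij is proportional to
    exp(alpha * x_j - beta * c_ij), rescaled so that the total equals Lambda_++."""
    util = alpha * x[None, :] - beta * cost    # I x J matrix of utilities
    util -= util.max()                          # numerical stability
    weights = np.exp(util)
    return total * weights / weights.sum()

# toy example: I = 3 origins, J = 2 destinations, 1000 agents
rng = np.random.default_rng(0)
cost = rng.uniform(0.1, 1.0, size=(3, 2))
x = np.log(np.array([2.0, 1.0]))                # log destination attractiveness
Lam = sim_intensity(x, alpha=1.2, beta=0.8, cost=cost, total=1000)

assert np.isclose(Lam.sum(), 1000)              # Eq. 3: total demand conserved
col_margins = Lam.sum(axis=0)                   # expected workplace demand Lambda_+j
```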
The destination attractiveness \(\mathbf{w}\) is governed by the Harris-Wilson (Harris and Wilson, 1978) system of \(J\) coupled ordinary differential equations (ODEs): \[\frac{\mathrm{d}w_{j}}{\mathrm{d}t}=\epsilon w_{j}\left(\Lambda_{+j}-\kappa w_{j}+\delta\right),\ \ \mathbf{w}(0)=\mathbf{w}^{\prime}, \tag{6}\] where \(\kappa>0\) is the number of agents competing for one job, \(\delta>0\) is the smallest number of jobs a destination can have and \(\Lambda_{+j}(t)-\kappa w_{j}(t)\) is the net job capacity in destination \(j\). A positive net job capacity translates to a higher economic activity (more travellers than jobs) and a boost in local employment, and vice versa. In equilibrium, the \(J\) stationary points of the above ODE can be computed using \[\kappa w_{j}-\delta=\frac{\Lambda_{++}w_{j}^{\alpha}}{\sum_{k,m}^{I,J}w_{m}^{\alpha}\exp(-\beta c_{km})}\sum_{i=1}^{I}\exp(-\beta c_{ij}). \tag{7}\] The value of \(\kappa\) can be elicited by summing the above equation over destinations, which yields \[\kappa=\frac{\delta J+\Lambda_{++}}{\sum_{j=1}^{J}w_{j}}, \tag{8}\] while \(\delta\) corresponds to the case when no agent travels to destination \(j^{\prime}\) (\(\Lambda_{+j^{\prime}}=0\)), i.e. \[\delta=\kappa\min_{j}\{w_{j}\}. \tag{9}\] A stochastic perturbation of 6 incorporates uncertainty in the competition dynamics emerging from the randomness of agents' choice mechanisms. This gives rise to the Harris-Wilson stochastic differential equation (SDE) for the time evolution of the log destination attraction \(\mathbf{x}\) \[\mathrm{d}\mathbf{x}=-\epsilon^{-1}\nabla V(\mathbf{x})\,\mathrm{d}t+\sqrt{2\gamma^{-1}}\,\mathrm{d}\mathbf{B}_{t},\quad\mathbf{x}(0)=\mathbf{x_{0}}, \tag{10}\] where the potential function \(V(\mathbf{x})\) in the drift term is equal to \[\epsilon^{-1}V(\mathbf{x})=\underbrace{-\alpha^{-1}\sum_{i=1}^{I}O_{i}\log\left(\sum_{j=1}^{J}\exp(\alpha x_{j}-\beta c_{ij})\right)}_{\text{utility potential}}+\underbrace{\kappa\sum_{j=1}^{J}\exp(x_{j})}_{\text{cost potential}}-\underbrace{\delta\sum_{j=1}^{J}x_{j}}_{\text{additional potential}}, \tag{11}\] and \(\boldsymbol{\theta}=(\alpha,\beta)\) is the free parameter vector. The steady-state distribution of 10 is shown in (Ellam et al., 2018) to be the Boltzmann-Gibbs measure \[p(\mathbf{x}|\boldsymbol{\theta})=\frac{1}{Z(\boldsymbol{\theta})}\exp\left(-\gamma V_{\boldsymbol{\theta}}(\mathbf{x})\right) \tag{12}\] \[Z(\boldsymbol{\theta})\coloneqq\int_{\mathbb{R}^{J}}\exp\left(-\gamma V_{\boldsymbol{\theta}}(\mathbf{x})\right)\,\mathrm{d}\mathbf{x}. \tag{13}\] The observed data \(\mathbf{y}\) are assumed to be noisy perturbations of \(\mathbf{x}\), where the error between the two satisfies \(\log(\mathbf{e})\sim\mathcal{N}(\mathbf{0},\sigma_{d}^{2}\mathbf{I})\), that is \[\log(\mathbf{y})=\mathbf{x}+\log(\mathbf{e}). \tag{14}\] We introduce a data augmentation step to perform inference at the higher-resolution origin-destination table space of agent trips, as depicted in Figure 1. Assume that the \(I\times J\) discrete contingency table \(\mathbf{T}\) summarising the number of agents living in location \(i\) and working in location \(j\) is Poisson distributed: \[T_{ij}\sim\text{Poisson}\left(\Lambda_{ij}(\mathbf{x},\boldsymbol{\theta})\right), \tag{15}\] where the \(T_{ij}\)'s are conditionally independent given the \(\Lambda_{ij}\)'s. The contingency table inherits constraint 3. These hard-coded constraints can be viewed as noise-free data on the discrete table space.
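For intuition, a minimal Euler discretisation of the deterministic Harris-Wilson dynamics of Eq. 6, with the destination demand \(\Lambda_{+j}\) induced by the totally constrained intensity of Eq. 5, can be sketched as follows. All numerical values are illustrative and the stochastic term of Eq. 10 is omitted.

```python
import numpy as np

def euler_harris_wilson(w0, cost, alpha, beta, kappa, delta, eps, total,
                        dt=1e-3, n_steps=20_000):
    """Euler integration of the deterministic Harris-Wilson ODE of Eq. 6."""
    w = w0.astype(float).copy()
    for _ in range(n_steps):
        util = alpha * np.log(w)[None, :] - beta * cost
        util -= util.max()
        lam = np.exp(util)
        lam *= total / lam.sum()          # totally constrained intensity (Eq. 5)
        demand = lam.sum(axis=0)          # Lambda_+j, the workplace demand
        w = w + dt * eps * w * (demand - kappa * w + delta)
        w = np.clip(w, 1e-12, None)       # keep attractiveness strictly positive
    return w

# toy run with I = 5 origins and J = 4 destinations (illustrative values only)
rng = np.random.default_rng(1)
cost = rng.uniform(0.1, 1.0, size=(5, 4))
w_eq = euler_harris_wilson(np.ones(4), cost, alpha=1.0, beta=0.5,
                           kappa=1.0, delta=0.01, eps=1.0, total=100.0)
```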
We abbreviate the vector of row sums, column sums and the scalar total of \(\mathbf{T}\) by \(\mathbf{T}_{\cdot+}\), \(\mathbf{T}_{+\cdot}\) and \(T_{++}\), respectively. Note that \(\mathbf{T}\) uniquely determines the rest of the aforementioned random variables and \(T_{++}=\Lambda_{++}\). Moreover, the distribution of \(\mathbf{x}\) in 12 coupled with a prior on \(\boldsymbol{\theta}\) jointly induces a prior over the intensity function \(\boldsymbol{\Lambda}\). Performing inference in a discrete higher-resolution table space circumvents challenges associated with enforcing summary statistic constraints in the continuous intensity space. First, the doubly constrained intensity (see Table 1) admits solutions retrieved only through an iterative procedure that converges to poor local optima without any quantification of uncertainty, since the physical model in (10) becomes redundant. Second, maximising (4) subject to individual cell constraints induces discontinuities in the \(\Lambda\) space prohibiting SIM parameter calibration. To avoid dealing with discontinuities, a fully observable table is required, which is seldom available and defeats the purpose of ODM reconstruction. Alternatively, more parameters can be introduced, which entails identifiability problems as the number of free parameters becomes \(\mathcal{O}(I+J)\) instead of \(\mathcal{O}(J)\). Moreover, augmenting \(\mathcal{C}_{\Lambda}\) to match \(\mathcal{C}_{T}\) strengthens the dependence between \(\mathbf{T}|\mathbf{A},\mathcal{C}\) and \(\mathbf{y}|\mathbf{x},\mathcal{C}\). As a result, constraints are implicitly weighted (hard \(\mathcal{C}_{\Lambda}\) and soft \(\mathcal{C}_{T}\) constraints), which inflicts identifiability issues in \(\boldsymbol{\Lambda}\). ## 3 Discrete Table Inference Let the set of table indices (cells) be \(\mathcal{X}=\{(i,j):1\leq i\leq I,1\leq j\leq J\}\) such that \(T(x)=T_{ij}\) is the table value of cell \(x=(i,j)\in\mathcal{X}\). For any subset \(\mathcal{X}_{k}\subseteq\mathcal{X}\) let \(S_{k}:\mathcal{X}_{k}\to\mathbb{N}^{I+J}\) be a bijective function that maps every cell \(x\in\mathcal{X}_{k}\) to the \((I+J)\)-dimensional binary vector with the \(i\)-th and \((I+j)\)-th entries equal to one and the rest being zero. Define \(\mathcal{S}_{k}:\mathcal{T}\to\mathbb{N}^{I+J}\) to be the summary statistic operator applying a uniquely defined \(S_{k}(\cdot)\) to a table \(\mathbf{T}\in\mathcal{T}\) over cells \(\mathcal{X}_{k}^{\prime}\subseteq\mathcal{X}\) such that \(\mathcal{S}_{k}(\mathbf{T}^{\prime})=\sum_{x\in\mathcal{X}}\mathbf{T}^{\prime} (x)S_{k}(x)\). The ordered collection1 of summary statistic operators \(\big{\{}\mathcal{S}_{1}(\mathbf{T}^{\prime}),\dots,\mathcal{S}_{K}(\mathbf{T}^ {\prime})\big{\}}\) is abbreviated by \(\boldsymbol{\mathcal{S}}(\mathbf{T}^{\prime})\). Define a collection of discrete summary statistics \(\mathcal{C}_{T}=\big{\{}\mathbf{s}_{1},\dots,\mathbf{s}_{K}\big{\}}\) expressed as constraints on table space, where each \(\mathbf{s}_{k}\) is a realisation of \(\mathcal{S}_{k}\). We leverage the same convention to define continuous constraints \(\mathcal{C}_{\Lambda}\) in the intensity space. The union of table and intensity constraints is summarised by \(\mathcal{C}\). We sometimes refer to \(\mathcal{C}_{T}\) by \(\mathcal{C}\) to avoid notation clutter. 
In Table 1 the singly constrained ODM model corresponds to a given \(\mathcal{C}\), as opposed to singly constrained tables and intensities that map to \(\mathcal{C}_{T}\) and \(\mathcal{C}_{\Lambda}\), respectively. Equivalently, constrained models are defined by combinations of constrained tables and intensities. Footnote 1: Summary statistics are arranged in increasing order of cell set sizes \(|\mathcal{X}_{k}|\). **Definition 3.1**.: Consider an ordered1 collection of constraints \(\mathcal{C}_{T}\) and table summary statistics operators \(\boldsymbol{\mathcal{S}}\) with associated functions \(\mathbf{S}\). A table \(\mathbf{T}^{\prime}\) is \(\mathcal{C}_{T}\)_-admissible_ if and only if its summary statistics satisfy all the constraints in \(\mathcal{C}_{T}\), i.e. \(\mathcal{S}_{k}(\mathbf{T}^{\prime})=\mathbf{s}_{k}\in\mathcal{C}_{T}\)\(\forall\)\(k=1,\dots,K\). We denote the function space of all \(\mathcal{C}_{T}\)-admissible contingency tables (origin-destination matrices) of dimension \(\dim(\mathbf{T})=I\times J\) by \(\mathcal{T}_{\mathcal{C}_{T}}=\{\mathbf{T}\in\mathcal{T}:\)\(\boldsymbol{\mathcal{S}}(\mathbf{T})=\mathcal{C}_{T}\}\) and drop the dependence on \(T\) for notational convenience. This space contains all agent location choices consistent with the aggregate summary statistics \(\mathcal{C}_{T}\). Figure 1: Plate diagram of our modelling framework. Rectangular and circular nodes are deterministic and random variables, respectively. Shaded nodes correspond to conditioned quantities. The set \(\mathcal{T}_{\mathcal{C}_{k}}\) contains all tables that satisfy the \(k\)-th constraint of \(\mathcal{C}_{T}\) when \(\mathcal{S}_{k}\) is applied over cells \(\mathcal{X}_{k}\). In the rest of the paper we set \(\mathcal{C}_{\Lambda}=\left\{\Lambda_{++}\right\}\) unless otherwise stated. Our goal is to sample from \(P(\mathbf{T},\mathbf{x},\mathbf{\theta}|\mathcal{C},\mathbf{y})\)2, where Footnote 2: We denote probability distributions over discrete, continuous and mixed discrete-continuous supports by \(\mathbb{P}\), \(p\), \(P\), respectively. \[P(\mathbf{T},\underbrace{\mathbf{x},\mathbf{\theta}}_{\mathbf{\Lambda}}|\mathcal{ C},\mathbf{y})\propto\mathbb{P}(\mathbf{T}|\mathbf{x},\mathbf{\theta},\mathcal{C})p( \mathbf{y}|\mathbf{x})p(\mathbf{x}|\mathbf{\theta},\mathcal{C})p(\mathbf{\theta}). \tag{16}\] We achieve this by devising a Metropolis-Hastings-within-Gibbs scheme to sample from \(\mathbb{P}(\mathbf{T}|\mathbf{x},\mathbf{\theta},\mathcal{C})\), \(p(\mathbf{x}|\mathbf{\theta},\mathbf{T},\mathbf{y})\) and \(p(\mathbf{\theta}|\mathbf{x},\mathbf{T},\mathbf{y})\). 
The conditional samplers for \(\mathbf{x}\) and \(\mathbf{\theta}\) have acceptance ratios similar to those in (Ellam et al., 2018) and equal to \[p(\mathbf{x}^{\prime},\mathbf{m}^{\prime}|\mathbf{x},\mathbf{m},\mathbf{\theta}, \mathbf{T},\mathcal{C},\mathbf{y}) =\min\left(1,\frac{\mathbb{P}(\mathbf{T}|\mathbf{x}^{\prime},\mathbf{ \theta},\mathcal{C})p(\mathbf{y}|\mathbf{x}^{\prime})\exp\left(-H_{\theta}( \mathbf{x}^{\prime})\right)}{\mathbb{P}(\mathbf{T}|\mathbf{x},\mathbf{\theta}, \mathcal{C})p(\mathbf{y}|\mathbf{x})\exp\left(-H_{\theta}(\mathbf{x})\right)} \right), \tag{17}\] \[p(\mathbf{\theta}^{\prime}|\mathbf{\theta},\mathbf{x},\mathbf{m},\mathbf{ T},\mathcal{C},\mathbf{y}) =\min\left(1,\frac{\mathbb{P}(\mathbf{T}|\mathbf{x},\mathbf{\theta}^{ \prime},\mathcal{C})\exp\left(-\gamma\mathcal{V}_{\mathbf{\theta}^{\prime}}( \mathbf{x})\right)Z(\mathbf{\theta})p(\mathbf{\theta}^{\prime})}{\mathbb{P}(\mathbf{T} |\mathbf{x},\mathbf{\theta},\mathcal{C})\exp\left(-\gamma\mathcal{V}_{\mathbf{\theta} }(\mathbf{x})\right)Z(\mathbf{\theta}^{\prime})p(\mathbf{\theta})}\right), \tag{18}\] where \(H_{\theta}(\mathbf{x}^{\prime})=-\gamma\mathcal{V}_{\mathbf{\theta}}(\mathbf{x}^{ \prime})-1/2|\mathbf{m}^{\prime}|^{2}\) is the Hamiltonian of state \(\mathbf{x}^{\prime}\) with associated momentum \(\mathbf{m}^{\prime}\). Although a singly constrained intensity can be leveraged here, enforcing hard constraints through \(\mathcal{C}_{\Lambda}\) and potentially different soft constraints through \(\mathbb{P}(\mathbf{T}|\mathbf{x},\mathbf{\theta},\mathcal{C})\) would cause identifiability issues in \(\mathbf{x}\). We aim to provide a general construction for joint table and intensity inference and employ singly constrained SIMs only when \(\mathcal{C}_{T}=\{\mathbf{T}_{+}\}\). In the following exposition, we show that the type of summary statistic data available determines whether the constrained table distribution \(\mathbb{P}(\mathbf{T}|\mathbf{x},\mathbf{\theta},\mathcal{C})\) can be sampled directly or indirectly through Markov Chain Monte Carlo (MCMC). ### Tractable constrained table sampling In this section, we offer closed-form contingency table sampling. Without loss of generality, assume that only one of the two table margins is known; namely \(\mathbf{T}_{+}\). (singly constrained table). Then, any subset \(\mathcal{C}_{T}\) of the universe of summary statistic constraints \(\left\{T_{++},\mathbf{T}_{++},\left\{\mathbf{T}\mathcal{X}_{k}|\mathcal{X}_{l }\subseteq\mathcal{X},l\in\mathbb{N}\right\}\right\}\) yields a closed-form posterior table marginal as shown in Table 1. By the construction in (15), the case for \(\mathcal{C}_{T}=\emptyset\) is equivalent to unconstrained table sampling conditioned on an intensity model, which in our case is a SIM. Cell constraints \(\mathcal{C}_{T}=\left\{\mathbf{T}\mathcal{X}_{l}|\mathcal{X}_{l}\subseteq \mathcal{X},l\in\mathbb{N}\right\}\) can be seamlessly incorporated in an unconstrained table without violating the posterior's tractability. Furthermore, leveraging that \(\mathbf{T}\) uniquely determines both \(T_{++}\) and \(\mathbf{T}_{+}\). and applying Bayes' rule it follows that the models with \(\mathcal{C}_{T}=\{T_{++}\}\) and \(\mathcal{C}_{T}=\{\mathbf{T}_{+}\}\) yield Multinomial and product Multinomial distributions, respectively (See full derivations in Appendix A). 
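The three tractable samplers just described (and written out explicitly in (19)–(20) below) amount to a few lines of NumPy; the sketch below uses hypothetical helper names and, purely for illustration, takes the fixed margin to be the row sums.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_unconstrained(Lam):
    """Eq. (15): independent Poisson cells conditioned on the intensity."""
    return rng.poisson(Lam)

def sample_grand_total(Lam, T_total):
    """Multinomial table given only the grand total T_++ (cf. (19) below)."""
    p = (Lam / Lam.sum()).ravel()
    return rng.multinomial(T_total, p).reshape(Lam.shape)

def sample_fixed_margins(Lam, row_sums):
    """Product Multinomial given one set of margins, here the rows (cf. (20) below)."""
    probs = Lam / Lam.sum(axis=1, keepdims=True)
    return np.stack([rng.multinomial(n, p) for n, p in zip(row_sums, probs)])
```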
Equivalently, \[\mathbb{P}(\mathbf{T}|\boldsymbol{\Lambda},T_{++})=T_{++}!\prod_{i,j}^{I,J}\left(\frac{1}{T_{ij}!}\left(\frac{\Lambda_{ij}}{\Lambda_{++}}\right)^{T_{ij}}\right), \tag{19}\] and \[\mathbb{P}(\mathbf{T}|\boldsymbol{\Lambda},\mathbf{T}_{\cdot+})=\prod_{i=1}^{I}\left(T_{i+}!\prod_{j=1}^{J}\frac{1}{T_{ij}!}\left(\frac{\Lambda_{ij}}{\Lambda_{i+}}\right)^{T_{ij}}\right). \tag{20}\] We obtain \(N\) independent samples \(\mathbf{T}^{(1:N)}\) from (15), (19) and (20) in closed form. Samples from the Poisson and product Multinomial distributions can be drawn in parallel. We note that the space complexity of table sampling is \(\mathcal{O}(IJ)\), while the time complexity for (15), (19) and (20) is \(\mathcal{O}(IJ)\), \(\mathcal{O}(1)\) and \(\mathcal{O}(I)\), respectively. Moreover, coupling either constraint \(T_{++}\) or \(\mathbf{T}_{\cdot+}\) with cell constraints leaves the target distribution unchanged but shrinks its support. Hence, the available table margin is updated by subtracting the value of fixed cell constraints from the margin statistics and performing inference on the free cells. We present the joint intensity and table sampling algorithm for tractably constrained tables in Algorithm 1. ``` 1: Inputs: \(\mathbf{C}\), \(\mathcal{C}\), \(\mathbf{y}\), \(N\). 2: Outputs: \(\mathbf{x}^{(1:N)},\boldsymbol{\theta}^{(1:N)},\text{sign}\left(\boldsymbol{\theta}^{(1:N)}\right),\mathbf{T}^{(1:N)}\). 3: Initialise \(\mathbf{x}^{(0)},\boldsymbol{\theta}^{(0)},\mathbf{T}^{(0)}\). 4: for \(n\in\{1,\ldots,N\}\) do 5: Sample \(\mathbf{x}^{(n)}|\boldsymbol{\theta}^{(n-1)},\mathbf{T}^{(n-1)}\) using Hamiltonian Monte Carlo (Neal, 2011) with acceptance (17). 6: Sample \(\boldsymbol{\theta}^{(n)}|\mathbf{x}^{(n)},\mathbf{T}^{(n-1)}\) using Random Walk Metropolis-Hastings with acceptance (18). 7: Construct the intensity \(\boldsymbol{\Lambda}^{(n)}\) from \(\mathbf{x}^{(n)}\), \(\boldsymbol{\theta}^{(n)}\) using (5). 8: Sample tables (in parallel) from the relevant closed-form distribution (15), (19) or (20): \(\mathbf{T}^{(n)}\sim\mathbb{P}(\mathbf{T}|\boldsymbol{\Lambda}^{(n)},\mathcal{C})\). 9: end for ``` **Algorithm 1** Metropolis-within-Gibbs MCMC sampling algorithm for tractably constrained tables. ### Intractable constrained table sampling In this section, we introduce an MCMC scheme for sampling tables subject to any subset of the power set \(\mathcal{P}\big{(}\left\{\mathbf{T}_{\cdot+},\mathbf{T}_{+\cdot},\left\{\mathbf{T}_{\mathcal{X}_{l}}|\mathcal{X}_{l}\subseteq\mathcal{X},l\in\mathbb{N}\right\}\right\}\big{)}\), excluding those subsets contained in the constraint universe of the previous section. By conditioning on both table margins and leveraging the conditional distributions of \(\mathbf{T}_{\cdot+}|T_{++},\boldsymbol{\Lambda}\) and \(T_{++}|\boldsymbol{\Lambda}\), the induced conditional distribution becomes Fisher's non-central multivariate hypergeometric (Agresti, 2002): \[\mathbb{P}(\mathbf{T}|\boldsymbol{\Lambda},\mathbf{T}_{\cdot+},\mathbf{T}_{+\cdot})\propto\frac{\prod_{i=1}^{I}T_{i+}!\prod_{j=1}^{J}T_{+j}!}{T_{++}!\prod_{i,j=1}^{I,J}T_{ij}!}\prod_{i,j=1}^{I,J}\left(\frac{\Lambda_{ij}\Lambda_{++}}{\Lambda_{i+}\Lambda_{+j}}\right)^{T_{ij}}, \tag{21}\] where \(\omega_{ij}=\frac{\Lambda_{ij}\Lambda_{++}}{\Lambda_{i+}\Lambda_{+j}}\) is called the odds ratio and encodes the strength of dependence between row \(i\) and column \(j\). Complete independence is achieved if and only if \(\omega_{ij}=1\). Our choice of intensity model encodes this dependence in the travel cost matrix \(\mathbf{C}\). 
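A small sketch, assuming only NumPy and SciPy, evaluates the odds ratios and the log of the unnormalised right-hand side of (21); the normalising constant is deliberately ignored, as it is what makes this distribution intractable.

```python
import numpy as np
from scipy.special import gammaln

def odds_ratios(Lam):
    """omega_ij = Lambda_ij * Lambda_++ / (Lambda_i+ * Lambda_+j) from Eq. (21)."""
    return Lam * Lam.sum() / (Lam.sum(axis=1, keepdims=True) * Lam.sum(axis=0, keepdims=True))

def log_fisher_unnormalised(T, Lam):
    """Log of the right-hand side of (21), up to the intractable normalising constant."""
    log_hypergeom = (gammaln(T.sum(axis=1) + 1).sum()    # sum_i log T_i+!
                     + gammaln(T.sum(axis=0) + 1).sum()  # sum_j log T_+j!
                     - gammaln(T.sum() + 1)              # - log T_++!
                     - gammaln(T + 1).sum())             # - sum_ij log T_ij!
    return log_hypergeom + (T * np.log(odds_ratios(Lam))).sum()
```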
Origin-destination independence is achieved if and only if the travel cost's effect on destination choice is irrelevant (\(\beta=0\)). Moreover, the normalising constant of (21) is a partition function defined over the support of all tables satisfying the conditioned margins and can't be efficiently computed by direct enumeration. In Appendix B we prove an extension of Chu-Vandermonde's convolution theorem for Multinomial coefficients (Bellachir, 2014) that facilitates computation of the normalising constant in \(\mathcal{O}(1)\). In particular, we show that the following identity holds: \[\binom{T_{++}}{T_{+1}\ldots T_{+J}}\prod_{i,j}^{J}\omega_{ij}^{T_{+j}}=\sum_{ \mathcal{S}(\mathbf{T})=\mathcal{C}_{T}}\prod_{i,j}^{I,J}\binom{T_{+j}}{T_{1 j}\ldots T_{Ij}}\omega_{ij}^{T_{ij}}, \tag{22}\] where \(\binom{T_{+j}}{T_{1j}\ldots T_{Ij}}=\frac{T_{+j}!}{T_{1j}!\ldots T_{Ij}!}\) is the Multinomial coefficient. Shrinking the \(\mathcal{T}_{\mathcal{C}}\) space using elements of the constraint universe above requires a Markov Basis (MB) MCMC sampling scheme (Diaconis and Sturmfels, 1998) due to the intractability of the induced table posterior. #### 3.2.1 Markov Basis MCMC We construct a \(\mathcal{C}_{T}\)-admissible table for initialising Markov Basis MCMC using a suite of greedy deterministic algorithms, such as iterative proportional fitting (Bishop, Fienberg, and Holland, 2007). We concoct a proposal mechanism on \(\mathcal{T}_{\mathcal{C}}\) as follows. **Definition 3.2**.: A _null-admissible table_\(\mathbf{T}\) is a \(\mathcal{C}_{T}\)-admissible table with \(\mathcal{C}_{T}\subseteq\{\mathbf{T}_{\cdot+},\mathbf{T}_{+\cdot}\}\) and \(\forall\ \mathbf{s}\in\mathcal{C}_{T}\) it follows that \(\mathbf{s}=\mathbf{0}\). **Definition 3.3**.: : A _Markov basis_ is a set of table moves \(\mathbf{f}_{1},\ldots,\mathbf{f}_{L}:\mathcal{X}\rightarrow\mathbb{Z}\) that satisfy the following conditions: 1. \(\mathbf{f}_{l}\) is a null-admissible table for \(1\leq l\leq L\) and 2. for any two \(\mathcal{C}_{T}\)-admissible \(\mathbf{T},\mathbf{T}^{\prime}\) there are \(\mathbf{f}_{l_{1}},\ldots,\mathbf{f}_{l_{A}}\) and \(\eta_{1},\ldots,\eta_{A}\in\mathbb{N}\) such that \(\mathbf{T}^{\prime}=\mathbf{T}+\sum_{m=1}^{A}\eta_{m}\mathbf{f}_{l_{m}}\) and \(\mathbf{T}+\sum_{m=1}^{a}\eta_{m}\mathbf{f}_{l_{m}}\geq 0\) for \(1\leq a\leq A\). Condition (i) guarantees that all proposed moves do not modify the summary statistics in \(\mathcal{C}_{T}\), while condition (ii) ensures that there exists a path between any two tables such that any table member of the path is \(\mathcal{C}_{T}\)-admissible. The collection of constraints \(\mathcal{C}_{T}\) generates a Markov basis \(\mathcal{M}\). When \(I\times J\) tables satisfy both row and column margins, \(\mathcal{M}\) consists of functions \(\mathbf{f}_{1},\ldots,\mathbf{f}_{L}\) such that \(\forall\ x=(i_{1},j_{1}),x^{\prime}=(i_{2},j_{2})\in\mathcal{X}\) with \(i_{1}\neq i_{2},j_{1}\neq j_{2}\), \[\mathbf{f}_{l}(x)=\begin{cases}\eta&\text{if }x=(i_{1},j_{1})\text{ or }x=(i_{2},j_{2})\\ -\eta&\text{if }x=(i_{1},j_{2})\text{ or }x=(i_{2},j_{1})\\ 0&\text{otherwise}\end{cases} \tag{23}\] The case for coupling individual cell constraints with table margins requires a minor modification. Let \(\mathcal{X}^{\prime}\subseteq\mathcal{X}\) and \(\mathcal{C}^{\prime}_{T}\) be the individual cell admissibility criteria. Then, \(\mathcal{M}\) is updated to exclude all basis functions \(\mathbf{f}_{l}\) with \(\mathbf{f}_{l}(x)\neq 0\) for some \(x\in\mathcal{X}^{\prime}\). 
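The \(2\times 2\) swap moves of (23) can be checked directly; the snippet below (hypothetical helper, plain NumPy) builds one such move and verifies that it leaves both margins unchanged, while non-negativity of the proposed table has to be enforced separately.

```python
import numpy as np

def basis_move(I, J, i1, i2, j1, j2, eta=1):
    """Markov basis move of Eq. (23): +eta on (i1,j1),(i2,j2) and -eta on (i1,j2),(i2,j1)."""
    f = np.zeros((I, J), dtype=int)
    f[i1, j1] += eta
    f[i2, j2] += eta
    f[i1, j2] -= eta
    f[i2, j1] -= eta
    return f

T = np.array([[4, 1, 2],
              [2, 3, 5]])
f = basis_move(*T.shape, i1=0, i2=1, j1=0, j2=2)
T_new = T + f
# Both margins are untouched (condition (i) of Definition 3.3) ...
assert (T_new.sum(axis=0) == T.sum(axis=0)).all()
assert (T_new.sum(axis=1) == T.sum(axis=1)).all()
# ... and this particular proposal also keeps every cell non-negative.
assert (T_new >= 0).all()
```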
Moreover, \(\mathcal{C}_{T}\) is revised so that \(\forall\ \mathbf{s}\in\mathcal{C}_{T}\), \(\mathbf{s}^{\prime}\in\mathcal{C}_{T}^{\prime}\), \(\mathbf{s}\) is updated to \(\mathbf{s}-\mathbf{s}^{\prime}\) at every \(x\in\mathcal{X}^{\prime}\). In other words, the constrained cell values are deducted from the rest of the summary statistic constraints in \(\mathcal{C}_{T}\). A Markov Basis Markov chain (MBMC) can now be constructed. **Proposition 3.1**.: _(Adapted from (Diaconis and Sturmfels, 1998)): Let \(\mu\) be a probability measure on \(\mathcal{T}_{\mathcal{C}}\). Given a Markov basis \(\mathcal{M}\) that satisfies 3.3, generate a Markov chain in \(\mathcal{T}_{\mathcal{C}}\) by sampling \(l\) uniformly at random from \(\{1,\ldots,L\}\). Consider the Markov Basis Metropolis-Hastings (MB-MH) and Gibbs (MB-Gibbs) proposals:_ 1. _MB-MH: Let_ \(\eta\in\{-1,1\}\) _and choose_ \(\eta\) _from this set with probability_ \(\frac{1}{2}\) _independent of_ \(l\)_. If the chain is at_ \(\mathbf{T}\in\mathcal{T}_{\mathcal{C}}\) _it will move to_ \(\mathbf{T}^{\prime}=\mathbf{T}+\eta\mathbf{f}_{l}\) _with probability_ \[\min\biggl{\{}\frac{\mu(\mathbf{T}+\eta\mathbf{f}_{l})}{\mu(\mathbf{T})},1\biggr{\}}\] _provided_ \(\mathbf{T}^{\prime}\geq 0\)_. In all other cases, the chain stays at_ \(\mathbf{T}\)_._ 2. _MB-Gibbs: Let_ \(\eta\in\mathbb{Z}\)_. If the chain is at_ \(\mathbf{T}\in\mathcal{T}_{\mathcal{C}}\)_, determine the set of_ \(\eta\) _such that_ \(\mathbf{T}+\eta\mathbf{f}_{l}\geq 0\)_. Choose_ \[\mathbb{P}(\eta)\propto\prod_{x\in\{x\in\mathcal{X}:\mathbf{f}_{l}(x)\neq 0 \}}\frac{1}{\mu\biggl{(}n(x)+\eta\mathbf{f}_{l}(x)\biggr{)}}\] _and move to_ \(\mathbf{T}^{\prime}=\mathbf{T}+\eta\mathbf{f}_{l}\geq 0\)_._ _In both cases an aperiodic, reversible, connected Markov chain in \(\mathcal{T}_{\mathcal{C}}\) is constructed with stationary distribution proportional to \(\mu(\mathbf{T})\)._ The proof of Proposition 3.1 is provided in (Diaconis and Sturmfels, 1998). Theoretical guarantees of Markov Basis MCMC convergence on \(\mathcal{T}_{\mathcal{C}}\) show that the MB-MH scheme in 3.1 mixes slowly and is not scalable to high-dimensional \(I\times J\) tables for large \(T_{++}\). Instead, a Gibbs sampler can be constructed as detailed in the same proposition (MB-Gibbs). In doubly constrained tables, \(\eta\) is distributed according to Fisher's non-central hypergeometric distribution for \(2\times 2\) tables. The derivation of this result is provided in Corollary A.4.1 of Appendix A. The overhead of generating \(\mathcal{M}\) for any constrained table is at most \(\mathcal{O}(I^{2}J^{2})\) in both time and space. This overhead can be easily overcome by amortising the construction of \(\mathcal{M}\) prior to sampling. The sampling procedure for a constrained model with an intractable parallel marginal distribution and underlying SIM intensity model is summarised in Algorithm 2. The time complexity of proposing a move in \(\mathcal{T}_{\mathcal{C}}\) is \(\mathcal{O}(1)\) and \(\mathcal{O}(\max\{\max(\mathbf{s})\bigm{|}\mathbf{s}\in\mathcal{C}\})\) for MB-MH and MB-Gibbs, respectively. The corresponding space complexities are both \(\mathcal{O}(IJ)\). ``` 1:Inputs: \(\mathbf{C}\), \(\mathcal{C}\), \(\mathbf{y}\), \(\mathcal{M}\), \(\mu\), \(N\). 2:Outputs: \(\mathbf{x}^{(1:N)},\boldsymbol{\theta}^{(1:N)}\), sign \(\left(\boldsymbol{\theta}^{(1:N)}\right),\mathbf{T}^{(1:N)}\). 3: Initialise \(\mathbf{x}^{(0)},\boldsymbol{\theta}^{(0)},\mathbf{T}^{(0)}\). 
4:for\(m\in\{1,\ldots,M\}\)do 5: Sample \(\mathbf{x}^{(n)}|\boldsymbol{\theta}^{(n-1)},\mathbf{T}^{(n-1)}\) using Hamiltonian Monte Carlo (Neal, 2011) with acceptance 17. 6: Sample \(\boldsymbol{\theta}^{(n)}|\mathbf{x}^{(n)},\mathbf{T}^{(n-1)}\) using Random Walk Metropolis-Hastings with acceptance 18. 7: Construct intensity \(\mathbf{\Lambda}^{(n)}\) using \(\mathbf{x}^{(n)},\boldsymbol{\theta}^{(n)}\) using 5. 8: Sample \(l\) uniformly at random from \(\{1,\ldots,L\}\). 9: Find the valid \(\eta\) support yielding \(\mathcal{C}_{T}\)-admissible tables. 10: Use MB-Gibbs in case 2 of 3.1 to sample valid \(\eta\) with specified \(\mu\). 11: Obtain \(\mathbf{T}^{(n)}=\mathbf{T}^{(n-1)}+\eta\mathbf{f}_{l}\). 12:endfor ``` **Algorithm 2** Metropolis-within-Gibbs Markov Basis MCMC sampling algorithm for intractably constrained tables. The curse of dimensionality prohibits the use of any standard convergence diagnosis techniques, such as the Gelman and Rubin criterion (Gelman and Rubin, n.d.). Therefore, we employ the \(l_{1}\) norm to empirically assess the convergence of sample summary statistics and establish convergence in probability. Furthermore, we assume the underlying intensity function is known a priori, which acts as a ground truth. In the case of Fisher's non-central hypergeometric distribution, exact moments are not available (McCullagh and Nelder, 2019). These are approximated by the moments of a product Multinomial kernel derived in Appendix A. ## 4 Experimental Results and Discussion We showcase table sampling convergence results based on a fixed synthetic intensity across different numbers of origins \(I\), destinations \(J\) and agents \(M=T_{++}\). Figure 2 depicts empirical convergence rates based on a total of \(10^{3}\) chains each run for \(10^{3}\) steps. Sparse tables () induce multimodal distributions in \(\mathcal{T}_{\mathcal{C}}\) and mix slowly compared to their dense counterparts (). Convergence is decelerated more by a larger number of agents rather than higher table dimensionality. The number of agents grows as fast as the diameter of the chain's state space and bounds the number of MCMC steps required to reach the stationary distribution. This observation agrees with the theoretical bounds obtained in (Diaconis and Sturmfels, 1998), although the latter bounds are derived based on a uniform measure over \(\mathcal{T}_{\mathcal{C}}\) explored using MB-MH. Despite this discrepancy, theoretical results provide an upper bound for our case of direct sampling, as evidenced by Figure 3. Direct sampling from the closed-form table posterior achieves the fastest convergence, and we use it to benchmark against Markov basis MCMC. Any doubly constrained table can be explored using either MB-MH () or MB-Gibbs (). Encoding additional constraints in \(\mathcal{T}\) to contract the posterior entails the overhead of using MCMC, introducing a tradeoff between convergence rate and distribution contraction in the presence of more summary statistic constraints \(\mathcal{C}_{T}\). Furthermore, we present a large-scale application of discrete ODM reconstruction to Cambridge commuting patterns from residence to workplace locations, using the ODM models in Table 1. The precise experimental setup mimics that of (Ellam et al., 2018) and is provided in the Supporting information. In light of new summary statistics \(\mathcal{C}_{T}\) (e.g. 
\(\equiv\), \(\varphi\), \(\varphi\)), the table posterior contracts and its high mass region concentrates around the ground truth table (), as shown in Figure 4. The fact that the low-noise table samples (\(\varphi\),\(\Psi\), \(\varphi\)) are nearly their high-noise counterparts (-,\(\Psi\), \(\varphi\), \(\varphi\)) indicates a more dominant effect of the table likelihood on the posterior relative to that of the intensity SDE prior, which enforces the confidence in our reconstructed ODM. The intensity samples of (Gaskin, Pavliotis, and Girolami, 2023) (-,-,-) have the highest variance amongst the sampled intensities due to the random initialisations of the Neural ODE solver in (Gaskin, Pavliotis, and Girolami, 2023). Despite this, the intensity distributions in (Ellam et al., 2018) and (Gaskin, Pavliotis, and Girolami, 2023) have insufficient \(\mathcal{C}_{\Lambda}\) constraints and a higher divergence from the ground truth table region than table samples. Our intensity samples are also distant from the ground truth table (\(\varphi\),\(\mathbb{B}\),\(\varphi\), \(\varphi\), \(\varphi\), \(\varphi\), \(\varphi\), \(\varphi\)) because they are informed strongly by \(\mathcal{C}_{\Lambda}\) and weakly by \(\mathcal{C}_{T}\) (See Figure 1), where the former set is smaller than the latter. The ODM validation results summarised in Table 2 affirm that reasoning at the discrete table level accomplishes greater error reductions and enhanced ground truth coverage. Data fitness and posterior prediction errors are computed using the Sorensen similarity index (SSI), standardised root mean square error (SRMSE) and Markov Basis distance (MBD). Uncertainty quantification is evaluated based on the coverage probability (CP) of ground truth table cells contained in the \(99\%\) highest posterior mass (HPM) region. We elucidate each of these metrics in the Supporting information. The best error-coverage tradeoff, lowest SRMSE, MBD, and highest SSI are attained in the doubly and \(20\%\) cell constrained model due to it having the richest constraint set \(\mathcal{C}\). Our doubly constrained models account for an SRMSE reduction of \(16\%\) relative to the singly constrained model while sustaining an acceptable ground truth coverage equal to approximately \(80\%\). The apparent increase in the mean intensity SRMSE across all doubly constrained models potentially alludes to the SIM's lack of expressivity. This may be because \(\mathcal{C}_{T}\) and \(\mathbf{y}\) give rise to conflicting SIM parameter configurations in the limit of large \(\mathcal{C}_{T}\). The MBD decrease in the growth of \(\mathcal{C}\) indicates that the expected upper bound on the number of Markov Basis moves required to exactly match \(\mathbf{T}^{\mathcal{D}}\) is reduced. In the totally and singly constraint models, our table posterior mean matches or outperforms the intensity mean of (Ellam et al., 2018) and (Gaskin, Pavliotis, and Girolami, 2023) in terms of data fit (SSI) and Figure 4: Visualisation of the table (left) and intensity (right) samples projected in 2D using T-distributed stochastic neighbour embedding (Hinton and Roweis, 2002). Samples are coloured by the constraint sets in Table 2 for low (e), high (*) and variable (*) noise regimes. The ground truth table (*) is better covered by the discrete table posterior regardless of \(\mathcal{C}\), and the table distribution becomes increasingly concentrated around the ground truth table in light of more data \(\mathcal{C}_{T}\). 
Intensity samples are weakly informed through \(\mathcal{C}_{\Lambda}\) and \(\mathbb{P}(\mathbf{T}|\mathbf{x},\boldsymbol{\theta},\mathcal{C})\) and more distant from the ground truth. Figure 3: \(l_{1}\) error norm of \(\mathbb{E}[\mathbf{T}|\mathbf{y},\mathcal{C}_{T}]\) for a \(33\times 33\) table with \(5000\) agents in the singly () and doubly () constrained tables. MB-Gibbs has a substantially faster convergence rate than MB-MH and mixes reasonably slower compared to direct sampling. Ground truth averages \(g(\boldsymbol{\Lambda})\) are approximate for doubly constrained tables (). Figure 2: \(l_{1}\) error norm of \(\mathbb{E}[\mathbf{T}|\mathbf{y},\mathbf{T}_{+}]\) across table sizes \(\dim(\mathbf{T})\) and number of agents \(T_{++}\) using Algorithm 1. Convergence is slower for sparse tables () that induce multimodal distributions. As \(T_{++}\) grows () convergence is decelerated by a factor inversely proportional to the table size, which agrees with the theoretical bounds established in (Diaconis and Sturmfels, 1998). SRMSE. The highest ground truth cell coverage probability (\(94\%\)) is achieved by the most relaxed table, namely the unconstrained table, but entails a high bias. A lower SRMSE (\(0.67\) instead of \(0.85\)) is attained by the intensity field of the totally constrained model in (Gaskin, Pavliotis, and Girolami, 2023), at the expense of a coverage probability drop from \(94\%\) to \(85\%\) and a discretisation error accrued for population synthesis. Our framework's benefits also extend to SIM parameter estimation. In Figure 5 we show that the log destination attraction prediction \(R^{2}\) increases for larger constraint sets \(\mathcal{C}_{T}\) from \(0.77\) to \(0.84\). This allows us to explain the evolved destination employment by informing the data-generating process through \(\mathcal{C}\) instead of increasing the diffusivity of the SDE prior in (12). Therefore, we mitigate the identifiability issues of the multimodal \(\boldsymbol{\theta}\) posterior emerging in the high noise regime. The \(\mathbf{x}\) predictions are further improved in the high noise regime (\(R^{2}=0.99\)) compared to the low noise counterpart (\(R^{2}=0.84\)), which favours the hypothesis of a stochastic growth in destination employment. In the high noise regime, unbiased estimators of \(\boldsymbol{\theta}\) are devised based on a more disperse SDE prior on \(\mathbf{A}\) 12. Increased prior diffusivity steers the \(\mathbf{x}\) posterior marginal towards a larger region of plausible SDE solutions in the vicinity of \(\mathbf{y}\), which improves the quality of \(\mathbf{x}\) predictions. Additionally, we recover the \(\mathbf{x}\) and \(\boldsymbol{\theta}\) posterior marginals obtained in (Ellam et al., 2018) at a fraction of additional computational cost. In conclusion, performing population synthesis directly on the discrete high-resolution space of agent attributes bears tangible empirical benefits. These include improved reconstruction and coverage of the ground truth ODM, as well as table posterior contraction in the limit of constraint data \(\mathcal{C}_{T}\). If population synthesis is not of interest, SIM parameters can be adequately estimated using competitive approaches such as (Gaskin, Pavliotis, and Girolami, 2023). Combining such optimisation methods with Markov Basis MCMC in a naive Bayes scheme can be promising, as it exploits the advantages of both optimisation and MCMC techniques. 
Table 2: Validation of the reconstructed ODMs under the totally, singly, doubly, and doubly plus 20% cell constrained models, comparing table and intensity posterior means from this work against (Ellam et al., 2018) and (Gaskin, Pavliotis, and Girolami, 2023) across noise regimes \(\gamma\), in terms of SSI, SRMSE, MBD and the coverage probability of the \(99\%\) highest posterior mass region. Regardless, the apparent shortcomings of SIMs call for a comparative study of various intensity model classes, such as discrete choice models (Train, 2009). 
Finally, the multi-faceted nature of population synthesis opens up future avenues of research beyond ODM reconstruction, where more convoluted dependency structures can be exploited. ## Software and Data Trip and employment data for Cambridge, UK are obtained from (Office for National Statistics, 2015; Office for National Statistics, 2014). Individual home and work facility locations are extracted from (Geofabrik, 2023). Our codebase has been released on [https://github.com/YannisZaf/ticodm](https://github.com/YannisZaf/ticodm).
2307.05833
Spatially variable crater morphology on the dwarf planet Haumea
Haumea, thought to be the Kuiper Belt's 3rd most massive object, has a fast 3.92 hr rotational period, resulting in its shape as a triaxial ellipsoid. Here, we make the first detailed predictions of Haumea's surface morphology, considering in particular effects stemming from its unique shape. Given observations have indicated Haumea's surface to be predominantly inert water ice, we predict crater characteristics, with craters likely to be the predominant surface feature on Haumea. In calculating Haumea's surface gravity, we find that g varies by almost two orders of magnitude, from a minimum of 0.0126 m/s^2 at the location of the equatorial major axis, to 1.076 m/s^2 at the pole. We also find a non-monotonic decrease in g with latitude. The simple to complex crater transition diameter varies from 36.2 km at Haumea's location of minimum surface gravity to 6.1 km at the poles. Equatorial craters are expected to skew to larger volumes, have depths greater by a factor of > 2, and have thicker ejecta when compared with craters at high latitudes. Considering implications for escape of crater ejecta, we calculate that Haumea's escape velocity varies by 62% from equator to pole. Despite higher escape velocities at the poles, impacts there are expected to have a higher mass fraction of ejecta escape from Haumea's gravitational well. Haumea may be unique among planet-sized objects in the solar system in possessing dramatic variations in crater morphology across its surface, stemming solely from changes in the magnitude of its surface gravity.
George D McDonald, Lujendra Ojha
2023-07-11T22:56:56Z
http://arxiv.org/abs/2307.05833v3
# Spatially variable crater morphology on the dwarf planet Haumea ###### Abstract Haumea, thought to be the Kuiper Belt's 3rd most massive object, has a fast 3.92 hr rotational period, resulting in its shape as a triaxial ellipsoid. Here, we make the first detailed predictions of Haumea's surface morphology, considering in particular effects stemming from its unique shape. Given observations have indicated Haumea's surface to be predominantly inert water ice, we predict crater characteristics, with craters likely to be the predominant surface feature on Haumea. In calculating Haumea's surface gravity, we find that \(g\) varies by almost two orders of magnitude, from a minimum of 0.0126 m/s\({}^{2}\) at the location of the equatorial major axis, to 1.076 m/s\({}^{2}\) at the pole. We also find a non-monotonic decrease in \(g\) with latitude. The simple to complex crater transition diameter varies from 36.2 km at Haumea's location of minimum surface gravity to 6.1 km at the poles. Equatorial craters are expected to skew to larger volumes, have depths greater by a factor of \(>\) 2, and have thicker ejecta when compared with craters at high latitudes. Considering implications for escape of crater ejecta, we calculate that Haumea's escape velocity varies by 62% from equator to pole. Despite higher escape velocities at the poles, impacts there are expected to have a higher mass fraction of ejecta escape from Haumea's gravitational well. Haumea may be unique among planet-sized objects in the solar system in possessing dramatic variations in crater morphology across its surface, stemming solely from changes in the magnitude of its surface gravity. + Footnote †: journal: George D. McDonald [email protected] 0000-0002-8818-7885]George D. McDonald 0000-0002-4880-7886]Lujendra Ojha ## 1 Introduction The dwarf planet Haumea is the 3rd brightest (Brown et al., 2006) and 3rd most massive Kuiper Belt Object (Ragozzine & Brown, 2009; Rambaux et al., 2017; Dunham et al., 2019), barring for the uncertainty on Makemake's mass allowing for a small possibility that it is more massive, which would make Haumea 4th. From early in its characterization, Haumea was determined to be an extraordinary object. Its \(\sim\)3.92 hr rotation period is the shortest among solar system objects larger than 100 km and thought to be the result of either a giant impact collision (Brown et al., 2006, 2007; Noviello et al., 2022) or a graze-and-merge collision (Leinhardt et al., 2010; Proudfoot & Ragozzine, 2019, 2022). The rapid rotation rate imparted by Haumea's formation was thought to result in its shape being either a triaxial ellipsoid or an oblate spheroid where the equatorial and polar axes differed by \(>\) 30 % (Rabinowitz et al., 2006). Later photometric, and thermal flux measurements confirmed that Haumea was indeed a triaxial ellipsoid (Lockwood et al., 2014). The presently known most precise dimensions of Haumea come from the stellar occultation observations of Ortiz et al. 2017 with equatorial axes of \(a\) = 1161 \(\pm\) 30 km and \(b\) = 852 \(\pm\) 4 km, and a polar axis of \(c\) = 513 \(\pm\) 16 km. Spectroscopic and photometric observations have provided valuable information about Haumea's surface. Infrared spectroscopy by Trujillo et al. 2007 indicated a surface composition of 66 \(\sim\) 80% crystalline water ice, while Pinilla-Alonso et al. 
2009 used infrared spectroscopy in concert with Hapke scattering models to favor a surface covered by \(>\) 92% water ice, in close to a 1:1 ratio of amorphous to crystalline ice. Haumea's largest satellite, Hi'iaka, shares this water ice composition (Barkume et al., 2006). The presence of a heterogeneous surface in the form of a "dark red spot" was indicated by pho tometry, although the cause for this is unknown. While a distinct composition for this region is thought to be more likely, it may also be explained by variations in the water ice grain size (Lacerda et al., 2008). These constraints on Haumea's composition are valuable and provide a foundation from which additional inferences can be made. However, to date, there exists no method to observationally constrain the surface morphology of Haumea, and no studies to date have made detailed predictions for what might be expected of Haumea's surface morphology. In recent years, a major advancement in our knowledge of Kuiper Belt Object surfaces has been made by the observations of the _New Horizons_ mission. These observations revealed the dwarf planet Pluto to be a complex world-possessing both recently active geologic processes (Moore et al., 2016), as well as confirming extensive atmospheric photochemistry (Gladstone et al., 2016). Pluto's largest satellite, Charon, while posessing an older and largely cratered surface, also provided evidence for endogenic activity possibly related to an internal ocean and cryovolcanism (Moore et al., 2016). The availability of findings from _New Horizons_, in addition to the existing constraints on Haumea, make it timely to theorize on possible surface morphologies for this dwarf planet. Haumea's surface is predominantly water ice, which barring substantial present-day internal heat, will be involatile in the Kuiper belt (Brown et al., 2011). This precludes the mass movement of glacial flows, as well as any substantial vapor pressure supported atmosphere, as has been observed on Pluto (Moore et al., 2016; Gladstone et al., 2016). 3 - 50 mbar upper limits on atmospheric pressure, depending on composiiton, are also provided by Ortiz et al. 2017. The crystalline nature of the water ice suggests some sort of communication between the surface and interior, as amorphous water ice is more energetically favorable at Kuiper Belt conditions and radiation will convert crystalline ice to amorphous ice over time. Pinilla-Alonso et al. 2009 favor outgassing or the exposure of fresh material from large impacts (rather than cryovolcanism, due to the similar composition of the much smaller Haumea group objects), and estimate the surface age to be \(>10^{8}\) yr. An older surface coupled with few volatiles make cratering likely to be the major control on Haumea's surface morphology. One of the few processes that would be able to compete with cratering in sculpting Haumea's landscape is the extent of ice replenishment from the interior that is _not_ a result of impacts. With this knowledge, we focus on the manifestation of cratering on Haumea in predicting its likely surface morphologies, and leave consideration of the latter to studies modeling Haumea's interior. Haumea's shape as a triaxial ellipsoid, as well as its fast rotation rate, result in a surface gravitational acceleration that varies considerably as a function of position on Haumea's surface. The effect that this variable surface gravity has on crater morphologies is the primary focus of this manuscript. 
We first quantify Haumea's surface effective gravity and its spatial variations. This variable surface gravity drives the trends in the subsequent phenomena that we examine. We look at predicting crater types and dimensions and how they vary across Haumea's surface. We then look at spatial variations in crater ejecta characteristics. Finally, we look at how the fraction of ejecta that can escape from Haumea's gravitational well varies across the surface. ## 2 Haumea's surface gravity ### Coordinate system Throughout calculations in the manuscript, we adopt spherical coordinates with radial distance \(r\), polar angle \(\theta\), and azimuthal angle \(\lambda\). For plotting and geographic interpretation, \(\theta\) and \(\lambda\) are converted to latitude and longitude respectively. We define \(0^{\circ}\) azimuth and longitude to align with Haumea's equatorial semi-major axis, \(a\). By this convention, \(180^{\circ}\) longitude also corresponds with the equatorial major axis, while the equatorial minor axis corresponds to longitudes of -90 and \(90^{\circ}\). Our adopted longitudes are positive eastward. With this convention, the direction of increasing longitude aligns with the direction of Haumea's rotation. Spherical coordinates provide computational convenience, but we note that their use in specifying the surface of a triaxial ellipsoid results in some peculiarities. Specifically, both the latitudinal and longitudinal angles subtended by the same arc length will vary as a function of location on Haumea. In order to help orient the reader with respect to these effects, spatial plots are shown as both an equirectangular projection with latitude and longitude coordinates, as well as a three dimensional perspective with axes in units of length (km). Figure 1: The magnitude and sign of the total surface gravitational acceleration \(g\) as a function of latitude, at longitude \(=0^{\circ}\). The values of the 6 largest terms that contribute to the total magnitude of \(g\) are also shown, with signs indicated to show their relative contributions during summation within each \(g\) component. ### Methods: Gravity The effective gravity potential at Haumea's surface is the sum of the gravitational and centrifugal potentials. The effective gravity potential is expressed as a series of spherical harmonics. \[\begin{split}\Phi(r,\theta,\lambda)=-\frac{GM}{r}\bigg{\{}& 1+\sum_{n=2}^{\infty}\sum_{m=0}^{n}\left(\frac{R_{o}}{r} \right)^{n}P_{n}^{m}(\cos\theta)\\ \times&[C_{nm}\cos m\lambda+S_{nm}\sin m\lambda] \bigg{\}}\\ &-\frac{1}{2}\omega^{2}r^{2}\sin^{2}\theta\end{split} \tag{1}\] where \(G\) is the universal gravitational constant, \(M\) is Haumea's mass, \(R_{o}\) is the mean radius, and \(P_{n}^{m}\) are the associated Legendre polynomials. \(\omega\) is the angular velocity from Haumea's rotation. \(C_{nm}\) and \(S_{nm}\) are the spherical harmonic coefficients. For a triaxial ellipsoid, due to symmetries, \(S_{nm}=0\) for all \(n\) and \(m\), while specifically due to north-south symmetry \(C_{nm}=0\) for all odd \(n\) and \(m\). We evaluate the gravitational potential to the 4th order. We use the coefficients \(C_{20}\) through \(C_{44}\) as calculated by Sanchez et al. 2020, who used Haumea's shape as determined by Ortiz et al. 2017, and the methodology of Balmino 1994 for calculating the coefficients. Balmino 1994 present a methodology for calculating the spherical harmonic gravity coefficients for a triaxial ellipsoid, assuming a homogeneous composition (i.e. uniform density). 
We then calculate the effective surface gravitational acceleration (hereafter surface gravity) as the negative of the gradient of the effective gravity potential. \[\begin{split}\vec{g}=&-\vec{\nabla}\Phi\\ =&-\frac{\partial\Phi}{\partial r}\hat{r}-\frac{1}{r }\frac{\partial\Phi}{\partial\theta}\hat{\theta}-\frac{1}{r\sin\theta}\frac{ \partial\Phi}{\partial\lambda}\hat{\lambda}\end{split} \tag{2}\] In evaluating the gravitational acceleration at the surface of Haumea, we need to calculate the distance to the origin at given coordinates (\(\theta\),\(\lambda\)) on Haumea's surface. We do this by using the equation for a triaxial ellipsoid in spherical coordinates \[\frac{r^{2}\sin^{2}\theta\cos^{2}\lambda}{a^{2}}+\frac{r^{2}\sin^{2}\theta\sin^{2}\lambda}{b^{2}}+\frac{r^{2}\cos^{2}\theta}{c^{2}}=1 \tag{3}\] and solving for \(r\)(\(\theta\),\(\lambda\)). The physical parameters that we adopt for Haumea, as well as the numerical constants used in all calculations in the manuscript, are summarized in Table 1. 
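A numerical sketch of this procedure is given below, assuming the Table 1 values pair with unnormalised \(P_{n}^{m}\) exactly as written in (1) (the excerpt does not state the normalisation convention of the coefficients, so the printed magnitudes should be treated as illustrative); the mass is derived from the uniform density and ellipsoid axes, and the gradient of (2) is taken by central finite differences.

```python
import numpy as np
from scipy.special import lpmv  # unnormalised associated Legendre P_n^m

# Haumea parameters from Table 1 (SI units).
a, b, c = 1161e3, 852e3, 513e3
rho, R_o, omega = 1885.0, 797.6e3, 4.457e-4
G = 6.674e-11
M = rho * 4.0 / 3.0 * np.pi * a * b * c          # uniform-density mass
C = {(2, 0): -0.114805, (2, 2): 0.230731e-1,
     (4, 0): 0.305251e-1, (4, 2): -0.189209e-2, (4, 4): 0.950665e-4}

def r_surface(theta, lam):
    """Solve Eq. (3) for the surface radius at polar angle theta, azimuth lam."""
    s = (np.sin(theta)**2 * np.cos(lam)**2 / a**2
         + np.sin(theta)**2 * np.sin(lam)**2 / b**2
         + np.cos(theta)**2 / c**2)
    return 1.0 / np.sqrt(s)

def potential(r, theta, lam):
    """Effective potential of Eq. (1), truncated at degree 4 (S_nm = 0)."""
    series = 1.0
    for (n, m), Cnm in C.items():
        series += (R_o / r)**n * lpmv(m, n, np.cos(theta)) * Cnm * np.cos(m * lam)
    return -G * M / r * series - 0.5 * omega**2 * r**2 * np.sin(theta)**2

def surface_gravity(theta, lam, h=1.0e-3):
    """|g| from Eq. (2) via central finite differences of the potential."""
    r = r_surface(theta, lam)
    g_r = -(potential(r + 1.0, theta, lam) - potential(r - 1.0, theta, lam)) / 2.0
    g_t = -(potential(r, theta + h, lam) - potential(r, theta - h, lam)) / (2.0 * h * r)
    g_l = -(potential(r, theta, lam + h) - potential(r, theta, lam - h)) / (2.0 * h * r * np.sin(theta))
    return np.sqrt(g_r**2 + g_t**2 + g_l**2)

# Near the pole versus the tip of the long equatorial axis (theta = 90 deg, lambda = 0).
print(surface_gravity(np.radians(1.0), 0.0), surface_gravity(np.pi / 2, 0.0))
```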
### Results: Gravity Figure 1 shows the sign and total magnitude of the surface gravity \(g\) as a function of latitude, at \(0^{\circ}\) longitude, as well as the 6 terms with the largest magnitudes contributing to \(g\). The terms are labeled according to the convention illustrated for the \(\vec{g_{r}}\) terms below: \[\vec{g_{r}}=-\frac{\partial\Phi}{\partial r}\hat{r}=(g_{r,1}+g_{r,2}+g_{r,3}+ \omega_{r})\hat{r} \tag{4}\] where on the plot itself \(g_{r,1}\) is labeled as \(r_{1}\) for visibility. For the full expansion of the individual terms, as well as the other surface gravity components, the reader is referred to the Appendix. Note that the \(\vec{g_{r}}\) and \(\vec{g_{\theta}}\) terms have different directions, and that furthermore the unit vector \(\hat{\theta}\) varies as a function of location. We have shown these individual terms on the same plot mainly to demonstrate which terms are contributing greatest to the total magnitude of the surface gravitational force \(g\). The signs of individual terms are also shown, as these are summed together and various portions negate each other before the root mean square of the individual components is taken to calculate \(g\). Lastly, we plot \(g\) with a negative sign because despite its direction changing over Haumea's surface, it is always closer to pointing radially inwards vs. outwards. The overall trend is for \(g\) increasing with latitude. The reason for this is analogous to that on Earth--flattening from Haumea's rotation resulting in a shorter polar axis vs the equatorial axes, coupled with the centrifugal acceleration increasingly opposing the gravitational acceleration at lower latitudes. At the equator, \(g_{r,1}\) and \(\omega_{r}\) are comparable in magnitude, with values of -0.2 and +0.231 respectively. The consequence of this is an extremely low surface gravity at the equator of -0.0126 m/s\({}^{2}\). Several other features warrant discussion. The overall strength of the \(g_{r,3}\) term, combined with a local maximum at 55\({}^{\circ}\) latitude and a change in sign at 67\({}^{\circ}\) latitude, contribute to \(g\) not increasing monotonically with latitude. Specifically, this results in a local minimum in \(g\) at 42\({}^{\circ}\), and a local maximum at 60\({}^{\circ}\) latitude. While the \(g_{\theta}\) terms largely cancel each other out below 60\({}^{\circ}\) latitude, from 60 - 85\({}^{\circ}\), they result in a more pronounced \(\hat{\theta}\) component to \(g\). With a rough understanding of the contributions of individual terms to the surface gravity, we move to looking at the full set of spatial variations in the magnitude of \(g\). Figure 2: (a) The magnitude of the surface gravitational acceleration \(g\) as a function of latitude and longitude, in an equirectangular projection. (b) The same as (a), but in a 3 dimensional perspective. The axes here show distances corresponding to the lengths of the axes, rather than longitude, with longitudes of -\(90^{\circ}\) and \(0^{\circ}\) labeled for orientation. The circular arrow depicts Haumea’s direction of rotation. Figure 2 shows \(g\) as a function of latitude and longitude. Overall, Haumea's surface gravitational acceleration varies by almost two orders of magnitude--from 1.076 m/s\({}^{2}\) at the pole, to a minimum of 0.0126 m/s\({}^{2}\) at equatorial longitudes of 0 and 180\({}^{\circ}\). Along longitudes of -90 and 90\({}^{\circ}\) (corresponding to the equatorial minor axis), \(g\) at the equator (0.20 m/s\({}^{2}\)) is a factor of 5 lower than at the pole. Along the -90 and 90\({}^{\circ}\) longitude meridians, \(g\) is at its maximum for a given latitude. ## 3 Crater dimensions ### Methods: Crater volumes Perhaps the most fundamental cratering property of interest to predict is the crater volume that would be expected for an impactor of a given size. To make predictions for crater volumes on Haumea, we use the scaling methods developed over thirty years in the works of Holsapple & Schmidt 1982, Housen et al. 1983, Holsapple 1993, and Holsapple & Housen 2012, including other references therein. These are physically based relations that, through the use of point-source approximations and dimensional analysis, provide functional forms for the prediction of many crater properties. We will refer to these relations throughout the manuscript as the cratering "point-source solutions." The point-source solutions predominantly distinguish cratering behavior between two regimes--that in which the material strength of the planetary surface controls crater properties (smaller craters), and that in which the surface gravity strength is more important in governing crater formation (larger craters, Holsapple, 1993). \begin{table} \begin{tabular}{c c c c} \hline \hline Name & Value & Description & Reference \\ (1) & (2) & (3) & (4) \\ \hline \multicolumn{4}{l}{Haumea physical properties} \\ \hline \(\rho\) & 1885 kg/m\({}^{3}\) & Uniform density & a \\ \(a\) & 1161 km & Equatorial semi-major axis & a \\ \(b\) & 852 km & Equatorial semi-minor axis & a \\ \(c\) & 513 km & Polar semi-axis & a \\ \(R_{o}\) & 797.6 km & Mean radius & b, c \\ \(\omega\) & 4.457 \(\times\) 10\({}^{-4}\) rad/s & Angular velocity & d \\ \(C_{20}\) & -0.114805 & Spherical harmonic coeff. & e \\ \(C_{22}\) & 0.230731 \(\times\) 10\({}^{-1}\) & & \\ \(C_{40}\) & 0.305251 \(\times\) 10\({}^{-1}\) & & \\ \(C_{42}\) & -0.189209 \(\times\) 10\({}^{-2}\) & & \\ \(C_{44}\) & 0.950665 \(\times\) 10\({}^{-4}\) & & \\ \hline \multicolumn{4}{l}{Impact related parameters} \\ \hline \(\delta\) & 930 kg/m\({}^{3}\) & Impactor density & f \\ \(Y\) & 1.5 \(\times\) 10\({}^{7}\) Pa & Target strength & f \\ \(K_{1}\) & 0.06 & Volume scaling constant & f \\ \(K_{2}\) & 1 & Strength scaling constant & f \\ \(\mu\) & 0.55 & Scaling exponent & f \\ \(\nu\) & 0.33 & Scaling exponent & f \\ \(K_{r}\) & 1.1 & Simple crater diameter const. & f \\ \(\alpha_{E}\) & 0.6117 & Ejecta scaling exponent & g \\ \(K_{eg}\) & 3.3 & Ejecta velocity exponent & h \\ \hline \end{tabular} \({}^{*}\)Derived from fitting to the data in this reference \end{table} Table 1: Adopted values for physical parameters and numerical constants throughout calculations in the manuscript. 
Holsapple 1993 derive a relation (equation 18 of that work) for the non-dimensional cratering efficiency \(\pi_{v}\), which relates crater volumes to a number of fundamental properties of both planetary body and impactor. \[\pi_{v}=K_{1}\bigg{\{}\pi_{2}\bigg{(}\frac{\rho}{\delta}\bigg{)}^{(6 \nu-2-\mu)/3\mu}\\ +\bigg{[}K_{2}\pi_{3}\bigg{(}\frac{\rho}{\delta}\bigg{)}^{(6\nu-2) /3\mu}\bigg{]}^{(2+\mu)/2}\bigg{\}}^{-3\mu/(2+\mu)}\\ \pi_{v}=\frac{\rho V}{m},\qquad\pi_{2}=\frac{ga_{i}}{U^{2}},\qquad \pi_{3}=\frac{Y}{\rho U^{2}} \tag{5}\] Here the target body properties are density \(\rho\), crater volume \(V\), local surface gravity \(g\), and material cohesive strength \(Y\) (in dimensions of stress). The impactor properties are radius \(a_{i}\) (thus assuming a spherical body), impact velocity \(U\), and mass \(m\) (where \(m=(4/3)\ \pi\delta a_{i}^{3}\)). \(\pi_{v}\), \(\pi_{2}\), and \(\pi_{3}\) are non-dimensional parameters for the cratering efficiency, gravity-scaled size, and strength respectively. The constants \(K_{1}\) and \(K_{2}\), as well as exponents \(\mu\) and \(\nu\), are fitted from experimental data. \(K_{2}\) is commonly set to 1, as we do here, such that \(K_{1}\) as well as the exponents \(\mu\) and \(\nu\) are determined from experiments with specific materials. For application to Haumea, we adopt for numerical constants the values informed from field observations of explosive craters in ice, as recorded in Holsapple 2022. The exception is constant \(K_{2}\), for which we adopt the value for hard rock as we find that the cold ice value predicts crater diameters an order of magnitude too large compared with what is observed on the Saturnian satellites. #### 3.1.1 Methods: Cratering regime The cratering point-source solutions are defined in two limits, based on the relative material strength of the planetary surface compared to the lithostatic pressure. In the "strength regime," the crustal strength is large compared to lithostatic pressure. Conversely, in the "gravity regime" the crustal strength is comparatively small (Holsapple, 1993). The relative magnitudes of the gravity-scaled size (\(\pi_{2}\)) and strength group (\(\pi_{3}\)), defined in equation 5 as per Holsapple 1993, define whether cratering is occurring in the strength or gravity regimes. Specifically, per Holsapple & Schmidt 1987, the strength to gravity transition is found to occur when \[0.1<\pi_{3}\pi_{2}^{-2/(2+\mu)}<10 \tag{6}\] 
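A minimal sketch of this scaling, using the Table 1 constants, is given below; the 1 km impactor radius and 3 km/s impact velocity are illustrative assumptions not taken from the text, and the function name is hypothetical.

```python
import numpy as np

# Scaling constants from Table 1 (SI units).
rho, delta, Y = 1885.0, 930.0, 1.5e7     # target density, impactor density, target strength
K1, K2, mu, nu = 0.06, 1.0, 0.55, 0.33

def crater_volume(a_i, U, g):
    """Crater volume V from the point-source scaling of Eq. (5), plus the Eq. (6) regime measure."""
    m = 4.0 / 3.0 * np.pi * delta * a_i**3           # impactor mass
    pi2 = g * a_i / U**2                              # gravity-scaled size
    pi3 = Y / (rho * U**2)                            # strength group
    pi_v = K1 * (pi2 * (rho / delta)**((6 * nu - 2 - mu) / (3 * mu))
                 + (K2 * pi3 * (rho / delta)**((6 * nu - 2) / (3 * mu)))**((2 + mu) / 2)
                 )**(-3 * mu / (2 + mu))
    regime = pi3 * pi2**(-2 / (2 + mu))               # values between 0.1 and 10 mark the transition
    return pi_v * m / rho, regime

# A 1 km radius impactor at 3 km/s, at the long-axis equatorial tip versus the pole.
for g in (0.0126, 1.076):
    V, regime = crater_volume(a_i=1.0e3, U=3.0e3, g=g)
    print(f"g = {g:5.3f} m/s^2: V = {V:.3e} m^3, strength/gravity measure = {regime:.2f}")
```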
Specifically, the regime transition occurs when \(0.1<\pi_{2}\pi_{3}^{-2/(2+\mu)}<10\). A second x-axis on the top shows the latitudes corresponding to the \(g\) values on the bottom, wherein specific latitudes corresponding to \(0.27<g<0.8\) m/s\({}^{2}\) cannot be labeled due to the existence of multiple latitudes that correspond to each of these \(g\) values. (b) The shaded contours here show crater diameters (\(D\)) that result as a function of surface gravity (\(g\)) and impactor radius (\(a_{i}\)). The curves are for the same \(\pi_{2}\pi_{3}^{-2/(2+\mu)}\) parameter shown as contours in subplot a). This allows for reading off the crater diameters that correspond to the strength to gravity regime transition. This results in the movement of material from the crater wall to the interior, manifesting in terraced walls as well as central peaks and circular rings. The net effect is a depth to diameter ratio that is smaller than for simple craters, although this ratio varies as a function of crater size. Simple craters occur in both the strength and gravity regimes of the point-source solutions, while complex craters are only found in the gravity regime. While we use the point-source analytical relations to predict crater volumes and partially solve for crater diameters for a given simple or complex crater, the simple to complex transition as well as the crater depth to diameter ratio are the result of gravitational forces operating after the initial impact and are not readily predicted theoretically (Holsapple, 1993). To predict at what diameter a crater on Haumea would transition from simple to complex, we use constraints from the observations of cratering into icy bodies--which in recent years have benefited from a large increase in sample size due to observations by the _Cassini_ spacecraft of the Saturnian satellites as well as the _New Horizons_ mission's observations of Pluto and its largest satellite Charon. Specifically, these are fits to the simple crater to complex crater transition diameter (\(D_{t}\)) as a function of surface gravity (discussed in this section), as well as the crater depth to diameter (d/D) ratio for complex craters as a function of gravity (discussed in section 3.3). Aponte-Hernandez et al. 2021 examine the simple to complex transition diameter (\(D_{t}\)) as a function of surface gravity, separately for both icy and rocky bodies. The studied icy bodies are the major icy satellites of the giant planets, in addition to Ceres, Pluto and Charon. They find that \(D_{t}(g)\) is described by a power law, with the specific fit for icy bodies being: \(D_{t}\) = (39.7 \(\pm\) 1.7 km)\(g^{-0.4\pm 0.1}\), with \(g\) here in units of cm/s\({}^{2}\) (all other relations use SI units unless otherwise noted). The surface gravities for these data span the 0.064 m/s\({}^{2}\) surface gravity of Mimas (\(D_{t}\) = 16.07 km) to the 1.428 m/s\({}^{2}\) of Ganymede (which has \(D_{t}\) = 6.01 km). Haumea's gravity spans 0.012 m/s\({}^{2}\)\(<g<\) 1.08 m/s\({}^{2}\) and thus falls within the fitted data on the high end, while on the low end Haumea's surface gravity is about a factor of 5 smaller than that of Mimas. We extrapolate the power law fit to encompass Haumea's minimum surface gravity when calculating crater characteristics on Haumea for regions where \(g<\) 0.064 m/s\({}^{2}\). ### Methods: Crater dimensions In calculating crater dimensions from crater volumes, specifically depth and diameter, we first distinguish between simple and complex craters.
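Since later sections evaluate this transition fit repeatedly across Haumea's gravity range, a minimal numerical sketch may be useful. The constants are those of the Aponte-Hernandez et al. 2021 icy-body fit quoted above; the helper name and the example inputs are ours.

```python
import numpy as np

def transition_diameter_km(g_si):
    """Simple-to-complex transition diameter D_t (km) for icy bodies.

    Power-law fit of Aponte-Hernandez et al. (2021): D_t = 39.7 * g**-0.4,
    with g in cm/s^2 (hence the unit conversion from SI below).
    """
    g_cgs = np.asarray(g_si) * 100.0  # m/s^2 -> cm/s^2
    return 39.7 * g_cgs ** -0.4

# Haumea's extreme surface gravities quoted in the text (SI units).
for g in (0.0126, 1.08):
    print(f"g = {g:6.4f} m/s^2 -> D_t ~ {transition_diameter_km(g):5.1f} km")
```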
Simple craters show depth to diameter ratios that are largely consistent across planetary bodies (see Figure 3 of White et al. 2017), with some variation for bodies that may be the targets for particularly fast impactors (Bray & Schenk, 2015). In order to back out the diameter (\(D\)) for a simple crater that would correspond to a calculated crater volume (\(V\)), we use the following relation from Holsapple 2022: \[D=2K_{r}V^{1/3} \tag{7}\] with the values for constant \(K_{r}\) taken from data and suggested to be equivalent for all cohesive materials (including cold ice). Specifically, \(K_{r}\) = 1.1. To calculate simple crater depths, we use the depth to diameter ratio that has been observed for Tethys (White et al., 2017). This is because for the crater volumes that we investigate in section 3.5, simple craters are only expected to form at the low end of Haumea's surface gravity range and thus transition well to our lowest complex depth to diameter ratio bin, which is also from Tethys (Table 2). We note however that depth to diameter ratios for simple craters show much greater consistency across planetary bodies compared to complex craters, and the variability in these ratios may be a sole result of material properties and not surface gravity (Holsapple, 1993; White et al., 2017). Figure 4: The power law simple to complex crater transition for icy bodies as a function of gravity, as calculated by Aponte Hernández et al. 2021. Data points for individual planetary bodies are shown as the black crosses, while the fit is the solid black line. Also shown are the curves along which the ratio of surface strength to gravitational forces (\(\pi_{2}\)\(\pi_{3}^{-2/(2+\mu)}\)) is equal to 10 (purple) and to 1 (brown). These are calculated numerically at the locations of the circles or X's, to which the dashed and dotted lines are fitted. For complex craters, the depth to diameter ratio is not constant, and follows a power law dependency (Pike, 1977; Holsapple, 1993; White et al., 2017; Robbins et al., 2021): \[d=\alpha D^{\beta} \tag{8}\] where \(d\) and \(D\) are the crater depth and diameter, respectively, in units of km for the purposes of these observationally derived fits. The values of the exponents in the power law are found to be a function of surface gravity (White et al., 2017). To cover the range of surface gravity found on Haumea's surface, we use three surface gravity bins, obtained from fits to depth to diameter ratios on icy bodies, namely the Saturnian satellites (White et al., 2017), and Pluto and Charon (Robbins et al., 2021). The surface gravity on these bodies spans 0.145 m/s\({}^{2}\)\(<g<\) 0.62 m/s\({}^{2}\), and the fits for the bodies with the lowest (Tethys) and highest (Pluto) surface gravities are extrapolated to cover the full range of surface gravity on Haumea (0.0126 m/s\({}^{2}\)\(<g<\) 1.08 m/s\({}^{2}\)). The bins and the specific fit parameters \(\alpha\) and \(\beta\) are shown in Table 2. For the other dimensions that form the shape of the complex crater, we assume a flat crater floor and uniform slope from the crater floor diameter (\(D_{f}\)) to the rim, or overall crater diameter \(D\).
\[V=\frac{\pi d}{4}\left[D^{2}+\frac{1}{3}(D-D_{f})(D+2D_{f})\right] \tag{9}\] where the flat floor diameter (\(D_{f}\)) is set equal to 0 at the transition diameter to simple craters (\(D_{t}\)), and related to the overall crater diameter \(D\) using fits to lunar crater profiles (Pike, 1977; Holsapple, 2022): \[D_{f}=0.292(2D_{t})^{-0.249}(D-D_{t})^{1.249} \tag{10}\] Equations 8 - 10 are solved simultaneously to determine the complex crater diameter \(D\) and depth \(d\) that correspond to a crater of volume \(V\). ### Results: Crater transitions For Haumea, the transition between the strength and gravity regime for cratering begins for an impactor radius of 100 m at the pole, compared to a radius of 10,000 m at the equator (Figure 3a). These quoted values are for gravity-scaled size to strength ratios (\(\pi_{3}\pi_{2}^{-2/(2+\mu)}\)) of 10. These impactor radii for the strength to gravity regime transitions correspond to crater diameters of 0.9 and 200 km respectively (Figure 3b). To visualize the crater diameters at which crater morphologies transition from simple to complex, we plot the data used in the Aponte-Hernandez et al. 2021 fit, as well as the power law fit to the data, as the black crosses and line respectively in Figure 4 over the range of surface gravity found on Haumea. Craters with diameters below the Aponte-Hernandez et al. 2021 relation at a given surface gravity are expected to be simple, while above the line complex craters are expected. Also plotted are lines for various values of \(\pi_{3}\pi_{2}^{-2/(2+\mu)}\), indicating the strength to gravity regime transition. For \(g>0.1\) m/s\({}^{2}\) complex craters only occur after the beginning of the strength to gravity regime transition, as would be expected. However, for \(g<0.1\) m/s\({}^{2}\), complex craters are suggested to occur partly in the strength regime, indicating that improvements in our understanding of crater transitions in icy bodies for \(g<0.1\) m/s\({}^{2}\) are necessary (see Discussion). ### Results: Crater volume and dimensions We first investigate how the crater volume varies as a function of gravity, impactor velocity, and size, to understand sensitivities in the parameter space and to focus our further modeling efforts. In Figure 5 we plot the predicted percent difference in crater volumes at the locations of maximum (the poles, g = 1.08 m/s\({}^{2}\)) and minimum (the equatorial major axis, g = 0.0126 m/s\({}^{2}\)) surface gravity, as a function of impactor velocity (\(U\)) and radius (\(a_{i}\)). The 1 - 6 km/s range for \(U\) is motivated by impact velocities predicted from statistical studies of Kuiper Belt Object orbits by Dell'Oro et al. 2013. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Bin \# & Bin Gravity (m/s\({}^{2}\)) & Planetary Body & Actual Gravity (m/s\({}^{2}\)) & \(\alpha\) & \(\beta\) & n \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline Simple & 0.012 – 1.08 & Tethys & 0.147 & 0.299 & 0.832 & 55 \\ Complex 1 & 0.012 – 0.2 & Tethys & 0.147 & 0.458 & 0.662 & 17 \\ Complex 2 & 0.2 – 0.45 & Iapetus, Dione, Rhea, Charon & 0.223, 0.233, 0.264, 0.288 & 0.446 & 0.544 & 67, 38, 48, 46 \\ Complex 3 & 0.45 – 1.08 & Pluto & 0.62 & 0.346 & 0.546 & 60 \\ \hline \end{tabular} \end{table} Table 2: Power law coefficients for crater depth to diameter ratios in the adopted surface gravity bins. The power law fits for the Saturnian satellites are from White et al. 2017, while those for Pluto and Charon are from Robbins et al. 2021.
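As a concrete illustration of how equations 8-10 can be solved simultaneously, the sketch below inverts them numerically for \(D\) and \(d\) given a crater volume. It follows the equations exactly as written above; the function name, the root-finding bracket, and the example inputs are our own choices.

```python
import numpy as np
from scipy.optimize import brentq

def complex_crater_dimensions(V_km3, D_t, alpha, beta):
    """Diameter D and depth d (km) of a complex crater of volume V (km^3).

    Implements equations 8-10 as written: d = alpha * D**beta,
    D_f = 0.292 * (2 D_t)**-0.249 * (D - D_t)**1.249, and the flat-floored
    volume of equation 9. alpha and beta come from Table 2; D_t comes from
    the simple-to-complex transition fit at the local surface gravity.
    """
    def volume_of(D):
        d = alpha * D ** beta                                        # eq. 8
        D_f = 0.292 * (2.0 * D_t) ** -0.249 * (D - D_t) ** 1.249     # eq. 10
        return np.pi * d / 4.0 * (D ** 2 + (D - D_f) * (D + 2.0 * D_f) / 3.0)  # eq. 9

    # Solve volume_of(D) = V_km3 for D just above the transition diameter.
    D = brentq(lambda D: volume_of(D) - V_km3, D_t * (1.0 + 1e-6), 1.0e4)
    return D, alpha * D ** beta

# Example with illustrative numbers: a 1e4 km^3 crater at the pole, using
# the Complex 3 bin of Table 2 and D_t ~ 6.1 km.
D, d = complex_crater_dimensions(V_km3=1.0e4, D_t=6.1, alpha=0.346, beta=0.546)
print(f"D ~ {D:.1f} km, d ~ {d:.2f} km")
```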
The general trend is for greater percent differences in volume (\(\Delta\)V) for increasing impactor radius and impact velocity. \(\Delta\)V remains under 30% for \(a_{i}<10^{3}\) m, regardless of impact velocity. The impactor radius can be seen as the larger control on \(\Delta\)V within the parameter space for impact velocities and impactor radii expected for Haumea. That is, there is larger variation in \(\Delta\)V for a fixed \(U\) and variable \(a_{i}\) than for a variable \(U\) and fixed \(a_{i}\). For an impactor with \(a_{i}\) = 10\({}^{4}\) m, the percent difference in the volume of the resultant crater at Haumea's pole vs equatorial major axis can exceed 100 %. From these results, we focus on quantifying variations as a function of impactor radius for the rest of the manuscript, and for all later calculations in the manuscript which require specifying an impact velocity, we use \(U\) = 5 km/s. Within the impactor radius parameter space, we focus on \(0.5<a_{i}<16\) km. The limit on the low end is due to the small effect that the surface gravity has on crater volume at impactor sizes of \(\lesssim\) 1 km, as described above. On the upper end, crater diameters for impactor radii of \(\sim\)16 km approach \(\sim\)300 km. For craters close to or much larger than the size of Haumea's smallest semi-major axis (c = 513 km), it is possible that Haumea would not survive the impact as a coherent body (for reference, Odysseus crater on Tethys, the largest known crater into an icy body, is 400 km in diameter compared to the satellite's mean radius of 531 km; Smith et al. 1982; Moore et al. 2004). In Figure 6, we plot crater volumes, diameters (\(D\)), and depths (\(d\)) at 3 representative surface gravities (in turn, representing 5 latitudes at longitude = 0\({}^{\circ}\)) on Haumea for the aforementioned range of impactor radii \(0.5<a_{i}<16\) km. These surface gravities are specifically selected to span the 3 different gravity bins used in calculating the crater \(d/D\) ratio tabulated in Table 2. Crater volumes start diverging appreciably as a function of surface location for impactors \(a_{i}\gtrsim 2\) km (Figure 6a). Comparing the calculated crater diameters (Figure 6b) and depths (Figure 6c), it becomes apparent that most of the difference in crater volume is accommodated by variations in depth rather than diameter. Crater diameters are largely consistent among the different locations on Haumea until impactors reach radii of \(a_{i}\gtrsim 7\) km. Even then, the differences in diameter are consistently \(<\) 20 %, while differences in depth can exceed 300%, with a 47% difference even between the more similarly scaled polar and mid-latitudes. The simple to complex crater transition occurs at \(D<10\) km for \(g\gtrsim 0.4\) m/s\({}^{2}\) (Figure 4). Thus the simple to complex transition is barely visible for the two higher surface gravity bins in Figure 6b, c. Nevertheless, the transition occurs at close to 30 km at the equator, with the transition visible as the break in the g = 0.01 m/s\({}^{2}\) line in the depth and diameter plots. The close match across the simple to complex transition for g = 0.01 m/s\({}^{2}\) is the result of our using observationally derived data for the same planetary body, Tethys, for both simple and complex craters at this surface gravity (Table 2, first two rows).
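For readers who want to reproduce the qualitative \(g\)-dependence of crater volume, a sketch of the point-source scaling of equation 5 follows. Only \(\mu=0.55\) and \(K_{2}=1\) are taken from the text; the remaining constants are placeholders standing in for the adopted Table 1 values (not reproduced here), so the printed numbers are illustrative only and will not match Figure 5 quantitatively.

```python
import numpy as np

# Placeholder material parameters; mu = 0.55 and K2 = 1 follow the text,
# the rest stand in for the Table 1 values and should be replaced with the
# adopted cold-ice numbers before any quantitative use.
MU, NU = 0.55, 0.40          # scaling exponents
K1, K2 = 0.3, 1.0            # point-source constants
RHO = DELTA = 920.0          # target / impactor density (kg/m^3), water ice
Y = 1.0e5                    # cohesive strength (Pa)

def crater_volume(a_i, U, g):
    """Crater volume (m^3) from the point-source scaling of equation 5."""
    m = 4.0 / 3.0 * np.pi * DELTA * a_i ** 3        # impactor mass
    pi2 = g * a_i / U ** 2                          # gravity-scaled size
    pi3 = Y / (RHO * U ** 2)                        # strength group
    grav = pi2 * (RHO / DELTA) ** ((6 * NU - 2 - MU) / (3 * MU))
    stre = (K2 * pi3 * (RHO / DELTA) ** ((6 * NU - 2) / (3 * MU))) ** ((2 + MU) / 2)
    pi_v = K1 * (grav + stre) ** (-3 * MU / (2 + MU))
    return pi_v * m / RHO                           # from pi_v = rho * V / m

# Volume contrast between Haumea's polar and equatorial gravity extremes
# for a 10 km radius impactor at U = 5 km/s (illustrative output only).
V_pole, V_eq = (crater_volume(1.0e4, 5.0e3, g) for g in (1.08, 0.0126))
print(f"V_eq / V_pole ~ {V_eq / V_pole:.2f}")
```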
Figure 5: The percent difference between crater volumes (\(\Delta\)V) at Haumea's maximum (polar) and minimum (equatorial) surface gravities, as a function of impactor velocity (\(U\)) and radius (\(a_{i}\)). ## 4 Ejecta Thickness ### Methods: Ejecta thickness In investigating how crater ejecta thickness varies across Haumea's surface, we note that the point-source solutions only predict surface gravity to affect ejecta thickness in the strength regime (Housen et al. 1983). This is a result of the strength (\(Y\)) and \(g\) being grouped into the same term \(Y/\rho gR\) from dimensional analysis, and this ratio becoming very small in the gravity regime (see Housen et al. 1983, equations 32 and 33). In the strength regime, Housen et al. 1983 find that the ejecta thickness can be calculated as: \[\frac{B(r)}{R}=\frac{A(e_{r}-2)}{2\pi}(\sin 2\theta)^{e_{r}-2}\bigg{(}\frac{r}{R}\bigg{)}^{-e_{r}}\times\bigg{[}1+\frac{4e_{r}-5}{3}\bigg{(}\frac{r}{R\sin 2\theta}\bigg{)}^{-(e_{r}-2)/2}\frac{D}{r}\bigg{]} \tag{11}\] with \(B(r)\) being the ejecta thickness as a function of radial coordinate \(r\), \(R\) the crater radius, and \(\theta\) the impact angle with respect to the horizontal. This relation applies for \(r\geq R\). The exponent \(e_{r}\) is defined as: \[e_{r}=\frac{6+\alpha_{E}}{3-\alpha_{E}} \tag{12}\] where constant \(A\) is defined in the strength regime as: \[A=K_{4}\left(\frac{Y}{\rho gR}\right)^{3\alpha_{E}/(3-\alpha_{E})} \tag{13}\] with the exponent \(\alpha_{E}\) being related to the exponent \(\mu\) used throughout the manuscript: \[\alpha_{E}=\frac{3\mu}{2+\mu} \tag{14}\] Constant \(D\) in equation 11 is defined in the strength regime as: \[D=\left(\frac{(K_{2})^{2}Y}{\rho gR}\right)^{-b} \tag{15}\] with exponent \(b\) defined as: \[b=\frac{\alpha_{E}-3}{4\alpha_{E}} \tag{16}\] For the ejecta thickness calculations, we derive the value for \(\alpha_{E}\) from the Eulerian shock physics numerical simulations of Senft & Stewart 2008, who simulated the results of a 100 m basalt impactor into a 200 m thick ice layer on Mars. The value we derive of \(\alpha_{E}\) = 0.6117 is comparable to the value of 0.6471 that would be calculated using equation 14 from our adopted \(\mu\) value of 0.55. Because the constant \(K_{4}\) must be determined experimentally, and appropriate ejecta experiments or simulations into ice in the strength regime are not available (the Senft & Stewart 2008 Mars simulations lie in the gravity regime, which allows for determining \(\alpha_{E}\) but not \(K_{4}\)), it is not possible to calculate actual ejecta thicknesses for Haumea. Rather, we calculate the relative thickness of the ejecta at all latitudes (\(B_{g}\)), compared to the thickness at the poles where gravity is at a maximum (\(B_{g,max}\)). Due to the inverse relation between \(B\) and \(g\), ejecta thickness would be at a minimum at the poles.
By taking the ratio of equation 11 for the ejecta thickness at a given latitude (\(B_{g}\)) to that at the poles (\(B_{g,max}\)), where gravity is at a maximum, one can calculate the ejecta thickness relative to the poles: \[\frac{B_{g}}{B_{g,max}}=\left(\frac{g_{max}^{3\alpha_{E}/(3-\alpha_{E})}+g_{max}^{3\alpha_{E}/(3-\alpha_{E})-b}}{g^{3\alpha_{E}/(3-\alpha_{E})}+g^{3\alpha_{E}/(3-\alpha_{E})-b}}\right) \tag{17}\] Figure 6: (a) The logarithm of the crater volumes as a function of impactor size for the surface locations corresponding to 3 different surface gravities. (b) The crater diameters (\(D\)) for the craters whose volumes are shown in subplot a). (c) The crater depths for craters whose volumes are shown in subplot a). ### Results: Ejecta thickness As discussed in section 3.1.1, because the ejecta thickness does not vary as a function of surface gravity in the gravity regime, for craters larger than the strength-to-gravity regime transition size of \(\sim\) 1 km at the pole and 200 km at the equator, impactors of the same size will result in ejecta of the same thickness regardless of location on Haumea. For craters in the strength regime, smaller than the above quoted transition diameters, we plot the ejecta thickness ratio \(B_{g}/B_{g,max}\) in Figure 7. \(B_{g}/B_{g,max}\) is plotted for longitudes of 0, 180\({}^{\circ}\) and -90, 90\({}^{\circ}\). Because the longitudes of -90, 90\({}^{\circ}\) represent the meridians with the consistently highest surface gravity on Haumea, while 0, 180\({}^{\circ}\) are the meridians of minimum surface gravity, these two longitudes are the two end-members for \(B_{g}/B_{g,max}\) as a function of latitude. \(B_{g}/B_{g,max}\) for all other latitudes will fall between these two curves. Moving equatorward from the pole, ejecta thicknesses at longitudes of 0, 180\({}^{\circ}\) quickly reach double their thickness at the pole, beginning at around \(\sim\)75\({}^{\circ}\) latitude. A local maximum with \(\sim\)4 times the thickness at the pole is observed at 60\({}^{\circ}\), the same latitude at which a local maximum in \(g\) is observed for these longitudes (Figure 1). For both longitude end members, ejecta thicknesses largely remain within a factor of 10 times the thickness at the pole. The exception is below 14\({}^{\circ}\) latitude at 0, 180\({}^{\circ}\) longitude, where ejecta thicknesses rapidly increase as \(g\) approaches its minimum of 0.0126 m/s\({}^{2}\), ultimately reaching, at the equator, a factor of 63 times the thickness at the poles. ## 5 Escape of Ejecta The spatial variations in surface gravity, coupled with the latitudinal variation in the tangential velocity of the surface from Haumea's rotation, suggest that in the case of an impact large enough to eject material above the local escape velocity, the amount of escaping material may vary as a function of location on Haumea's surface. ### Methods: Escape velocity We first examine the ejecta velocities that would result for a large impact (i.e. in the gravity regime). From the point-source approximations, an ejecta velocity (\(v\)) distribution is derived with a power law decay as a function of launch position, greater than some minimum distance from the impact and up to near the crater edge (Housen and Holsapple, 2011; Holsapple and Housen, 2012). Similarly, the mass fraction of ejecta faster than velocity \(v\) (which we represent as \(M(v)\)) is found to be a power law function of ejection velocity \(v\) (Holsapple and Housen, 2012).
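Before moving on to ejecta escape, the thickness ratio of equation 17 is simple enough to check numerically; with \(\alpha_{E}=0.6117\) it reproduces the factor of roughly 63 between the equatorial major axis and the poles quoted above. The function name is ours.

```python
import numpy as np

ALPHA_E = 0.6117                          # ejecta scaling exponent (Table 1)
P = 3.0 * ALPHA_E / (3.0 - ALPHA_E)       # exponent of A in equation 13
B = (ALPHA_E - 3.0) / (4.0 * ALPHA_E)     # exponent b of equation 16

def thickness_ratio(g, g_max=1.08):
    """Strength-regime ejecta thickness relative to the poles (equation 17)."""
    return (g_max ** P + g_max ** (P - B)) / (g ** P + g ** (P - B))

# Equatorial major axis vs pole: should come out close to the factor of 63
# quoted in the text.
print(thickness_ratio(0.0126))
```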
Figure 7: The ratio of the ejecta thickness (\(B_{g}\)) to the ejecta thickness at the location of Haumea's maximum surface gravity (\(B_{g,max}\)), as a function of latitude. \(B_{g}/B_{g,max}\) is shown for longitudes of 0, 180\({}^{\circ}\) and -90, 90\({}^{\circ}\). The minimum ejecta velocity \(v_{*}\), above which this power law distribution applies, can be calculated in the gravity regime as: \[v_{*}=K_{vg}\sqrt{ga} \tag{18}\] with the constant \(K_{vg}\) derived from experimental data; we adopt the value of 3.3 derived for dense sand in Holsapple and Housen 2012. In an idealized representation of \(M(v)\) versus \(v\), the two are related for all \(v>v_{*}\) as: \[M(v)=M_{e}\left(\frac{v}{v_{*}}\right)^{-3\mu} \tag{19}\] where \(M_{e}\) is the total ejecta mass, which in this idealized representation is all ejected at or above the velocity \(v_{*}\). Because of the dependence of \(v_{*}\) on \(g\), which sets the velocity above which the power law decay in ejecta mass fraction occurs, spatial variations in \(v_{*}\) will be one of the contributing factors to variations in the mass of escaping ejecta (\(M_{esc}\)) across Haumea's surface for an impactor of the same properties. The other contributing factor to spatial variabilities in \(M_{esc}\) will be variations in the local escape velocity (\(v_{esc}\)). The spatial variations in \(v_{esc}\) are a result of both the variations in surface gravity, as well as the tangential velocity of the surface stemming from Haumea's rotation. We adopt a formulation that accounts for both of these effects, while making the simplified assumption of ejecta trajectories normal to the local surface (Scheeres et al., 1996). This allows one to treat all ejecta trajectories equally, rather than considering at each point on Haumea's surface how trajectory variations add or subtract to the local escape velocity. We calculate: \[v_{esc}=-\hat{n}\cdot(\Omega\times\vec{r})+\sqrt{[\hat{n}\cdot(\Omega\times\vec{r})]^{2}+2U_{max}-(\Omega\times\vec{r})^{2}} \tag{20}\] where \(\hat{n}\) is the local surface normal, \(\Omega\) is Haumea's angular velocity vector, and \(\vec{r}\) is the vector from the origin to Haumea's surface. The quantity \(U_{max}\) is a condition for particle escape, accounting for local variations in the gravity field, and is calculated as: \[\sqrt{2U_{max}}=\max[\sqrt{2U(\vec{r})},\sqrt{2GM/|\vec{r}|}] \tag{21}\] with \(G\) being the gravitational constant and \(M\) Haumea's mass. \(U(\vec{r})\) is the gravitational potential at the location \(\vec{r}\) on Haumea's surface. For our exploration of minimum ejecta velocities, \(v_{*}\), as well as the fraction of the total ejecta mass that escapes, \(M_{esc}/M_{e}\) (equation 19 with \(v_{esc}\) used for \(v\)), across Haumea's surface, we consider an impactor with radius (\(a\)) of 10 km. This represents an impactor size for which appreciable differences in crater dimensions are seen across Haumea's surface (Figure 6), while not being close to a size that could break Haumea apart. ### Results: Escape velocity In Figure 8, we plot the minimum ejecta velocities (\(v_{*}\)) for an impact of \(a\) = 10 km. Because \(v_{*}\propto\sqrt{g}\), the spatial variability in \(v_{*}\) is similar to that in the surface gravity (Figure 2). The minimum in \(v_{*}\) is at the equator at -155, 25\({}^{\circ}\) longitude, while being largest at the poles.
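To make the interplay between equations 18 and 19 concrete, a small sketch of the escaping ejecta fraction follows. \(K_{vg}=3.3\) and \(\mu=0.55\) follow the adopted values; the escape velocities used in the example are simply the pole and equator numbers quoted later in the summary, and the function name is ours.

```python
import numpy as np

MU, K_VG = 0.55, 3.3   # scaling exponent and ejecta-velocity constant (Table 1)

def escaping_fraction(g, a_i, v_esc):
    """Fraction of ejecta mass escaping, M_esc/M_e, for a gravity-regime impact.

    Combines equation 18 (minimum ejecta velocity v_*) with equation 19
    evaluated at v = v_esc; v_esc must be supplied (e.g. from Figure 9).
    """
    v_star = K_VG * np.sqrt(g * a_i)                     # eq. 18
    return min(1.0, (v_esc / v_star) ** (-3.0 * MU))     # eq. 19

# Pole vs equator for a 10 km impactor, using the escape velocities quoted
# in the summary (0.97 and 0.60 km/s).
pole = escaping_fraction(g=1.08, a_i=1.0e4, v_esc=970.0)
equator = escaping_fraction(g=0.0126, a_i=1.0e4, v_esc=600.0)
print(f"M_esc/M_e: pole ~ {pole:.3f}, equator ~ {equator:.3f}")
```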
Because of the square root on \(g\), the variability in \(v_{*}\) as a function of latitude is not as dramatic as that in \(g\), increasing by a little over a factor of 8 at the poles compared to the equator. The spatial variability of \(v_{esc}\) is, in turn, plotted in Figure 9. \(v_{esc}\) is higher at the poles compared to the equator, as is expected given that the surface gravity is stronger there. Figure 8: (a) The minimum ejecta velocity (\(v_{*}\)) as a function of latitude and longitude. (b) A 3D perspective of \(v_{*}\). Plotting conventions are the same as in Figure 2. Figure 9: (a) The escape velocity (\(v_{esc}\)) as a function of latitude and longitude. (b) A 3D perspective of \(v_{esc}\). Plotting conventions are the same as in Figure 2. The latitudinal variation in \(v_{esc}\) is 47% from pole to the equator, substantially less than the latitudinal variability in both \(g\) and \(v_{*}\) for a 10 km impactor, although large in the context of other planetary bodies. Nevertheless, Haumea exhibits a relatively large longitudinal variability in \(v_{esc}\) at the equator, at 38%. In Figure 10 we look at the spatial variability in the mass fraction of ejecta that escapes Haumea (\(M_{esc}/M_{e}\)) for an \(a\) = 10 km impactor. The variability in \(M_{esc}/M_{e}\) is a result of the spatial variations in both \(v_{*}\) and \(v_{esc}\). Despite Haumea's escape velocity being lower at the equator than at the poles, \(M_{esc}/M_{e}\) is actually higher at the poles vs the equator--i.e. a greater fraction of ejecta escapes for an equivalent impactor at the pole. This is a result of the minimum ejection velocity \(v_{*}\) showing greater variability from equator to pole (Figure 8) than \(v_{esc}\) (Figure 9). ## 6 Discussion That Haumea's centrifugal acceleration at the equator is comparable to its gravitational acceleration has been known previously; this is the presumed reason for Haumea's unique shape. However, we have explored here for the first time the implications for its surface environment, which are profound. Haumea's equatorial surface gravity at the locations of its major axis is almost two orders of magnitude lower than that at the poles. Furthermore, Haumea's large degree of flattening (i.e. a polar semi-major axis that is only 60% the length of even the largest equatorial axis) results in surface normal vectors at higher latitudes (\(>60^{\circ}\)) that deviate greatly from being radially outward from Haumea's center of mass. This is manifested in strong \(g_{\theta}\) gravitational terms, with Haumea's surface gravity vector at these latitudes pointing poleward relative to the surface normal. Finally, Haumea exhibits a non-monotonic increase in surface gravity strength with increasing latitude. This manifests in degeneracies in the latitudes with a given surface gravitational acceleration value between 25 and 70\({}^{\circ}\). While this is something that is seen on small bodies, it is certainly unique among known planet-sized bodies in the solar system. For our calculations of the surface gravity strength, we assumed a uniform density for Haumea for ease of calculation of the spherical harmonic gravity coefficients. This is naturally an unrealistic assumption for Haumea, which is presumed to be differentiated (Dunham et al., 2019; Noviello et al., 2022).
However, the primary factor resulting in Haumea's low equatorial surface gravity, namely the comparable magnitudes of the first order radial term (\(g_{r,1}\)) and the centrifugal acceleration (\(\omega_{r}\)), does not depend on this density assumption (see the expansion of these terms in the Appendix). In using observationally derived data to predict Haumea's simple to complex crater transition diameter as a function of surface gravity, we found that this transition does largely occur outside of the strength regime, as would be predicted by the point-source relations. Nevertheless, below \(g\) = 0.10 m/s\({}^{2}\), the simple to complex transition is predicted to occur in the strength regime--a contradiction given that gravitational forces are ultimately responsible for the slumping that turns a transient crater into a complex crater during the formation process. Improvements in our understanding of the tensile strength of cold ice at scales relevant for large impacts (\(Y\)), to refine our calculated strength-gravity regime transition, as well as additional observational studies of simple to complex crater transitions on icy bodies with \(g\)\(<\) 0.10 m/s\({}^{2}\), would be beneficial. For our examination of crater dimensions, we found that for impactors with radii \(a\)\(>\) 500 m, differences in the crater volume as a function of latitude will be accommodated by proportionally larger differences in crater depth than crater diameter. This is presumed to be a result of the high sensitivity of crater wall collapse, late in the crater formation process, to surface gravity strength. Craters in environments with stronger surface gravity exhibit lower depth to diameter ratios than craters of equivalent volumes in lower surface gravity environments. For craters in the strength regime (smaller than \(\sim\)1 km at the pole and 200 km at the equator), the range of variations in ejecta thickness is similar to that of the surface gravity, approaching two orders of magnitude. Ejecta are thinner for higher surface gravity, because as gravity increases in the strength regime, ejecta deposits encroach on the crater rim (Housen et al., 1983). While we have focused on quantifying differences in relative ejecta thickness, the implication is that the radial extent of ejecta blankets at higher latitudes (with stronger surface gravity) will also be smaller compared to ejecta at lower latitudes. Our calculations suggest that the equatorial regions near Haumea's major axis will have ejecta blankets dramatically more noticeable than elsewhere on Haumea. We note that for the spatial variations in the dimensions and ejecta thicknesses of Haumea's craters, we have focused on how crater properties will vary for an impactor of the _same_ size. In reality, Haumea will have been impacted by objects of many different sizes over its history, and characteristics of the impactor size distribution are unlikely to exhibit latitudinal or longitudinal variations. Thus, in predicting Haumea's surface characteristics, we emphasize that the effects that we have predicted will manifest as statistical skews in crater characteristics as a function of location on Haumea's surface. That is, craters in Haumea's equatorial region will be preferentially deeper with thinner ejecta compared to craters in Haumea's polar regions.
Haumea's mid-latitudes, which exhibit the degeneracy in surface gravitational acceleration as a function of latitude, will in turn exhibit a wider distribution of crater depths and ejecta thicknesses than exists in strictly the polar or equatorial regions. The two order of magnitude variation in surface gravity across Haumea's surface, combined with Haumea's fast rotation rate, results in large variations in the escape velocity across Haumea's equator. Just along Haumea's equator, for an object launched normal to the surface, Haumea's escape velocity varies by 38%. We note that this variability is an underestimate when compared with an object launched nearly tangential to the surface, which can be fully aligned with or opposed to the local rotational velocity of the surface. Nevertheless, even with such a near tangential launch, the variability on Earth at the equator for launches in opposing directions reaches only 8 %. We have demonstrated that Haumea's unique shape and short 3.92 hr day should manifest in dramatic variations in crater morphologies across the same planetary body. The extent of these differences in crater types, volumes, depths, and ejecta thicknesses, as well as in the ejecta retained during an impact, is unique among currently known planet-sized bodies in the solar system. There remain, however, numerous open areas of research in predicting peculiar characteristics of Haumea's environment as a result of its shape. How might Haumea's spatial surface gravity variations affect its interior structure as well as its subsurface accommodation of surface features? Might the preferential escape of impact ejecta at polar vs equatorial latitudes have implications for the long-term evolution of Haumea? Such questions warrant further investigation. ## 7 Summary We have carried out the first detailed predictions of Haumea's surface morphology. We have focused on the characteristics of its likely numerous craters, given that Haumea's surface is composed predominantly of inert water ice. We report the following findings: 1. There is an almost two order of magnitude variation in Haumea's effective surface gravitational acceleration--from 1.076 m/s\({}^{2}\) at the pole, to a minimum of 0.0126 m/s\({}^{2}\) at the equatorial locations of its major axis, due to both Haumea's shape and the strength of Haumea's centrifugal acceleration. Furthermore, Haumea exhibits a non-monotonic variation in \(g\) with latitude, along with strong \(g_{\theta}\) terms that result in Haumea's surface gravity vectors pointing poleward relative to the surface normal at higher latitudes (\(>60^{\circ}\)). 2. The simple to complex crater transition diameter on Haumea is expected to vary greatly as a function of latitude. Using the observed transitions on icy bodies as a function of gravity, we infer a simple to complex transition diameter (\(D_{t}\)) of 36.2 km at Haumea's location of minimum surface gravity at the equator, compared to \(D_{t}=6.1\) km at the poles. 3. Due to the spatial variations in Haumea's surface gravity, an impactor of the same size and impact velocity will form craters with different characteristics across Haumea's surface. Figure 10: (a) The fraction of ejecta mass that escapes Haumea (\(M_{esc}/M_{e}\)) as a function of latitude and longitude. (b) A 3D perspective of \(M_{esc}/M_{e}\). Plotting conventions are the same as in Figure 2. For craters in
the gravity regime (large craters), craters near the equator will be of larger volume, larger diameters, and considerably deeper than craters at mid-latitudes, followed by craters at the pole. 4. These same spatial variations in crater characteristics for the same impactor will also extend to the ejecta, in the case of craters in the strength regime (small craters). Crater ejecta are expected to be thinnest at the location of maximum gravity at the poles, with thicknesses up to \(10\times\) higher at other locations on the surface, as well as up to \(63\times\) thicker in the immediate vicinity of the location of the major axis at Haumea's equator. 5. Haumea's escape velocity varies by \(38\%\) strictly across Haumea's equator, due to its shape as well as large angular velocity. The highest escape velocity at the pole (\(0.97\) km/s) is \(62\%\) more than the minimum equatorial escape velocity (\(0.60\) km/s). 6. Despite Haumea's escape velocity being higher at the poles, the larger minimum ejecta velocity (\(v_{*}\)) calculated for Haumea's higher latitudes result in a higher mass fraction of ejecta escaping Haumea's gravitational well at polar vs equatorial latitudes for impactors of the same size. ## Acknowledgements Support for GDM and LO was provided by a startup grant from Rutgers University. ## Appendix: Full expansion of gravitational terms We begin with the gravitational potential expressed as a series of spherical harmonics, the same relation as equation 1 in the manuscript: \[\begin{split}\Phi(r,\theta,\lambda)=-\frac{GM}{r}& \bigg{\{}1+\sum_{n=2}^{\infty}\sum_{m=0}^{n}\left(\frac{R_{o}}{r} \right)^{n}P_{n}^{m}(\cos\theta)\\ &\times[C_{nm}\cos m\lambda+S_{nm}\sin m\lambda]\bigg{\}}\\ &-\frac{1}{2}\omega^{2}r^{2}\sin^{2}\theta\end{split} \tag{22}\] evaluated explicitly up to n = 4, as we do for all calculations in the manuscript, this is: \[\Phi=-\frac{GM}{r}-\frac{\alpha_{r}GMR_{o}^{2}}{r^{3}}-\frac{\beta_{r}GMR_{o} ^{4}}{r^{5}}-\frac{1}{2}\omega^{2}r^{2}\sin^{2}\theta \tag{23}\] where \[\alpha_{r}=\left(\frac{Ro}{r}\right)^{2}\left[\frac{C_{20}}{2}(3\cos^{2} \theta-1)+3C_{22}\sin^{2}\theta\cos(2\lambda)\right] \tag{24}\] \[\begin{split}\beta_{r}=&\left(\frac{Ro}{r}\right)^{ 4}\left[\frac{C_{40}}{8}(35\cos^{4}\theta-30\cos^{2}\theta+3)\right.\\ +&\frac{15C_{42}}{2}(7\cos^{2}\theta\sin^{2}\theta \cos(2\lambda)-\sin^{2}\theta\cos(2\lambda))\\ &+C_{44}105\sin^{4}\theta\cos(4\lambda)\bigg{]}\end{split} \tag{25}\] The surface gravitational acceleration is then calculated as the negative gradient of the gravitational potential (same relation as equation 2 in the manuscript): \[\begin{split}\vec{g}=&-\vec{\nabla}\Phi\\ =&-\frac{\partial\Phi}{\partial r}\hat{r}-\frac{1}{r }\frac{\partial\Phi}{\partial\theta}\hat{\theta}-\frac{1}{r\sin\theta}\frac{ \partial\Phi}{\partial\lambda}\hat{\lambda}\end{split} \tag{26}\] The components of the gravitational acceleration are now explicitly evaluated, beginning with the \(\hat{r}\) component \(\vec{g}_{r}\): \[\begin{split}\vec{g}_{r}=&-\frac{\partial\Phi}{ \partial r}\hat{r}\\ =&\left(-\frac{GM}{r^{2}}-\frac{3\alpha_{r}GMR_{o}^{ 2}}{r^{4}}-\frac{5\beta_{r}GMR_{o}^{4}}{r^{6}}+r\omega^{2}\sin^{2}\theta \right)\hat{r}\\ =&(g_{r,1}+g_{r,2}+g_{r,3}+\omega_{r})\hat{r}\end{split} \tag{27}\] Followed by the \(\hat{\theta}\) component \(\vec{g}_{\theta}\): \[\begin{split}\vec{g}_{\theta}=&-\frac{1}{r}\frac{ \partial\Phi}{\partial\theta}\hat{\theta}\\ =&\left(\frac{\alpha_{\theta}GMR_{o}^{2}}{r^{4}}+ 
\frac{\beta_{\theta}GMR_{o}^{4}}{r^{6}}+r\omega^{2}\sin\theta\cos\theta \right)\hat{\theta}\\ =&(g_{\theta,1}+g_{\theta,2}+\omega_{\theta})\hat{ \theta}\end{split} \tag{28}\] where \[\alpha_{\theta}=-3C_{20}\cos\theta\sin\theta+6C_{22}\sin\theta\cos\theta\cos(2\lambda) \tag{29}\] \[\begin{split}\beta_{\theta}=&\frac{C_{40}}{8}(-140 \cos^{3}\theta\sin\theta+60\cos\theta\sin\theta)\\ +&\frac{15C_{42}}{2}[7\cos(2\lambda)(-2\cos\theta \sin^{3}\theta+2\cos^{3}\theta\sin\theta)\\ -& 2\sin\theta\cos\theta\cos(2\lambda)]+420C_{44}\sin^{3} \theta\cos\theta\cos(4\lambda)\end{split} \tag{30}\] And finally the \(\hat{\lambda}\) component \(\vec{g}_{\lambda}\): \[\vec{g}_{\lambda}=-\frac{1}{r\sin\theta}\frac{\partial\Phi}{\partial \lambda}\hat{\lambda}= \frac{\alpha_{\lambda}GMR_{o}^{2}}{r^{4}\sin\theta}+\frac{\beta_{ \lambda}GMR_{o}^{4}}{r^{6}\sin\theta} \tag{31}\] \[= (g_{\lambda,1}+g_{\lambda,2})\hat{\lambda}\] where \[\alpha_{\lambda}=-6C_{22}\sin^{2}\theta\sin(2\lambda) \tag{32}\] \[\beta_{\lambda}= -15C_{42}(7\cos^{2}\theta-1)\sin^{2}\theta\sin(2\lambda)\] (33) \[-420C_{44}\sin^{4}\theta\sin(4\lambda)\]
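For completeness, a sketch of how the effective surface gravity can be evaluated from the truncated expansion of equation 22 is given below, taking the gradient numerically rather than through the component expressions of equations 27-33. GM, \(R_{o}\), and the \(C_{nm}\) coefficients used here are placeholders (the adopted Haumea values are given earlier in the paper and are not repeated here); only the 3.92 h rotation period is taken from the text, \(\theta\) is colatitude, and the helper names are ours.

```python
import numpy as np
from scipy.special import lpmv

# Placeholder values: GM, R_O and the C_nm coefficients stand in for the
# uniform-density Haumea values adopted earlier in the paper and must be
# replaced before quantitative use. The 3.92 h rotation period is from the text.
GM = 2.67e11            # m^3/s^2
R_O = 8.0e5             # reference radius (m)
OMEGA = 2.0 * np.pi / (3.92 * 3600.0)
CNM = {(2, 0): -0.1, (2, 2): 0.05, (4, 0): 0.02, (4, 2): -0.005, (4, 4): 0.001}

def potential(r, theta, lam):
    """Gravitational-plus-centrifugal potential of equation 22, truncated at n = 4.

    Note: scipy's lpmv includes the Condon-Shortley phase; adjust the sign
    convention of the coefficients if the adopted normalization differs.
    """
    series = 1.0
    for (n, m), c in CNM.items():
        series += (R_O / r) ** n * lpmv(m, n, np.cos(theta)) * c * np.cos(m * lam)
    return -GM / r * series - 0.5 * OMEGA ** 2 * r ** 2 * np.sin(theta) ** 2

def surface_gravity(r, theta, lam, h=1.0, dang=1e-6):
    """|g| = |-grad(Phi)| via central finite differences (valid away from the poles)."""
    g_r = -(potential(r + h, theta, lam) - potential(r - h, theta, lam)) / (2 * h)
    g_t = -(potential(r, theta + dang, lam) - potential(r, theta - dang, lam)) / (2 * r * dang)
    g_l = -(potential(r, theta, lam + dang) - potential(r, theta, lam - dang)) / (2 * r * np.sin(theta) * dang)
    return np.sqrt(g_r ** 2 + g_t ** 2 + g_l ** 2)

# Effective surface gravity at an equatorial point on the long axis (illustrative only).
print(surface_gravity(r=1.0e6, theta=np.pi / 2, lam=0.0))
```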
2305.13828
Comment on "Weak values and the past of a quantum particle"
In a recent paper, Hance, Rarity and Ladyman [Phys. Rev. Res. {\bf 5}, 023048 (2023)] criticized recent proposals connecting weak values and the past of a quantum particle. I argue that their conclusion follows from a conceptual error in understanding the approach to the past of the particle they discuss.
Lev Vaidman
2023-05-23T08:50:41Z
http://arxiv.org/abs/2305.13828v1
# Comment on "Weak values and the past of a quantum particle" ###### Abstract In a recent paper, Hance, Rarity and Ladyman [Phys. Rev. Res. **5**, 023048 (2023)] criticized recent proposals connecting weak values and the past of a quantum particle. I argue that their conclusion follows from a conceptual error in understanding the approach to the past of the particle they discuss. Hance, Rarity and Ladyman (HRL) [1] discuss the connection between the presence of a quantum particle in the past and weak values, the topic I introduced in 2013 [2]. They claim to analyze it according to my definition and also according to an alternative approach [3]. In this Comment I argue that there is a conceptual error in the HRL paper in presentation of these approaches and consequently, their conclusion "that these approaches specifically are not useful for helping identify the past path of quantum particles" misses the target. The conceptual error of the HRL paper is that according to their presentation, the discussed approaches argue for the existence of an independent ontological concept of the presence of a pre- and postselected particle. Perhaps I should refrain from discussing the alternative approach [3], but I can say that this is definitely not true for my approach. The definition of the "presence of the particle" in [2] is operational: _the particle was where it left a (weak) trace_. Therefore, to "identify the past path of quantum particles" is to find the locations where they left a trace. The weak values of the local operators are a useful tool for calculating these local traces. There is a controversy about the faithfulness of this method, but HRL mainly criticise the connection between weak values and hypothetical "particle presence" and not between weak values and weak traces, which can be calculated using standard quantum mechanics. HRL write "These approaches simply assume the particle was present wherever the weak value of an operator containing the spatial projection operator is nonzero." The approach [2]_defines_ that the particle was present where it left a trace. The purpose of this Comment is to clarify the approach by pointing out several misconceptions in the HRL paper. HRL write (in the Introduction) about "weak values only being defined over ensembles". I, as co-author of the original paper which introduced weak values [4], disagree with this statement. It is correct that we usually need an ensemble to observe a weak value, but nothing prevents us from defining it for a single system [5]. HRL also discuss disturbance in weak measurements. Apparently, they attach a weak value to a weak measurement. We do need to know the results of the preselection measurement and the postselection measurement, but the discussion of the presence of the particle between these measurements does not require weak measurement: the environment "measures" the weak value by being disturbed. The particle is also disturbed by the environment; the weak values are then modified and it is a subtle issue when this can or cannot be neglected [6]. The definition of presence based on a weak trace requires the existence of all possible types of local interactions with the environment. These interactions must be non-vanishing but can be arbitrarily small. Their purpose is to serve as a reference to the trace left on the environment in the discussed experiments relative to a hypothetical experiment with a well-localized particle in the same location. 
Then I do agree with "the existence of at least one operator formed from the product of the spatial projection operator for a location and some other operator, with a nonzero weak value, is both a necessary and a sufficient condition for particle presence at that location". It follows from the fact that nonvanishing local interactions ensure the first-order trace in the environment. Note that in optical interferometric experiments we always have a finite interaction of the photon with mirrors. (The HRL energy exchange estimate \(10^{-33}\), has to be replaced by considering a much larger momentum exchange of every photon bouncing off a mirror. The amplitude of the orthogonal component of the quantum state of the mirror due to the bouncing photon in the same experiment is of the order \(10^{-17}\).) I agree with the HRL view that "any attempt to form a definition of presence for quantum particles should correspond to our intuitions about classical presence, unless we have a good reason for it to deviate from this. The classical conception of a particle presence--being present at a certain place at a certain time--can be characterized as follows: (i) Every particle is located in space at all times. (ii) Particles cannot be on more than one path simultaneously. (iii) Particle trajectories are continuous (or at least as continuous as space is) so particles cannot get from one place to another without passing through the space in between. (iv) Particles interact with other objects and/or fields local to their location. (v) If a particle is on a path at a given time, and that path is within some region, then the particle is also located in that region at that time. (vi) If a particle's property is at a location, the particle must be at that location too." My attempt for a new definition came exactly when I found "a good reason". In the nested Mach-Zehnder interferometer [7] there is a contradiction between (iii) and (iv). The traces left on the environment that provide evidence of particle interactions have disconnected parts. We cannot retain all the classical characterizations of presence in the quantum world, so in [2] I abandoned (iii) and adopted (iv) as the definition. My definition allows for keeping all other properties, although there is a very subtle and paradoxical situation about property (ii). In the nested Mach-Zehnder interferometer the particle _is_ present in every one of the three arms at the same time, but it is not present in any two (or three) arms simultaneously. To be present in a particular location is to leave a trace there, and the quantum states of the environment in all three arms are changed, so the particle was in three places. However, the traces in the environment are entangled such that orthogonal components of the local states of the environment, which provide evidence of the presence in every location, are entangled with undisturbed states in all other locations. Thus, there is no trace corresponding to _simultaneous_ presence in different locations and one can claim that (ii) is fulfilled. A similar paradoxical situation arises in the Hardy paradox [8; 9; 10] which describes a pre- and postselected system of two particles: an electron was present in one arm and a positron was present in another, but the particles were not present in these arms simultaneously. There is no similar difficulty with (v), since the definition is that if anywhere in the region there is a non-vanishing trace, the particle was in this region. 
Note that due to the unavoidable momentum exchange of the photon with mirrors, there is no such thing as "undisturbed inner interferometer" discussed by HRL. Contrary to the HRL claim, the fact that traces might have various properties (e.g. sign) is not neglected in the two-state vector formalism and asking specific questions like: Was the photon in two places together? leads to paradoxical answers, as in the discussion of (ii), see [11]. I am puzzled by the HRL claim about the inconsistency of [12]. Why "would we expect a particle to necessarily have a non-zero weak value for the spatial projection operator for any path along which it travels" when interactions with other degrees of freedom lead to the trace? I want to repeat that I disagree with Section VI of HRL, my weak value is defined for a single system. The fact that the weak value of the velocity of a particle can be larger than the speed of light (see Sec. VIII of [13]) does not contradict the special theory of relativity. The experiments involve postselection and their low probability of success prevents a superluminal change in the probability of finding a quantum particle. I also disagree with the claim of Sec. VII of the HRL paper, according to which the weak value approach is intended to show that some quantum protocols "are not as'spooky' as they appear". The weak value approach helps to find quantum protocols which are "spooky" if analyzed in classical terms. My papers based on the weak value approach cited by HRL [14; 15; 16; 17; 18] do not try to remove paradoxical features. Instead, these papers try to correct erroneous claims about alleged counterfactual communication. In particular, HRL are correct in their weak values analysis of the protocol described in [19] and shown on their Fig. 2. The photons reaching detector \(D_{0}\) were not present at Bob's site according to the weak trace criterion (all weak values of local operators on Bob's site vanish). However, there is no contradiction with the approach because, in this case, Bob's communication with Alice fails. The click at \(D_{0}\) means that the photon did not perform any test of the presence or absence of Bob's shutter because the probability of this click does not depend on Bob's actions, see [20]. Finally, let me comment on the concluding sentence of HRL "we have shown that weak value approaches to the path of a particle do not contribute any new physics--the assumption of a connection between particle presence and weak values does not give us anything testable." First, weak values, as all other concepts and results of the two-state vector formalism are fully consistent with the standard formalism of quantum mechanics, so neither new physics, i.e., a deviation from the Schrodinger equation, nor introducing some new ontology, is proposed. My approach introduces new concepts (which I believe are useful), in particular, the local presence of a pre- and post-selected particle _defined_ by the local trace it leaves on the environment. The formalism predicts that these traces can be found based on finite weak values of local operators, and this statement is definitely testable. This work has been supported in part by the U.S.-Israel Binational Science Foundation (Grant No. 735/18) and by the Israel Science Foundation Grant No. 2064/19.
2303.11369
Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale
In this paper, we address the following problem: Given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance in MDPs. We first propose an Informed Posterior Sampling-based RL (iPSRL) algorithm that uses the offline dataset, and information about the expert's behavioral policy used to generate the offline dataset. Its cumulative Bayesian regret goes down to zero exponentially fast in N, the offline dataset size if the expert is competent enough. Since this algorithm is computationally impractical, we then propose the iRLSVI algorithm that can be seen as a combination of the RLSVI algorithm for online RL, and imitation learning. Our empirical results show that the proposed iRLSVI algorithm is able to achieve significant reduction in regret as compared to two baselines: no offline data, and offline dataset but used without information about the generative policy. Our algorithm bridges online RL and imitation learning for the first time.
Botao Hao, Rahul Jain, Dengwang Tang, Zheng Wen
2023-03-20T18:16:25Z
http://arxiv.org/abs/2303.11369v2
# Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale ###### Abstract In this paper, we address the following problem: Given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance in MDPs. We first propose an Informed Posterior Sampling-based RL (iPSRL) algorithm that uses the offline dataset, and information about the expert's behavioral policy used to generate the offline dataset. Its cumulative Bayesian regret goes down to zero exponentially fast in \(N\), the offline dataset size if the expert is competent enough. Since this algorithm is computationally impractical, we then propose the iRLSVI algorithm that can be seen as a combination of the RLSVI algorithm for online RL, and imitation learning. Our empirical results show that the proposed iRLSVI algorithm is able to achieve significant reduction in regret as compared to two baselines: no offline data, and offline dataset but used without information about the generative policy. Our algorithm bridges online RL and imitation learning for the first time. ## 1 Introduction An early vision of the Reinforcement Learning (RL) field is to design a learning agent that when let loose in an unknown environment, learns by interacting with it. Such an agent starts with a blank slate (with possibly, arbitrary initialization), takes actions, receives state and reward observations, and thus learns by "reinforcement". This remains a goal but at the same time, it is recognized that in this paradigm learning is too slow, inefficient and often impractical. Such a learning agent takes too long to learn near-optimal policies way beyond practical time horizons of interest. Furthermore, deploying an agent that learns by exploration over long time periods may simply be impractical. In fact, reinforcement learning is often deployed to solve complicated engineering problems by first collecting offline data using a behavioral policy, and then using off-policy reinforcement learning, or imitation learning methods (if the goal is to imitate the policy that generated the offline dataset) on such datasets to learn a policy. This often suffers from the _sim2real_ problem, i.e., the learnt policy upon deployment often performs poorly on out-of-distribution state-action space. Thus, there is a need for adaptation and fine-tuning upon deployment. In this paper, we propose a systematic way to use offline datasets to bootstrap online RL algorithms. Performance of online learning agents is often measured in terms of cumulative (expected) regret. We show that, as expected, there is a gain in performance (reflected in reduction in cumulative regret) of the learning agent as compared to when it did not use such an offline dataset. We call such an online learning agent as being _partially informed_. However, somewhat surprisingly, if the agent is further informed about the behavioral policy that generated the offline dataset, such an _informed (online learning) agent_ can do substantially better, reducing cumulative regret significantly. In fact, we also show that if the behavioral policy is suitably parameterized by a _competence parameter_, wherein the behavioral policy is asymptotically the optimal policy, then the higher the "competence" level, the better the performance in terms of regret reduction over the baseline case of no offline dataset. 
We first propose an ideal (informed) iPSRL (posterior sampling-based RL) algorithm and show via theoretical analysis that under some mild assumptions, its expected cumulative regret is bounded as \(\tilde{O}(\sqrt{T})\) where \(T\) is the number of episodes. In fact, we show that if the competence of the expert is high enough (quantified in terms of a parameter we introduce), the regret goes to zero exponentially fast as \(N\), the offline dataset size grows. This is accomplished through a novel prior-dependent regret analysis of the PSRL algorithm, the first such result to the best of our knowledge. Unfortunately, posterior updates in this algorithm can be computationally impractical. Thus, we introduce a Bayesian-bootstrapped algorithm for approximate posterior sampling, called the (informed) iRLSVI algorithm (due to its commonality with the RLSVI algorithm introduced in Osband et al. (2019)). The iRLSVI algorithm involves optimizing a loss function that is an _optimistic_ upper bound on the loss function for MAP estimates for the unknown parameters. Thus, while inspired by the posterior sampling principle, it also has an optimism flavor to it. Through, numerical experiments, we show that the iRLSVI algorithm performs substantially better than both the partially informed-RLSVI (which uses the offline dataset naively) as well as the uninformed-RLSVI algorithm (which doesn't use it at all). We also show that the iRLSVI algorithm can be seen as bridging online reinforcement learning with imitation learning since its loss function can be seen as a combination of an online learning term as well as an imitation learning term. And if there is no offline dataset, it essentially behaves like an online RL algorithm. Of course, in various regimes in the middle it is able to interpolate seamlessly. We note that this is the first algorithm of its kind. **Related Work.** Because of the surging use of offline datasets for pre-training (e.g., in Large Language models (LLMs), e.g., see Brown et al. (2020), Thoppilan et al. (2022), Hoffmann et al. (2022)), there has been a lot of interest in Offline RL, i.e., RL using offline datasets (Levine et al., 2020). A fundamental issue this literature addresses is RL algorithm design (Nair et al., 2020; Kostrikov et al., 2021; Kumar et al., 2020; Nguyen-Tang and Arora, 2023; Fujimoto et al., 2019; Fujimoto and Gu, 2021; Ghosh et al., 2022) and analysis to best address the "out-of-distribution" (OOD) problem, i.e., policies learnt from offline datasets may not perform so well upon deployment. The dominant design approach is based on 'pessimism' (Jin et al., 2021; Xie et al., 2021; Rashidinejad et al., 2021) which often results in conservative performance in practice. Some of the theoretical literature (Xie et al., 2021; Rashidinejad et al., 2021; Uehara and Sun, 2021; Agarwal and Zhang, 2022) has focused on investigation of sufficient conditions such as "concentrability measures" under which such offline RL algorithms can have guaranteed performance. Unfortunately, such measures of offline dataset quality are hard to compute, and of limited practical relevance (Argenson and Dulac-Arnold, 2020; Nair et al., 2020; Kumar et al., 2020; Levine et al., 2020; Kostrikov et al., 2021; Wagenmaker and Pacchiano, 2022]. 
There is of course, a large body of literature on online RL [Dann et al., 2021, Tiapkin et al., 2022, Ecoffet et al., 2021, Guo et al., 2022, Ecoffet et al., 2019, Osband et al., 2019] with two dominant design philosophies: Optimism-based algorithms such as UCRL2 in Auer et al. [2008], and Posterior Sampling (PS)-type algorithms such as PSRL [Osband et al., 2013, Ouyang et al., 2017], etc. [Osband et al., 2019, 2016, Russo and Van Roy, 2018, Zanette and Sarkar, 2017, Hao and Lattimore, 2022]. However, none of these algorithms consider starting the learning agent with an offline dataset. Of course, imitation learning [Hester et al., 2018, Beliaev et al., 2022, Schaal, 1996] is exactly concerned with learning the expert's behavioral policy (which may not be optimal) from the offline datasets but with no online finetuning of the policy learnt. Several papers have actually studied bridging offline RL and imitation learning [Ernst et al., 2005, Kumar et al., 2022, Rashidinejad et al., 2021, Hansen et al., 2022, Vecerik et al., 2017, Lee et al., 2022]. Some have also studied offline RL followed by a small amount of policy fine-tuning [Song et al., 2022, Fang et al., 2022, Xie et al., 2021b, Wan et al., 2022, Schrittwieser et al., 2021, Ball et al., 2023, Uehara and Sun, 2021, Xie et al., 2021b, Agarwal and Zhang, 2022] with the goal of finding policies that optimize simple regret. But none have studied the problem we introduce and study in this paper: Namely, given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance (in terms of cumulative regret) in MDPs. What is the best regret reduction that is achievable by use of offline datasets? How it depends on the quality and quantity of demonstrations, and what algorithms can one devise to achieve them? And does any information about the offline-dataset generation process help in regret reduction? We answer some of these questions in this paper. ## 2 Preliminaries Episodic Reinforcement Learning.Consider a scenario where an agent repeatedly interacts with an environment modelled as a finite-horizon MDP, and refer to each interaction as an episode. The finite-horizon MDP is represented by a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},P,r,H,\nu)\), where \(\mathcal{S}\) is a finite state space (of size \(S\)), \(\mathcal{A}\) is a finite action space (of size \(A\)), \(P\) encodes the transition probabilities, \(r\) is the reward function, \(H\) is the time horizon length, and \(\nu\) is the initial state distribution. The interaction protocol is as follows: at the beginning of each episode \(t\), the initial state \(s^{t}_{0}\) is independently drawn from \(\nu\). Then, at each period \(h=0,1,\ldots,H-1\) in episode \(t\), if the agent takes action \(a^{t}_{h}\in\mathcal{A}\) at the current state \(s^{t}_{h}\in\mathcal{S}\), then it will receive a reward \(r_{h}(s^{t}_{h},a^{t}_{h})\) and transit to the next state \(s^{t}_{h+1}\in P_{h}(\cdot|s^{t}_{h},a^{t}_{h})\). An episode terminates once the agent arrives at state \(s^{t}_{H}\) in period \(H\) and receives a reward \(r_{H}(s^{t}_{H})\). We abuse notation for the sake of simplicity, and just use \(r_{H}(s^{t}_{H},a^{t}_{H})\) instead of \(r_{H}(s^{t}_{H})\), though no action is taken at period \(H\). The objective is to maximize its expected total reward over \(T\) episodes. Let \(Q^{*}_{h}\) and \(V^{*}_{h}\) respectively denote the optimal state-action value and state value functions at period \(h\). 
Then, the Bellman equation for MDP \(\mathcal{M}\) is \[Q^{*}_{h}(s,a)=r_{h}(s,a)+\sum_{s^{\prime}}P_{h}(s^{\prime}|s,a)V^{*}_{h+1}(s^ {\prime}), \tag{1}\] where \(V^{*}_{h+1}(s^{\prime}):=\max_{b}Q^{*}_{h+1}(s^{\prime},b)\), if \(h<H-1\) and \(V^{*}_{h+1}(s^{\prime})=0\), if \(h=H-1\). We define a policy \(\pi\) as a mapping from a state-period pair to a probability distribution over the action space \(A\). A policy \(\pi^{*}\) is optimal if \(\pi_{h}^{*}(\cdot|s)\in\arg\max_{\pi_{h}}\sum_{a}Q_{h}^{*}(s,a)\pi_{h}(a|s)\) for all \(s\in\mathcal{S}\) and all \(h\). Agent's Prior Knowledge about \(\mathcal{M}\).We assume that the agent does not fully know the environment \(\mathcal{M}\); otherwise, there is no need for learning and this problem reduces to an optimization problem. However, the agent usually has some prior knowledge about the unknown part of \(\mathcal{M}\). For instance, the agent might know that \(\mathcal{M}\) lies in a low-dimensional subspace, and/or have a prior distribution over \(\mathcal{M}\). We use the notation \(\mathcal{M}(\theta)\) where \(\theta\) parameterizes the unknown part of the MDP. When we want to emphasize it as a random quantity, we will denote it by \(\theta^{*}\). Of course, different assumptions about the agent's prior knowledge lead to different problem formulations and algorithm designs. As a first step, we consider two canonical settings: * **Tabular RL:** The agent knows \(\mathcal{S},\mathcal{A},r,H\) and \(\nu\), but does not know \(P\). That is, \(\theta^{*}=P\) in this setting. We also assume that the agent has a prior over \(P\), and this prior is independent across state-period-action triples. * **Linear value function generalization:** The agent knows \(\mathcal{S},\mathcal{A},H\) and \(\nu\), but does not know \(P\) and \(r\). Moreover, the agent knows that for all \(h\), \(Q_{h}^{*}\) lies in a low-dimensional subspace \(\operatorname{span}(\Phi_{h})\), where \(\Phi_{h}\in\Re^{|\mathcal{S}||A|\times d}\) is a known matrix. In other words, \(Q_{h}^{*}=\Phi_{h}\theta_{h}^{*}\) for some \(\theta_{h}^{*}\in\Re^{d}\). Thus, in this setting \(\theta^{*}=\left[\theta_{0}^{*\top},\ldots,\theta_{H-1}^{*\top}\right]^{\top}\). We also assume that the agent has a Gaussian prior over \(\theta^{*}\). As we will discuss later, the insights developed in this paper could potentially be extended to more general cases. Offline Datasets.We denote an _offline dataset_ with \(L\) episodes as \(\mathcal{D}_{0}=\{(\bar{s}_{0}^{l},\bar{a}_{0}^{l},\cdots,\bar{s}_{H}^{l})_{l= 0}^{L-1}\}\), where \(N=HL\) denotes the dataset size in terms of number of observed transitions. For the sake of simplicity, we assume we have complete trajectories in the dataset but it can easily be generalized if not. We denote an _online dataset_ with \(t\) episodes as \(\mathcal{H}_{t}=\{(s_{0}^{l},a_{0}^{l},\cdots,s_{H}^{l})_{l=0}^{t}\}\) and \(\mathcal{D}_{t}=\mathcal{D}_{0}\oplus\mathcal{H}_{t}\). The Notion of Regret.A online learning algorithm \(\phi\) is a map for each episode \(t\), and time \(h\), \(\phi_{t,h}:\mathcal{D}_{t}\rightarrow\Delta_{A}\), the probability simplex over actions. 
We define the Bayesian regret of an online learning algorithm \(\phi\) over \(T\) episodes as \[\mathfrak{B}\mathfrak{R}_{T}(\phi):=\mathbb{E}\left[\sum_{t=1}^{T}\left(V_{0}^{ *}(s_{0}^{t};\theta^{*})-\sum_{h=0}^{H}r_{h}(s_{h}^{t},a_{h}^{t})\right)\right]\,,\] where the \((s_{h}^{t},a_{h}^{t})\)'s are the state-action tuples from using the learning algorithm \(\phi\), and the expectation is over the sequence induced by the interaction of the learning algorithm and the environment, the prior distributions over the unknown parameters \(\theta^{*}\) and the offline dataset \(\mathcal{D}_{0}\). Expert's behavioral policy and competence.We assume that the expert that generated the offline demonstrations may not be perfect, i.e., the actions it takes are only approximately optimal with respect to the optimal \(Q\)-value function. To that end, we model the expert's policy by use of the following generative model, \[\pi_{h}^{\beta}(a|s)=\frac{\exp(\beta(s)Q_{h}^{*}(s,a))}{\sum_{a}\exp(\beta(s )Q_{h}^{*}(s,a))}, \tag{2}\] where \(\beta(s)\geq 0\) is called the _state-dependent deliberateness_ parameter, e.g., when \(\beta(s)=0\), the expert behaves naively in state \(s\), and takes actions uniformly randomly. When \(\beta(s)\to\infty\), the expert uses the optimal policy when in state \(s\). When \(\beta(\cdot)\) is unknown, we will assume an independent exponential prior for the sake of analytical simplicity, \(f_{2}(\beta(s))=\lambda_{2}\exp(-\lambda_{2}\beta(s))\) over \(\beta(s)\) where \(\lambda_{2}>0\) is the same for all \(s\). In our experiments, we will regard \(\beta(s)\) as being the same for all states, and hence a single parameter. The above assumes the expert is knowledgeable about \(Q^{*}\). However, it may know it only approximately. To model that, we introduce a _knowledge_ parameter \(\lambda\geq 0\). The expert then knows \(\tilde{Q}\) which is distributed as \(\mathcal{N}(Q^{*},\mathbb{I}/\lambda^{2})\) conditioned on \(\theta\), and selects actions according to the softmax policy equation 2, with the \(Q^{*}\) replaced by \(\tilde{Q}\). The two parameters \((\beta,\lambda)\) together will be referred to as the _competence_ of the expert. In this case, we denote the expert's policy as \(\pi_{h}^{\beta,\lambda}\). _Remark 2.1_.: While the form of the generative policy in eq. 2 seems specific, \(\pi_{h}^{\beta}(\cdot|s)\) is a random vector with support over the entire probability simplex. In particular, if one regards \(\beta(s)\) and \(\tilde{Q}_{h}(s,\cdot)\) as parameters that parameterize the policy, the softmax policy structure as in equation 2 is enough to realize any stationary policy. Furthermore, we note that our main objective here is to yield clear and useful insights when information is available to be able to model the expert's behavioral policy with varying competence levels. Other forms of generative policies can also be used including \(\epsilon\)-optimal policies introduced in (Beliaev et al., 2022), and the framework extended. ## 3 The Informed PSRL Algorithm We now introduce a simple _Informed Posterior Sampling-based Reinforcement Learning_ (iPSRL) algorithm that naturally uses the offline dataset \(\mathcal{D}_{0}\) and action generation information to construct an informed prior distribution over \(\theta^{*}\). 
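For concreteness, the following minimal sketch illustrates the expert model of equation 2 that this informed prior exploits: it computes \(Q^{*}\) for a small tabular MDP by backward induction (the Bellman equation (1)) and samples expert actions through the softmax with deliberateness \(\beta\), optionally with imperfect knowledge \(\lambda\). The toy MDP, sizes and names are illustrative assumptions, not taken from a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random tabular MDP (illustrative sizes): S states, A actions, horizon H.
S, A, H = 5, 3, 4
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # P[h, s, a] = transition distribution over next states
r = rng.uniform(0.0, 1.0, size=(H + 1, S, A))   # rewards; r[H] plays the role of the terminal reward r_H

def q_star(P, r):
    """Optimal Q-values by backward induction, i.e. the finite-horizon Bellman equation (1)."""
    Q = np.zeros((H + 1, S, A))
    Q[H] = r[H]                                  # terminal period
    for h in range(H - 1, -1, -1):
        V_next = Q[h + 1].max(axis=1)            # V*_{h+1}(s') = max_b Q*_{h+1}(s', b)
        Q[h] = r[h] + P[h] @ V_next              # r_h(s,a) + sum_s' P_h(s'|s,a) V*_{h+1}(s')
    return Q

def expert_action_probs(Q_hs, beta, lam=np.inf):
    """Softmax expert of equation 2; a finite lam adds noise Q~ ~ N(Q*, I/lam^2) (imperfect knowledge)."""
    Q_tilde = Q_hs if np.isinf(lam) else Q_hs + rng.normal(0.0, 1.0 / lam, size=Q_hs.shape)
    z = beta * Q_tilde
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

Q = q_star(P, r)
for beta in (0.0, 1.0, 10.0):
    print("beta =", beta, " expert policy at (h=0, s=0):", np.round(expert_action_probs(Q[0, 0], beta), 3))
```

As \(\beta\) grows, the sampled policy concentrates on the greedy action, matching the limiting behaviour described in Section 2.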
The realization of \(\theta^{*}\) is assumed known to the expert (but not the learning agent) with \(\tilde{Q}(\cdot,\cdot;\theta^{*})=Q(\cdot,\cdot;\theta^{*})\), and \(\beta(s):=\beta\geq 0\) (i.e., it is state-invariant) is also known to the expert. Thus, the learning agent's posterior distribution over \(\theta^{*}\) given the offline dataset is, \[\begin{split}&\mathbb{P}(\theta^{*}\in\cdot|\mathcal{D}_{0}) \propto\mathbb{P}(\mathcal{D}_{0}|\theta^{*}\in\cdot)\mathbb{P}(\theta^{*} \in\cdot)\\ =&\mathbb{P}(\theta^{*}\in\cdot)\times\int_{\theta \in\cdot}\prod_{l=0}^{L-1}\prod_{h=0}^{H-1}\theta(\bar{s}_{l}^{h+1}|\bar{s}_{ l}^{h},\bar{a}_{l}^{h})\pi_{h}^{\beta}(\bar{a}_{l}^{h}|\bar{s}_{l}^{h}, \theta)\nu(\bar{s}_{l}^{0})\,d\theta.\end{split} \tag{3}\] A PSRL agent (Osband et al., 2013; Ouyang et al., 2017) takes this as the prior, and then updates the posterior distribution over \(\theta^{*}\) as online observation tuples, \((s_{t},a_{t},s_{t}^{\prime},r_{t})\) become available. Such an agent is really an ideal agent with assumed posterior distribution updates being exact. In practice, this is computationally intractable and we will need to get samples from an approximate posterior distribution, an issue which we will address in the next section. ### Prior-dependent Regret Bound It is natural to expect some regret reduction if an offline demonstration dataset is available to warm-start the online learning. However, the degree of improvement must depend on the "quality" of demonstrations, for example through the competence parameter \(\beta\). Further note that the role of the offline dataset is via the prior distribution the PSRL algorithm uses. Thus, theoretical analysis involves obtaining a prior-dependent regret bound, which we obtain next. We use \(\mathbb{H}\) to denote Shannon entropy (with the natural logarithm). We start by establishing a Bayesian regret bound of PSRL algorithm for MDPs with any prior distribution. **Lemma 3.1**.: _Let \(\nu\) be the prior distribution of \(\pi^{*}\), then the PSRL algorithm satisfies_ \[\mathfrak{B}\mathfrak{R}_{T}(\phi^{\text{PSRL}})\leq\min\Bigl{\{}H\sqrt{(SA)^ {H}\mathbb{H}(\nu)T/2},\sqrt{S^{2}A^{2}H^{4}T\log(STH)}\Bigr{\}}.\] The proof can be found in the Appendix. Note that the first part of the above bound reflects the prior effect though \(\mathbb{H}(\nu)\) while the second part is prior-independent. This gives us the following corollary for the iPSRL algorithm: **Corollary 3.2**.: _For the iPSRL algorithm,_ \[\mathfrak{B}\mathfrak{R}_{T}(\phi^{\text{iPSRL}})\leq\min\Big{\{}H\sqrt{(SA)^ {H}\mathbb{H}(\pi^{*}|\mathcal{D}_{0})T/2},\sqrt{S^{2}A^{2}H^{4}T\log(STH)} \Big{\}}. \tag{4}\] Proof.: Conditioning on \(\mathcal{D}_{0}=\bar{\mathcal{D}}_{0}\), by applying Lemma 3.1, we can obtain \[\mathbb{E}\left[\sum_{t=1}^{T}\left(V_{0}^{*}(s_{0}^{t};\theta^{*})-\sum_{h=0 }^{H}r_{h}(s_{h}^{t},a_{h}^{t})\right)|\mathcal{D}_{0}=\bar{\mathcal{D}}_{0} \right]\leq H\sqrt{(SA)^{H}\mathbb{H}(\pi^{*}|\mathcal{D}_{0}=\bar{\mathcal{D}} _{0})T/2}.\] The corollary then follows by taking expectations on both sides over \(\mathcal{D}_{0}\) along with the concavity of the square root function. The conditional information \(\mathbb{H}(\pi^{*}|\mathcal{D}_{0})\) measures the amount of randomness of \(\pi^{*}\) given \(\mathcal{D}_{0}\). Therefore, Corollary 3.2 means that the more certain about \(\pi^{*}\) we are, the less regret the iPSRL algorithm will incur. 
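To make the informed prior of equation 3 concrete, the following minimal sketch treats the idealized case in which \(\theta\) ranges over a finite set of candidate transition kernels, with rewards and \(\beta\) known: each candidate is reweighted by the likelihood of the offline trajectories, including the \(\pi^{\beta}\) action terms. The finite candidate set and toy sizes are illustrative simplifications; as noted above, exact posterior updates are intractable in general, which motivates Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, H, K, L = 4, 2, 3, 8, 20            # illustrative sizes: K candidate models, L offline episodes
r = rng.uniform(size=(H + 1, S, A))       # reward function, assumed known (tabular setting)
beta = 3.0                                # expert deliberateness, assumed known here

def q_star(P):
    Q = np.zeros((H + 1, S, A))
    Q[H] = r[H]
    for h in range(H - 1, -1, -1):
        Q[h] = r[h] + P[h] @ Q[h + 1].max(axis=1)
    return Q

def softmax_policy(Q):                    # pi^beta_h(a|s, theta), equation 2
    z = beta * Q
    z = z - z.max(axis=-1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

# Candidate transition kernels theta; the first one generates the offline data.
candidates = [rng.dirichlet(np.ones(S), size=(H, S, A)) for _ in range(K)]
theta_true = candidates[0]
pi_expert = softmax_policy(q_star(theta_true))

D0 = []                                   # offline dataset of (h, s, a, s') tuples
for _ in range(L):
    s = 0                                 # initial distribution concentrated at s = 0 (illustrative)
    for h in range(H):
        a = rng.choice(A, p=pi_expert[h, s])
        s_next = rng.choice(S, p=theta_true[h, s, a])
        D0.append((h, s, a, s_next))
        s = s_next

# Informed posterior over the candidate set, cf. equation 3:
# log P(theta | D_0) = log prior + sum log theta(s'|s,a) + sum log pi^beta(a|s,theta) + const.
log_post = np.log(np.full(K, 1.0 / K))    # uniform prior over candidates
for k, P in enumerate(candidates):
    pi_k = softmax_policy(q_star(P))
    for (h, s, a, s_next) in D0:
        log_post[k] += np.log(P[h, s, a, s_next]) + np.log(pi_k[h, s, a])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mass on the data-generating model:", round(float(post[0]), 3))
```

With more offline episodes or a larger \(\beta\), the posterior tends to concentrate faster on the data-generating model, which is in line with the regret reduction quantified above.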
In the rest of the proof, we provide an upper bound for \(\mathbb{H}(\pi^{*}|\mathcal{D}_{0})\) by use of Fano's inequality. For positive integer \(K\), let \([K]:=\{1,2,\ldots,K\}\). **Lemma 3.3** (Fano's Inequality).: _Let \(Y,\hat{Y}\) be random variables on \([K]\) such that \(\Pr(Y\neq\hat{Y})\leq\varepsilon\leq 1/2\). Then,_ \[\mathbb{H}(Y|\hat{Y})\leq\varepsilon\log K+\mathbf{h}(\varepsilon), \tag{5}\] _where \(\mathbf{h}(\varepsilon)=-\varepsilon\log\varepsilon-(1-\varepsilon)\log(1-\varepsilon)\) is the binary entropy function._ We assume the following about the prior distribution of \(\theta^{*}\). **Assumption 3.4**.: There exists a \(\Delta>0\) such that for all \(\theta\in\Theta\), \(h\in[H]\), and \(s\in\mathcal{S}\), there exists an \(a^{*}\in\mathcal{A}\) such that \(Q_{h}(s,a^{*};\theta)\geq Q_{h}(s,a^{\prime};\theta)+\Delta,\ \forall a^{\prime}\in\mathcal{A}\backslash\{a^{*}\}\). Define \(p_{h}(s;\theta):=\Pr_{\theta,\pi^{*}(\theta)}(s_{h}=s)\). **Assumption 3.5**.: The infimum probability of any reachable state, defined as \[\underline{p}:=\inf\{p_{h}(s;\theta):h\in[H],s\in\mathcal{S},\theta\in\Theta,p_{h}(s;\theta)>0\}\] satisfies \(\underline{p}>0\). We now describe a procedure to construct an estimator \(\hat{\pi}^{*}\) from \(\mathcal{D}_{0}\) so that \(\Pr(\pi^{*}\neq\hat{\pi}^{*})\) is small. Fix an integer \(N\), and choose a \(\delta\in(0,1)\). For each \(\theta\in\Theta\), define a deterministic Markov policy \(\pi^{*}(\theta)=(\pi^{*}_{h}(\cdot\,;\theta))_{h=1}^{H}\) sequentially through \[\pi^{*}_{h}(s;\theta)=\begin{cases}\arg\max_{a}Q_{h}(s,a;\theta),&\text{if }\Pr_{\theta,\pi^{*}_{1:h-1}(\theta)}(s_{h}=s)>0\\ \bar{a}_{0},&\text{if }\Pr_{\theta,\pi^{*}_{1:h-1}(\theta)}(s_{h}=s)=0,\end{cases} \tag{6}\] where the tiebreaker for the argmax operation is based on a fixed order on actions, and \(\bar{a}_{0}\in\mathcal{A}\) is a fixed action in \(\mathcal{A}\). It is clear that \(\pi^{*}(\theta)\) is an optimal policy for the MDP \(\theta\). Furthermore, for those states that are impossible to be visited, we choose to take the fixed action \(\bar{a}_{0}\). Although the choice of action at those states doesn't matter, our construction will be helpful for the proofs. **Construction of \(\hat{\pi}^{*}\):** Let \(N_{h}(s)\) (resp. \(N_{h}(s,a)\)) be the number of times state \(s\) (resp. state-action pair \((s,a)\)) appears at time \(h\) in dataset \(\mathcal{D}_{0}\). Define \(\hat{\pi}^{*}\) to be such that: * \(\hat{\pi}^{*}_{h}(s)=\arg\max_{a\in\mathcal{A}}N_{h}(s,a)\) (ties are broken through some fixed ordering of actions) whenever \(N_{h}(s)\geq\delta N\); * \(\hat{\pi}^{*}_{h}(s)=\bar{a}_{0}\) whenever \(N_{h}(s)<\delta N\). Here \(\bar{a}_{0}\) is the fixed action in \(\mathcal{A}\) that was used in the definition of \(\pi^{*}(\theta)\). The idea of the proof is that for sufficiently large \(\beta\) and \(N\), we can choose a \(\delta\in(0,1)\) such that * _Claim 1:_ If \(s\in\mathcal{S}\) is probable at time \(h\) under \(\pi^{*}(\theta)\), then \(N_{h}(s)\geq\delta N\) with large probability. Furthermore, \(\pi^{*}_{h}(s)=\arg\max_{a\in\mathcal{A}}N_{h}(s,a)\) with large probability as well. * _Claim 2:_ If \(s\in\mathcal{S}\) is improbable at time \(h\) under \(\pi^{*}(\theta)\), then \(N_{h}(s)<\delta N\) with large probability. Given the two claims, we can then conclude that \(\pi^{*}=\hat{\pi}^{*}\) with high probability via a standard union bound argument. **Lemma 3.6**.: _Let \(X\) be the sum of \(N\) i.i.d. Bernoulli random variables with mean \(p\in(0,1)\).
Let \(q\in(0,1)\), then_ \[\Pr(X\leq qN) \leq\exp\left(-2N(q-p)^{2}\right),\qquad\text{if }q<p,\] \[\Pr(X\geq qN) \leq\exp\left(-2N(q-p)^{2}\right),\qquad\text{if }q>p.\] Proof.: Both inequalities can be obtained by applying Hoeffding's Inequality. **Lemma 3.7**.: _Let \(\Delta\) and \(\underline{p}\) be as in Assumptions 3.4 and 3.5 respectively and let_ \[\underline{\beta}:=[\log 3-\log\underline{p}+\log(H-1)+\log(A-1)]/\Delta.\] _For any \(\beta\geq\underline{\beta}\) and \(N\in\mathbb{N}\), there exists an estimator \(\hat{\pi}^{*}\) constructed from \(\mathcal{D}_{0}\) that satisfies_ \[\Pr(\pi^{*}\neq\hat{\pi}^{*})\leq SH\left[\exp\left(-\frac{Np^{2}}{18}\right)+ \exp\left(-\frac{Np}{36}\right)\right].\] The proof is available in the appendix. **Theorem 3.8**.: _Let \(\beta\geq\underline{\beta}\), then for all \(N\) such that \(\varepsilon_{N}\leq 1/2\), we have_ \[\mathfrak{B}\mathfrak{R}_{T}(\phi^{iPSRL})\leq\min\Big{\{}\sqrt{S^{2}A^{2}H^{4}T \log(STH)},H\sqrt{(SA)^{H}(\varepsilon_{N}SH\log A+\mathbf{h}(\varepsilon_{N}) )T/2}\Big{\}}. \tag{7}\] _where_ \[\varepsilon_{N}=SH\left[\exp\left(-\frac{Np^{2}}{18}\right)+\exp\left(-\frac{N \underline{p}}{36}\right)\right].\] Proof.: Since \(\hat{\pi}^{*}\) is a function of \(\mathcal{D}_{0}\), we have \(\mathbb{H}(\pi^{*}|\mathcal{D}_{0})\leq\mathbb{H}(\pi^{*}|\hat{\pi}^{*})\). The result then follows from Lemma 3.2, Lemma 3.3, and Lemma 3.7. Note that using the inequality \(\mathbf{h}(\varepsilon)\leq 2\sqrt{\varepsilon}-\varepsilon\) for all \(\varepsilon\in(0,1)\), we see that the right-hand side of equation 7 converges to zero exponentially fast as \(N\to\infty\). _Remark 3.9_.: (a) For fixed \(N\), and large \(S\) and \(A\), the regret bound is \(\tilde{O}(SAH^{2}\sqrt{T})\), which possibly could be improved in \(H\). (b) For a suitably large \(\beta\), the regret bound obtained goes to zero exponentially fast as \(N\), the offline dataset size, goes to infinity thus indicating the online learning algorithm's ability to learn via imitation of the expert. (c) Corollary 3.2 can be improved to remove the exponential dependency on \(H\) by using the Cauchy-Schwarz inequality in the space of state-action occupancy measures. Such technique has been successfully used in (Hao and Lattimore, 2022) in a purely online setting. We leave this refinement as a part of future work. ## 4 Approximating iPSRL ### The Informed RLSVI Algorithm The iPSRL algorithm introduced in the previous section assumes that posterior updates can be done exactly. In practice, the posterior update in Eq. equation 3 is challenging due to the loss of conjugacy while using the Bayes rule. Thus, we must find a computationally efficient way to do approximate posterior updates (and obtain samples from it) to enable practical implementation. Hence, we propose a novel approach based on Bayesian bootstrapping to obtain approximate posterior samples. The key idea is to perturb the loss function for the maximum a posterior (MAP) estimate and use the point estimate as a surrogate for the exact posterior sample. Note that in the ensuing, we regard \(\beta\) as also unknown to the learning agent (and \(\lambda=\infty\) for simplicity). Thus, the learning agent must form a belief over both \(\theta\) and \(\beta\) via a joint posterior distribution conditioned on the offline dataset \(\mathcal{D}_{0}\) and the online data at time \(t\), \(\mathcal{H}_{t}\). We denote the prior pdf over \(\theta\) by \(f(\cdot)\) and prior pdf over \(\beta\) by \(f_{2}(\cdot)\). 
For the sake of compact notation, we denote \(Q_{h}^{*}(s,a;\theta)\) as \(Q_{h}^{\theta}(s,a)\) in this section. Now, consider the offline dataset, \[\mathcal{D}_{0}=\{((s_{h}^{l},a_{h}^{l},\check{s}_{h}^{l},r_{h}^{l})_{h=0}^{H- 1})_{l=1}^{L}\}\] and denote \(\theta=(\theta_{h})_{h=0}^{H-1}\). We introduce the _temporal difference error_\(\mathcal{E}_{h}^{l}\) (parameterized by a given \(Q^{\theta}\)), \[\mathcal{E}_{h}^{l}(Q^{\theta}):=\left(r_{h}^{l}+\max_{b}Q_{h+1}^{\theta}( \check{s}_{h}^{l},b)-Q_{h}^{\theta}(s_{h}^{l},a_{h}^{l})\right).\] We will regard \(Q^{\theta}_{h}\) to only be parameterized by \(\theta_{h}\), i.e., \(Q^{\theta_{h}}_{h}\) but abuse notation for the sake of simplicity. We use this to construct a _parameterized offline dataset_, \[\mathcal{D}_{0}(Q^{\theta})=\{((s^{l}_{h},a^{l}_{h},\check{s}^{l}_{h}, \mathcal{E}^{l}_{h}(Q^{\theta}))_{h=0:H-1})_{l=1:L}\}.\] A parametrized online dataset \(\mathcal{H}_{t}(Q^{\theta})\) after episode \(t\) can be similarly defined. To ease notation, we will regard the \(j\)th episode during the online phase as the \((L+j)\)th observed episode. Thus, \[\mathcal{H}_{t}(Q^{\theta})=\{((s^{k}_{h},a^{k}_{h},\check{s}^{k}_{h}, \mathcal{E}^{k}_{h}(Q^{\theta}))_{h=0:H-1})_{k=L+1:L+t}\},\] the dataset observed during the online phase by episode \(t\). Note that \(Q^{\theta}\) is to be regarded as a parameter. Now, at time \(t\), we would like to obtain a **MAP estimate** for \((\theta,\beta)\) by solving the following: **MAP:**\[\arg\max_{\theta,\beta}\log P(\mathcal{H}_{t}(Q^{\theta})| \mathcal{D}_{0}(Q^{\theta}),\theta,\beta)+\log P(\mathcal{D}_{0}(Q^{\theta})| \theta,\beta)+\log f(\theta)+\log f_{2}(\beta).\] (8) Denote a perturbed version of the \(Q^{\theta}\)-parameterized offline dataset by \[\tilde{\mathcal{D}}_{0}(Q^{\theta})=\{((s^{l}_{h},\check{a}^{l}_{h},\check{s} ^{l}_{h},\check{\mathcal{E}}^{l}_{h})_{h=0:H-1})_{l=1:L}\}\] where random perturbations are added: (i) actions have perturbation \(w^{h}_{l}\sim\exp(1)\), (ii) rewards have perturbations \(z^{l}_{h}\sim\mathcal{N}(0,\sigma^{2})\), and (iii) the prior \(\tilde{\theta}\sim\mathcal{N}(0,\Sigma_{0})\). Note that the first and second terms involving \(\mathcal{H}_{t}\) and \(\mathcal{D}_{0}\) in equation 8 are independent of \(\beta\) when conditioned on the actions. Thus, we have a sum of _log-likelihood of TD error, transition and action_ as follows: \[\log P(\tilde{\mathcal{D}}_{0}(Q^{\theta})|Q^{\theta}_{0:H}) =\sum_{l=1}^{L}\sum_{h}\Big{(}\log P(\tilde{\mathcal{E}}^{l}_{h} |\check{s}^{l}_{h},a^{l}_{h},s^{l}_{h},Q^{\theta}_{0:H})+\log P(\check{s}^{l}_ {h}|a^{l}_{h},s^{l}_{h},Q^{\theta}_{0:H})+\log P(a^{l}_{h}|s^{l}_{h},Q^{\theta }_{0:H})\Big{)}\] \[\leq\sum_{l=1}^{L}\sum_{h}\Big{(}\log P(\tilde{\mathcal{E}}^{l}_{ h}|\check{s}^{l}_{h},a^{l}_{h},s^{l}_{h},Q^{\theta}_{h:h+1})+\log\pi^{\beta}_{h} (a^{l}_{h}|s^{l}_{h},Q^{\theta}_{h})\Big{)}.\] By ignoring the log-likelihood of the transition term (akin to optimizing an upper bound on the negative loss function), we are actually being _optimistic_. 
For the terms in the upper bound above, under the random perturbations assumed above, we have \[\log P(\tilde{\mathcal{E}}^{l}_{h}|\check{s}^{l}_{h},a^{l}_{h},s^{l}_{h},Q^{ \theta}_{h:h+1})=-\frac{1}{2}\left(r^{l}_{h}+z^{l}_{h}+\max_{b}Q^{\theta}_{h+1 }(\check{s}^{l}_{h},b)-Q^{\theta}_{h}(s^{l}_{h},a^{l}_{h})\right)^{2}+\text{ constant}\] and \[\log\pi^{\beta}_{h}(a^{l}_{h}|s^{l}_{h},Q^{\theta}_{h})=w^{l}_{h} \left(\beta Q^{\theta}_{h}(s^{l}_{h},a^{l}_{h})-\log\sum_{b}\exp\left(\beta Q^ {\theta}_{h}(s^{l}_{h},b)\right)\right).\] Now, denote a perturbed version of the \(Q^{\theta}\)-parametrized online dataset, \[\tilde{\mathcal{H}}_{t}(Q^{\theta})=\{((s^{k}_{h},a^{k}_{h},\check{s}^{k}_{h}, \check{\mathcal{E}}^{k}_{h})_{h=0:H-1})_{k=L+1:L+t}\},\] and thus similar to before, we have \[\log P(\tilde{\mathcal{H}}_{t}(Q^{\theta})|\tilde{\mathcal{D}}_{0}(Q ^{\theta}),Q^{\theta}_{0:H}) =\sum_{k=L+1}^{L+t}\sum_{h}\Big{(}\log P(\tilde{\mathcal{E}}_{h}^{ k}(Q^{\theta})|\check{s}_{h}^{k},a_{h}^{k},s_{h}^{k},Q^{\theta}_{0:H})+\log P( \check{s}_{h}^{k}|a_{h}^{k},s_{h}^{k},Q^{\theta})\Big{)},\] \[\leq\sum_{k=L+1}^{L+t}\sum_{h}\left(\log P(\tilde{\mathcal{E}}_{h} ^{k}|\check{s}_{h}^{k},a_{h}^{k},s_{h}^{k},Q^{\theta}_{h:h+1})\right),\] where we again ignored the transition term to obtain an _optimistic_ upper bound. Given the random perturbations above, we have \[\log P(\tilde{\mathcal{E}}_{h}^{k}(Q^{\theta})|\check{s}_{h}^{k},a_{h}^{k},s_ {h}^{k},Q^{\theta}_{h:h+1})=-\frac{1}{2}\left(r_{h}^{k}+z_{h}^{k}+\max_{b}Q^{ \theta}_{h+1}(\check{s}_{h}^{k},b)-Q^{\theta}_{h}(s_{h}^{k},a_{h}^{k})\right) ^{2}+\text{ constant}.\] The prior over \(\beta\), \(f_{2}(\beta)\) is assumed to be an exponential pdf \(\lambda_{2}\exp(-\lambda_{2}\beta),\beta\geq 0\), while that over \(\theta\) is assumed Gaussian. Thus, putting it all together, we get the following _optimistic loss function_ (to minimize over \(\theta\) and \(\beta\)), \[\begin{split}&\tilde{\mathcal{L}}(\theta,\beta)=\frac{1}{2\sigma^{2 }}\sum_{k=1}^{L+t}\sum_{h=0}^{H-1}\left(r_{h}^{k}+z_{h}^{k}+\max_{b}Q^{\theta} _{h+1}(\check{s}_{h}^{k},b)-Q^{\theta}_{h}(s_{h}^{k},a_{h}^{k})\right)^{2}\\ &-\sum_{l=1}^{L}\sum_{h=0}^{H-1}w_{h}^{l}\left(\beta Q^{\theta}_{ h}(s_{h}^{l},a_{h}^{l})-\log\sum_{b}\exp\left(\beta Q^{\theta}_{h}(s_{h}^{l},b) \right)\right)+\frac{1}{2}(\theta-\tilde{\theta})^{\top}\Sigma_{0}(\theta- \tilde{\theta})+\lambda_{2}\beta.\end{split} \tag{9}\] The above loss function is difficult to optimize in general due to the max operation, and the \(Q\)-value function in general having a nonlinear form. _Remark 4.1_.: Note that the loss function in equation 9 can be hard to jointly optimize over \(\theta\) and \(\beta\). In particular, estimates of \(\beta\) can be quite noisy when \(\beta\) is large, and the near-optimal expert policy only covers the state-action space partially. Thus, we consider other methods of estimating \(\beta\) that are more robust, which can then be plugged into the loss function in equation 9. Specifically, we could simply look at the entropy of the empirical distribution of the action in the offline dataset. Suppose the empirical distribution of \(\{\bar{a}_{0}^{l},\ldots\bar{a}_{H}^{l}\}_{l=1}^{L}\) is \(\mu_{A}\). Then we use \(c_{0}/\mathcal{H}(\mu_{A})\) as an estimation for \(\beta\), where \(c_{0}>0\) is a hyperparameter. The intuition is that for smaller \(\beta\), the offline actions tend to be more uniform and thus the entropy will be large. 
This is an unsupervised approach and agnostic to specific offline data generation process. _Remark 4.2_.: In the loss function in equation 9, the parameter \(\theta\) appears inside the max operation. Thus, it can be quite difficult to optimize over \(\beta\). Since the loss function is typically optimized via an iterative algorithm such as a gradient descent method, a simple and scalable solution that works well in practice is to use the parameter estimate \(\theta\) from the previous iteration inside the max operation, and thus optimize over \(\theta\) only in the other terms. ### iRLSVI bridges Online RL and Imitation Learning In the previous subsection, we derived iRLSVI, a Bayesian-bootstrapped algorithm. We now present interpretation of the algorithm as bridging online RL (via commonality with the RLSVI algorithm (Osband et al., 2016) and imitation learning, and hence a way for its generalization. Consider the RLSVI algorithm for online reinforcement learning as introduced in (Osband et al., 2019). It draws its inspiration from the posterior sampling principle for online learning, and has excellent cumulative regret performance. RLSVI, that uses all of the data available at the end of episode \(t\), including any offline dataset involves minimizing the corresponding loss function at each time step: \[\tilde{\mathcal{L}}_{\text{RLSVI}}(\theta)=\frac{1}{2\sigma^{2}}\sum_{k=1}^{L+t }\sum_{h=0}^{H-1}\left(r_{h}^{k}+\max_{b}Q_{h+1}^{\theta}(\check{s}_{h}^{k},b)-Q _{h}^{\theta}(s_{h}^{k},a_{h}^{k})\right)^{2}+\frac{1}{2}(\theta_{0:H}-\tilde{ \theta}_{0:H})^{\top}\Sigma_{0}(\theta_{0:H}-\tilde{\theta}_{0:H}).\] Now, let us consider an imitation learning setting. Let \(\tau_{l}=(s_{h}^{l},a_{h}^{l},\check{s}_{h}^{l})_{h=0}^{H-1}\) be the trajectory of the \(l\)th episode. Let \(\hat{\pi}_{h}(a|s)\) denote the empirical estimate of probability of taking action \(a\) in state \(s\) at time \(h\), i.e., an empirical estimate of the expert's randomized policy. Let \(p(\tau)\) denote the probability of observing the trajectory under the policy \(\hat{\pi}\). Let \(\pi_{h}^{\beta,\theta}(\cdot|s)\) denote the parametric representation of the policy used by the expert. And let \(p^{\beta,\theta}(\tau)\) denote the probability of observing the trajectory \(\tau\) under the policy \(\pi^{\beta,\theta}\). 
Then, the loss function corresponding to the KL divergence between \(\Pi_{l=1}^{L}p(\tau_{l})\) and \(\Pi_{l=1}^{L}p^{\beta,\theta}(\tau_{l})\) is given by \[\tilde{\mathcal{L}}_{\text{IL}}(\beta,\theta) =D_{KL}\left(\Pi_{l=1}^{L}p(\tau_{l})||\Pi_{l=1}^{L}p^{\beta, \theta}(\tau_{l})\right)=\int\Pi_{l=1}^{L}p(\tau_{l})\log\frac{\Pi_{l=1}^{L}p( \tau_{l})}{\Pi_{l=1}^{L}p^{\beta}(\tau_{l})}=\sum_{l=1}^{L}\int p(\tau_{l}) \log\frac{p(\tau_{l})}{p^{\beta,\theta}(\tau_{l})},\] \[=\sum_{l=1}^{L}\sum_{h=0}^{H-1}\log\frac{\hat{\pi}_{h}(a_{h}^{l}| s_{h}^{l})}{\pi_{h}^{\beta,\theta}(a_{h}^{l}|s_{h}^{l})}\] \[=\sum_{l=1}^{L}\sum_{h=0}^{H-1}[\log\hat{\pi}_{h}(a_{h}^{l}|s_{h}^ {l})-\log\pi_{h}^{\beta,\theta}(a_{h}^{l}|s_{h}^{l})]\] \[=-\sum_{l=1}^{L}\sum_{h=0}^{H-1}\left(\beta Q_{h}^{\theta}(s_{h}^ {l},a_{h}^{l})-\log\sum_{b}\exp\left(\beta Q_{h}^{\theta}(s_{h}^{l},b)\right) \right)\quad+\text{constant}.\] _Remark 4.3_.: (i) The loss function \(\tilde{\mathcal{L}}_{\text{IL}}(\beta,\theta)\) is the same as the second (action-likelihood) term in equation 9 while the loss function \(\tilde{\mathcal{L}}_{\text{RLSVI}}(\theta)\) is the same as the first and third terms there (except for perturbation) and minus the \(\lambda_{2}\beta\) term that corresponds to the prior over \(\beta\). (ii) Note that while we used the more common KL divergence for the imitation learning loss function, use of log loss would yield the same outcome. Thus, the iRLSVI loss function can be viewed as \[\tilde{\mathcal{L}}(\beta,\theta)=\tilde{\mathcal{L}}_{\text{RLSVI}}(\theta)+ \tilde{\mathcal{L}}_{\text{IL}}(\beta,\theta)+\lambda_{2}\beta, \tag{10}\] thus establishing that the proposed algorithm may be viewed as bridging Online RL with Imitation Learning. Note that the last term corresponds to the prior over \(\beta\). If \(\beta\) is known (or uniform), it will not show up in the loss function above. The above also suggests a possible way to generalize and obtain other online learning algorithms that can bootstrap by use of offline datasets. Namely, at each step, they can optimize a general loss function of the following kind: \[\tilde{\mathcal{L}}_{\alpha}(\beta,\theta)=\alpha\tilde{\mathcal{L}}_{\text{ ORL}}(\theta)+(1-\alpha)\tilde{\mathcal{L}}_{\text{IL}}(\beta,\theta)+ \lambda_{2}\beta, \tag{11}\] where \(\mathcal{\tilde{L}}_{\text{ORL}}\) is a loss function for an Online RL algorithm, \(\mathcal{\tilde{L}}_{\text{IL}}\) is a loss function for some Imitation Learning algorithm, and factor \(\alpha\in[0,1]\) provides a way to tune between emphasizing the offline imitation learning and the online reinforcement learning. ## 5 Empirical Results **Performance on the Deep Sea Environment.** We now present some empirical results on "deep sea", a prototypical environment for online reinforcement learning (Osband et al., 2019). We compare three variants of the iRLSVI agents, which are respectively referred to as _informed_ RLSVI (iRLSVI), _partially informed_ RLSVI (piRLSVI), and _uninformed_ RLSVI (uRLSVI). All three agents are tabular RLSVI agents with similar posterior sampling-type exploration schemes. However, they differ in whether or not and how to exploit the offline dataset. In particular, uRLSVI ignores the offline dataset; piRLSVI exploits the offline dataset but does not utilize the information about the generative policy; while iRLSVI fully exploits the information in the offline dataset, about both the generative policy and the reward feedback. We note no other algorithms are known for the problem as posed. 
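Before turning to the experiments, the following minimal sketch spells out the interpolated objective of equation 11 for a tabular \(Q\)-parameterization, together with the Bayesian-bootstrap perturbations of Section 4.1 and the entropy-based plug-in estimate of \(\beta\) from Remark 4.1. The tabular parameterization, toy data and constants are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, H = 5, 3, 4
sigma, lam2, Sigma0, alpha = 1.0, 1.0, 1.0, 0.5   # noise scale, prior rate on beta, prior precision, mixing weight

# Toy transition tuples (h, s, a, s', reward); the "offline" ones stand in for expert demonstrations.
def toy_data(n):
    return [(int(rng.integers(H)), int(rng.integers(S)), int(rng.integers(A)),
             int(rng.integers(S)), float(rng.uniform())) for _ in range(n)]
offline, online = toy_data(60), toy_data(40)

# Bayesian-bootstrap perturbations: z ~ N(0, sigma^2) on rewards, w ~ Exp(1) on offline action terms,
# and a prior sample Q_tilde standing in for theta_tilde ~ N(0, Sigma_0^{-1}).
z_off = rng.normal(0.0, sigma, len(offline))
z_on = rng.normal(0.0, sigma, len(online))
w = rng.exponential(1.0, len(offline))
Q_tilde = rng.normal(0.0, 1.0 / np.sqrt(Sigma0), size=(H + 1, S, A))

def td_term(Q, data, z):                  # squared temporal-difference errors (the RLSVI / online-RL part)
    total = 0.0
    for (h, s, a, s2, r), zk in zip(data, z):
        total += 0.5 / sigma**2 * (r + zk + Q[h + 1, s2].max() - Q[h, s, a]) ** 2
    return total

def imitation_term(Q, beta, data, w):     # negative log-likelihood of the expert's offline actions
    total = 0.0
    for (h, s, a, _, _), wk in zip(data, w):
        total -= wk * (beta * Q[h, s, a] - np.log(np.exp(beta * Q[h, s]).sum()))
    return total

def interpolated_loss(Q, beta):           # cf. equation 11: alpha * L_ORL + (1 - alpha) * L_IL + lam2 * beta
    l_orl = td_term(Q, offline, z_off) + td_term(Q, online, z_on) + 0.5 * Sigma0 * np.sum((Q - Q_tilde) ** 2)
    l_il = imitation_term(Q, beta, offline, w)
    return alpha * l_orl + (1.0 - alpha) * l_il + lam2 * beta

# Entropy-based plug-in estimate of beta (Remark 4.1): beta_hat = c0 / H(empirical action distribution).
c0 = 1.0
counts = np.bincount([a for (_, _, a, _, _) in offline], minlength=A)
p_a = counts / counts.sum()
entropy = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
beta_hat = c0 / max(entropy, 1e-8)

Q0 = np.zeros((H + 1, S, A))
print("entropy-based beta estimate:", round(float(beta_hat), 3))
print("interpolated loss at Q = 0:", round(float(interpolated_loss(Q0, beta_hat)), 2))
```

Setting \(\alpha=1\) recovers an RLSVI-style online objective, while \(\alpha=0\) reduces to pure imitation of the offline actions, reflecting the bridging interpretation above.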
Deep sea is an episodic reinforcement learning problem with state space \(\mathcal{S}=\{0,1,\ldots,M\}^{2}\) and, where \(M\) is its size. The state at period \(h\) in episode \(t\) is \(s^{t}_{h}=\left(x^{t}_{h},d^{t}_{h}\right)\in\mathcal{S}\), where \(x^{t}_{h}=0,1,\ldots,M\) is the horizontal position while \(d^{t}_{h}=0,1,\ldots,M\) is the depth (vertical position). Its action space is \(\mathcal{A}=\{\texttt{left},\texttt{right}\}\) and time horizon length is \(H=M\). Its reward function is as follows: If the agent chooses an action right in period \(h<H\), then it will receive a reward \(-0.1/M\), which corresponds to a "small cost"; If the agent successfully arrives at state \((M,M)\) in period \(H=M\), then it will receive a reward \(1\), which corresponds to a "big bonus"; otherwise, the agent will receive reward \(0\). The system dynamics are as follows: for period \(h<H\), the agent's depth in the next period is always increased by \(1\), i.e., \(d^{t}_{h+1}=d^{t}_{h}+1\). For the agent's horizontal position, if \(a^{t}_{h}=\texttt{left}\), then \(x^{t}_{h+1}=\max\{x^{t}_{h}-1,0\}\), i.e., the agent will move left if possible. On the other hand, if \(a^{t}_{h}=\texttt{right}\), then we have \(x^{t}_{h+1}=\min\{x^{t}_{h}+1,M\}\) with prob. \(1-1/M\) and \(x^{t}_{h+1}=x^{t}_{h}\) with prob. \(1/M\). The initial state of this environment is fixed at state \((0,0)\). The offline dataset is generated based on the expert's policy specified in Eq. equation 2, and we assume \(\beta(s)=\beta\) (a constant) across all states. We set the size of the offline dataset \(\mathcal{D}_{0}\) as \(|\mathcal{D}_{0}|=\kappa|\mathcal{A}||\mathcal{S}|\), where \(\kappa\geq 0\) is referred to as _data ratio_. We fix the size of deep sea as \(M=10\). We run the experiment for \(T=300\) episodes, and the empirical cumulative regrets are averaged over \(50\) simulations. The experimental results are illustrated in Figure 1, as well as Figure 3 in Appendix C. Specifically, Figure 1 plots the cumulative regret in the first \(T=300\) episodes as a function of the expert's deliberateness \(\beta\), for two different data ratio \(\kappa=1,5\). There are several interesting observations based on Figure 1: (i) Figure 1 shows that iRLSVI and piRLSVI tend to perform much better than uRLSVI, which demonstrates the advantages of exploiting the offline dataset, and this improvement tends to be more dramatic with a larger offline dataset. (ii) When we compare iRLSVI and piRLSVI, we note that their performance is similar when \(\beta\) is small, but iRLSVI performs much better than piRLSVI when \(\beta\) is large. This is because when \(\beta\) is small, the expert's generative policy does not contain much information; and as \(\beta\) gets larger, it contains more information and eventually it behaves like imitation learning and learns the optimal policy as \(\beta\to\infty\). Note that the error bars denote the standard errors of the empirical cumulative regrets, hence the improvements are statistically significant. **Robustness to misspecification of \(\beta\).** We also investigate the robustness of various RLSVI agents with respect to the possible misspecification of \(\beta\). In particular, we demonstrate empirically that in the deep sea environment with \(M=10\), with offline dataset is generated by an expert with deliberateness \(\beta=5\), the iRLSVI agent is quite robust to moderate misspecification. Here, the misspecified deliberateness parameter is denoted \(\tilde{\beta}\). 
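Both experiments use the deep sea dynamics described above. For reference, the following is a minimal sketch of the environment as described in the text; the class structure and names are illustrative, not the authors' implementation.

```python
import numpy as np

class DeepSea:
    """Deep sea environment as described above (size M, horizon H = M).

    States are (x, d) = (horizontal position, depth); actions: 0 = left, 1 = right.
    Choosing right costs 0.1/M per period; reaching (M, M) at the final period pays 1.
    The depth always increases by one, and a right move slips with probability 1/M.
    This is an illustrative reconstruction from the text, not the authors' code.
    """

    def __init__(self, M=10, rng=None):
        self.M = M
        self.rng = rng or np.random.default_rng()

    def reset(self):
        self.x, self.d = 0, 0
        return (self.x, self.d)

    def step(self, action):
        M = self.M
        reward = -0.1 / M if action == 1 else 0.0
        if action == 0:                              # left always succeeds
            self.x = max(self.x - 1, 0)
        elif self.rng.random() > 1.0 / M:            # right succeeds with probability 1 - 1/M
            self.x = min(self.x + 1, M)
        self.d += 1
        done = self.d == M
        if done and self.x == M:
            reward += 1.0                            # big bonus for arriving at (M, M)
        return (self.x, self.d), reward, done

# One episode under the always-right policy (optimal up to slips).
env = DeepSea(M=10, rng=np.random.default_rng(3))
state, total, done = env.reset(), 0.0, False
while not done:
    state, reward, done = env.step(1)
    total += reward
print("final state:", state, " episodic return:", round(total, 3))
```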
The empirical results are illustrated in Figure 2, where the experiment is run for \(T=300\) episodes and the empirical cumulative regrets are averaged over 50 simulations. Since uRLSVI and piRLSVI do not use the parameter \(\tilde{\beta}\), their performance is, as expected, constant over \(\tilde{\beta}\). On the other hand, iRLSVI explicitly uses parameter \(\tilde{\beta}\). As Figure 2 shows, the performance of iRLSVI does not vary much as long as \(\tilde{\beta}\) has the same order of magnitude as \(\beta\). However, there will be significant performance loss when \(\tilde{\beta}\) is too small, especially when the data ratio is also small. This makes sense since when \(\tilde{\beta}\) is too small, iRLSVI will choose to ignore all the information about the generative policy and eventually reduces to piRLSVI.

Figure 1: Cumulative regret vs. \(\beta\) in deep sea.

Figure 2: Robustness of iRLSVI to misspecification.

## 6 Conclusions

In this paper, we have introduced and studied a new problem: Given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance in MDPs? We have followed a principled approach and introduced two algorithms: the ideal iPSRL algorithm, and the iRLSVI algorithm that is computationally practical and seamlessly bridges online RL and imitation learning in a very natural way. We have shown significant reduction in regret, both empirically and theoretically, as compared to two natural baselines. The dependence of the regret bound on some of the parameters (e.g., \(H\)) could be improved upon, and is a good direction for future work. In future work, we will also combine the iRLSVI algorithm with deep learning to leverage offline datasets effectively for continuous state and action spaces as well.
2310.11251
Smallest denominators
We establish higher dimensional versions of a recent theorem by Chen and Haynes [Int. J. Number Theory 19 (2023), 1405-1413] on the expected value of the smallest denominator of rational points in a randomly shifted interval of small length, and of the closely related 1977 Kruyswijk-Meijer conjecture recently proved by Balazard and Martin [Bull. Sci. Math. 187 (2023), Paper No. 103305]. We express the distribution of smallest denominators in terms of the void statistics of multidimensional Farey fractions and prove convergence of the distribution function and certain finite moments. The latter was previously unknown even in the one-dimensional setting. We furthermore obtain a higher dimensional extension of Kargaev and Zhigljavsky's work on moments of the distance function for the Farey sequence [J. Number Theory 65 (1997), 130-149] as well as new results on pigeonhole statistics.
Jens Marklof
2023-10-17T13:19:13Z
http://arxiv.org/abs/2310.11251v2
# Smallest denominators ###### Abstract We establish higher dimensional versions of a recent theorem by Chen and Haynes [Int. J. Number Theory 19 (2023), 1405-1413] on the expectation value of the smallest denominator of rational points in a randomly shifted interval of small length, and of the closely related 1977 Kruyswijk-Meijer conjecture recently proved by Balazard and Martin [Bull. Sci. Math. 187 (2023), Paper No. 103305]. We express the distribution of smallest denominators in terms of the void statistics of multidimensional Farey fractions and prove convergence of the distribution function and certain finite moments. The latter was previously unknown even in the one-dimensional setting. We furthermore obtain a higher dimensional extension of Kargaev and Zhigljavsky's work on moments of the distance function for the Farey sequence [J. Number Theory 65 (1997), 130-149] as well as new results on pigeonhole statistics. Research supported by EPSRC grant EP/S024948/1. Data supporting this study are included within the article. MSC (2020): 11K60, 11J13, 37A17 ## 1. Introduction (the one-dimensional case) Motivated by Meiss and Sander's recent paper [21] (which we will return to in Section 6), Chen and Haynes [5] investigate the smallest denominator of all fractions in a small interval of length \(\delta\) with random center \(x\), \[q_{\min}(x,\delta)=\min\left\{q\in\mathbb{N}:\exists\frac{p}{q}\in\mathbb{Q}\cap\left(x-\frac{\delta}{2},x+\frac{\delta}{2}\right)\right\}. \tag{1.1}\] Their main results are (a) an explicit formula for the distribution for fixed \(\delta\) and \(x\) uniformly distributed in the unit interval, and (b) the asymptotics of the expectation of \(q_{\min}(x,\delta)\) as \(\delta\to 0\), which they show is \(\frac{16}{\pi^{2}}\delta^{-1/2}+O(\log^{2}\delta)\). We will see below that the statistics of \(q_{\min}(x,\delta)\) is in fact given by a scaled version of the Hall distribution for the gaps between Farey fractions. This complements recent work of Artiles [1] who proved the existence of the limit distribution using dynamics on the space of lattices. (We will comment on the link between the two approaches at the end of this introduction.) Farey fractions of level \(Q\) are defined as the finite set \[\mathscr{F}_{Q}=\left\{\frac{p}{q}\in[0,1):(p,q)\in\widehat{\mathbb{Z}}^{2},\,0<q\leq Q\right\}, \tag{1.2}\] where \(\widehat{\mathbb{Z}}^{2}\) denotes the set of primitive lattice points, i.e., integer vectors with coprime coordinates. The number of elements is asymptotically \(\#\mathscr{F}_{Q}\sim\sigma_{Q}:=\frac{3}{\pi^{2}}Q^{2}\) as \(Q\to\infty\). The Hall distribution \(H(s)\)[10] describes the relative frequency of gaps in \(\mathscr{F}_{Q}\) of size larger than \(s\sigma_{Q}^{-1}\) as \(Q\to\infty\); see [18] for the relevant background. We have the explicit formula \[H(s)=\begin{cases}1&\text{if }t\in[1,\infty)\\ -1+2t-2t\log t&\text{if }t\in[\frac{1}{4},1]\\ -1+2t+2\sqrt{\frac{1}{4}-t}-4t\log\left(\frac{1}{2}+\sqrt{\frac{1}{4}-t}\right)&\text{if }t\in[0,\frac{1}{4}],\end{cases} \tag{1.3}\] with shorthand \(t=(\frac{\pi^{2}}{3}s)^{-1}\). Formula (1.3) was rediscovered by Kargaev and Zhigljavsky in their study of the void distribution of \(\mathscr{F}_{Q}\)[12, Theorem 1.2 and Lemma 2.6]. There is now an extensive literature on the statistical properties of Farey fractions in dimension one, see [3] and references therein. Our first observation is the following.
**Proposition 1**.: _For any interval \(\mathcal{D}\subset[0,1]\) and \(L>0\), we have_ \[\lim_{\delta\to 0}\operatorname{vol}\left\{x\in\mathcal{D}:\delta^{1/2}q_{\min}(x,\delta)>L\right\}=\operatorname{vol}\mathcal{D}\int_{L}^{\infty}\eta(s)\,ds \tag{1.4}\] _with the probability density_ \[\eta(s)=\tfrac{6}{\pi^{2}}\,s\,H(\tfrac{3}{\pi^{2}}s^{2}). \tag{1.5}\] Proof.: We have \[q_{\min}(x,\delta)>L\delta^{-1/2}\Leftrightarrow\left\{(p,q)\in\widehat{\mathbb{Z}}^{2}:0<q\leq L\delta^{-1/2},\tfrac{p}{q}\in\left(x-\tfrac{\delta}{2},x+\tfrac{\delta}{2}\right)\right\}=\emptyset. \tag{1.6}\] Now, for the choice \(Q=L\delta^{-1/2}\), \(s=\tfrac{3}{\pi^{2}}L^{2}\), the right hand side of (1.6) is equivalent to \[\mathscr{F}_{Q}\cap\left(x-\tfrac{s}{2\sigma_{Q}},x+\tfrac{s}{2\sigma_{Q}}\right)+\mathbb{Z}=\emptyset. \tag{1.7}\] As proved in [12] for \(\mathcal{D}=[0,1]\), and in [18] for general \(\mathcal{D}\), the Lebesgue measure of the set of \(x\in\mathcal{D}\) satisfying (1.7) has a limit, namely the void distribution \(P(0,[-\tfrac{s}{2},\tfrac{s}{2}])=P(0,[0,s])\) in the notation of [18]. Note that the limit is independent of the choice of \(\mathcal{D}\). It is a general fact that the derivative of the void distribution yields the gap distribution [17], \[-\frac{d}{ds}P(0,[0,s])=P_{0}(0,[0,s]), \tag{1.8}\] which in the present case is the classic Hall distribution \(H(s)\)[18]. Both distributions are continuous and equal to \(1\) at \(s=0\), so integrating (1.8) yields \[P(0,[0,s])=\int_{s}^{\infty}P_{0}(0,[0,s^{\prime}])\,ds^{\prime}. \tag{1.9}\]

Figure 1. The limit density \(\eta(s)\) compared to the distribution of the smallest denominator of rationals in each interval \([\tfrac{j}{3000},\tfrac{j+1}{3000})\), \(j=0,\dots,2999\), cf. Section 3. The same law describes the shortest cycle length of a large random circulant directed graph of (in- and out-) degree 2 [20].

The limit in (1.4) is therefore \[\int_{\frac{3}{\pi^{2}}L^{2}}^{\infty}\!\!H(s)\,ds, \tag{1.10}\] and the formula for the limit density \(\eta(s)\) follows by differentiation. From (1.3) and (1.5) we deduce the explicit formula \[\eta(s)=\frac{6}{\pi^{2}}\times\begin{cases}s&\text{if $s\in[0,1]$}\\ -s+2s^{-1}+4s^{-1}\log s&\text{if $s\in[1,2]$}\\ -s+2s^{-1}+2s\sqrt{\frac{1}{4}-s^{-2}}-4s^{-1}\log\left(\frac{1}{2}+\sqrt{\frac{1}{4}-s^{-2}}\right)&\text{if $s\geq 2$,}\end{cases} \tag{1.11}\] see Figure 1. Note that \(H(s)\sim\frac{36}{\pi^{4}}s^{-2}\) for \(s\) large, and hence \(\eta(s)=\frac{6}{\pi^{2}}\,sH(\frac{3}{\pi^{2}}s^{2})\sim\frac{24}{\pi^{2}}s^{-3}\). Interestingly, \(\eta(s)\) also describes the distribution of the shortest cycle length of a large random circulant directed graph of (in- and out-) degree \(2\)[20, Figure 5 and Eq. (5.19)]. We extend Proposition 1 to rational points in arbitrary dimensions in Section 2, Proposition 3. The key ingredients here are limit theorems for the fine-scale statistics of multidimensional Farey fractions [18]. Our next result is an extension of the Chen-Haynes asymptotics for the expectation value to general (but small) moments. **Proposition 2**.: _For any interval \(\mathcal{D}\subset[0,1]\) and \(\alpha\in\mathbb{C}\) with \(|\operatorname{Re}\alpha|<2\), we have_ \[\lim_{\delta\to 0}\delta^{\alpha/2}\!\int_{\mathcal{D}}q_{\min}(x,\delta)^{\alpha}dx=\operatorname{vol}\mathcal{D}\,M(\alpha),\quad\text{with}\quad M(\alpha)=\int_{0}^{\infty}s^{\alpha}\eta(s)\,ds.
\tag{1.12}\] It is interesting that the convergence of moments of smallest denominators is not an immediate corollary of the convergence of moments for the void distribution of Farey fractions proved in [12], even though the limits coincide. We will prove Proposition 2 as a special case of Proposition 4 (valid in any dimension) in Section 2. Ref. [12] provides explicit formulas and asymptotics for the moments of the void statistics. In particular, for \(|\operatorname{Re}\alpha|<2\), these yield \[M(\alpha)=\frac{3}{\pi^{2}}\int_{0}^{\infty}t^{-(\alpha+4)/2}F(t)dt=\frac{6}{\pi^{2}(\alpha+2)}\int_{0}^{1}t^{-(\alpha+2)/2}dF(t), \tag{1.13}\] where \(F(t)=H(s)\) is the function on the right hand side of (1.3). The last integral in (1.13) is computed in [12, Lemma 2.6], and we obtain for \(|\operatorname{Re}\alpha|<2\) \[M(\alpha)=\frac{24}{\pi^{2}\alpha(\alpha+2)}\left(\frac{2}{\alpha}+2^{\alpha}\operatorname{B}\left(-\frac{\alpha}{2},\frac{1}{2}\right)\right), \tag{1.14}\] where \(\operatorname{B}(x,y)\) is the beta function (Euler's integral of the first kind); see Figure 2 for a plot of \(M(\alpha)\), and Figure 3 in Section 3 for a comparison with numerical data.

Figure 2. The function \(M(\alpha)\), with the height of the graph representing its absolute value and the colour its argument.

For \(\alpha=1\) the above expression evaluates to \(\frac{16}{\pi^{2}}\), which is the constant found by Chen and Haynes [5]. The remainder of this paper is organised as follows. Following the same argument as in dimension one outlined above, we translate in Section 2 the problem of smallest denominators in small subsets of \(\mathbb{R}^{n}\) to the statistics of multidimensional Farey fractions. We then apply the setting in [18] and use equidistribution and escape-of-mass estimates for group actions on the space of lattices. For the distribution for rationals in sets with random center, the relevant action on the space of lattices is the \(\mathbb{R}^{n}\)-action by the horospherical subgroup. If we move to the setting of the Kruyswijk-Meijer conjecture [14, 23], which was proved recently by Balazard and Martin [2], then the Lebesgue integral is replaced by a discrete average and, as we will explain in Section 3, the relevant action is a \(\mathbb{Z}^{n}\)-action by the "time-one" map of the horospherical subgroup. This leads to the proof of convergence of the full distribution function, and we will see that the limit is the same as in the case of continuous sampling. It also provides an alternative proof of the Kruyswijk-Meijer conjecture, including extensions to other moments and again to higher dimensions. The role of the void statistics for Farey fractions is now replaced by the so-called pigeonhole statistics, which is of independent interest and the content of Section 4. We will use a similar strategy of proof as recently employed by Pattison [22] for the pigeonhole statistics of \(\sqrt{n}\) mod \(1\). In Section 5 we discuss moments of the distance function for the multidimensional Farey sequence, thus extending results of Kargaev and Zhigljavsky [12]. Section 6 concludes this study with a limit theorem for the Meissner distribution [21] for minimal resonances in volume-preserving maps, which was the original motivation for Chen and Haynes [5]. The interpretation of smallest denominators in terms of the space of lattices was recently pointed out by Artiles [1].
Artiles proves convergence of the distribution function (an analogue of Proposition 3), using the strategy developed by Strombergsson and the author in [19] for more general lattice point problems concerning thin randomly sheared or rotated domains. Ref. [19] includes an application to directional statistics for visible lattice points in arbitrary dimension and also formed the basis for the study of multidimensional Farey fractions in [18]. The equivalence (1.6)-(1.7) (cf. also (2.7) below) thus explains the link between [1] and [18], both of which use equidistribution of closed horospheres to establish limit theorems for smallest denominators and Farey statistics, respectively. For a generalisation of the results in [18] to Farey fractions subject to congruence conditions see Heersink [11]. More restrictive constraints related to thin groups are discussed in the work of Lutsko [15]. The present paper does not include any discussion of rates of convergence, although there is no principal obstruction in obtaining these since the horospherical equidistribution results we use here are available with precise error terms. ## 2. Smallest denominators for multidimensional fractions Define the set of \(n\)-dimensional Farey fractions of level \(Q\geq 1\) (\(Q\) not necessarily an integer) by \[\mathcal{F}_{Q}=\Big{\{}\frac{\boldsymbol{p}}{q}\in[0,1)^{n}:(\boldsymbol{p},q) \in\widehat{\mathbb{Z}}^{n+1},\,0<q\leq Q\Big{\}}. \tag{2.1}\] For large \(Q\), we have \[\#\mathcal{F}_{Q}\sim\sigma_{Q}:=\frac{Q^{n+1}}{(n+1)\zeta(n+1)}. \tag{2.2}\] Given a bounded set \(\mathcal{A}\subset\mathbb{R}^{n}\) with boundary of Lebesgue measure zero and non-empty interior, define \[q_{\min}(\boldsymbol{x},\delta,\mathcal{A})=\min\Big{\{}q\in\mathbb{N}:\exists \frac{\boldsymbol{p}}{q}\in\mathbb{Q}^{n}\cap\boldsymbol{x}+\delta\mathcal{A} \Big{\}}\,. \tag{2.3}\] Assuming \(\mathcal{A}\) has non-empty interior ensures the minimum exists. Set furthermore \(G=\operatorname{SL}(n+1,\mathbb{R})\), \(\Gamma=\operatorname{SL}(n+1,\mathbb{Z})\), and \[P(0,\mathcal{A})=\mu\{g\in\Gamma\setminus G:\widehat{\mathbb{Z}}^{n+1}g\cap \mathfrak{C}(\mathcal{A})=\emptyset\}, \tag{2.4}\] where \(\mu\) is the Haar probability measure on \(\Gamma\setminus G\) and \[\mathfrak{C}(\mathcal{A})=\{(\boldsymbol{x},y)\in\mathbb{R}^{n}\times(0,1]: \boldsymbol{x}\in\sigma_{1}^{-1/n}y\mathcal{A}\}\subset\mathbb{R}^{n+1} \tag{2.5}\] is a cone with cross section \(\mathcal{A}\). **Proposition 3**.: _For \(\mathcal{A}\subset\mathbb{R}^{n}\) bounded and \(\mathcal{D}\subset[0,1]^{n}\), both with boundary of Lebesgue measure zero and non-empty interior, \(L>0\), we have_ \[\lim_{\delta\to 0}\frac{\operatorname{vol}\big{\{}\boldsymbol{x}\in\mathcal{D} :\delta^{n/(n+1)}q_{\min}(\boldsymbol{x},\delta,\mathcal{A})>L\big{\}}}{ \operatorname{vol}\mathcal{D}}=E_{\mathcal{A}}(L) \tag{2.6}\] _with \(E_{\mathcal{A}}(L)=P(0,\sigma_{1}^{1/n}L^{1+1/n}\mathcal{A})\)._ Proof.: By the same token as in the one-dimensional case, we have \[q_{\min}(\boldsymbol{x},\delta,\mathcal{A})>L\delta^{-n/(n+1)}\Leftrightarrow \mathcal{F}_{Q}\cap\boldsymbol{x}+\sigma_{Q}^{-1/n}s\mathcal{A}+\mathbb{Z}^{n }=\emptyset, \tag{2.7}\] with \(Q=L\delta^{-n/(n+1)}\) and \(s=\sigma_{1}^{1/n}L^{1+1/n}\). Theorem 3 in [18] (which is based on the results in [19]; see also Proposition 7 in Section 4) states that the volume of the set of \(\boldsymbol{x}\in\mathcal{D}\) satisfying (2.7) converges to \(P(0,s\mathcal{A})\). 
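As a numerical sanity check of the definitions above, the following minimal sketch (assuming numpy; the brute-force search, sample sizes and the choice \(\mathcal{A}=[0,1)^{n}\) are illustrative) computes \(q_{\min}\) directly, compares the empirical law of \(\delta^{1/2}q_{\min}\) for \(n=1\) with the explicit density (1.11), and illustrates the \(\delta^{n/(n+1)}\) scaling of Proposition 3 for \(n=2\).

```python
import numpy as np
from math import ceil

def q_min(x, delta):
    """Brute-force smallest denominator: least q >= 1 such that prod_i [x_i, x_i + delta)
    contains a rational point p/q; cf. (1.1) and (2.3) with the illustrative choice A = [0,1)^n.
    A coordinate admits p_i/q in [x_i, x_i + delta) iff ceil(q*x_i) < q*(x_i + delta)."""
    q = 1
    while True:
        if all(ceil(q * xi) < q * (xi + delta) for xi in x):
            return q
        q += 1

def eta(s):
    """The limit density (1.11) for n = 1."""
    s = np.asarray(s, dtype=float)
    ss = np.maximum(s, 1e-9)                 # avoids division warnings; the correct branch is selected below
    mid = -ss + 2.0 / ss + 4.0 * np.log(ss) / ss
    root = np.sqrt(np.maximum(0.25 - 1.0 / ss**2, 0.0))
    top = -ss + 2.0 / ss + 2.0 * ss * root - 4.0 * np.log(0.5 + root) / ss
    return 6.0 / np.pi**2 * np.where(s <= 1.0, s, np.where(s <= 2.0, mid, top))

# Normalization and first moment of eta, cf. (1.12) and (1.14) at alpha = 1 (integral truncated at 2*10^4).
s = np.arange(0.0, 2.0e4, 0.01)
w = np.full(s.size, 0.01)
w[0] *= 0.5
w[-1] *= 0.5
print("int eta(s) ds   ~", round(float(np.sum(eta(s) * w)), 4), "(should be close to 1)")
print("int s eta(s) ds ~", round(float(np.sum(s * eta(s) * w)), 4), "vs 16/pi^2 =", round(16 / np.pi**2, 4))

# Empirical check of Proposition 1 (n = 1): the law of delta^{1/2} q_min is governed by eta.
rng = np.random.default_rng(0)
delta = 1e-5
samples = np.array([q_min([x], delta) for x in rng.random(2000)]) * delta**0.5
for L in (1.0, 2.0, 3.0):
    predicted = float(np.sum(eta(s[s >= L]) * w[s >= L]))
    print(f"n=1, L={L}: empirical tail {np.mean(samples > L):.3f} vs predicted {predicted:.3f}")

# Illustration of Proposition 3 (n = 2): delta^{n/(n+1)} q_min concentrates on a scale of order one.
delta2 = 1e-3
samples2 = np.array([q_min(rng.random(2), delta2) for _ in range(300)]) * delta2 ** (2.0 / 3.0)
print("n=2, median of delta^{2/3} q_min:", round(float(np.median(samples2)), 3))
```

The two printed integrals should come out close to \(1\) and \(16/\pi^{2}\), in agreement with (1.12) and (1.14) at \(\alpha=1\); the brute-force search is only practical for moderate values of \(\delta^{-1}\).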
In dimension \(n>1\) we have no explicit expressions for \(E_{\mathcal{A}}(L)\). We know it is continuous in \(L\), and continuously differentiable if \(\mathcal{A}\) is a ball, see [19, Remark 2.6]. If \(\mathcal{A}\) is a fixed ball, we have \(P(0,s\mathcal{A})\simeq s^{-n}\) as \(s\to\infty\), see [24, Section 1.3]. We can obtain upper (resp. lower) tail estimates for general bounded \(\mathcal{A}\) with non-empty interior by using a ball that is contained in (resp. contains) \(\mathcal{A}\). This yields \(E_{\mathcal{A}}(L)\simeq L^{-(n+1)}\) for \(L\to\infty\). For \(n=1\) this is consistent with the tail of \(E_{(-\frac{1}{2},\frac{1}{2})}(L)=\int_{L}^{\infty}\eta(s)\,ds\). Let us now turn to the convergence of moments. **Proposition 4**.: _For \(\mathcal{A}\subset\mathbb{R}^{n}\) bounded and \(\mathcal{D}\subset[0,1]^{n}\), both with boundary of Lebesgue measure zero and non-empty interior, \(\alpha\in\mathbb{C}\) with \(|\operatorname{Re}\alpha|<n+1\), we have_ \[\lim_{\delta\to 0}\frac{\delta^{\alpha n/(n+1)}}{\operatorname{vol}\mathcal{D}}\int_{\mathcal{D}}q_{\min}(\boldsymbol{x},\delta,\mathcal{A})^{\alpha}d\boldsymbol{x}=\int_{0}^{\infty}L^{\alpha}\,dE_{\mathcal{A}}(L). \tag{2.8}\] Note that for \(n=1\), Proposition 4 specialises to Proposition 2. The proof will require the notion of a Siegel set defined as follows. For \[\boldsymbol{u}=(u_{12},\ldots,u_{1(n+1)},u_{23},\ldots,u_{2(n+1)},\ldots,u_{n(n+1)})\in\mathbb{R}^{n(n+1)/2}\] and \[\boldsymbol{v}=(v_{1},v_{2},\ldots,v_{n+1})\in\mathcal{T},\quad\text{with}\quad\mathcal{T}=\{(v_{1},\ldots,v_{n+1})\in\mathbb{R}_{>0}^{n+1}:v_{1}\cdots v_{n+1}=1\},\] let \[n(\boldsymbol{u}):=\begin{pmatrix}1&u_{12}&\cdots&u_{1(n+1)}\\ &\ddots&&\vdots\\ &&1&u_{n(n+1)}\\ &&&1\end{pmatrix},\quad a(\boldsymbol{v}):=\begin{pmatrix}v_{1}&&\\ &v_{2}&&\\ &&\ddots&\\ &&&v_{n+1}\end{pmatrix}. \tag{2.9}\] The Iwasawa decomposition of \(g\in G\) is then given by \[g=n(\boldsymbol{u})a(\boldsymbol{v})k, \tag{2.10}\] where \(\boldsymbol{u}\in\mathbb{R}^{n(n+1)/2}\), \(\boldsymbol{v}\in\mathcal{T}\) and \(k\in\operatorname{SO}(n+1)\). The Siegel set \[\mathcal{S}_{\Gamma}:=\left\{n(\boldsymbol{u})a(\boldsymbol{v})k:\boldsymbol{u}\in[-\tfrac{1}{2},\tfrac{1}{2}]^{n(n+1)/2},\,0<v_{j+1}\leq\frac{2}{\sqrt{3}}v_{j},\,k\in\operatorname{SO}(n+1)\right\}\subset G \tag{2.11}\] has the property that it contains a fundamental domain \(\mathcal{F}_{\Gamma}\subset G\) of the \(\Gamma\)-action and can be covered with a finite number of \(\Gamma\)-translates of \(\mathcal{F}_{\Gamma}\). We fix \(\mathcal{F}_{\Gamma}\) and set \(v_{j}(\Gamma g)=v_{j}(g)=v_{j}\), with \(g=n(\boldsymbol{u})a(\boldsymbol{v})k\in\mathcal{F}_{\Gamma}\). Proof when \(\operatorname{Re}\alpha=0\).: Proposition 3 implies (and, given the continuity of \(E_{\mathcal{A}}(L)\) in \(L\), is in fact equivalent to) the statement that for any bounded continuous function \(h:\mathbb{R}_{\geq 0}\to\mathbb{C}\), \[\lim_{\delta\to 0}\frac{1}{\operatorname{vol}\mathcal{D}}\int_{\mathcal{D}}h\left(\delta^{n/(n+1)}q_{\min}(\boldsymbol{x},\delta,\mathcal{A})\right)d\boldsymbol{x}=\int_{0}^{\infty}h(L)\,dE_{\mathcal{A}}(L). \tag{2.12}\] Now take \(h(x)=x^{\alpha}\) and the claim is proved. Proof when \(\operatorname{Re}\alpha>0\).: We have \[\delta^{\alpha n/(n+1)}\int_{\mathcal{D}}q_{\min}(\boldsymbol{x},\delta,\mathcal{A})^{\alpha}d\boldsymbol{x}=\alpha\int_{0}^{\infty}L^{\alpha-1}\operatorname{vol}\left\{\boldsymbol{x}\in\mathcal{D}:\delta^{n/(n+1)}q_{\min}(\boldsymbol{x},\delta,\mathcal{A})>L\right\}dL.
\tag{2.13}\] In view of Proposition 3, for any \(R>r>0\), \[\lim_{\delta\to 0}\int_{r}^{R}L^{\alpha-1}\operatorname{vol}\left\{ \boldsymbol{x}\in\mathcal{D}:\delta^{n/(n+1)}q_{\min}(\boldsymbol{x},\delta, \mathcal{A})>L\right\}dL=\operatorname{vol}\mathcal{D}\int_{r}^{R}L^{\alpha-1 }E_{\mathcal{A}}(L)\,dL. \tag{2.14}\] Therefore, all that remains to be shown is that \[\lim_{R\to\infty}\limsup_{\delta\to 0}\int_{R}^{\infty}L^{\operatorname{Re} \alpha-1}\operatorname{vol}\left\{\boldsymbol{x}\in\mathcal{D}:\delta^{n/(n+1) }q_{\min}(\boldsymbol{x},\delta,\mathcal{A})>L\right\}dL=0, \tag{2.15}\] \[\lim_{r\to 0}\limsup_{\delta\to 0}\int_{0}^{r}L^{\operatorname{Re} \alpha-1}\operatorname{vol}\left\{\boldsymbol{x}\in\mathcal{D}:\delta^{n/(n+1) }q_{\min}(\boldsymbol{x},\delta,\mathcal{A})>L\right\}dL=0. \tag{2.16}\] Relation (2.16) is immediate since the integrand is bounded above by \(L^{\operatorname{Re}\alpha-1}\). We will establish (2.15) by proving that there is a constant \(C\) such that for all \(\delta>0\), \(L\geq 1\), we have \[\operatorname{vol}\left\{\boldsymbol{x}\in[0,1]^{n}:\delta^{n/(n+1)}q_{\min}( \boldsymbol{x},\delta,\mathcal{A})>L\right\}\leq CL^{-(n+1)}. \tag{2.17}\] To this end, recall the observation (2.7) and furthermore (the starting point of [18]) that \[\mathcal{F}_{Q}\cap\boldsymbol{x}+\sigma_{Q}^{-1/n}s\mathcal{A}+\mathcal{I}^{ n}=\emptyset\Leftrightarrow\widehat{\mathbb{Z}}^{n+1}h(\boldsymbol{x})a(Q)\cap \mathfrak{C}(s\mathcal{A})=\emptyset, \tag{2.18}\] where \[h(\mathbf{x})=\begin{pmatrix}1_{n}&\mathbf{0}\\ -\mathbf{x}&1\end{pmatrix},\qquad a(y)=\begin{pmatrix}y^{1/n}1_{n}&\mathbf{0}\\ \mathbf{0}&y^{-1}\end{pmatrix}. \tag{2.19}\] With the choice \(Q=L\delta^{-n/(n+1)}\) and \(s=\sigma_{1}^{1/n}L^{1+1/n}\) this becomes \[\widehat{\mathbb{Z}}^{n+1}h(\mathbf{x})a(\delta^{-n/(n+1)})\cap\mathfrak{C}(\sigma _{1}^{1/n}L^{1+1/n}\mathcal{A})a(L^{-1})=\phi, \tag{2.20}\] where we note that \[\mathfrak{C}(\sigma_{1}^{1/n}L^{1+1/n}\mathcal{A})a(L^{-1})=L\mathfrak{C}( \sigma_{1}^{1/n}\mathcal{A}) \tag{2.21}\] is the homothetic dilation by \(L\) of the fixed cone \(\mathfrak{C}(\sigma_{1}^{1/n}\mathcal{A})\). Since \(\mathfrak{C}(\sigma_{1}^{1/n}\mathcal{A})\) is a cone with vertex at the origin, relation (2.20) is equivalent to \[(\mathbb{Z}^{n+1}\setminus\{0\})\,h(\mathbf{x})a(\delta^{-n/(n+1)})\cap L \mathfrak{C}(\sigma_{1}^{1/n}\mathcal{A})=\emptyset. \tag{2.22}\] Because \(\mathcal{A}\) has non-empty interior, \(\mathfrak{C}(\sigma_{1}^{1/n}\mathcal{A})\) contains an open ball \(\mathcal{B}_{0}\) of radius \(r_{0}>0\) not containing the origin, and hence also \(L\mathcal{B}_{0}\subset L\mathfrak{C}(\sigma_{1}^{1/n}\mathcal{A})\). Now, [24, Lemma 2.1] tells us, given \(r_{0}\) there is a constant \(r_{1}>0\) such that for all \(L>0\), we have \(v_{1}(g)\geq r_{1}L\) for any lattice \(\mathbb{Z}^{n+1}g\) with \(g=n(\mathbf{u})a(\mathbf{v})k\in\mathcal{S}_{\Gamma}\) which does not intersect a ball of radius \(r_{0}L\). The left hand side of (2.17) is thus bounded above by \[\operatorname{vol}\left\{\mathbf{x}\in[0,1]^{n}:v_{1}\big{(}\Gamma h(\mathbf{x})a(Q) \big{)}\geq r_{1}L\right\}. \tag{2.23}\] An upper bound for (2.23) follows from the proof of Proposition 5.1 (case B1) in [13]. For \(1\leq s\leq n\) and \(\underline{l}=(l_{1},\cdots,l_{s})\in\mathbb{Z}_{\geq 0}^{s}\), we set (cf. 
[13, (5.5)]) \[\Xi_{\underline{l}}^{s}:=\left\{g\in\mathcal{F}_{\Gamma}:s(g)=s,\delta_{n+1} 2^{l_{i}}<v_{i}(g)\leq\delta_{n+1}2^{l_{i}+1}\,(i=1,\cdots,s)\right\} \tag{2.24}\] with \(\delta_{d}=d4^{d}\) and \(s(g)\) is the largest \(i\) for which \(v_{i}(g)>1\). With this, the estimate leading to [13, (5.21)] shows that (2.23) is bounded above by \[\begin{split}\sum_{s=1}^{n}&\sum_{\underline{l}\in \mathbb{Z}_{\geq 0}^{s}\atop\delta_{n+1}2^{l_{1}+1}\geq r_{1}L}\operatorname{vol} \left\{\mathbf{x}\in[0,1]^{n}:\exists\gamma\in\Gamma\text{ s.t. }\gamma h(\mathbf{x})a(Q)\in\Xi_{\underline{l}}^{s}\right\}\\ &\ll\sum_{s=1}^{n}\sum_{\underline{l}\in\mathbb{Z}_{\geq 0}^{s} \atop\delta_{n+1}2^{l_{1}+1}\geq r_{1}L}\prod_{i=1}^{s}2^{-(n+1)l_{i}}\ll L^{ -(n+1)},\end{split} \tag{2.25}\] and therefore \[\operatorname{vol}\left\{\mathbf{x}\in[0,1]^{n}:v_{1}\big{(}\Gamma h(\mathbf{x})a(Q) \big{)}\geq r_{1}L\right\}\ll L^{-(n+1)}. \tag{2.26}\] This yields (2.17) and the proof for positive \(\operatorname{Re}\alpha\) is complete. Proof when \(\operatorname{Re}\alpha<0\).: We now write \[\delta^{an/(n+1)}\int_{\mathcal{D}}q_{\min}(\mathbf{x},\delta,\mathcal{A})^{a}d \mathbf{x}=-\alpha\int_{0}^{\infty}L^{a-1}\operatorname{vol}\left\{\mathbf{x}\in \mathcal{D}:\delta^{n/(n+1)}q_{\min}(\mathbf{x},\delta,\mathcal{A})\leq L\right\}dL. \tag{2.27}\] The argument is analogous to the previous case of positive \(\alpha\). We now need to establish \[\lim_{R\to\infty}\limsup_{\delta\to 0}\int_{R}^{\infty}L^{\operatorname{Re} \alpha-1}\operatorname{vol}\left\{\mathbf{x}\in\mathcal{D}:\delta^{n/(n+1)}q_{ \min}(\mathbf{x},\delta,\mathcal{A})\leq L\right\}dL=0, \tag{2.28}\] \[\lim_{r\to 0}\limsup_{\delta\to 0}\int_{0}^{r}L^{\operatorname{Re}\alpha-1} \operatorname{vol}\left\{\mathbf{x}\in\mathcal{D}:\delta^{n/(n+1)}q_{\min}(\mathbf{x}, \delta,\mathcal{A})\leq L\right\}dL=0. \tag{2.29}\] Here (2.28) is immediate since the integrant is bounded above by \(L^{\operatorname{Re}a-1}\), and it remains to check (2.29). Instead of (2.20) we must now satisfy \[\widehat{Z}^{n+1}h(\boldsymbol{x})a(\delta^{-n/(n+1)})\cap L\mathfrak{C}( \sigma_{1}^{\text{L/$n$}}\mathcal{A})\neq\emptyset, \tag{2.30}\] which leads us to the volume \(V(L,\delta^{-n/(n+1)})\) of \(\boldsymbol{x}\in[0,1]^{n}\) so that we have an element in \(\widehat{Z}^{n+1}h(\boldsymbol{x})a(\delta^{-n/(n+1)})\) of norm at most \(bL\), for some constant \(b\) depending only on the choice of \(\mathcal{A}\). 
If we set \(y=\delta^{-n/(n+1)}>1\) and denote by \(\chi_{R}\) the characteristic function of a ball of radius \(R=bL\) centered at the origin, then the desired volume is bounded above by \[\begin{split} V(L,y)&\leq\int_{[0,1]^{n}}\sum_{ \begin{subarray}{c}\boldsymbol{(p,q)}\in\mathbb{Z}^{n+1}\\ q\geq 0\end{subarray}}\chi_{bL}((\boldsymbol{p},q)h(\boldsymbol{x})a(y))d \boldsymbol{x}\\ &=\int_{[0,1]^{n}}\sum_{\begin{subarray}{c}\boldsymbol{(p,q)} \in\mathbb{Z}^{n}\\ q>0\end{subarray}}\chi_{bL}((\boldsymbol{p}-q\boldsymbol{x})y^{1/n},qy^{-1}) d\boldsymbol{x}\\ &=\int_{[0,1]^{n}}\sum_{\boldsymbol{m}\in\mathbb{Z}^{n}}\sum_{ \begin{subarray}{c}\boldsymbol{(p,q)}\in\mathbb{Z}^{n}\\ 0\leq p_{j}<q\end{subarray}}\chi_{bL}((\boldsymbol{p}+q\boldsymbol{m}-q \boldsymbol{x})y^{1/n},qy^{-1})d\boldsymbol{x}\\ &\leq\int_{\mathbb{R}^{n}}\sum_{q=1}^{\infty}q^{n}\chi_{bL}( \boldsymbol{x}qy^{1/n},qy^{-1})d\boldsymbol{x}\\ &=y^{-1}L^{n}\int_{\mathbb{R}^{n}}\sum_{q=1}^{\infty}\chi_{b}( \boldsymbol{x},(Ly)^{-1}q)\,d\boldsymbol{x}\\ &\ll L^{n+1}\end{split} \tag{2.31}\] where the implied constant is independent of \(0<L\leq 1\) and \(y>1\). We conclude \[\operatorname{vol}\left\{\boldsymbol{x}\in\mathcal{D}:\delta^{n/(n+1)}q_{ \min}(\boldsymbol{x},\delta,\mathcal{A})\leq L\right\}\ll L^{n+1} \tag{2.32}\] for all \(\delta>0\) and \(0<L\leq 1\). Hence (2.29) follows for \(-(n+1)<\operatorname{Re}\alpha<0\). ## 3. Discrete sampling So far we have considered \(\boldsymbol{x}\) as a random point uniformly distributed (with respect to the Lebesgue measure) in \(\mathcal{D}\subset[0,1]^{n}\). We will replace this with a discrete sampling over points \(\boldsymbol{x}_{j,N}=N^{-1}j\) in \(\mathcal{D}\), with \(j\) ranging over \(\mathbb{Z}^{n}\). We will also allow an additional shift by a fixed \(\boldsymbol{x}_{0}\in\mathbb{R}^{n}\). **Proposition 5**.: _For \(\mathcal{A}\subset\mathbb{R}^{n}\) bounded and \(\mathcal{D}\subset[0,1]^{n}\), both with boundary of Lebesgue measure zero and non-empty interior, \(L>0\), \(\boldsymbol{x}_{0}\in\mathbb{R}^{n}\), \(c>0\), we have_ \[\lim_{\begin{subarray}{c}\delta\to 0,N\to\infty\\ c\delta^{-1}\leq N\end{subarray}}\frac{\#\left\{\boldsymbol{j}\in\mathbb{Z}^{ n}\cap N\mathcal{D}:\delta^{n/(n+1)}q_{\min}(\boldsymbol{x}_{0}+N^{-1}\boldsymbol{j}, \delta,\mathcal{A})>L\right\}}{N^{n}\operatorname{vol}\mathcal{D}}=E_{ \mathcal{A}}(L) \tag{3.1}\] _with \(E_{\mathcal{A}}(L)\) as in Proposition 3._ Proof.: This follows from the same argument as the proof of Proposition 3, if we replace the convergence of the void statistics for Farey fractions from [18] with Proposition 7 in Section 4, a new result on pigeonhole statistics. In the one-dimensional case \(n=1\), with \(\delta=1/N\), \(\mathcal{D}=[0,1)\), \(\mathcal{A}=[0,1)\) and \(\boldsymbol{x}_{0}=0\), (3.1) simplifies to \[\lim_{N\to\infty}\frac{1}{N}\#\left\{j=0,\ldots,N-1:N^{-1/2}q_{\min}(N^{-1}j, N^{-1},[0,1))>L\right\}=\int_{L}^{\infty}\eta(s)ds. \tag{3.2}\] Figure 1 gives a comparison with numerical data for \(N=3000\), which was generated by Mathematica with the input ``` n=3000; data= ParallelTrable[n^(-1/2)Min[Denominator[Select[FareySequence[n],j/n<(j+1)/n&]]], {j,0,n-1}]; ``` We have here used the fact that \(q_{\min}(N^{-1}j,N^{-1},[0,1))\leq N\) for rationals in an interval of length \(1/N\). Let us now turn to the convergence of moments. 
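Before doing so, we note that the comparison in Figure 1 is easy to reproduce outside Mathematica. The following Python sketch runs the same experiment: for each of the \(N\) boxes \([j/N,(j+1)/N)\) it finds the smallest denominator of a rational in that box, rescales by \(N^{-1/2}\) as in (3.2), and prints empirical tail proportions. The brute-force search and the function names below are illustrative choices only, not taken from any existing implementation.

```python
import math

def smallest_denominator(j: int, N: int) -> int:
    """Smallest q such that some rational p/q lies in [j/N, (j+1)/N)."""
    for q in range(1, N + 1):
        p = (q * j + N - 1) // N          # smallest p with p/q >= j/N (exact integer ceiling)
        if p * N < q * (j + 1):           # check p/q < (j+1)/N
            return q
    return N                              # never reached: j/N itself has denominator <= N

N = 500                                   # N = 3000 reproduces the data behind Figure 1
data = [smallest_denominator(j, N) / math.sqrt(N) for j in range(N)]

# empirical version of the left-hand side of (3.2): fraction of boxes with value > L
for L in (0.5, 1.0, 1.5, 2.0):
    tail = sum(1 for v in data if v > L) / N
    print(f"L = {L:3.1f}   empirical proportion with N^(-1/2) q_min > L : {tail:.3f}")
```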
**Proposition 6**.: _For \(\mathcal{A}\subset\mathbb{R}^{n}\) bounded and \(\mathcal{D}\subset[0,1]^{n}\), both with boundary of Lebesgue measure zero and non-empty interior, \(\alpha\in\mathbb{C}\) with \(|\operatorname{Re}\alpha|<n+1\), \(\boldsymbol{x}_{0}\in\mathbb{R}^{n}\), \(c>0\), we have_ \[\lim_{\begin{subarray}{c}\delta\to 0,N\to\infty\\ c\delta^{-1}\leq N\end{subarray}}\frac{\delta^{an(n+1)}}{N^{n}\operatorname{ vol}\mathcal{D}}\sum_{\boldsymbol{j}\in\mathbb{Z}^{n}\cap N\mathcal{D}}q_{\min}( \boldsymbol{x}_{0}+N^{-1}\boldsymbol{j},\delta,\mathcal{A})^{\alpha}d \boldsymbol{x}=\int_{0}^{\infty}L^{\alpha}\,dE_{\mathcal{A}}(L). \tag{3.3}\] Proof.: The proof follows the same steps as for Proposition 4, with the continuous average replaced by the discrete. The crucial step is to show that, for \(0<\operatorname{Re}\alpha<n+1\), \[\lim_{R\to\infty}\lim_{\begin{subarray}{c}\delta\to 0,N\to\infty\\ c\delta^{-1}\leq N\end{subarray}}\int_{R}^{\infty}L^{\operatorname{Re}\alpha-1 }\frac{\#\left\{\boldsymbol{j}\in\mathbb{Z}^{n}\cap N\mathcal{D}:\delta^{n/(n +1)}q_{\min}(\boldsymbol{x}_{0}+N^{-1}\boldsymbol{j},\delta,\mathcal{A})>L \right\}}{N^{n}\operatorname{vol}\mathcal{D}}dL=0, \tag{3.4}\] and for \(-(n+1)<\operatorname{Re}\alpha<0\), \[\lim_{r\to 0}\lim_{\begin{subarray}{c}\delta\to 0,N\to\infty\\ c\delta^{-1}\leq N\end{subarray}}\int_{0}^{r}L^{\operatorname{Re}\alpha-1} \frac{\#\left\{\boldsymbol{j}\in\mathbb{Z}^{n}\cap N\mathcal{D}:\delta^{n/(n +1)}q_{\min}(\boldsymbol{x}_{0}+N^{-1}\boldsymbol{j},\delta,\mathcal{A})\leq L \right\}}{N^{n}\operatorname{vol}\mathcal{D}}dL=0. \tag{3.5}\] As to the former, let \(\mathcal{A}_{0}\) be an open ball contained in \(\mathcal{A}\). Since by assumption \(c\delta^{-1}\leq N\), there is an \(\epsilon\in(0,\frac{\delta}{2}]\) such that \[\boldsymbol{r}+\tfrac{1}{2}\mathcal{A}_{0}\subset\mathcal{A}_{0} \tag{3.6}\] for every \(\boldsymbol{r}\in\left[-\frac{\epsilon}{\delta N},\frac{\epsilon}{\delta N} \right]^{n}\), and therefore \[q_{\min}(\boldsymbol{x}_{0}+N^{-1}\boldsymbol{j},\delta,\mathcal{A})\leq q_{ \min}(\boldsymbol{x}_{0}+N^{-1}(\boldsymbol{j}+\boldsymbol{r}),\delta,\tfrac{ 1}{2}\mathcal{A}_{0}). \tag{3.7}\] This implies \[\begin{split}&\frac{1}{N^{n}}\#\left\{\boldsymbol{j}\in\mathbb{Z}^{ n}\cap N\mathcal{D}:\delta^{n/(n+1)}q_{\min}(\boldsymbol{x}_{0}+N^{-1} \boldsymbol{j},\delta,\mathcal{A})>L\right\}\\ \leq&\left(\frac{\delta}{2\epsilon}\right)^{n}\int_{ \left[-\frac{\epsilon}{\delta N},\frac{\epsilon}{\delta N}\right]^{n}}\# \left\{\boldsymbol{j}\in\mathbb{Z}^{n}\cap N[0,1]^{n}:\delta^{n/(n+1)}q_{\min}( \boldsymbol{x}_{0}+N^{-1}(\boldsymbol{j}+\boldsymbol{r}),\delta,\tfrac{1}{2} \mathcal{A}_{0})>L\right\}\,d\boldsymbol{r}\\ \leq&\,2\left(\frac{\delta}{2\epsilon}\right)^{n} \operatorname{vol}\left\{\boldsymbol{x}\in[0,1]^{n}:\delta^{n/(n+1)}q_{\min}( \boldsymbol{x},\delta,\tfrac{1}{2}\mathcal{A}_{0})>L\right\}.\end{split} \tag{3.8}\] We can now apply our previous estimate (2.17) to get the required upper bound. The case of negative \(\alpha\) is analogous. We now take a ball \(\mathcal{A}_{1}\) containing \(\mathcal{A}\). There exists \(c\in(0,\frac{\delta}{2}]\) such that \[\mathcal{A}_{1}\subset\boldsymbol{r}+2\mathcal{A}_{1} \tag{3.9}\] for every \(\boldsymbol{r}\in\left[-\frac{\epsilon}{\delta N},\frac{\epsilon}{\delta N} \right]^{n}\), and therefore \[q_{\min}(\boldsymbol{x}_{0}+N^{-1}(\boldsymbol{j}+\boldsymbol{r}),\delta,2 \mathcal{A}_{0})\leq q_{\min}(\boldsymbol{x}_{0}+N^{-1}\boldsymbol{j},\delta, \mathcal{A}). 
\tag{3.10}\] Following the same steps as in (3.8), we can now reduce to the continuous sampling estimate (2.32). Alternatively, we could have used for the proof of negative \(\alpha\) the following counterpart of (2.31), \[\frac{1}{N^{n}}\sum_{\boldsymbol{j}\in\mathcal{Z}^{n}\cap[0,N]^{n}}\sum_{ \begin{subarray}{c}\boldsymbol{(p,q)}\in\mathcal{Z}^{n+1}\\ q>0\end{subarray}}\chi_{bL}((\boldsymbol{p},q)(h(\boldsymbol{x}_{0}+N^{-1} \boldsymbol{j})a(y))\ll L^{n+1} \tag{3.11}\] uniformly for \(1\leq y=\delta^{-n/(n+1)}\leq c^{-n/(n+1)}N^{n/(n+1)}\). To prove this note that \[\sum_{\begin{subarray}{c}\boldsymbol{(p,q)}\in\mathcal{Z}^{n+1}\\ q>0\end{subarray}}\chi_{bL}((\boldsymbol{p}-q\boldsymbol{x})y^{1/n},q\,y^{-1 }))\leq\sum_{\begin{subarray}{c}\boldsymbol{(p,q)}\in\mathcal{Z}^{n+1}\\ q>0\end{subarray}}\chi_{(1+n^{1/2}c^{-1})bL}((\boldsymbol{p}-q(\boldsymbol{x }+\boldsymbol{r}))y^{1/n},q\,y^{-1})), \tag{3.12}\] provided \(\|\boldsymbol{r}\|_{\infty}\leq N^{-1}\). Now take \(\boldsymbol{x}=\boldsymbol{x}_{0}+N^{-1}\boldsymbol{j}\) and integrate \(\boldsymbol{r}\) over the cube \([-\frac{N}{2},\frac{N}{2})^{n}\). This shows that the left hand side of (3.11) is bounded above by the left hand side of (2.31) with \(L\) replaced by \((1+n^{1/2}c^{-1})L\), and the claim (3.11) is proved. We remark that, for \(\delta=1/N\), (3.3) becomes \[\lim_{N\to\infty}\frac{1}{N^{n+an/(n+1)}\mathrm{vol}\mathcal{D}}\sum_{ \boldsymbol{j}\in\mathcal{Z}^{2}\cap N\mathcal{D}}q_{\min}(\boldsymbol{x}_{0} +N^{-1}\boldsymbol{j},N^{-1},\mathcal{A})^{\alpha}d\boldsymbol{x}=\int_{0}^{ \infty}L^{\alpha}\,d\boldsymbol{E}_{\mathcal{A}}(L) \tag{3.13}\] which, for \(n=1\), \(\alpha=1\), \(\mathcal{D}=(0,1]\), \(\mathcal{A}=(0,1]\) and \(\boldsymbol{x}_{0}=0\), yields the Kruyswijk-Meijer conjecture [14, 23, 2]. A numerical comparison of the actual moments and the limit are plotted in Figure 3. ## 4. Pigeonhole statistics for Farey fractions We start by recalling Theorem 3 in [18] regarding the fine-scale statistics of Farey fractions. The case \(k=0\) corresponds to the void statistics, which we used in the proof of Proposition 3 for the limit distribution of smallest denominators in the case of continuous sampling. Figure 3. The limiting moments \(M(\alpha)\) for real \(\alpha\) (blue) compared with finite-\(N\) approximations (red) corresponding to the left hand side of (3.13) with \(N=100\), \(50\), \(25\) (top to bottom) and \(n=1\), \(\alpha=1\), \(\mathcal{D}=[0,1)\), \(\mathcal{A}=[0,1)\), \(\boldsymbol{x}_{0}=0\). **Proposition 7**.: _For \(\mathcal{A}\subset\mathbb{R}^{n}\) bounded and \(\mathcal{D}\subset[0,1]^{n}\), both with boundary of Lebesgue measure zero and non-empty interior, \(k\in\mathbb{Z}_{\geq 0}\), we have_ \[\lim_{\delta\to 0}\frac{\operatorname{vol}\left\{\mathbf{x}\in\mathcal{D}:\#( \mathcal{F}_{Q}\cap\mathbf{x}+\sigma_{Q}^{-1/n}\mathcal{A}+\mathbb{Z}^{n})=k\right\} }{\operatorname{vol}\mathcal{D}}=P(k,\mathcal{A}) \tag{4.1}\] _where_ \[P(k,\mathcal{A})=\mu\{g\in\Gamma\backslash G:\#(\widehat{\mathbb{Z}}^{n+1}g \cap\mathcal{C}(\mathcal{A}))=k\}. \tag{4.2}\] The following proposition will play the analogous role in the discrete sampling case, and provide the final ingredient of the proof of Proposition 5. 
**Proposition 8**.: _For \(\mathcal{A}\subset\mathbb{R}^{n}\) bounded and \(\mathcal{D}\subset[0,1]^{n}\), both with boundary of Lebesgue measure zero and non-empty interior, \(k\in\mathbb{Z}_{\geq 0}\), \(\mathbf{x}_{0}\in\mathbb{R}^{n}\), \(c>0\), we have_ \[\lim_{\begin{subarray}{c}Q,N\to\infty\\ cQ^{n+1}\leq N^{n}\end{subarray}}\frac{\#\left\{\mathbf{j}\in\mathbb{Z}^{n}\cap N \mathcal{D}:\#(\mathcal{F}_{Q}\cap\mathbf{x}_{0}+N^{-1}\mathbf{j}+\sigma_{Q}^{-1/n} \mathcal{A}+\mathbb{Z}^{n})=k\right\}}{N^{n}\operatorname{vol}\mathcal{D}}=P( k,\mathcal{A}) \tag{4.3}\] _with \(P(k,\mathcal{A})\) as in Proposition 7._ We refer to the above as "pigeonhole statistics" for the following reason. Take \(\mathcal{A}=[0,s)^{n}\) for some given \(s\), let \(N\) run through the positive integers, and chose \(Q=Q_{N}\) so that \(\sigma_{1}Q^{n+1}=s^{n}N^{n}\). Then the cubes (=pigeon holes) \[\mathbf{x}_{0}+N^{-1}\mathbf{j}+\sigma_{Q}^{-1/n}\mathcal{A}=\mathbf{x}_{0}+N^{-1}(\mathbf{j}+ [0,1)^{n}) \tag{4.4}\] tile \(\mathbb{T}^{n}:=\mathbb{R}^{n}/\mathbb{Z}^{n}\), and the left hand side of (4.3) counts the number of cubes that contain exactly \(k\) Farey points with denominator at most \(Q_{N}\). Following the strategy of proof of Proposition 7 (Theorem 3 in [18]), we need to replace the equidistribution theorem for closed horospheres (Theorem 1 in [18]) by the following discrete version. There has been significant interest recently in studying the distribution of rational points on horospheres. We refer the interested reader to [4, 7, 8, 9, 22] and references therein. **Proposition 9**.: _For \(f:\mathbb{T}^{n}\times\Gamma\backslash G\to\mathbb{R}\) bounded continuous, \(c>0\), we have_ \[\lim_{\begin{subarray}{c}N,Q\to\infty\\ cQ^{n+1}\leq N^{n}\end{subarray}}\frac{1}{N^{n}}\sum_{j\in\mathbb{Z}^{n}/N \mathbb{Z}^{n}}f\big{(}N^{-1}\mathbf{j},h(\mathbf{x}_{0}+N^{-1}\mathbf{j})a(Q)\big{)}= \int_{\mathbb{T}^{n}\times\Gamma\backslash G}f(\mathbf{x},g)\,d\mathbf{x}\,d\mu(g). \tag{4.5}\] Proof.: By a standard measure-theoretic argument, it will be sufficient to show that for \(\mathcal{D}\subset\mathbb{T}^{n}\) with boundary of Lebesgue measure zero and non-empty interior, \(f:\Gamma\backslash G\to\mathbb{R}\) bounded continuous, we have \[\lim_{\begin{subarray}{c}N,Q\to\infty\\ cQ^{n+1}\leq N^{n}\end{subarray}}\frac{1}{N^{n}\operatorname{vol}\mathcal{D}} \sum_{j\in\mathbb{Z}^{n}/N\mathbb{Z}^{n}\cap N\mathcal{D}}f\big{(}h(\mathbf{x}_{0} +N^{-1}\mathbf{j})a(Q)\big{)}=\int_{\Gamma\backslash G}f(g)\,d\mu(g). \tag{4.6}\] For a given sequence of \((N_{j},Q_{j})\), the left hand side of (4.6) defines a sequence of probability measures \(\nu_{j}\) on \(\Gamma\backslash G\) via \[\nu_{j}(f)=\frac{1}{\#(\mathbb{Z}^{n}/N_{i}\mathbb{Z}^{n}\cap N_{i}\mathcal{D })}\sum_{j\in\mathbb{Z}^{n}/N_{i}\mathbb{Z}^{n}\cap N_{i}\mathcal{D}}f\big{(} h(\mathbf{x}_{0}+N_{i}^{-1}\mathbf{j})a(Q_{i})\big{)} \tag{4.7}\] which we need to show converges weakly to the probability measure \(\mu\). By Mahler's compactness criterion for the space of lattices, the complement of large-volume compact sets are characterised by lattices with short vectors. The estimate (3.11) therefore shows that \((\nu_{j})_{j}\) is tight and thus each subsequence contains a convergent subsequence. We may now assume (without loss of generality) that \(f\) has compact support and is therefore uniformly continuous. A key observation is that \[h(\mathbf{x}_{0}+N^{-1}\mathbf{j})a(\mathbf{Q})=h(\mathbf{x}_{0})a(Q)h(Q^{1+1/n}N^{-1}\mathbf{j}). 
\tag{4.8}\] In the following we restrict to subsequences along which \(Q_{i}^{1+1/n}N_{i}^{-1}\to\tau_{0}\) for some \(\tau_{0}\in[0,c^{-1/n}]\). In the case \(\tau_{0}=0\) the discrete average is uniformly close to the continuous average (by uniform continuity of \(f\)), and thus by Theorem 1 in [18] the limit is given by \(\mu\). If \(\tau_{0}>0\), then any weak limit is invariant under the map \(\Gamma\backslash G\to\Gamma\backslash G\), \(\Gamma g\mapsto\Gamma gh(\tau_{0}\mathbf{j})\) for any \(\mathbf{j}\in\mathbb{Z}^{n}\). Since the action of \(G\) on \(\Gamma\backslash G\) by right multiplication is mixing with respect to \(\mu\), we have in particular that the action of the subgroup \(H_{\tau_{0}}=\{h(\tau_{0}\mathbf{j}):\mathbf{j}\in\mathbb{Z}^{n}\}\) is \(\mu\)-ergodic. There are various avenues to determine the possible limit points of \((\nu_{j})_{j}\), for example referring to disjointness results for mixing actions. Here we take a more direct path shown to me by M. Einsiedler. Let us \(\epsilon\)-broaden the probability measure \(\nu_{j}\) by setting, for \(0<\epsilon<\tau_{0}\), \[\nu_{i}^{\epsilon}(f)=\nu_{j}(f_{\epsilon}),\qquad f_{\epsilon}(g):=\frac{1}{ \epsilon^{n}}\int_{[-\frac{\epsilon}{2},\frac{\epsilon}{2}]^{n}}f\big{(}gh( \mathbf{x})\big{)}\,d\mathbf{x}. \tag{4.9}\] We also define the probability measure corresponding to the continuous horospherical average \[\mu_{i}(f)=\frac{1}{\operatorname{vol}\mathcal{D}}\int_{\mathcal{D}}f\big{(}h( \mathbf{x})a(Q_{i})\big{)}\,d\mathbf{x}, \tag{4.10}\] and the complementary probability measure \[\overline{\nu}_{i}^{\epsilon}=\frac{\mu_{i}-\epsilon^{n}\nu_{i}^{\epsilon}}{1 -\epsilon^{n}}. \tag{4.11}\] Suppose \(\nu_{i}^{\epsilon}\to\nu^{\epsilon}\) along a converging subsequence. As we have \(\mu_{i}\to\mu\) (along any subsequence), by construction \(\overline{\nu}_{i}^{\epsilon}\to\overline{\nu}^{\epsilon}\) along the same subsequence as \(\nu_{i}^{\epsilon}\), and the limits satisfy the relation \[\epsilon\nu^{\epsilon}+(1-\epsilon)\overline{\nu}^{\epsilon}=\mu. \tag{4.12}\] All three limit measures are \(H_{\tau_{0}}\)-invariant. Since the action of \(H_{\tau_{0}}\) is \(\mu\)-ergodic, by the extremality property of ergodic measures, we conclude \(\nu^{\epsilon}=\overline{\nu}^{\epsilon}=\mu\) for every given \(\epsilon>0\). Because \(f\) is uniformly continuous, we have \[\limsup_{\epsilon\to 0}\sup_{i}|\nu_{i}(f)-\nu_{i}^{\epsilon}(f)|=0 \tag{4.13}\] and thus every limit point of \((\nu_{i})_{i}\) must be equal to \(\mu\). ## 5. Moments of the distance function for the Farey sequence It is instructive to compare the moments of the smallest denominator of [5] with the distance of a random point to the Farey sequence in [12]. Let us already consider the higher dimensional distance function, for \(\mathbf{x}\in\mathbb{T}^{n}\), \(Q\geq 1\). \[\operatorname{dist}(\mathbf{x},\mathcal{F}_{Q})=\min\{\|\mathbf{x}+\mathbf{r}+\mathbf{m}\|:\mathbf{ r}\in\mathcal{F}_{Q},\,\mathbf{m}\in\mathbb{Z}^{n}\}, \tag{5.1}\] where \(\|\cdot\|\) can be any of the standard norms on \(\mathbb{R}^{n}\). We first of all note that, for every \(s\geq 0\), \[\sigma_{Q}^{\text{L/n}}\operatorname{dist}(\mathbf{x},\mathcal{F}_{Q})>s\Leftrightarrow \mathcal{F}_{Q}\cap\mathbf{x}+\sigma_{Q}^{-1/n}\mathcal{B}_{s}+\mathbb{Z}^{n}= \emptyset, \tag{5.2}\] where \(\mathcal{B}_{s}=\{\mathbf{y}\in\mathbb{R}^{n}:\|\mathbf{y}\|\leq s\}\), which gives the connection with the void statistics with \(\mathcal{A}=\mathcal{B}_{s}\), and via (2.7) to the smallest denominator. 
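To see this scaling numerically in dimension \(n=1\), the following Python sketch estimates the moment of order \(\beta=1/2\) of \(\sigma_{Q}\operatorname{dist}(\boldsymbol{x},\mathcal{F}_{Q})\) for random \(\boldsymbol{x}\) and a few values of \(Q\); the values stabilise as \(Q\) grows, consistent with Proposition 10 below. This is an illustration only: the helper functions are illustrative, and we take \(\sigma_{Q}\) to be the number of Farey points of order \(Q\) in \([0,1)\), which is the natural normalisation here but should be read as an assumption.

```python
import bisect
import random
from math import gcd

def farey(Q):
    """Farey fractions p/q in [0,1) with q <= Q, in lowest terms, sorted."""
    return sorted(p / q for q in range(1, Q + 1) for p in range(q) if gcd(p, q) == 1)

def torus_dist(x, pts):
    """Distance on R/Z from x to the nearest point of the sorted list pts."""
    i = bisect.bisect_left(pts, x)
    cands = (pts[i % len(pts)], pts[i - 1])
    return min(min(abs(x - r), 1.0 - abs(x - r)) for r in cands)

random.seed(0)
for Q in (20, 50, 100):
    pts = farey(Q)
    sigma_Q = len(pts)                      # assumed normalisation: number of Farey points of order Q
    sample = [random.random() for _ in range(20000)]
    moment = sum((sigma_Q * torus_dist(x, pts)) ** 0.5 for x in sample) / len(sample)
    print(f"Q = {Q:3d}   sigma_Q = {sigma_Q:5d}   mean of (sigma_Q * dist)^(1/2) = {moment:.3f}")
```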
Note that for finite \(Q\) resp. \(\delta\) the moments of the two distributions are different, although the limiting distributions are the same, up to a simple scaling. The following statement generalises the results of [12, Theorem 1.4] in dimension \(n=1\) (in the second of the four ranges, which corresponds to \(-1<\beta<1\) in the statement below) to arbitrary dimension. **Proposition 10**.: _For \(\mathscr{D}\subset[0,1]^{n}\) with boundary of Lebesgue measure zero and non-empty interior, \(\beta\in\mathbb{C}\) with \(|\operatorname{Re}\beta|<n\), we have_ \[\lim_{Q\to\infty}\frac{\sigma_{Q}^{\beta/n}}{\operatorname{vol}\mathscr{D}}\int_{\mathscr{D}}\operatorname{dist}(\boldsymbol{x},\mathscr{F}_{Q})^{\beta}d\boldsymbol{x}=\int_{0}^{\infty}s^{\beta}\,dF_{\mathscr{B}_{1}}(s), \tag{5.3}\] _with \(F_{\mathscr{B}_{1}}(s)=P(0,\mathscr{B}_{s})=E_{\mathscr{B}_{1}}(\sigma_{1}^{-1/(n+1)}s^{n/(n+1)})\)._ Proof.: The strategy is the same as for the proof of Proposition 4. The key point is to show that when \(\operatorname{Re}\beta>0\), \[\lim_{R\to\infty}\limsup_{Q\to\infty}\int_{R}^{\infty}s^{\operatorname{Re}\beta-1}\operatorname{vol}\left\{\boldsymbol{x}\in\mathscr{D}:\sigma_{Q}^{1/n}\operatorname{dist}(\boldsymbol{x},\mathscr{F}_{Q})>s\right\}ds=0. \tag{5.4}\] Since \(Q^{-1}\mathbb{Z}^{n}\cap[0,1)^{n}\subset\mathscr{F}_{Q}\) we have \(\operatorname{dist}(\boldsymbol{x},\mathscr{F}_{Q})\leq CQ^{-1}\) for some constant \(C\) (depending only on the choice of norm \(\|\cdot\|\)), and hence we can restrict the integral to \(s\leq C\sigma_{Q}^{1/n}Q^{-1}=C\sigma_{1}^{1/n}Q^{1/n}\). We need to estimate the volume of \(\boldsymbol{x}\) such that \[\widehat{\mathbb{Z}}^{n+1}h(\boldsymbol{x})a(Q)\cap\mathfrak{C}(\mathscr{B}_{s})=\emptyset, \tag{5.5}\] which is equivalent to \[\widehat{\mathbb{Z}}^{n+1}h(\boldsymbol{x})a(Q)a(s^{-n/(n+1)})\cap s^{n/(n+1)}\mathfrak{C}(\mathscr{B}_{1})=\emptyset. \tag{5.6}\] We now apply (2.26) with \(y=Qs^{-n/(n+1)}\) in place of \(Q\), and \(L=s^{n/(n+1)}\). (Note here that, since \(s\leq C\sigma_{1}^{1/n}Q^{1/n}\), we have \(y=Qs^{-n/(n+1)}\geq C^{-n/(n+1)}\sigma_{1}^{-1/(n+1)}Q^{1-1/(n+1)}>1\) for all sufficiently large \(Q\).) This yields \[\operatorname{vol}\left\{\boldsymbol{x}\in\mathscr{D}:\sigma_{Q}^{1/n}\operatorname{dist}(\boldsymbol{x},\mathscr{F}_{Q})>s\right\}\ll s^{-n}. \tag{5.7}\] Now (5.7) implies (5.4). In the case \(\operatorname{Re}\beta<0\) we need to check that \[\lim_{r\to 0}\limsup_{Q\to\infty}\int_{0}^{r}s^{\operatorname{Re}\beta-1}\operatorname{vol}\left\{\boldsymbol{x}\in\mathscr{D}:\sigma_{Q}^{1/n}\operatorname{dist}(\boldsymbol{x},\mathscr{F}_{Q})\leq s\right\}ds=0, \tag{5.8}\] which leads to the condition \[\widehat{\mathbb{Z}}^{n+1}h(\boldsymbol{x})a(Q)a(s^{-n/(n+1)})\cap s^{n/(n+1)}\mathfrak{C}(\mathscr{B}_{1})\neq\emptyset, \tag{5.9}\] hence the above lattice has an element of norm at most \(s^{n/(n+1)}\). From (2.31) applied to \(L=s^{n/(n+1)}\) and \(y=Qs^{-n/(n+1)}>1\) we conclude \[\operatorname{vol}\left\{\boldsymbol{x}\in\mathscr{D}:\sigma_{Q}^{1/n}\operatorname{dist}(\boldsymbol{x},\mathscr{F}_{Q})\leq s\right\}\ll s^{n} \tag{5.10}\] for all \(0<s\leq 1\). This proves (5.8). For completeness, we also state the counterpart of Proposition 10 in the case of discrete sampling, which may be viewed as the moments of the pigeonhole void distribution.
**Proposition 11**.: _For \(\mathscr{D}\subset[0,1]^{n}\) with boundary of Lebesgue measure zero and non-empty interior, \(\beta\in\mathbb{C}\) with \(|\operatorname{Re}\beta|<n\), \(\boldsymbol{x}_{0}\in\mathbb{R}^{n}\), \(c>0\), we have_ \[\lim_{\begin{subarray}{c}N,Q\to\infty\\ cQ^{n+1}\leq N^{n}\end{subarray}}\frac{\sigma_{Q}^{\beta/n}}{N^{n}}\sum_{ \boldsymbol{j}\in\mathbb{Z}^{n}/N\mathbb{Z}^{n}}\operatorname{dist}( \boldsymbol{x}_{0}+N^{-1}\boldsymbol{j},\mathscr{F}_{Q})^{\beta}=\int_{0}^{ \infty}s^{\beta}\,dF_{\mathscr{B}_{1}}(s). \tag{5.11}\] The proof of this statement follows along the same lines as Proposition 6. ## 6. Minimal resonance orders The paper [21] which motivated [5] was in fact interested in the distribution of the minimal resonance order \[M(\omega,\rho)=\min_{\boldsymbol{p}\in\mathbb{Z}^{n}\setminus\{0\}}\left\{\| \boldsymbol{p}\|_{1}:\min_{q\in\mathbb{Z}}\Delta_{\boldsymbol{p},q}(\omega) \leq\rho\right\} \tag{6.1}\] in the limit of small \(\rho\), where \[\Delta_{\boldsymbol{p},q}(\omega)=\frac{|\boldsymbol{p}\cdot\omega-q|}{\| \boldsymbol{p}\|_{2}}. \tag{6.2}\] This quantity appears naturally in the study of the breakdown of invariant tori in integrable dynamical systems under perturbation. In the same vein as our previous discussion, the key fact is that \[M(\omega,\rho)>L\rho^{-1/(n+1)}\Leftrightarrow\left\{(\boldsymbol{p},q)\in \mathbb{Z}^{n+1}:0<\|\boldsymbol{p}\|_{1}\leq L\rho^{-1/(n+1)},\,\Delta_{ \boldsymbol{p},q}(\omega)\leq\rho\right\}=\emptyset. \tag{6.3}\] The last statement is equivalent to \[\left\{(\boldsymbol{p},q)\in\mathbb{Z}^{n+1}\setminus\{\boldsymbol{0}\}:\| \boldsymbol{p}\|_{1}\leq L\rho^{-1/(n+1)},\,|\boldsymbol{p}\cdot\omega-q| \leq\rho\|\boldsymbol{p}\|_{2}\right\}=\emptyset, \tag{6.4}\] which is equivalent to \[\left\{(\boldsymbol{p},q)\in\widehat{\mathbb{Z}}^{n+1}:\|\boldsymbol{p}\|_{1 }\leq L\rho^{-1/(n+1)},\,|\boldsymbol{p}\cdot\omega-q|\leq\rho\|\boldsymbol{p} \|_{2}\right\}=\emptyset. 
\tag{6.5}\] This in turn can be written as \[\widehat{\mathbb{Z}}^{n+1}\cap\mathfrak{B}_{L}\begin{pmatrix}\rho^{-1/(n+1)}\mathbb{1}_{n}&\boldsymbol{0}\\ \boldsymbol{0}&\rho^{n/(n+1)}\end{pmatrix}\begin{pmatrix}1_{n}&{}^{\mathrm{t}}\boldsymbol{\omega}\\ \boldsymbol{0}&-1\end{pmatrix}=\emptyset, \tag{6.6}\] where \(\mathfrak{B}_{L}=\left\{(\boldsymbol{x},y)\in\mathbb{R}^{n}\times\mathbb{R}:\|\boldsymbol{x}\|_{1}\leq L,\ |y|\leq\|\boldsymbol{x}\|_{2}\right\}\).
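For experimentation, \(M(\omega,\rho)\) can be computed by brute force directly from the definitions (6.1)-(6.2). The following Python sketch does this for a randomly chosen frequency vector and prints the rescaled quantity \(M(\omega,\rho)\,\rho^{1/(n+1)}\) suggested by (6.3); the search cut-off and all function and variable names are illustrative choices, not part of the text above.

```python
import itertools
import math
import random

def min_resonance_order(omega, rho, max_norm=200):
    """Smallest ||p||_1 over integer p != 0 with min_q |p.omega - q| / ||p||_2 <= rho.

    Direct transcription of (6.1)-(6.2); max_norm is only a search cut-off, so the
    returned value is the true M(omega, rho) whenever it does not exceed max_norm.
    """
    n = len(omega)
    for L in range(1, max_norm + 1):
        for p in itertools.product(range(-L, L + 1), repeat=n):
            if sum(abs(c) for c in p) != L:      # enumerate exactly the shell ||p||_1 = L
                continue
            x = sum(c * w for c, w in zip(p, omega))
            delta = abs(x - round(x)) / math.sqrt(sum(c * c for c in p))
            if delta <= rho:
                return L
    return None

random.seed(1)
omega = [random.random(), random.random()]       # a "generic" frequency vector, n = 2
for rho in (1e-2, 1e-3, 1e-4):
    M = min_resonance_order(omega, rho)
    print(f"rho = {rho:.0e}   M(omega, rho) = {M}   M * rho^(1/(n+1)) = {M * rho ** (1/3):.3f}")
```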
2301.07372
A Novel low-latency DBA for Virtualised PON implemented through P4 In-Network Processing
We present a novel dual-DBA allocation, with a fast P4-enabled scheduler to provide low latency upstream grant allocations. We show latency reduction of 37% and 43%, respectively, compared to standard and virtual PONs
D. R. Mafioletti, F. Slyne, R. Giller, M. OHanlon, B. Ryan, M. Ruffini
2023-01-18T08:41:18Z
http://arxiv.org/abs/2301.07372v1
# A Novel low-latency DBA for Virtualised PON implemented through P4 In-Network Processing ###### Abstract We present a novel dual-DBA allocation, with a fast P4-enabled scheduler to provide low-latency upstream grant allocations. We show latency reductions of 37% and 43%, respectively, compared to standard and virtual PONs. 1CONNECT Centre, Trinity College Dublin, 2Intel Corporation, Ireland, 3Federal Institute of Espírito Santo {rossimad,flyne,marco.ruffini}@tcd.ie, {robin.giller, michael.a.ohanlon, brendan.ryan}@intel.com ## 1 Introduction Passive Optical Networks (PONs) are considered a cost-effective architecture for ubiquitous broadband delivery, due to their ability to share cost and capacity across end points. For this reason, they are being increasingly considered as a possible solution to connect small cells in 5G functional split architectures (i.e., supporting the ORAN 7.2 Remote Unit (RU)-Distributed Unit (DU) split as well as higher-level splits). One of the main drawbacks of the PON point-to-multipoint topology is upstream latency, which is higher compared to simpler point-to-point solutions, as the scheduling mechanism requires the exchange of reports and bandwidth map calculations that introduce an additional delay of a few hundred microseconds. Approaches such as Cooperative DBA [1], recently standardised as the Cooperative Transport Interface (CTI) [2], have addressed this latency issue by providing coordination between mobile scheduling at the DU and OLT scheduling. However, CTI works because in Cloud-RAN the low-latency issue is generated by a protocol mismatch between PON and RAN rather than by application-level requirements. For this reason, the CTI is able to fix the issue by sharing the advance scheduling information from the DU with the OLT. This exchange of information is further facilitated by PON virtualisation mechanisms [3], which simplify the integration between wireless and optical technologies. In this paper we address a more challenging issue, where the low-latency requirement comes from the application. This means that neither an OLT nor a DU (in case the application runs over a mobile network) can know in advance when an upstream transmission request will arrive at the ONU. Currently, the only known working solution is to assign a fixed upstream allocation to a given ONU, which is thus allowed to transmit a given number of bytes, potentially every frame, without requesting a grant allocation. While this mechanism does provide the lowest latency, it is highly inefficient as capacity is statically assigned to the ONU. For this reason it is also not scalable. Other mechanisms to reduce latency are based on prediction of traffic arrival [4]. These, however, need to be tuned application by application and only work for specific applications that present well-defined packet arrival patterns that can be predicted several hundreds of \(\mu s\) in advance (their arrival time also needs to be estimated with sub-microsecond precision). A well-performing low-latency PON algorithm was presented in [5]; however, it only applies to the Ethernet PON standard, while in this work we focus on the ITU-T standard. In this paper we propose a novel mechanism that splits the upstream DBA scheduling into two parts. The first operates according to standard DBA procedures, where the bandwidth map is calculated after all required grant requests (Dynamic Bandwidth Report units - DBRus) are received for a given cycle. Here, the only difference is that our mechanism requires part of this bandwidth map to be initially left unallocated.
A second mechanism, which we call Fast Intercept, independently operates a faster grant allocation that updates the standard bandwidth map (BWMAP) before it is sent to all ONUs, to provide immediate grants to newly arrived DBRu messages that are associated with low-latency services (i.e., without waiting for an entire DBA cycle). One of the key features of our implementation, which is optimised for virtual PON architectures, is that the standard DBA runs as a Virtual Network Function (VNF) on a general-purpose processor (i.e., in the server running the virtual OLT and other network VNFs), while the Fast Intercept mechanism runs in the network card, operating the low-latency grant processing. In our implementation, this is carried out on a programmable P4 [6] pipeline.

## 2 Low-latency DBA description and implementation

Figure 1a reports the different steps involved in a DBA process, together with typical latency times, from the moment a packet arrives at the ONU queue until the moment that ONU is allowed to transmit the packet. Some of the latency times are typical of DBA implementations, while others are experimentally measured in our setup (and further discussed in section 3 below).

a) the ONU needs to wait for the opportunity to piggyback the DBRu onto an upstream message (between 0 and 125 \(\mu s\), for an average of 62.5 \(\mu s\));
b) the DBRu propagates through fibre (assume 50 \(\mu s\) for a 10 km distance);
c) the information travels between the physical card and the virtual process (this only occurs for virtual PON implementations; about 22 \(\mu s\) from our experimental data, i.e., half the round trip time of 41.96 \(\mu s\));
d) the DBA process waits for a given time window to receive DBRus from multiple ONUs (between 0 and 125 \(\mu s\), for an average of 62.5 \(\mu s\));
e) the OLT runs the DBA algorithm to calculate the Bandwidth Map (assume a DBA calculation time of 77 \(\mu s\), according to the results in figure 2 - difference between the second and third bars in the plot);
f) the Bandwidth Map is included at the beginning of the next downstream frame (between 0 and 125 \(\mu s\), for an average of 62.5 \(\mu s\));
g) the Bandwidth Map travels between the virtual function and the physical card (same consideration as c);
h) the Bandwidth Map propagates through fibre (same considerations as b);
i) the ONU can transmit the data at its allocated time (considering we can schedule low-latency allocations at the beginning of a frame, we assume between 0 and 20 \(\mu s\), for an average of 10 \(\mu s\)).

Our proposed approach is illustrated in Figure 1b. The main difference here is that the grant calculations for the Fast Intercept mechanism occur in parallel in the P4 NIC, while waiting for a BWMAP to arrive from the CPU VNF. We assume that the T-CONT ID is used to determine whether an allocation requires low-latency support. As soon as the BWMAP arrives, the Fast Intercept mechanism modifies it to include the most recently arrived low-latency grant requests (a schematic sketch of this split is given below).
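Before walking through the modified pipeline step by step, the sketch below illustrates the split in plain Python: a standards-based DBA (running as a VNF on the CPU) produces a BWMAP with a reserved, unallocated region, and a fast intercept stage in the NIC fills that region with newly arrived low-latency DBRu requests, handing any spare capacity to best-effort requests. All class, field and function names here are illustrative only; they do not correspond to the actual P4 or VNF code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Dbru:              # upstream grant request (Dynamic Bandwidth Report unit)
    alloc_id: int        # T-CONT / Alloc-ID, assumed to encode the low-latency class
    size: int            # requested bytes
    low_latency: bool

@dataclass
class Allocation:        # one entry of the downstream bandwidth map (BWMAP)
    alloc_id: int
    start: int
    size: int

def fast_intercept(bwmap: List[Allocation], pending: List[Dbru],
                   reserved_start: int, reserved_size: int) -> List[Allocation]:
    """Fill the reserved region of an incoming BWMAP with low-latency grants.

    The slow DBA produced `bwmap` but left [reserved_start, reserved_start + reserved_size)
    unallocated; any spare reserved capacity is given to best-effort requests so it is not wasted.
    """
    cursor, budget = reserved_start, reserved_size
    urgent = [d for d in pending if d.low_latency]
    best_effort = [d for d in pending if not d.low_latency]
    for d in urgent + best_effort:          # low-latency requests are served first
        grant = min(d.size, budget)
        if grant == 0:
            break
        bwmap.append(Allocation(d.alloc_id, cursor, grant))
        cursor += grant
        budget -= grant
    return bwmap

# toy usage: one low-latency and one best-effort request intercepted in the NIC
bwmap = [Allocation(alloc_id=1, start=0, size=4000)]      # computed by the slow DBA
pending = [Dbru(7, 800, True), Dbru(3, 5000, False)]
print(fast_intercept(bwmap, pending, reserved_start=4000, reserved_size=2000))
```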
Thus, with respect to the stages above (we adopt the same step labels for ease of comparison), we have:

a) the ONU needs to wait for the opportunity to piggyback the DBRu onto an upstream message;
b) the DBRu propagates through fibre;
f) the next Bandwidth Map arrives at the NIC (between 0 and 125 \(\mu s\), for an average of 62.5 \(\mu s\)), while in parallel the NIC calculates the BWMAP update for the low-latency grants (7.55 \(\mu s\) according to our results in section 3);
f2) the NIC updates the BWMAP to include the low-latency grant allocations (2.5 \(\mu s\) according to our results in section 3);
h) the BWMAP propagates through fibre;
i) the ONU can transmit the data at its allocated time.

Considering the calculations provided above, the minimum average time for a low-latency allocation through a classical DBA mechanism is 374.5 \(\mu s\), which increases to 418.5 \(\mu s\) for a virtual implementation (i.e., considering the additional steps c) and g) above). On the other hand, the proposed Fast Intercept mechanism, under similar conditions, can reduce this value to 237.5 \(\mu s\) (a short calculation reproducing these figures is sketched after the implementation details below). These values represent reductions of 37% and 43%, respectively, compared to a classical PON and a virtual PON implementation. Before going into further implementation details, one consideration we want to make is whether allocating part of the bandwidth map for low-latency applications could be considered wasteful, in case there are not enough applications requiring it on any given frame. In our implementation we easily solve this issue: as all DBRu requests always pass through the P4 NIC, the Fast Intercept mechanism can fill in spare allocations using other, lower-priority grant requests (i.e., those that do not have low-latency constraints), to avoid wasting capacity. The opposite issue could also occur, namely that the unallocated portion of the bandwidth map is not large enough to accommodate all low-latency requests. In this case we implement a policy where the Fast Intercept mechanism can preempt additional allocations in the BWMAP that are currently associated with best-effort services.

_P4 implementation details._ We developed a P4 embedded network function (eNF) on a Netronome SmartNIC that processes the BWMAP and DBRu data structures [7]. Figure 2 shows the P4 Parser and Ingress pipelines on the SmartNIC. For each upstream burst arriving at the SmartNIC, the eNF checks the T-CONTs and stores their content (DBRus) in a data structure for subsequent checking in the downstream direction. Then, in the downstream direction, the eNF analyses the previously stored data, runs a simplified fast-allocation DBA algorithm on the network hardware and updates the upcoming BWMAP accordingly. The algorithm modifies the start_time and grant_size fields in the BWMAP. As mentioned above, the system starts with part of the BWMAP reserved for low-latency traffic. The process then follows four steps:

1. The eNF identifies the DBRus in transit and stores them in a P4 register, which is a fast memory on the network hardware.
2. If the DBRu **does not** include a low-latency grant request, the DBRu packet continues to its destination (the DBA in the CPU) without any modification.
3. If the DBRu **does** include a low-latency grant request, the eNF prepares the allocation that will be used to modify the incoming BWMAP, applying any required modification to the grant_size fields.
4. When the next BWMAP arrives at the NIC from the CPU, the P4 process modifies it accordingly to include the low-latency allocations, before forwarding it to the ONUs.
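As referenced above, the headline latency figures can be reproduced directly from the per-step average delays listed for Figures 1a and 1b. The short Python sketch below performs this arithmetic; the delay values are those quoted in the text, while the variable names are illustrative.

```python
# Average per-step delays in microseconds, as listed in the text (Figs. 1a/1b).
steps = {
    "a_wait_dbru_slot":   62.5,   # 0-125 us window to piggyback the DBRu
    "b_upstream_fibre":   50.0,   # 10 km propagation
    "c_nic_to_vnf":       22.0,   # virtual PON only; about 22 us per direction (41.96 us RTT)
    "d_dbru_window":      62.5,   # OLT waits for DBRus from multiple ONUs
    "e_dba_compute":      77.0,   # DBA calculation in the CPU VNF
    "f_next_frame":       62.5,   # wait for the next downstream frame / BWMAP
    "f2_bwmap_update":     2.5,   # P4 Fast Intercept BWMAP modification
    "g_vnf_to_nic":       22.0,   # virtual PON only
    "h_downstream_fibre": 50.0,
    "i_tx_slot":          10.0,   # low-latency allocation near the frame start
}

classical = sum(steps[k] for k in
                ("a_wait_dbru_slot", "b_upstream_fibre", "d_dbru_window",
                 "e_dba_compute", "f_next_frame", "h_downstream_fibre", "i_tx_slot"))
virtual = classical + steps["c_nic_to_vnf"] + steps["g_vnf_to_nic"]
fast_intercept = sum(steps[k] for k in
                     ("a_wait_dbru_slot", "b_upstream_fibre", "f_next_frame",
                      "f2_bwmap_update", "h_downstream_fibre", "i_tx_slot"))

print(f"classical PON DBA : {classical} us")        # 374.5
print(f"virtual PON DBA   : {virtual} us")          # 418.5
print(f"fast intercept    : {fast_intercept} us")   # 237.5
print(f"reduction vs classical: {1 - fast_intercept/classical:.0%}")  # ~37%
print(f"reduction vs virtual  : {1 - fast_intercept/virtual:.0%}")    # ~43%
```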
## 3 Experimental Results

Figure 2(a) reports both the DBA computation time and the transmission time between SmartNIC and CPU. These affect the steps c), e) and g) shown in Fig. 1. The first bar in Fig. 2(a) reports the time required to send DBRus from the NIC to the CPU, for the CPU to calculate the BWMAP and for this to be sent from the CPU back to the NIC. This is a baseline scenario, where no specific optimisation is carried out, using Linux Netdev, and it shows the longest time, of about 393 \(\mu s\). The second bar represents the same process, but when implemented through our optimised DPDK solution for virtual PON. This bypasses the Linux network stack, runs in user space in poll mode, and reduces the timing to 119.51 \(\mu s\). The third bar shows the round trip time between NIC and CPU when using DPDK (41.96 \(\mu s\)). From the difference we can infer a DBA processing time of 77.55 \(\mu s\) in the DPDK implementation. The fourth bar, finally, shows the time required by the NIC to operate the Fast Intercept mechanism, inclusive of grant calculation and update of the BWMAP. As this does not require any data transfer between NIC and CPU and exploits fast P4 processing for the fast DBA, it takes only 7.47 \(\mu s\). It should also be noted that, since the PON is a synchronous TDM technology, the eNF knows exactly when the BWMAP will be received from the CPU. Thus it can initiate the fast DBA calculation 8 \(\mu s\) before it receives the BWMAP. In this case the only additional time will be that required to modify the BWMAP. This is reported in Fig. 2(b), which shows the breakdown of the eNF computation time (i.e., the measured timings of the four steps described at the end of section 2). We can see that the time required to modify the BWMAP is of the order of 2.5 \(\mu s\), which is thus the only additional time by which the BWMAP is delayed after arriving at the NIC. Summarising the experimental results in Fig. 2(a) and 2(b), and reiterating the overall DBA calculation times reported in section 2 and Fig. 1a and 1b, we have calculated that a standard DBA mechanism (assuming the DBA is calculated for each service interval) would have an average latency of 374.5 \(\mu s\) and 418.5 \(\mu s\), respectively, for an OEM and a virtual implementation of a PON DBA. On the other hand, our proposed mechanism, split between a CPU and a SmartNIC implementation, can provide an average latency of 237.5 \(\mu s\). This is a significant reduction of 37% and 43% in upstream PON latency, which can enhance PON support for low-latency applications.

## Acknowledgements

Financial support from Science Foundation Ireland grants 14/IA/2527 and 13/RC/2077 is gratefully acknowledged.
2302.01253
$6$-regular partitions: new combinatorial properties, congruences, and linear inequalities
We consider the number of the $6$-regular partitions of $n$, $b_6(n)$, and give infinite families of congruences modulo $3$ (in arithmetic progression) for $b_6(n)$. We also consider the number of the partitions of $n$ into distinct parts not congruent to $\pm 2$ modulo $6$, $Q_2(n)$, and investigate connections between $b_6(n)$ and $Q_2(n)$ providing new combinatorial interpretations for these partition functions. In this context, we discover new infinite families of linear inequalities involving Euler's partition function $p(n)$. Infinite families of linear inequalities involving the $6$-regular partition function $b_6(n)$ and the distinct partition function $Q_2(n)$ are proposed as open problems.
Cristina Ballantine, Mircea Merca
2023-02-02T17:38:40Z
http://arxiv.org/abs/2302.01253v1
# 6-regular partitions: new combinatorial properties, congruences, and linear inequalities ###### Abstract We consider the number of the 6-regular partitions of \(n\), \(b_{6}(n)\), and give infinite families of congruences modulo 3 (in arithmetic progression) for \(b_{6}(n)\). We also consider the number of the partitions of \(n\) into distinct parts not congruent to \(\pm 2\) modulo 6, \(Q_{2}(n)\), and investigate connections between \(b_{6}(n)\) and \(Q_{2}(n)\) providing new combinatorial interpretations for these partition functions. In this context, we discover new infinite families of linear inequalities involving Euler's partition function \(p(n)\). Infinite families of linear inequalities involving the 6-regular partition function \(b_{6}(n)\) and the distinct partition function \(Q_{2}(n)\) are proposed as open problems. **Keywords:** partitions, theta series, theta products **MSC 2010:** 11P81, 11P82, 05A19, 05A20 ## 1 Introduction Recall that a partition of a positive integer \(n\) is a sequence of positive integers whose sum is \(n\). The order of the summands is unimportant when writing the partitions of \(n\), but for consistency, a partition of \(n\) will be written with the summands in a nonincreasing order [2]. As usual, we denote by \(p(n)\) the number of integer partitions of \(n\) and we have the generating function \[\sum_{n=0}^{\infty}p(n)\,q^{n}=\frac{1}{(q;q)_{\infty}}.\] Here and throughout, we use the following customary \(q\)-series notation: \[(a;q)_{n}=\begin{cases}1,&\text{for $n=0$},\\ (1-a)(1-aq)\cdots(1-aq^{n-1}),&\text{for $n>0$};\end{cases}\] \[(a;q)_{\infty}=\lim_{n\to\infty}(a;q)_{n}.\] Moreover, we use the short notation \[(a_{1},a_{2},\ldots,a_{n};q)_{\infty}=(a_{1};q)_{\infty}(a_{2};q)_{\infty} \cdots(a_{n};q)_{\infty}.\] Because the infinite product \((a;q)_{\infty}\) diverges when \(a\neq 0\) and \(|q|\geqslant 1\), whenever \((a;q)_{\infty}\) appears in a formula, we shall assume \(|q|<1\). For an integer \(\ell>1\), a partition is called \(\ell\)-regular if none of its parts is divisible by \(\ell\). The number of the \(\ell\)-regular partitions of \(n\) is usually denoted by \(b_{\ell}(n)\) and its arithmetic properties are investigated in many interesting papers by Z. Ahmed and N. D. Baruah [1], R. Carlson and J. J. Webb [16], S.-P. Cui and N. S. S. Gu [17], B. Dandurand and D. Penniston [18], D. Furcy and D. Penniston [19], M. D. Hirschhorn and J. A. Sellers [22], Q.-H. Hou, L. H. Sun and L. Zhang [23], J. Lovejoy and D. Penniston [25], D. Penniston [43, 44], E. X. W. Xia [48], E. X. W. Xia and O. X. M. Yao [49], L. Wang [51, 52], and J. J. Webb [53]. Elementary techniques in the theory of partitions give the generating function \[\sum_{n=0}^{\infty}b_{\ell}(n)\,q^{n}=\frac{(q^{\ell};q^{\ell})_{\infty}}{(q;q )_{\infty}}. \tag{1}\] In 2010, G. E. Andrews, M. D. Hirschhorn and J. A. Sellers [3] proved that \(b_{4}(n)\) satisfies two infinite families of congruences modulo 3. After a year, J. J. Webb [53] proved an analogous result for \(b_{13}(n)\). In 2012, D. Furcy and D. Penniston [19] extended these results to other values of \(\ell\) which are congruent to 1 modulo 3, i.e., \(\ell\in\{7,19,25,34,37,43,49\}\). All these congruences are of the form \[b_{\ell}(3^{\beta}n+d)\equiv 0\pmod{3}.\] In addition, D. Furcy and D. Penniston [19] proved that \[b_{10}(9n+3)\equiv b_{22}(27n+16)\equiv b_{28}(27n+9)\equiv 0\pmod{3}.\] More recently, in 2015, Q.-H. Hou, L. H. Sun and L.
Zhang [23] found infinite families of congruence relations modulo 3, 5 and 7 for \(\ell\)-regular partitions with \(\ell\in\{3,5,6,7,10\}\). In particular, when \(\ell=6\), they proved that for \(\alpha,n\) nonnegative integers, \(p_{i}\) primes congruent to \(13,17,19,23\pmod{24}\) and \(j\not\equiv 0\pmod{p_{\alpha+1}}\), \[b_{6}\left(p_{1}^{2}\cdots p_{\alpha+1}^{2}n+\frac{p_{1}^{2}\cdots p_{\alpha}^{ 2}p_{\alpha+1}(24j+5p_{\alpha+1})-5}{24}\right)\equiv 0\pmod{3}. \tag{2}\] Then, setting \(\alpha=0\) in (2), it follows that for all \(n\geqslant 0\), \(p\equiv 13,17,19,23\pmod{24}\) prime, and \(j\not\equiv 0\pmod{p}\), \[b_{6}\left(p^{2}n+pj+5\,\frac{p^{2}-1}{24}\right)\equiv 0\pmod{3}. \tag{3}\] It turns out that the result in [23] can be extended to other choices of primes. **Theorem 1.1**.: _Let \(\alpha\) be a nonnegative integer and let \(p_{i}\geqslant 5\), \(1\leqslant i\leqslant\alpha+1\) be primes. If \(p_{\alpha+1}\equiv 3\pmod{4}\) and \(j\not\equiv 0\pmod{p_{\alpha+1}}\), then for all integers \(n\geqslant 0\) we have_ \[b_{6}\left(p_{1}^{2}\cdots p_{\alpha+1}^{2}n+\frac{p_{1}^{2}\cdots p_{\alpha}^ {2}p_{\alpha+1}(24j+5p_{\alpha+1})-5}{24}\right)\equiv 0\pmod{3}. \tag{4}\] In particular, if \(\alpha=0\), Theorem 1.1 states that (3) holds for all primes \(p\equiv 3\pmod{4}\), \(j\not\equiv 0\pmod{p}\) and \(n\geqslant 0\). This statement can be reformulated as follows. For a prime \(p\geqslant 5\), we set \[\alpha_{p}:=5\,\frac{p^{2}-1}{24}\mod p,\] where by \(a\mod m\) we mean the residue of \(a\) modulo \(m\). Equivalently, \[\alpha_{p}=\left\lfloor 5p^{2}/24\right\rfloor\mod p\] and also \[\alpha_{p}=-5\cdot 24_{p}^{-1}\mod p,\] where \(24_{p}^{-1}\) is the inverse of \(24\) modulo \(p\). Then, from Theorem 1.1 with \(\alpha=0\) and (3), we obtain the following result, **Corollary 1.2**.: _If \(p\) is a prime congruent to \(7,11,13,17,19,23\) modulo \(24\) and \(0\leqslant j\leqslant p-1\), \(j\not=\left\lfloor 5p/24\right\rfloor\), then for all \(n\geqslant 0\) we have_ \[b_{6}\left(p^{2}n+pj+\alpha_{p}\right)\equiv 0\pmod{3}.\] We also consider the partitions of \(n\) into distinct parts not congruent to \(\pm 2\) modulo \(6\) in order to provide other properties for the number of \(6\)-regular partitions of \(n\). **Definition 1**.: Let \(n\) be a nonnegative integer. We define: 1. \(b_{6,e}(n)\) to be the number of \(6\)-regular partitions of \(n\) into an even number of parts; 2. \(b_{6,o}(n)\) to be the number of \(6\)-regular partitions of \(n\) into an odd number of parts. Clearly \(b_{6}(n)=b_{6,e}(n)+b_{6,o}(n)\). For example, the partitions of \(7\) into parts that are not multiples of \(6\) are: \[(7),\ (5,2),\ (5,1,1),\ (4,3),\ (4,2,1),(4,1,1,1),\ (3,3,1),\ (3,2,2),\ (3,2,1,1),\] \[(3,1,1,1,1),\ (2,2,2,1),\ (2,2,1,1,1),\ (2,1,1,1,1),\ (1,1,1,1,1,1).\] We see that \(b_{6}(7)=14\), \(b_{6,e}(7)=6\) and \(b_{6,o}(7)=8\). **Definition 2**.: Let \(n\) be a nonnegative integer. We define \(Q_{2}(n)\) to be the number of partitions of \(n\) into distinct parts which are not congruent to \(\pm 2\) modulo \(6\). For example, the partitions of \(14\) into distinct parts not congruent to \(\pm 2\) modulo \(6\) are: \[(13,1),\ (11,3),\ (10,3,1),\ (9,5),\ (8,5,1).\] Thus, \(Q_{2}(14)=5\). The standard methods for producing partition generating functions (cf. [2, Ch. 
1]) reveal directly that \[\sum_{n=0}^{\infty}Q_{2}(n)\,q^{n}=(-q,-q^{3},-q^{5},-q^{6};q^{6})_{\infty} \tag{5}\] and the expansion starts as \[1+q+q^{3}+q^{4}+q^{5}+2q^{6}+2q^{7}+2q^{8}+3q^{9}+3q^{10}+3q^{11}+5q^{12}+5q^{ 13}+5q^{14}+\cdots.\] We remark that the sequences \(Q_{2}(n)\) is known and can be seen in the On-Line Encyclopedia of Integer Sequence [45, A328796]. The following result introduces a new combinatorial interpretation for the partition function \(Q_{2}(n)\). **Theorem 1.3**.: _For \(n\geqslant 0\), \((-1)^{n}\,Q_{2}(n)=b_{6,e}(n)-b_{6,o}(n)\)._ As a corollary of this theorem, we deduce the following parity result. **Corollary 1.4**.: _For \(n\geqslant 0\), \(Q_{2}(n)\) and \(b_{6}(n)\) have the same parity._ In order to obtain other combinatorial interpretations for the \(6\)-regular partitions of \(n\) and the partitions of \(n\) into distinct parts not congruent to \(\pm 2\) modulo \(6\), we consider the following restricted partition functions. **Definition 3**.: Let \(n\) be a nonnegative integer. We define 1. \(c(n)\) to be the number of partitions of \(n\) into parts which are not congruent to \(0\), \(\pm 2\), \(\pm 20\), \(\pm 22\), \(24\) modulo \(48\); 2. \(d(n)\) to be the number of partitions of \(n\) into parts which are not congruent to \(0\), \(\pm 4\), \(\pm 10\), \(\pm 14\), \(24\) modulo \(48\). We have the following result. **Theorem 1.5**.: _Let \(n\) be a nonnegative integer. Then_ 1. \(Q_{2}(n)=c(n)-d(n-2)\)_;_ 2. \(b_{6}(n)=c(n)+d(n-2)\)_._ The following corollary is a consequence of Theorems 1.3 and 1.5. This result introduces new combinatorial interpretations for the \(6\)-regular partition functions \(b_{6,e}(n)\) and \(b_{6,o}(n)\). **Corollary 1.6**.: _For \(n\geqslant 0\),_ 1. \(b_{6,e}(n)=\begin{cases}c(n),&\text{if $n$ is even}\\ d(n-2),&\text{if $n$ is odd;}\end{cases}\)__ 2. \(b_{6,o}(n)=\begin{cases}c(n),&\text{if $n$ is odd}\\ d(n-2),&\text{if $n$ is even.}\end{cases}\)__ From Corollay 1.6 we can obtain other combinatorial interpretations for the restricted partition functions \(c(n)\) and \(d(n)\). **Definition 4**.: Let \(n\) be a nonnegative integer. We define 1. \(b_{6,ee}(n)\) to be the number of \(6\)-regular partitions of \(n\) with an even number of even parts; 2. \(b_{6,eo}(n)\) to be the number of \(6\)-regular partitions of \(n\) with an odd number of even parts. Clearly \(b_{6}(n)=b_{6,ee}(n)+b_{6,eo}(n)\). For example, the \(6\)-regular partitions of \(7\) with an even number of even parts are: \[(7),\ (5,1,1),\ (4,2,1),\ (3,3,1),\ (3,2,2),\] \[(3,1,1,1,1),\ (2,2,1,1,1),\ (1,1,1,1,1,1),\] while the \(6\)-regular partitions of \(7\) with an odd number of even parts are: \[(5,2),\ (4,3),\ (4,1,1,1),\ (3,2,1,1),(2,2,2,1),\ (2,1,1,1,1,1).\] We see that \(b_{6,ee}(7)=8\) and \(b_{6,eo}(7)=6\). Since the parity of the number of odd parts in a partition of \(n\) is determined by the parity of \(n\), we have \(b_{6,ee}(n)\) equals \(b_{6,e}(n)\) (respectively \(b_{6,o}(n)\)) if \(n\) is even (respectively odd); and similarly for \(b_{6,eo}(n)\). Thus, we have the following equivalent form of Corollary 1.6. **Corollary 1.7**.: _For \(n\geqslant 0\)_ 1. \(c(n)=b_{6,ee}(n)\)_;_ 2. \(d(n)=b_{6,eo}(n+2)\)_._ In [4], while investigating the truncated form of Euler's pentagonal number theorem, \[(q;q)_{\infty}=\sum_{n=-\infty}^{\infty}(-1)^{n}\,q^{n(3n-1)/2}, \tag{6}\] G. E. Andrews and M. 
Merca introduced the partition function \(M_{k}(n)\), which counts the number of partitions of \(n\) where \(k\) is the least positive integer that is not a part and there are more parts \(>k\) than there are parts \(<k\). For instance, we have \(M_{3}(18)=3\) because the three partitions in question are \[(5,5,5,2,1),\ (6,5,4,2,1),\ (7,4,4,2,1).\] Recently, Xia and Zhao [50] defined \(\widetilde{P}_{k}(n)\) to be the number of partitions of \(n\) in which every part \(\leqslant k\) appears at least once and the first part larger that \(k\) appears at least \(k+1\) times. For example, \(\widetilde{P}_{2}(17)=9\), and the partitions in question are \[(5,3,3,3,2,1),\ (4,4,4,2,2,1),\ (4,4,4,2,1,1,1),\] \[(4,3,3,3,2,1,1),\ (3,3,3,3,2,2,1),\ (3,3,3,3,2,1,1,1),\] \[(3,3,3,2,2,2,1,1),\ (3,3,3,2,2,1,1,1,1),\ (3,3,3,2,1,1,1,1,1).\] Considering (1), we easily deduce that the 6-regular partition function \(b_{6}(n)\) is closely related to Euler's partition function \(p(n)\), i.e., \[b_{6}(n)=\sum_{j=-\infty}^{\infty}(-1)^{j}\,p\big{(}n-3j(3j-1)\big{)}. \tag{7}\] There are two more general results for which identity (7) is the limiting cases \(k\to\infty\). **Theorem 1.8**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k}\left(b_{6}(n)-\sum_{j=-(k-1)}^{k}(-1)^{j}\,p\big{(}n-3j(3j-1)\big{)} \right)=\sum_{j=0}^{\lfloor n/6\rfloor}b_{6}(n-6j)\,M_{k}(j).\] **Theorem 1.9**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k-1}\left(b_{6}(n)-\sum_{j=-k}^{k}(-1)^{j}\,p\big{(}n-3j(3j-1)\big{)} \right)=\sum_{j=0}^{\lfloor n/6\rfloor}b_{6}(n-6j)\,\widetilde{P}_{k}(j).\] On the other hand, by (1) and (6), we can easily derive a linear recurrence relation similar to the Euler recurrence relation for \(p(n)\), i.e., \[\sum_{j=-\infty}^{\infty}(-1)^{j}\,b_{6}\big{(}n-j(3j-1)/2\big{)}=\begin{cases}(- 1)^{k},&\text{if $n=3k(3k-1)$, $k\in\mathbb{Z}$,}\\ 0,&\text{otherwise.}\end{cases} \tag{8}\] **Remark 1**.: If we denote by \(p_{<6}(n)\) the number of partitions of \(n\) in which parts occur at most five times, Glaischer's bijection shows combinatorially that \(b_{6}(n)=p_{<6}(n)\), for all \(n\geqslant 0\). Then, the first proof of Theorem 1.1 in [14] with 4 replaced by 6 gives a combinatorial proof of (8). Apart from this recurrence relation, there is another linear recurrence relation for \(b_{6}(n)\). For any integer \(k\), let \[\rho_{k}:=\begin{cases}-2,&\text{if $k\equiv 1\pmod{3}$},\\ 1,&\text{otherwise.}\end{cases}\] **Theorem 1.10**.: _For \(n\geqslant 0\),_ \[\sum_{j=0}^{\infty}\rho_{j}\,b_{6}\big{(}n-j(j+1)/2\big{)}=\begin{cases}(-1)^{ k},&\text{if $n=k(3k-2)$, $k\in\mathbb{Z}$,}\\ 0,&\text{otherwise.}\end{cases}\] As a consequence of Theorem 1.10, we remark the following parity result which involves the generalized octagonal numbers, \(n(3n\pm 2)\). **Corollary 1.11**.: _For \(n\geqslant 0\),_ \[\sum_{j=-\infty}^{\infty}b_{6}\big{(}n-3j(3j-1)/2\big{)}\equiv 1\pmod{2}\] _if and only if \(n\) is a generalized octagonal number._ In analogy with (7), we have the following result which shows that the partition function \(Q_{2}(n)\) can be express in terms of Euler's partition function \(p(n)\) in two different ways. **Theorem 1.12**.: _For \(n\geqslant 0\),_ 1. \(Q_{2}(n)=\sum_{j=0}^{\infty}\rho_{j}\,p\big{(}n-j(j+1)\big{)}\)_;_ 2. 
\(Q_{2}(n)=\sum_{j=-\infty}^{\infty}p\left(\frac{n-j(3j-2)}{3}\right)\)_,_ _where_ \(p(x)=0\) _when_ \(x\) _is not a nonnegative integer._ **Remark 2**.: Using the notation of Andrews and Newman [7], \(\operatorname{mex}_{2,2}(\lambda)\) denotes the smallest even positive integer that is not a part of \(\lambda\). We denote by \(pm_{2j}(n)\) (respectively \(pm_{>2j}(n)\)) the number of partitions \(\lambda\) of \(n\) with \(\operatorname{mex}_{2,2}(\lambda)=2j\) (respectively \(\operatorname{mex}_{2,2}(\lambda)>2j\)). If \(\lambda\) is a partition of \(n-j(j+1)\) then \(\lambda\cup(2j,2(j-1),\ldots,4,2)\) is a partition of \(n\) with \(\operatorname{mex}_{2,2}(\lambda)>2j\). Hence \[p\big{(}n-j(j+1)\big{)}-p\big{(}n-(j+1)(j+2)\big{)}=pm_{2j}(n).\] Then, Theorem 1.12 (i) is equivalent to the statement that \(Q_{2}(n)\) equals the number of partitions \(\lambda\) of \(n\) with \(\operatorname{mex}_{2,2}(\lambda)\equiv 2\pmod{6}\) minus the number of partitions \(\lambda\) of \(n\) with \(\operatorname{mex}_{2,2}(\lambda)\equiv 4\pmod{6}\). Theorem 1.12 (i) allows us to derive the following congruence identities. **Corollary 1.13**.: _For \(n\geqslant 0\),_ 1. \(\sum_{j=-\infty}^{\infty}p\big{(}n-3j(3j-1)\big{)}\equiv Q_{2}(n) \pmod{2}\)_;_ 2. \(\sum_{j=0}^{\infty}p\big{(}n-j(j+1)\big{)}\equiv Q_{2}(n) \pmod{3}\)_._ Theorem 1.12 (ii) can be considered an identity of Watson type. More details about identities of Watson type can be found in [8]. In analogy with Theorem 1.10, we have the following linear recurrence relations for the partition function \(Q_{2}(n)\). **Theorem 1.14**.: _For \(n\geqslant 0\),_ 1. \(\sum_{j=-\infty}^{\infty}(-1)^{j}\,Q_{2}\big{(}n-j(3j-1)/2\big{)}= \begin{cases}\rho_{k},&\text{if $n=k(k+1)$, $k\in\mathbb{N}_{0}$}\\ 0,&\text{otherwise;}\end{cases}\)__ 2. \(\sum_{j=-\infty}^{\infty}(-1)^{j}\,Q_{2}\big{(}n-3j(3j-1)/2\big{)}= \begin{cases}1,&\text{if $n=k(3k-2)$, $k\in\mathbb{Z}$}\\ 0,&\text{otherwise.}\end{cases}\)__ Theorem 1.14 (ii) provides a simple and reasonably efficient way to compute the value of \(Q_{2}(n)\). The number of terms in this linear recurrence relation is about \(\sqrt{8n/9}\). In fact, computing the value of \(Q_{2}(n)\) with this linear recurrence relation requires all the values of \(Q_{2}(k)\) with \(k<n\). The rest of this paper is organized as follows. Theorem 1.1 will be proved in Section 2. In Sections 3-7, we will provide proofs of Theorems 1.3, 1.5, 1.8, 1.10, 1.12 and 1.14. Our proof of these theorems rely on generating functions. For Theorems 1.3, 1.12 (ii), and 1.14 (ii) we also give combinatorial proofs. (It would be very interesting to find combinatorial proofs for the remaining theorems.) In the last section of this paper, we propose as conjectures new infinite families of linear inequalities for the partition functions \(p(n)\), \(b_{6}(n)\), and \(Q_{2}(n)\). ## 2 Proof of Theorem 1.1 Let \[f(-q)=\sum_{n=-\infty}^{\infty}(-1)^{n}q^{\frac{n(3n+1)}{2}}=(q;q)_{\infty}\] and \[\psi(q)=\sum_{n=0}^{\infty}q^{\binom{n+1}{2}}=\frac{(q^{2};q^{2})_{\infty}}{(q; q^{2})_{\infty}}\] be Ramanujan's theta functions. 
As mentioned in [23] and also easily seen directly, \[\sum_{n=0}^{\infty}b_{6}(n)q^{n} =\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\] \[\equiv\frac{(q^{2};q^{2})_{\infty}^{2}}{(q;q^{2})_{\infty}}\pmod{3}\] \[=f(-q^{2})\psi(q)\pmod{3}.\] We rewrite the above expression as \[f(-q^{2})\psi(q)=(q^{2};q^{2})_{\infty}\frac{(q^{2};q^{4})_{\infty}(q^{4};q^{4})_{\infty}}{(q;q^{2})_{\infty}}=(q^{2};q^{2})_{\infty}(q^{4};q^{4})_{\infty}(-q;q^{2})_{\infty}.\] Replacing \(q\) by \(-q\), we obtain \[f(-q^{2})\psi(-q)=\frac{(q^{2};q^{2})_{\infty}^{2}}{(-q;q^{2})_{\infty}}=(q^{2};q^{2})_{\infty}(q^{4};q^{4})_{\infty}(q;q^{2})_{\infty}=(q;q)_{\infty}(q^{4};q^{4})_{\infty}.\] Hence, \[\sum_{n=0}^{\infty}b_{6}(n)(-q)^{n}\equiv(q;q)_{\infty}(q^{4};q^{4})_{\infty}\pmod{3}.\] Let \(\alpha\) be a nonnegative integer. Suppose \(p_{i}\geqslant 5\), \(1\leqslant i\leqslant\alpha+1\) are primes, \(p_{\alpha+1}\equiv 3\pmod{4}\) and \(j\not\equiv 0\pmod{p_{\alpha+1}}\). Given \(n\geqslant 0\), we set \[m_{n}:=p_{1}^{2}\cdots p_{\alpha+1}^{2}n+\frac{p_{1}^{2}\cdots p_{\alpha}^{2}p_{\alpha+1}(24j+5p_{\alpha+1})-5}{24}.\] We show that for all nonnegative integers \(n\) the coefficient of \(q^{m_{n}}\) in \(f(-q^{2})\psi(-q)\) is zero. We use Euler's pentagonal number theorem (6) twice to see that \[f(-q^{2})\psi(-q)=(q;q)_{\infty}(q^{4};q^{4})_{\infty}=\sum_{i,j=-\infty}^{\infty}(-1)^{i+j}q^{\frac{i(3i+1)}{2}+4\cdot\frac{j(3j+1)}{2}}.\] We consider the equation \(\frac{i(3i+1)}{2}+4\cdot\frac{j(3j+1)}{2}=m_{n}\) which is equivalent to \[a^{2}+(2b)^{2}=24m_{n}+5 \tag{9}\] with \(a=6i+1\), \(b=6j+1\). Since \(j\not\equiv 0\pmod{p_{\alpha+1}}\), it follows that \(p_{\alpha+1}\mid 24m_{n}+5\) and \(p_{\alpha+1}\) appears in the factorization of \(24m_{n}+5\) with odd exponent. Then, equation (9) has no solution and the coefficient of \(q^{m_{n}}\) in \(f(-q^{2})\psi(-q)\) is zero. Hence \(b_{6}(m_{n})\equiv 0\pmod{3}\). This concludes the proof of Theorem 1.1. **Remark 3**.: In the proof of Theorem 1.1 we reduced the congruence problem to a question of representing an integer as a sum of two squares. The proof of [23, Theorem 2.3] relies on a different Diophantine equation. We note that [23, Theorem 2.2] is a congruence result for \(b_{3}(n)\), the number of \(3\)-regular partitions of \(n\). It is easy to see that the proof of [23] reduces to representing an integer as the sum of two squares. Thus the same argument as in the proof of Theorem 1.1 can be used to show that the congruence modulo \(3\) in [23, Theorem 2.2] holds in greater generality, i.e., only the prime \(p_{\alpha+1}\) must be congruent to \(3\) modulo \(4\). For the convenience of the reader, we give the general statement below. **Theorem 2.1**.: _Let \(\alpha\) be a nonnegative integer and let \(p_{i}\geqslant 5\), \(1\leqslant i\leqslant\alpha+1\) be primes. If \(p_{\alpha+1}\equiv 3\pmod{4}\) and \(j\not\equiv 0\pmod{p_{\alpha+1}}\), then for all integers \(n\geqslant 0\)_ \[b_{3}\left(p_{1}^{2}\cdots p_{\alpha}^{2}p_{\alpha+1}^{2}n+\frac{p_{1}^{2}\cdots p_{\alpha}^{2}p_{\alpha+1}(12j+p_{\alpha+1})-1}{12}\right)\equiv 0\pmod{3}.\] ## 3 Proof of Theorem 1.3 ### Analytic proof Define \[F(z,q)=\prod_{k=0}^{\infty}\frac{1}{(1-zq^{6k+1})(1-zq^{6k+2})(1-zq^{6k+3})(1-zq^{6k+4})(1-zq^{6k+5})}.\] On the other hand, we have \[F(z,q)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}b_{6}(n,m)\,z^{m}\,q^{n},\] where \(b_{6}(n,m)\) is the number of partitions of \(n\) with \(m\) parts all of which are not congruent to \(0\) modulo \(6\). 
Thus, considering the generating functions of \(b_{6}(n,m)\) and \(Q_{2}(n)\), we can write \[F(-1,q)=\sum_{n=0}^{\infty}\left(b_{6,e}(n)-b_{6,o}(n)\right)q^{n}\] and \[F(-1,q)=\frac{1}{(-q,-q^{2},-q^{3},-q^{4},-q^{5};q^{6})_{\infty}}\] \[=\frac{(-q^{6};q^{6})_{\infty}}{(-q;q)_{\infty}}\] \[=(q;q^{2})_{\infty}\,(-q^{6};q^{6})_{\infty}\] \[=\sum_{n=0}^{\infty}(-1)^{n}\,Q_{2}(n)\,q^{n},\] where we have invoked the Euler identity [2, (1.2.5)] \[\frac{1}{(q;q^{2})_{\infty}}=(-q;q)_{\infty}.\] ### Combinatorial proof We remark first that the pair \((S_{1},S_{2})\) with \(S_{1}=\{n\in\mathbb{N}:n\not\equiv 0\pmod{6}\}\) and \(S_{2}=\{n\in\mathbb{N}:n\not\equiv\pm 2\pmod{6}\}\) is not an Euler pair and the statement of Theorem 1.3 is not a special case of Theorem 3.1 of [13]. However, the ideas used in the proof of [13, Theorem 3.1] can be used here. Given a partition \(\lambda\), denote by \(\ell(\lambda)\) the number of parts in \(\lambda\). Note that in a \(6\)-regular partition, even parts are congruent to \(\pm 2\) modulo \(6\). Let \(\mathcal{B}^{\prime}_{6}(n)\) be the set of \(6\)-regular partitions \(\lambda\) of \(n\) such that \(\lambda\) has at least one even part or at least one repeated part which is not congruent to \(3\) modulo \(6\). Moreover, denote by \(\mathcal{B}^{\prime}_{6,e}(n)\), respectively \(\mathcal{B}^{\prime}_{6,o}(n)\), the subset of partitions in \(\mathcal{B}^{\prime}_{6}(n)\) with \(\ell(\lambda)\) even, respectively odd. We define an involution \(\varphi\) on \(\mathcal{B}^{\prime}_{6}(n)\) that reverses the parity of \(\ell(\lambda)\). Start with \(\lambda\in\mathcal{B}^{\prime}_{6}(n)\). We denote by \(r\) the largest repeated part of \(\lambda\) that is not congruent to \(3\) modulo \(6\) and by \(e\) the largest even part of \(\lambda\). If \(r\) or \(e\) do not exist, we set them equal to \(0\). 1. If \(2r>e\), we define \(\varphi(\lambda)\) to be the partition obtained from \(\lambda\) by replacing two parts equal to \(r\) by a single part equal to \(2r\). Note that, since \(r\not\equiv 3\pmod{6}\), we have \(2r\not\equiv 0\pmod{6}\). Thus, \(\varphi(\lambda)\in\mathcal{B}^{\prime}_{6}(n)\). 2. If \(2r\leqslant e\), we define \(\varphi(\lambda)\) to be the partition obtained from \(\lambda\) by replacing one part equal to \(e\) by two parts equal to \(e/2\). Note that since \(e\equiv\pm 2\pmod{6}\), we have \(e/2\not\equiv 0,3\pmod{6}\). Thus, \(\varphi(\lambda)\in\mathcal{B}^{\prime}_{6}(n)\). Since \(\varphi:\mathcal{B}^{\prime}_{6}(n)\to\mathcal{B}^{\prime}_{6}(n)\) is an involution that reverses the parity of \(\ell(\lambda)\), we have that \(|\mathcal{B}^{\prime}_{6,e}(n)|=|\mathcal{B}^{\prime}_{6,o}(n)|\). Let \(\mathcal{Q}^{\prime}_{2}(n)\) be the set of partitions \(\lambda\in\mathcal{B}_{6}(n)\) with odd parts in which only parts congruent to \(3\) modulo \(6\) may be repeated. Since all parts of \(\lambda\) are odd, \(\ell(\lambda)\equiv n\pmod{2}\). Thus, \[b_{6,e}(n)-b_{6,o}(n)=(-1)^{n}|\mathcal{Q}^{\prime}_{2}(n)|.\] Finally, we create a bijection \(\psi:\mathcal{Q}^{\prime}_{2}(n)\to\mathcal{Q}_{2}(n)\), where \(\mathcal{Q}_{2}(n)\) is the set of partitions of \(n\) with distinct parts not congruent to \(\pm 2\) modulo \(6\). 
Here and throughout, if \(k\) is a positive integer and \(\eta\) is a partition with all parts divisible by \(k\), we write \(\eta_{/k}\) for the partition whose parts are the parts of \(\eta\) divided by \(k\). For any partition \(\eta\), we denote by \(k\eta\) the partition whose parts are the parts of \(\eta\) multiplied by \(k\). Let \(\lambda\in\mathcal{Q}_{2}^{\prime}(n)\). Here and throughout, by the union of two partitions we mean the union of their multisets of parts arranged in nondecreasing order. Write \(\lambda=(\alpha,\beta)\) where \(\alpha\cup\beta=\lambda\), \(\alpha\) is a partition into distinct parts, and \(\beta\) is a partition whose parts have even multiplicity. Thus, all parts of \(\beta\) are congruent to \(3\) modulo \(6\) and \(\beta_{/3}\) is a partition into odd parts each with even multiplicity. We denote by \(\varphi_{Gl}\) Glaisher's bijection which maps a partition of \(n\) with odd parts to a partition of \(n\) into distinct parts. Then, \(\varphi_{Gl}(\beta_{/3})\) is a partition with even distinct parts. The partition \(3\varphi_{Gl}(\beta_{/3})\) has distinct parts all congruent to \(0\) modulo \(6\). Set \(\psi(\lambda):=\alpha\cup 3\varphi_{Gl}(\beta_{/3})\). Then, \(\psi\) is a bijection from \(\mathcal{Q}_{2}^{\prime}(n)\) to \(\mathcal{Q}_{2}(n)\), which completes the proof of the theorem. **Remark 4**.: We note that, in fact, the involution \(\varphi\) in the combinatorial proof above reverses the parity of the number of even parts of a partition. Hence, the combinatorial proof above is also a proof for the following corollary of Theorem 1.3. **Corollary 3.1**.: _For \(n\geqslant 0\), \(Q_{2}(n)=b_{6,ee}(n)-b_{6,eo}(n)\)._ ## 4 Proof of Theorem 1.5 The Watson quintuple product identity [15, 47] states that \[\sum_{n=-\infty}^{\infty}q^{n(3n+1)/2}\,(z^{-3n}-z^{3n+1})=(z,q/z,q;q)_{\infty}\,(qz^{2},q/z^{2};q^{2})_{\infty}. 
\tag{10}\] Elementary techniques in the theory of partitions give the following generating functions \[\sum_{n=0}^{\infty}c(n)\,q^{n} =\frac{(q^{2},q^{20},q^{22},q^{24},q^{26},q^{28},q^{46},q^{48};q^ {48})_{\infty}}{(q;q)_{\infty}}\] \[=\frac{(q^{2},q^{22},q^{24};q^{24})_{\infty}\,(q^{20},q^{28};q^{4 8})_{\infty}}{(q;q)_{\infty}}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}q^{12n(3n+1)} \,(q^{-6n}-q^{6n+2})\] \[\text{(By (\ref{eq:10}), with $q$ replaced by $q^{24}$ and $z$ replaced by $q^{2}$)}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{6n(6n+1)}-q ^{(6n+1)(6n+2)})\] and \[\sum_{n=0}^{\infty}d(n)\,q^{n} =\frac{(q^{4},q^{10},q^{14},q^{24},q^{34},q^{38},q^{44},q^{48};q ^{48})_{\infty}}{(q;q)_{\infty}}\] \[=\frac{(q^{10},q^{14},q^{24};q^{24})_{\infty}\,(q^{4},q^{44};q^{48})_{ \infty}}{(q;q)_{\infty}}\] \[=\frac{(q^{10},q^{14},q^{24};q^{24})_{\infty}\,(q^{4},q^{44};q^{48} )_{\infty}}{(q;q)_{\infty}}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}q^{12n(3n+1)} \,(q^{-30n}-q^{30n+10})\] \[\text{(By (\ref{eq:10}), with $q$ replaced by $q^{24}$ and $z$ replaced by $q^{10}$)}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{6n(6n-3)}- q^{(6n+2)(6n+5)}).\] We can write \[\sum_{n=0}^{\infty}\left(c(n)-d(n-2)\right)q^{n}\] \[=\sum_{n=0}^{\infty}c(n)\,q^{n}-\sum_{n=0}^{\infty}d(n)\,q^{n+2}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{6n(6n+1)}- q^{(6n+1)(6n+2)}-q^{6n(6n-3)+2}+q^{(6n+2)(6n+5)+2})\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{6n(6n+1)}- q^{(6n+1)(6n+2)}-q^{(6n-1)(6n-2)}+q^{(6n+3)(6n+4)})\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{3n(3n-1)}- q^{(3n+1)(3n+2)})\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}q^{3n(3n+1)}(q^ {-6n}-q^{6n+2})\] \[=\frac{(q^{2},q^{4},q^{6};q^{6})_{\infty}\,(q^{2},q^{10};q^{12})_{ \infty}}{(q;q)_{\infty}}\] \[\text{(By (\ref{eq:10}), with $q$ replaced by $q^{6}$ and $z$ replaced by $q^{2}$)}\] \[=\frac{(q^{2};q^{2})_{\infty}\,(q^{2},q^{10};q^{12})_{\infty}}{(q; q^{2})_{\infty}\,(q^{2};q^{2})_{\infty}}\] \[=(-q;q)_{\infty}\,(q^{2},q^{10};q^{12})_{\infty}\] \[=(-q;q^{2})_{\infty}\,(-q^{2};q^{2})_{\infty}\,(q^{2},q^{10};q^{12 })_{\infty}\] \[=(-q;q^{2})_{\infty}\,(-q^{6};q^{6})_{\infty}\,(-q^{2},-q^{4};q^{ 6})_{\infty}\,(q^{2},q^{10};q^{12})_{\infty}\] \[=(-q;q^{2})_{\infty}\,(-q^{6};q^{6})_{\infty}\,\frac{(q^{2},q^{4}, q^{8},q^{10};q^{12})_{\infty}}{(q^{2},q^{4};q^{6})_{\infty}}\] \[=\sum_{n=0}^{\infty}Q_{2}(n)\,q^{n}.\] and \[\sum_{n=0}^{\infty}\left(c(n)+d(n-2)\right)q^{n}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{6n(6n+1)}-q^{(6n +1)(6n+2)}+q^{(6n-1)(6n-2)}-q^{(6n+3)(6n+4)})\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(q^{6n(6n+1)}-q^ {(6n+3)(6n+4)})\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}(-1)^{n}q^{3n(3 n-1)}\] \[=\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\] (By (6) with \[q\] replaced by \[q^{6}\] ) \[=\sum_{n=0}^{\infty}b_{6}(n)\,q^{n}.\] This concludes the proof. ## 5 Proof of Theorems 1.8 and 1.9 G. E. Andrews and M. 
Merca [4] proved the following truncated form of (6): For any \(k\geqslant 1\), \[\frac{1}{(q;q)_{\infty}}\sum_{n=-(k-1)}^{k}(-1)^{n}\,q^{n(3n-1)/2}=1+(-1)^{k-1 }\sum_{n=k}^{\infty}\frac{q^{\binom{k}{2}+(k+1)n}}{(q;q)_{n}}\begin{bmatrix} n-1\\ k-1\end{bmatrix}, \tag{11}\] where \[\begin{bmatrix}n\\ k\end{bmatrix}=\begin{cases}\frac{(q;q)_{n}}{(q;q)_{k}(q;q)_{n-k}},&\text{if }0 \leqslant k\leqslant n,\\ 0,&\text{otherwise}.\end{cases}\] We note that the series on the right hand side of (11) is the generating function for \(M_{k}(n)\), i.e., \[\sum_{n=0}^{\infty}M_{k}(n)\,q^{n}=\sum_{n=k}^{\infty}\frac{q^{\binom{k}{2}+( k+1)n}}{(q;q)_{n}}\begin{bmatrix}n-1\\ k-1\end{bmatrix}.\] By (11), with \(q\) replaced by \(q^{6}\), we get \[\frac{1}{(q^{6};q^{6})_{\infty}}\sum_{n=-(k-1)}^{k}(-1)^{n}\,q^{3n(3n-1)}=1+(- 1)^{k-1}\sum_{n=0}^{\infty}M_{k}(n)\,q^{6n}.\] Multiplying both sides of this identity by \[\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}},\] we obtain \[\frac{1}{(q;q)_{\infty}}\sum_{n=-(k-1)}^{k}(-1)^{n}\,q^{3n(3n-1)}-\frac{(q^{6};q^{ 6})_{\infty}}{(q;q)_{\infty}}=(-1)^{k-1}\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{ \infty}}\sum_{n=0}^{\infty}M_{k}(n)\,q^{6n}\] or \[\left(\sum_{n=0}^{\infty}p(n)\,q^{n}\right)\left(\sum_{n=-(k-1)}^{ k}(-1)^{n}\,q^{3n(3n-1)}\right)-\sum_{n=0}^{\infty}b_{6}(n)\,q^{n}\] \[\qquad=(-)^{k-1}\left(\sum_{n=0}^{\infty}b_{6}(n)\,q^{n}\right) \left(\sum_{n=0}^{\infty}M_{k}(n)\,q^{6n}\right).\] The assertion of Theorem 1.8 follows by comparing coefficients of \(q^{n}\) on both sides of this equation. The proof of Theorem 1.9 is quite similar to the proof of Theorem 1.8. In [50], E. X. W. Xia and X. Zhao considered Euler's pentagonal number theorem (6) and they proved the following truncated form: For any \(k\geqslant 1\), \[\frac{1}{(q;q)_{\infty}}\sum_{n=-k}^{k}(-1)^{n}\,q^{n(3n-1)/2}=1+(-1)^{k}\, \frac{q^{k(k+1)/2}}{(q;q)_{k}}\sum_{n=0}^{\infty}\frac{q^{(n+k+1)(k+1)}}{(q^{ n+k+1};q)_{\infty}}. \tag{12}\] We remark that the series on the right hand side of (12) is the generating function for \(\widetilde{P}_{k}(n)\), i.e., \[\sum_{n=0}^{\infty}\widetilde{P}_{k}(n)\,q^{n}=\frac{q^{k(k+1)/2}}{(q;q)_{k}} \sum_{n=0}^{\infty}\frac{q^{(n+k+1)(k+1)}}{(q^{n+k+1};q)_{\infty}}.\] By (12), with \(q\) replaced by \(q^{6}\), we get \[\frac{1}{(q^{6};q^{6})_{\infty}}\sum_{n=-k}^{k}(-1)^{n}\,q^{3n(3n-1)}=1+(-1)^ {k}\,\frac{q^{3k(k+1)}}{(q^{6};q^{6})_{k}}\sum_{n=0}^{\infty}\frac{q^{6(n+k+1 )(k+1)}}{(q^{6(n+k+1)};q^{6})_{\infty}}.\] Multiplying both sides of this identity by the generating function of \(b_{6}(n)\), we obtain \[(-1)^{k}\left(\Big{(}\sum_{n=1}^{\infty}p(n)\,q^{n}\Big{)}\Big{(} \sum_{n=-k}^{k}(-1)^{n}\,q^{n(3n-1)/2}\Big{)}-\sum_{n=1}^{\infty}b_{6}(n)\,q^{ n}\right)\] \[\qquad\qquad\qquad\qquad\qquad=\left(\sum_{n=1}^{\infty}b_{6}(n) \,q^{n}\right)\left(\sum_{n=0}^{\infty}\widetilde{P}_{k}(n)\,q^{6n}\right).\] The proof of Theorem 1.9 follows easily considering Cauchy's multiplication of two power series. Proof of Theorem 1.10 The Jacobi triple product identity (cf. [20, Eq. (1.6.1)]) states that \[\sum_{n=-\infty}^{\infty}(-z)^{n}\,q^{n(n-1)/2}=(z,q/z,q;q)_{\infty}. 
\tag{13}\] Considering (13) with \(q\) replaced by \(q^{6}\) and \(z\) replaced by \(q\), we can write \[\sum_{n=-\infty}^{\infty}(-1)^{n}\,q^{n(3n-2)} =(q,q^{5},q^{6};q^{6})_{\infty}\] \[=\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\cdot(q,q^{5};q^{6})_{\infty}\,(q,q^{2},q^{3};q^{3})_{\infty}\] \[=\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}q^{3n(3n+1)/2}\,(q^{-3n}-q^{3n+1})\] \[\qquad\text{(By (\ref{eq:10}), with $q$ replaced by $q^{3}$ and $z$ replaced by $q$)}\] \[=\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\left(\sum_{n=-\infty}^{\infty}q^{3n(3n-1)/2}-\sum_{n=-\infty}^{\infty}q^{(3n+1)(3n+2)/2}\right)\] \[=\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\left(\sum_{\substack{n\geqslant 0\\ n\not\equiv 1\pmod{3}}}q^{n(n+1)/2}-\sum_{\substack{n\geqslant 0\\ n\equiv 1\pmod{3}}}2\,q^{n(n+1)/2}\right)\] \[=\left(\sum_{n=0}^{\infty}b_{6}(n)\,q^{n}\right)\left(\sum_{n=0}^{\infty}\rho_{n}\,q^{n(n+1)/2}\right)\] \[=\sum_{n=0}^{\infty}\left(\sum_{j=0}^{n}\rho_{j}\,b_{6}\big{(}n-j(j+1)/2\big{)}\right)q^{n}.\] This concludes the proof. ## 7 Proof of Theorems 1.12 and 1.14 ### Analytic proof We have \[\sum_{n=0}^{\infty}Q_{2}(n)\,q^{n} =(-q;q^{2})_{\infty}\,(-q^{6};q^{6})_{\infty}\] \[=\frac{(q^{2};q^{4})_{\infty}}{(q;q^{2})_{\infty}}\,\frac{1}{(q^{6};q^{12})_{\infty}}\] \[=\frac{1}{(q;q)_{\infty}}\cdot(q^{2},q^{10};q^{12})_{\infty}\,(q^{2},q^{4},q^{6};q^{6})_{\infty}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}q^{3n(3n+1)}\,(q^{-6n}-q^{6n+2})\] \[\qquad\text{(by the same computation as in the proof of Theorem 1.10, with $q$ replaced by $q^{2}$)}\] \[=\frac{1}{(q;q)_{\infty}}\sum_{n=0}^{\infty}\rho_{n}\,q^{n(n+1)},\] which proves Theorem 1.12 (i). Similarly, by (5) and the Jacobi triple product identity (13), with \(q\) replaced by \(q^{6}\) and \(z\) replaced by \(-q\), we obtain \[(q^{3};q^{3})_{\infty}\sum_{n=0}^{\infty}Q_{2}(n)\,q^{n}=(-q,-q^{5},q^{6};q^{6})_{\infty}=\sum_{n=-\infty}^{\infty}q^{n(3n-2)},\] which proves Theorem 1.12 (ii). Considering (6), these equations can be rewritten as 
\[\left(\sum_{n=-\infty}^{\infty}(-1)^{n}\,q^{n(3n-1)/2}\right)\left(\sum_{n=0}^{ \infty}Q_{2}(n)\,q^{n}\right)=\sum_{n=0}^{\infty}\rho_{n}\,q^{n(n+1)}\] and \[\left(\sum_{n=-\infty}^{\infty}(-1)^{n}\,q^{3n(3n-1)/2}\right)\left(\sum_{n=0}^ {\infty}Q_{2}(n)\,q^{n}\right)=\sum_{n=-\infty}^{\infty}q^{n(3n-2)}.\] The assertions of Theorem 1.14 follows easily by comparing coefficients of \(q^{n}\) on both sides of these equations. ### Combinatorial proof of Theorem 1.12 (ii) We will use a particular case of [11, Lemma 2.1]. We first introduce some notation. Let \(\mathcal{Q}_{6,1}(n)\) be the set of partitions of \(n\) into distinct parts congruent to \(\pm 1\pmod{6}\)). Let \(\mathcal{W}_{6,1}(n)\) be the set of pairs \((\mu,k(3k-2))\), where \(\mu\) is a partition into parts divisible by \(6\), \(k\in\mathbb{Z}\), and \(|\mu|+k(3k-2)=n\). Then, [11, Lemma 2.1] with \(m=6\) and \(r=1\) gives a bijection \(\xi_{6,1}:\mathcal{Q}_{6,1}(n)\to\mathcal{W}_{6,1}(n)\). Let \(n\) be a nonnegative integer. We create a bijection \[\psi:\mathcal{Q}_{2}(n)\to\bigcup_{k\in\mathbb{Z}}\mathcal{P}\left(\frac{n-j( 3j-2)}{3}\right).\] Start with \(\lambda\in\mathcal{Q}_{2}(n)\). Write \(\lambda=\alpha\cup\beta\), where \(\alpha\) is a partition into distinct parts congruent to \(1\) or \(5\) modulo \(6\), and \(\beta\) is a partition into distinct parts divisible by \(3\). Thus, \(\beta_{/3}\) is a partition with distinct parts and \(\varphi_{Gl}^{-1}(\beta_{/3})\) is a partition with odd parts. Moreover, \(3\varphi_{Gl}^{-1}(\beta_{/3})\) is a partition whose parts are congruent to \(3\) modulo \(6\) and \(|3\varphi_{Gl}^{-1}(\beta_{/3})|=|\beta|\). Let \(\xi_{6,1}(\alpha)=(\mu,k(3k-2))\) for some \(k\in\mathbb{Z}\). Since the parts of \(\mu\) are divisible by \(6\), all parts of \(\mu\cup 3\varphi_{Gl}^{-1}(\beta_{/3})\) are divisible by \(3\). Define \(\psi(\lambda)=(\mu\cup 3\varphi_{Gl}^{-1}(\beta_{/3}))_{/3}\). Since \(|\mu|=|\alpha|-k(3k-2)\) and \(|\alpha|+|\beta|=n\), it follows that \(\psi(\lambda)\) is a partition of \(\frac{n-k(3k-2)}{3}\). For the inverse, let \(k\in\mathbb{Z}\). Start with a partition \(\eta\) of \(\frac{n-k(3k-2)}{3}\). Then \(3\eta\) is a partition of \(n-k(3k-2)\). Write \(3\eta=\mu\cup\pi\), where \(\mu\) (respectively \(\pi\)) has parts congruent to \(0\) (respectively \(3\)) modulo \(6\). We have that \(\xi_{6,1}^{-1}(\mu,k(3k-2))\) is a partition of \(|\mu|+k(3k-2)\) into parts congruent to \(1\) or \(5\) modulo \(6\). The partition \(\pi/3\) has odd parts and \(3\varphi_{Gl}(\pi/3)\) is a partition into distinct parts divisible by \(3\) and \(|3\varphi_{Gl}(\pi/3)|=|\pi|\). Then \(\psi^{-1}(\eta)=\xi_{6,1}^{-1}(\mu,k(3k-2))\cup 3\varphi_{Gl}(\pi/3)\in \mathcal{Q}_{2}(n)\). ### Combinatorial proof of Theorem 1.14 (ii) We denote by \(\mathcal{Q}(n)\) the set of partitions of \(n\) into distinct parts and we set \(\mathcal{Q}:=\cup_{n\geq 0}\mathcal{Q}(n)\), \(\mathcal{Q}_{2}:=\cup_{n\geq 0}\mathcal{Q}_{2}(n)\). and \(\mathcal{Q}_{6,1}:=\cup_{n\geq 0}\mathcal{Q}_{6,1}(n)\). Let \(\varphi_{F}\) be the involution defined by Franklin to give a combinatorial proof of Euler's pentagonal number theorem (see, for example, [2, Theorem 1.6]). 
Let \[\mathcal{A}(n):=\{(\lambda,\mu):\lambda\in\mathcal{Q}_{2},\mu=3\eta\text{ with }\eta\in\mathcal{Q},|\lambda|+|\mu|=n\}\] and \[\mathcal{E}\mathcal{A}(n):=\{(\lambda,\mu)\in\mathcal{A}(n):\mu_{/3}\text{ pentagonal partition}\}.\] Here a pentagonal partition is either \(\emptyset\) or a partition of the form \((2i,2i-1,\ldots,i+1)\) or \((2i-1,2i-2,\ldots,i)\) for some integer \(i>0\). The involution \((\lambda,\mu)\mapsto(\lambda,3\varphi_{F}(\mu_{/3}))\) on \(\mathcal{A}(n)\setminus\mathcal{E}\mathcal{A}(n)\) proves combinatorially that \[\sum_{j=-\infty}^{\infty}(-1)^{j}\,Q_{2}\big{(}n-3j(3j-1)/2\big{)}\] equals \[|\{(\lambda,\mu)\in\mathcal{A}(n):\ell(\mu)\text{ even}\}|-|\{(\lambda,\mu)\in\mathcal{A}(n):\ell(\mu)\text{ odd}\}|. \tag{15}\] We define another involution on a subset of \(\mathcal{A}(n)\) that reverses the parity of the length of the second partition in the pair. Given \((\lambda,\mu)\in\mathcal{A}(n)\), we write \(\lambda=\lambda^{3|}\cup\lambda^{3\nmid}\), where the parts of \(\lambda^{3|}\) are all parts of \(\lambda\) which are divisible by \(3\) and the parts of \(\lambda^{3\nmid}\) are all parts of \(\lambda\) which are not divisible by \(3\). If \(\ell(\lambda^{3|})\not\equiv\ell(\mu)\pmod{2}\), we map \((\lambda^{3|},\lambda^{3\nmid},\mu)\) to \((\mu,\lambda^{3\nmid},\lambda^{3|})\). If \(\ell(\lambda^{3|})\equiv\ell(\mu)\pmod{2}\) and \(\lambda^{3|}\neq\mu\), let \(i\) be the smallest integer such that \(\lambda^{3|}_{i}\neq\mu_{i}\). If \(\lambda^{3|}_{i}>\mu_{i}\), we remove part \(\lambda^{3|}_{i}\) from \(\lambda^{3|}\) and insert a part equal to \(\lambda^{3|}_{i}\) into \(\mu\). If \(\lambda^{3|}_{i}<\mu_{i}\), we remove part \(\mu_{i}\) from \(\mu\) and insert a part equal to \(\mu_{i}\) into \(\lambda^{3|}\). We obtain an involution on the set \(\{(\lambda,\mu)\in\mathcal{A}(n):\lambda^{3|}\neq\mu\}\) that reverses the parity of \(\ell(\mu)\). Thus, (15) equals \[|\{(\lambda,\lambda^{3|})\in\mathcal{A}(n):\ell(\lambda^{3|})\text{ even}\}|-|\{(\lambda,\lambda^{3|})\in\mathcal{A}(n):\ell(\lambda^{3|})\text{ odd}\}|.\] Mapping \((\lambda,\lambda^{3|})=(\lambda^{3|},\lambda^{3\nmid},\lambda^{3|})\) to \((\lambda^{3\nmid},2\lambda^{3|})\) and setting \[\mathcal{B}(n):=\{(\alpha,\beta):\alpha\in\mathcal{Q}_{6,1},\beta=6\gamma\text{ with }\gamma\in\mathcal{Q},|\alpha|+|\beta|=n\}\,,\] we see that (15) equals \[|\{(\alpha,\beta)\in\mathcal{B}(n):\ell(\beta)\text{ even}\}|-|\{(\alpha,\beta)\in\mathcal{B}(n):\ell(\beta)\text{ odd}\}|.\] Let \(\mathcal{C}(n)\) be the set of triples \((\gamma,k(3k-2),\beta)\) such that \(k\in\mathbb{Z}\), \(\gamma\) and \(\beta\) are partitions with parts divisible by \(6\), \(\beta\in\mathcal{Q}\), and \(|\gamma|+k(3k-2)+|\beta|=n\). We define a bijection from \(\mathcal{B}(n)\) to the set \(\mathcal{C}(n)\) by \[(\alpha,\beta)\mapsto(\xi_{6,1}(\alpha),\beta)=(\gamma,k(3k-2),\beta)\] where \(\xi_{6,1}:\mathcal{Q}_{6,1}(n)\to\mathcal{W}_{6,1}(n)\) is the bijection of [11, Lemma 2.1]. Then, (15) equals \[|\{(\gamma,k(3k-2),\beta)\in\mathcal{C}(n):\ell(\beta)\text{ even}\}|-|\{(\gamma,k(3k-2),\beta)\in\mathcal{C}(n):\ell(\beta)\text{ odd}\}|.\] Finally, we define an involution \(\zeta\) on \[\{(\gamma,k(3k-2),\beta)\in\mathcal{C}(n):(\gamma,\beta)\neq(\emptyset,\emptyset)\}.\] Start with \((\gamma,k(3k-2),\beta)\in\mathcal{C}(n)\) with \((\gamma,\beta)\neq(\emptyset,\emptyset)\). If \(\gamma_{1}>\beta_{1}\), remove part \(\gamma_{1}\) from \(\gamma\) and insert a part equal to \(\gamma_{1}\) into \(\beta\). 
If \(\gamma_{1}\leq\beta_{1}\), remove part \(\beta_{1}\) from \(\beta\) and insert a part equal to \(\beta_{1}\) into \(\gamma\). The involution \(\zeta\) reverses the parity of \(\ell(\beta)\). This completes the combinatorial proof of Theorem 1.14(ii). ## 8 Inequalities and open problems Linear inequalities involving partition functions, especially Euler's partition function \(p(n)\), have been the subject of recent studies by G. E. Andrews and M. Merca [4, 5, 6], C. Ballantine and M. Merca [8, 9, 10], C. Ballantine, M. Merca, D. Passary and A. J. Yee [12], V. J. W. Guo and J. Zeng [21], J. Katriel [24], M. Merca [26, 27, 28, 29, 30, 31, 38, 32, 33, 34, 35, 36, 37, 39], M. Merca and J. Katriel [40], M. Merca, C. Wang and A. J. Yee [41], M. Merca and A. J. Yee [42]. For example, G. E. Andrews and M. Merca [4] proved that: for \(n\geqslant 0\), \(k>0\), \[(-1)^{k-1}\sum_{j=-(k-1)}^{k}(-1)^{j}\,p\big{(}n-j(3j-1)/2\big{)}\geqslant 0.\] Recently [5, Corollary 11], the same authors found a new infinite family of linear homogeneous inequalities for \(p(n)\) which involves the triangular numbers: if at least one of \(n\) and \(k\) is odd, \[(-1)^{k-1}\sum_{j=0}^{2k-1}(-1)^{j(j-1)/2}\,p\big{(}n-j(j+1)/2\big{)}\geqslant 0.\] As a consequence of Theorem 1.8, we remark a new infinite family of linear inequalities for \(p(n)\). **Corollary 8.1**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k}\left(b_{6}(n)-\sum_{j=-(k-1)}^{k}(-1)^{j}\,p\big{(}n-3j(3j-1)\big{)} \right)\geqslant 0,\] _with strict inequality if \(n\geqslant 3k(3k+1)\)._ For example, the cases \(k=1\) and \(k=2\) of this corollary provides the following double inequality \[p(n)-p(n-6)-p(n-12)+p(n-30)\leqslant b_{6}(n)\leqslant p(n)-p(n-6). \tag{16}\] In terms of \(\mathrm{mex}_{2,2}\), inequality (16) becomes \[pm_{2}(n)+pm_{4}(n)-pm_{8}(n)-pm_{10}(n)\leqslant b_{6}(n)\leqslant pm_{2}(n)+pm_ {4}(n).\] In this section, inspired by the identity (8) and Theorems 1.10, 1.12 and 1.14, we propose as conjectures new infinite families of linear inequalities for the partition functions \(p(n)\), \(b_{6}(n)\) and \(Q_{2}(n)\). ### Euler's partition function Inspired by Theorem 1.12, for \(k\geqslant 0\) we investigated the following series: \[(-q;q^{2})_{\infty}\,(-q^{6};q^{6})_{\infty}-\frac{1}{(q;q)_{\infty}}\sum_{j=0 }^{k}\rho_{j}\,q^{j(j+1)}.\] There is a substantial amount of numerical evidence to conjecture that this series has nonnegative coefficients if \(k\) is not congruent to \(0\) modulo \(3\) and nonpositive coefficients if \(k\) is congruent to \(0\) modulo \(3\). In addition, we conjecture that the coefficient of \(q^{n}\) in this series is nonzero if and only if \(n\geqslant(k+1)(k+2)\). We have the following combinatorial interpretation of this conjecture. **Conjecture 1**.: _For \(n,k\geqslant 0\),_ 1. \(\sum_{j=0}^{3k}\rho_{j}\,p\big{(}n-j(j+1)\big{)}\geqslant Q_{2}(n)\)_,_ _with strict inequality if_ \(n\geqslant(3k+1)(3k+2)\)_;_ 2. \(\sum_{j=0}^{3k+1}\rho_{j}\,p\big{(}n-j(j+1)\big{)}\leqslant Q_{2}(n)\)_;_ _with strict inequality if_ \(n\geqslant(3k+2)(3k+3)\)_._ Assuming this conjecture, we remark the following double inequality: \[p(n)-2p(n-2)+p(n-6)\leqslant Q_{2}(n)\] \[\leqslant p(n)-2p(n-2)+p(n-6)+p(n-12). 
\tag{17}\] In terms of \(\mathrm{mex}_{2,2}\), inequality (17) becomes \[pm_{2}(n)-pm_{4}(n)\leqslant Q_{2}(n)\leqslant pm_{2}(n)-pm_{4}(n)+pm_{>6}(n).\] ### \(6\)-regular partitions Inspired by the identity (8) and the truncated pentagonal number theorem (11), for \(k>0\) we considered the following series: \[(q^{6};q^{6})_{\infty}-\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}}\sum_{n=- (k-1)}^{k}(-1)^{n}\,q^{n(3n-1)/2}.\] There is a substantial amount of numerical evidence to conjecture that this series has nonnegative coefficients if \(k\) is even and nonpositive coefficients if \(k\) is odd. In addition, we conjecture that the coefficient of \(q^{n}\) in this series is nonzero if and only if \(n\geqslant k(3k+1)/2\). We have the following combinatorial interpretation of this conjecture. For any integer \(n\), let \[\alpha_{n}:=\begin{cases}(-1)^{m},&\text{if $n=3m(3m-1)$, $m\in\mathbb{Z}$},\\ 0,&\text{otherwise}.\end{cases}\] **Conjecture 2**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k}\left(\alpha_{n}-\sum_{j=-(k-1)}^{k}(-1)^{j}\,b_{6}\big{(}n-j(3j-1)/2 \big{)}\right)\geqslant 0,\] _with strict inequality if \(n\geqslant k(3k+1)/2\)_ We remark that this inequality can be rewritten in terms of \(M_{k}(n)\) as follows: for \(n\geqslant 0\), \(k>0\), \[\sum_{j=-\infty}^{\infty}(-1)^{j}\,M_{k}\big{(}n-3j(3j-1)\big{)}\geqslant 0, \tag{18}\] with strict inequality if \(n\geqslant k(3k+1)/2\). In analogy with Conjecture 2, we also make the following conjecture. **Conjecture 3**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k-1}\left(\alpha_{n}-\sum_{j=-k}^{k}(-1)^{j}\,b_{6}\big{(}n-j(3j-1)/2 \big{)}\right)\geqslant 0,\] _with strict inequality if \(n\geqslant k(3k+1)/2\)._ It is easy to see that Conjecture 3 is a weaker version of Conjecture 2. The inequality given by Conjecture 3 can be rewritten in terms of \(\widetilde{P}_{k}(n)\) as follows: for \(n\geqslant 0\), \(k>0\), \[\sum_{j=-\infty}^{\infty}(-1)^{j}\,\widetilde{P}_{k}\big{(}n-3j(3j-1)\big{)} \geqslant 0, \tag{19}\] with strict inequality if \(n\geqslant k(3k+1)/2\). Clearly the inequality (18) implies the inequality (19). Inspired by Theorem 1.10, for \(k\geqslant 0\) we investigated the following series: \[(q,q^{5},q^{6};q^{6})_{\infty}-\frac{(q^{6};q^{6})_{\infty}}{(q;q)_{\infty}} \sum_{j=0}^{k}\rho_{j}\,q^{j(j+1)/2}.\] There is a substantial amount of numerical evidence to conjecture that this series has nonnegative coefficients if \(k\) is not congruent to \(0\) modulo \(3\) and nonpositive coefficients if \(k\) is congruent to \(0\) modulo \(3\). In addition, the coefficient of \(q^{n}\) in this series is nonzero if and only if \(n\geqslant(k+1)(k+2)/2\). We have the following combinatorial interpretation of this conjecture. For any integer \(n\), let \[\beta_{n}:=\begin{cases}(-1)^{m},&\text{if $n=m(3m-2)$, $m\in\mathbb{Z}$,}\\ 0,&\text{otherwise.}\end{cases}\] **Conjecture 4**.: _For \(n,k\geqslant 0\),_ 1. \(\sum_{j=0}^{3k}\rho_{j}\,b_{6}\big{(}n-j(j+1)/2\big{)}\geqslant\beta_{n}\)_,_ _with strict inequality if_ \(n\geqslant(3k+1)(3k+2)/2\)_;_ 2. 
\(\sum_{j=0}^{3k+2}\rho_{j}\,b_{6}\big{(}n-j(j+1)/2\big{)}\leqslant\beta_{n}\)_;_ _with strict inequality if_ \(n\geqslant(3k+2)(3k+3)/2\)_._ Assuming this conjecture, we remark the following double inequality: \[b_{6}(n)-2b_{6}(n-1)+b_{6}(n-3)\leqslant\beta_{n}\leqslant b_{6}(n)-2b_{6}(n -1)+b_{6}(n-3)+b_{6}(n-6).\] ### Partitions into distinct parts \(\not\equiv\pm 2\pmod{6}\) Inspired by Theorem 1.14.(i), for \(k>0\) we considered the following series: \[(q^{2},q^{10};q^{12})_{\infty}\,(q^{2};q^{2})_{\infty}-(-q;q^{2})_{\infty}\,(- q^{6};q^{6})_{\infty}\sum_{j=-(k-1)}^{k}(-1)^{j}q^{j(3j-1)/2}.\] There is a substantial amount of numerical evidence to conjecture that this series has nonnegative coefficients if \(k\) is even and nonpositive coefficients if \(k\) is odd. We have the following combinatorial interpretation of this conjecture. For any nonnegative integer \(n\), let \[\gamma_{n}:=\begin{cases}\rho_{m},&\text{if $n=m(m+1)$, $m\in\mathbb{N}_{0}$,}\\ 0,&\text{otherwise.}\end{cases}\] **Conjecture 5**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k}\left(\gamma_{n}-\sum_{j=-(k-1)}^{k}(-1)^{j}\,Q_{2}\big{(}n-j(3j-1)/2 \big{)}\right)\geqslant 0.\] We remark that this inequality can be rewritten in terms of \(M_{k}(n)\) as follows: for \(n\geqslant 0\), \(k>0\), \[\sum_{j=0}^{\infty}\rho_{j}\,M_{k}\big{(}n-j(j+1)\big{)}\geqslant 0. \tag{20}\] We also make the following conjecture which is weaker than Conjecture 5. **Conjecture 6**.: _For \(n\geqslant 0\), \(k>0\),_ \[(-1)^{k-1}\left(\gamma_{n}-\sum_{j=-k}^{k}(-1)^{j}\,Q_{2}\big{(}n-j(3j-1)/2 \big{)}\right)\geqslant 0.\] This inequality can be rewritten in terms of \(\widetilde{P}_{k}(n)\) as follows: for \(n\geqslant 0\), \(k>0\), \[\sum_{j=0}^{\infty}\rho_{j}\,\widetilde{P}_{k}\big{(}n-j(j+1)\big{)}\geqslant 0. \tag{21}\] It is clear that the inequality (20) implies the inequality (21). Regarding inequalities (18) - (21), it would be very appealing to have combinatorial interpretations for their sums.
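The conjectures above are said to rest on substantial numerical evidence, and several of the results in this paper are themselves well suited to mechanical verification. The following Python sketch is our addition, not part of the original text, and all function names are ours: it tabulates \(p(n)\), \(b_{6}(n)\) and \(Q_{2}(n)\) directly from their generating functions, checks the expansion of (5), Corollary 1.4 and Theorem 1.12 (i), and uses the recurrence of Theorem 1.14 (ii) as an independent way of computing \(Q_{2}(n)\).

```python
# Minimal numerical companion (our addition, not from the paper): tabulate p(n),
# b_6(n) and Q_2(n) from their generating functions and cross-check several results.

def coeffs(parts, N, distinct=False):
    """Coefficients up to q^N of prod_{k in parts} 1/(1-q^k), or prod (1+q^k) if distinct."""
    c = [0] * (N + 1)
    c[0] = 1
    for k in parts:
        if distinct:
            for n in range(N, k - 1, -1):      # each part used at most once
                c[n] += c[n - k]
        else:
            for n in range(k, N + 1):          # parts may be repeated
                c[n] += c[n - k]
    return c

N = 300
p  = coeffs(range(1, N + 1), N)                                          # Euler's p(n)
b6 = coeffs([k for k in range(1, N + 1) if k % 6 != 0], N)                # 6-regular partitions
Q2 = coeffs([k for k in range(1, N + 1) if k % 6 not in (2, 4)], N,       # distinct parts not
            distinct=True)                                                # congruent to +-2 mod 6

assert Q2[:15] == [1, 1, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 5, 5, 5]           # expansion of (5)
assert all((Q2[n] - b6[n]) % 2 == 0 for n in range(N + 1))                # Corollary 1.4

rho = lambda j: -2 if j % 3 == 1 else 1
for n in range(N + 1):                                                    # Theorem 1.12 (i)
    s, j = 0, 0
    while j * (j + 1) <= n:
        s += rho(j) * p[n - j * (j + 1)]
        j += 1
    assert s == Q2[n]

octagonal = {k * (3 * k - 2) for k in range(-N, N + 1)}                   # generalized octagonal numbers

def Q2_from_recurrence(N):
    """Compute Q_2(0..N) using only the recurrence of Theorem 1.14 (ii)."""
    q = [0] * (N + 1)
    for n in range(N + 1):
        s = 1 if n in octagonal else 0
        j = 1
        while 3 * j * (3 * j - 1) // 2 <= n:
            for m in (3 * j * (3 * j - 1) // 2, 3 * j * (3 * j + 1) // 2):  # terms j and -j
                if m <= n:
                    s -= (-1) ** j * q[n - m]
            j += 1
        q[n] = s
    return q

assert Q2_from_recurrence(N) == Q2
```

Replacing the assertions by loops that report violations, and raising the bound \(N\), gives a quick way to gather further numerical evidence for Conjectures 1-6.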
2307.13089
A conceptual framework for SPI evaluation
Software Process Improvement (SPI) encompasses the analysis and modification of the processes within software development, aimed at improving key areas that contribute to the organizations' goals. The task of evaluating whether the selected improvement path meets these goals is challenging. On the basis of the results of a systematic literature review on SPI measurement and evaluation practices, we developed a framework (SPI Measurement and Evaluation Framework (SPI-MEF)) that supports the planning and implementation of SPI evaluations. SPI-MEF guides the practitioner in scoping the evaluation, determining measures, and performing the assessment. SPI-MEF does not assume a specific approach to process improvement and can be integrated in existing measurement programs, refocusing the assessment on evaluating the improvement initiative's outcome. Sixteen industry and academic experts evaluated the framework's usability and capability to support practitioners, providing additional insights that were integrated in the application guidelines of the framework.
Michael Unterkalmsteiner, Tony Gorschek, A. K. M. Moinul Islam, Chow Kian Cheng, Rahadian Bayu Permadi, Robert Feldt
2023-07-24T19:22:58Z
http://arxiv.org/abs/2307.13089v1
# A conceptual framework for SPI evaluation ###### Abstract Software Process Improvement (SPI) encompasses the analysis and modification of the processes within software development, aimed at improving key areas that contribute to the organizations' goals. The task of evaluating whether the selected improvement path meets these goals is challenging. Based on the results of a systematic literature review on SPI measurement and evaluation practices, we developed a framework (SPI-MEF) that supports the planning and implementation of SPI evaluations. SPI-MEF guides the practitioner in scoping the evaluation, determining measures and performing the assessment. SPI-MEF does not assume a specific approach to process improvement and can be integrated in existing measurement programs, refocusing the assessment on evaluating the improvement initiative's outcome. Sixteen industry and academic experts evaluated the framework's usability and capability to support practitioners, providing additional insights that were integrated in the application guidelines of the framework. Copyright © 2013 John Wiley & Sons, Ltd. Keywords: Software Process Improvement; Software Measurement; Software Process Evaluation ## 1 Introduction With the increased importance of software in product development [1], the software engineering discipline and the study of the involved processes have started to gain more popularity among researchers and practitioners in industry [2, 3, 4]. Software Process Improvement (SPI) encompasses the assessment and improvement of the processes and practices involved in software development [5]. SPI involves the understanding of the software processes as they are used within an organization and suggests areas for improvements in achieving specific goals such as increasing product quality, operation efficiency and cost reduction [6]. The SPI literature provides many case studies of successful companies and descriptions of their SPI programs [7]. Examples are presented by [8, 9, 10, 11, 12, 13, 14, 15, 16], and also cover the recently popular development practices classified as agile or lean [17, 18]. Assessing the outcomes of SPI initiatives is as important as their actual implementation since without a clear understanding of gains or losses, it is impossible to reason about the performance of an SPI initiative [19]. Measurement in SPI can be of descriptive, evaluative or predictive nature [20]. Descriptive and predictive measurement is the primary facility to enable the software process to perform with predictable performance and capability and to ensure that process artifacts meet their requirements [21, 22]. Evaluative measurement aims at providing support for operative decisions [20]. In this paper we focus on the evaluative nature of measurement, targeted at assessing the impact of SPI initiatives. The success of improvement initiatives also means different things to different people [23]. Hence, various stakeholders' points of view have to be taken into consideration when assessing the outcome of an SPI program [24]. The causal relationship between the improvement initiative and its effect is complex, and it is hard to determine whether the effect being measured is stemming exclusively from the improvement initiative [25]. The lack of guidelines for conducting evaluative SPI measurements has raised the challenge to develop and implement effective performance measurement programs for SPI [26]. 
Since the evaluation of the outcome of an SPI initiative is complex but also crucial to the organization, there is a need for a measurement and evaluation framework which guides SPI practitioners in their work, helping in preserving effort and cost, and enabling return on investment to be ascertained. Such a framework should promote an evaluation which considers the improvement from different views, increase the visibility, and consequentially facilitate the assessment of the achieved benefits. The challenges in process improvement evaluation are diverse, ranging from defining an appropriate measurement scope, eliciting the required metrics, to the consideration of confounding factors in the evaluation [27]. This paper proposes a conceptual framework that aims to address these challenges, offering a structured approach. The framework was derived from an extensive systematic literature review [27] which collected best practices in the field. Subsequently, we followed the technology transfer model proposed by Gorschek et al. [28] to statically validate the framework by experts from both academia and industry. The remainder of this paper is organized as follows. Related work is discussed in Section 2. In Section 3 we present four major challenges, basing their formulation on the results of a systematic literature review on SPI measurement and evaluation [27]. With the aim to address those challenges, we developed the Software Process Improvement Measurement and Evaluation Framework (SPI-MEF). Section 4 describes the framework, and an example scenario is provided in [29]. The usefulness and usability of SPI-MEF was validated through the help of 9 research experts and 7 industry practitioners as described in Section 5. The results and the refinements applied to SPI-MEF are discussed in Section 5.3. Threats to validity are discussed in Section 5.2. Finally, conclusions and motivations for future work are given in Section 6. ## 2 Related Work In this section, we briefly review previous work relevant to the measurement and evaluation framework proposed in this paper. Software process appraisal methods, e.g. SCAMPI [30], or guides to process assessment, e.g. ISO/IEC 15504 (Part 4) [31], evaluate whether an organization conforms to a certain industry standard. The assessment identifies areas for improvement [32] and can steer the implementation of process improvements [33]. Such assessments provide a benchmark against a set of goals, do however not evaluate the actual impact of process changes. SPI research into measurement programs has developed and suggested several metrics [34]. For example, the _ami_ (Assess, analyze, Metricate, Improve) approach integrates an analytic, bottom-up with a benchmarking, top-down approach to process improvement [35, 36]. The rationale for the expected synergy is that top-down approaches, such as the Capability Maturity Model (CMM) [37], do not consider the specific business goals of a company [38]. Hence, the proposed goal-oriented measurement in ami, based on the GQM paradigm [39, 40], serves to analyze the identified improvement opportunities more in depth and to monitor the implemented changes, assuring that the followed best practices lead to the achievement of the targeted business goals. Similarly, the GQM+Strategies approach aims at linking business strategies with measurement goals [41], since CMMI [42] does not provide an explicit link from the improvement to business value [43]. Inspired by the ami approach, Park et al. [44] developed the GQ(I)M method. 
Extending the GQM paradigm, GQ(I)M introduces the notion of _indicators_, which reflect the idea of asking "What do I want to know?" as opposed to the question "What do I want to measure?". Indicators are therefore representations of measurement data, backed by one or more metrics, and support with a clear definition of their construction the decision making process [45]. The integration of process assessment, modeling and measurement is the goal of the product-focused improvement approach (PROFES) [46]. As opposed to ami and GQ(I)M, in which _company specific_ business goals define the measurement strategy, PROFES promotes continuous assessment against _reference models_ such as CMM or ISO/IEC 15504, supported by measurements derived by GQM [47, 48]. The expected benefits are higher visibility of process changes and therefore better control on the improvement process and lower assessment costs, as the time needed for data collection is reduced [47]. An alternative to the ubiquitous GQM paradigm is the Practical Software and System Measurement (PSM) approach [49], that influenced and was influenced [50] by the in parallel developed international standard for Software Process Measurement, ISO/IEC 15939 [51]. In contrast to the more general, goal-oriented GQM, PSM is designed to establish a measurement process for project evaluation, following the Plan-Do-Check-Act cycle [50]. Measurement in PSM has the purpose to satisfy the project manager's information needs which stem from a) the achievement of project success, and b) obstacles or issues related to achieving success [52]. We reviewed the literature on SPI evaluations conducted in industry [27], identifying current practices that also were built upon, or were inspired by the approaches discussed in this section. The analysis of these practices lead to the definition of several challenges related to the evaluation of SPI initiatives which are discussed in further detail in Section 3. ## 3 Challenges in Measuring and Evaluating SPI Initiatives Obstacles and issues in implementing measurement programs in general were previously identified by Herbsleb and Grinter [53] (difficult communication across organizational boundaries, rigidity of data collection mechanisms, non-transparent data usage), Berry and Ross [54] (the complexity of combining sociological and technological aspects in a measurement program), Kasunic et al. [55] (poor data quality), and Umarji and Seaman [56] (different perceptions of metrics between developers and managers). Since the focus of these issues is predominantly on the implementation of measurement programs, which is a critical but not the sole aspect of SPI evaluation, we devised four fundamental challenges in measuring and evaluating SPI initiatives. The formulation of these challenges bases upon the findings from a systematic literature review on measurement and evaluation of SPI [27]. Sections 3.1 to 3.4 characterize the identified challenges and explain how they are addressed by the six concepts presented in SPI-MEF. ### Challenge I - Heterogeneity of SPI initiatives The spectrum of SPI initiatives ranges from the application of tools for improving specific development processes, to the implementation of organization-wide programs to increase the software development capability as a whole [27]. As a consequence of this variety and diversity in scope and complexity of SPI initiatives, we designed SPI-MEF as a set of interrelated concepts, each one addressing one or more challenges. 
Figure 1 summarizes the relationships between challenges and concepts. In each concept we provide a set of practices which can be used to fulfill the goals of the concept and hence address the challenge. These practices, however, may need to be adapted and scaled to the specific context in which the framework is used. Hence, the first concept is termed _Gap analysis of evaluation quality_. It provides means to assess the current and to define the aspired evaluation quality, enabling the customization and scaling of the framework to different types of SPI initiatives. ### Challenge II - Partial evaluation The outcome of SPI initiatives is predominantly assessed by evaluating measures which are collected at the project level [27, 57]. As a consequence, the improvement can be evaluated only partially, neglecting effects which are visible only outside individual projects. Such evaluations can therefore lead to sub-optimizations of the process [53]. By focusing on the measurement of a single attribute, e.g. effectiveness of the code review process, other attributes might inadvertently change, e.g. time-to-market of a product. To address this challenge, we propose the concept of _Evaluation scoping_ which provides means to determine the extent of the improvement evaluation. This concept provides the answer to the question: where to measure? Complementarily, we also propose the concept of _Determination of measures_ which aims at providing the answer to the question: what to measure? ### Challenge III - Limited visibility This challenge is a consequence of the previous one since a partial evaluation implies that the gathered information is targeted to a specific audience which may not cover all important stakeholders of an SPI initiative. This means that information requirements may not be satisfied, and that the actual achievements of the SPI initiative may not be visible to some stakeholders as the measurement scope [27] is not adequately determined. _Evaluation scoping_ aims to address this issue by providing a structured approach to identify the relevant stakeholders and to provide them with the information they need. The concept of _Holistic view_, on the other hand, provides a way to collect and present the gathered information, supporting a multi-faceted view on the improvement initiative. ### Challenge IV - Evaluation effort and validity Due to the vast diversity of SPI initiatives (see Challenge I), it is not surprising that the evaluation strategies vary. The evaluation and analysis techniques are customized to the specific settings where the initiatives are embedded [27].
Figure 1: Conceptual map of the framework
Since there exist no formal guidelines for implementing an SPI evaluation [26], one can assume that the design and development of the evaluation strategies require a considerable amount of effort. Furthermore, confounding factors are seldom taken into account in the industrial practice of improvement evaluation [27]. This can be a major threat to the evaluation validity since the predominant practice of improvement evaluation is based on pre-post comparison [27]. To address this challenge, the concept _Selection of evaluation strategies_ provides support in identifying and implementing adequate means for SPI evaluation. In addition, the concept _Evaluation implementation_ discusses timing factors that should be considered and provides support for conducting the evaluation itself. 
### Summary Figure 1 shows a conceptual map, indicating how Challenges I to IV are addressed by the SPI-MEF concepts, and how the concepts are related to each other. _Gap analysis of evaluation quality_, whose intent is to tune the overall measurement and evaluation approach, is directly connected to _Evaluation scoping_ and _Selection of evaluation strategies_, and indirectly to _Determination of measures_. Similarly, _Evaluation scoping_, whose intent is to define where to measure and who will see the results, influences the _Evaluation implementation_, and the _Holistic view_ concepts. In Section 4 we describe all SPI-MEF concepts in detail, and present practices used to realize the concepts in practice. ## 4 SPI-MEF SPI-MEF was developed based on an extensive study of SPI research and industry case studies [27], mapping best practices, but also gaps in knowledge. The reviewed primary studies provided an excellent source of practices applied successfully in industry, but even more importantly, allowed us to extract generic guidelines to support the evaluation of SPI initiatives. This section presents these guidelines, complemented with 9 interconnected, although fictitious, examples (starting with Example Box 1), which in essence constitute SPI-MEF. As the relationships shown in Figure 1 suggest, there exist dependencies between the concepts. They are however ordered in a way, reflected in the structure of the guidelines and the examples, in which one would typically conduct the planning for the evaluation. The extended scenario provided in [29] shows how the framework is applied in an iterative manner, following a phased approach. ### 4.1 Gap analysis of evaluation quality This concept, addressing _Challenge I - Heterogeneity of SPI initiatives_, reflects the need to capture the context in which SPI initiatives are implemented. The particular context characteristics steer further decisions, e.g. in Evaluation scoping (Section 4.2) and Selection of evaluation strategies (Section 4.4). The basic information that should be recorded is (see also Example Box 2): 1. a description of the initiative and its purpose, 2. concrete improvement goals, 3. the affected process areas, 4. the target entities of the initiative, i.e. specific projects, products or departments, and 5. a tentative schedule for the implementation.
Example Box 1: Introduction
Furthermore, the organization's capability to implement a measurement program and conduct an evaluation needs to be assessed by a gap analysis. In general, gap analysis uses two sets of information: the current status and the aspired status [58]. The current status of measurement capability can be determined by following one of the maturity assessment approaches presented by Daskalantonakis et al. [59], Comer and Chard [60], Niessing and Vliet [61], and Diaz-Ley et al. [62]. Then the aspired measurement and evaluation quality has to be determined. The subsequently identified gap then shows what refinements are needed in the measurement program. SPI-MEF proposes to use a "2x2" matrix (Figure 2) to support the decision process, using the context information and current measurement capability as input. The evaluation quality in SPI-MEF is defined by two dimensions: accuracy and coverage. Accuracy can be improved by considering primary and complementary measures (Section 4.3.1), selecting the appropriate evaluation strategy and taking confounding factors into account (Section 4.4). 
Coverage is determined by the extent to which measurement levels and viewpoints (Section 4.2) are included in the evaluation.
Figure 2: Opportunity matrix
It is possible to address both dimensions simultaneously, but cost and effort constraints may prohibit such a strategy. The "2x2" matrix has been proven to be an excellent tool to address such decision dilemmas [63]. The roles involved in the discussion about the long-term strategy should include management, which provides the funding for the improvement initiative, and measurement program experts. The Evaluation Opportunity Matrix (Figure 2 and Example Box 3) serves as a starting point for the discussion. Aside from accuracy and coverage, another important aspect to consider is the cost of the evaluation. Cost is denoted as a function of the quality and scope of the evaluation. Therefore, the resources an organization is willing to invest have to be taken into consideration when choosing the desired path to improve the quality of evaluation. The cost of evaluation can arise from the number of metrics defined and collected, the resources needed to manage the metrics, the number of people involved in the metric collection and evaluation, etc. Although achieving high accuracy and coverage in the context of SPI-MEF seems to inherently require more metrics, the potential reuse of metrics should be considered when evaluating the cost for evaluation implementation. During the decision process of which strategy to follow, it is advisable to involve personnel who have expertise in implementing a measurement program and can estimate the cost of metric collection, and management, which sponsors the improvement program.
Example Box 3: Evaluation opportunity matrix
The Software Engineering Process Group (SEPG) at ALPHA met with the upper level management and employees who are currently in charge of the measurement program. To increase accuracy in the short term, plans are to consider primary and complementary measures and to control the major confounding factors by establishing a baseline from the appropriate historical data. Considering the process improvement budget and the implementation schedule, the company's short-term goal is to focus on Process and Project level accuracy first and then address coverage by including the Product measurement level. ### 4.2 Evaluation scoping The evaluation scope is determined using two dimensions. The Measurement Levels (MLs) represent the spectrum of measurable entities which can potentially be assessed in the evaluation. Identifying the Measurement Levels for evaluation counteracts _Challenge II - Partial evaluation_ as it leads the practitioner to consciously define the coverage of the evaluation. Section 4.2.1 explains the Measurement Levels in more detail. The second dimension, Evaluation Viewpoints (EVs), represents the stakeholders and their information needs in relation to the evaluation of the improvement. Defining Evaluation Viewpoints counteracts _Challenge III - Limited visibility_ as it clarifies the stakeholders' data requirements to evaluate the improvement initiative. Section 4.2.3 discusses the Evaluation Viewpoints in more detail. #### 4.2.1 Measurement Levels The Measurement Levels (MLs) represent the spectrum of entities which are affected by SPI initiatives and need to be measured in order to achieve a holistic evaluation of the SPI initiative's outcome. The MLs (Process, Project, Product, Organization and External) are inspired by the levels of dependent variables proposed by Gorschek and Davis [57]. 
On the _Process_ level, the efficiency and effectiveness of the implemented process improvement initiative can be assessed. For example, if the process change consists of involving testers in requirements reviews, it can be measured how many faults are identified compared to the previous instance of the process. A measurable gain at this level is, however, not sufficient to assert that the improvement goal (e.g. improved product quality) has been reached. Furthermore, the improvement of one process may produce side-effects on other processes, or, more generally, affect the output of the process, which is not measurable at this level.

The measurement at the _Project_ level is mainly concerned with project control by monitoring budget, schedule and resources. A project's success or failure is often evaluated by determining the discrepancy between estimated and actual values. Additionally, it is possible to measure the effects of newly introduced or modified processes by assessing the work products created during the project, for example, whether the requirements review process leads to fewer specification changes during the project life-cycle. Adherence to project estimates can indicate process improvement but can also be misleading when considered in isolation, as product quality is not assessed. Linberg [64] reports on a case study in which a project faced severe schedule and budget overruns. From the management's point of view, the project was perceived as a failure, whereas the product, once shipped, was highly successful. Thus, considering the project _and_ the product perspective in an improvement evaluation is important.

Increasing product quality is often the major improvement goal when establishing an SPI initiative. Measurement at the _Product_ level assesses both internal quality attributes, which are mostly visible to software developers, and external quality attributes, which are observed by the user of the product. Besides increased quality, process improvement may also target a reduction in cost and time-to-market of the product. Continuing the previous example with requirements reviews, the involvement of testers could lead to a delay in other projects, to which they were originally assigned. The project with the improved review process and tester involvement could be completed earlier due to less rework; other projects, however, could be delayed due to the deduction of resources. It is therefore necessary to control and assess all aspects of the improvement goals and to take them into consideration when evaluating the initiative's success.

The short- and mid-term effects of an SPI initiative can be assessed at the Process, Project and Product level, but the long-term effects will prevail and only be visible at the _Organization_ level. An SPI initiative has to meet the business goals of a company and has to be aligned with its vision. Therefore, it is necessary to assess the improvement's impact on the organization's business strategy, economy and culture. The example with the involvement of testers in requirements reviews shows that a reorganization of the development process may be required in order to avoid resource and scheduling issues. Hence, the measurement and evaluation of the performance of the SPI initiative at the Organization level is of importance for the design and implementation of forthcoming process improvements.
The previously mentioned MLs are focused on the measurement and evaluation of the SPI initiative within the company and neglect that the effect of the improvement may also transcend the organizational border to the exterior world. The _External_ level is influenced by the produced goods but also by the organization itself, e.g. through supplier dependencies. For example, the aforementioned improvement of the development process can also affect suppliers as they may need to interact differently with their client. Measurement at the External level assesses positive and negative externalities which should be taken into consideration when evaluating the success of an SPI initiative.

#### 4.2.2 Effect traceability in Measurement Levels

Effect traceability is an issue inherent in the Measurement Levels that was also brought up by Gorschek and Davis [57]. The traceability between the action and its empirically assessable effects diminishes with increasing distance from the process change. Due to temporal distance there is an increasing latency by which the effect of process improvement is measurable at the different levels. That is, the effect of the treatment will reach the process itself first, then the projects in which the process is applied and eventually the products emerging from the various projects. Furthermore, the ability to isolate the effect of a particular improvement decreases. Each level is an aggregation of one or more entities of the previous level, e.g. the External level includes, besides other things, a set of organizations, which in turn include, besides other things, a set of products. These "other things" can be seen as external variables and are defined in this context as confounding factors. Those may hamper an accurate evaluation because they hide or amplify the effects of the improvement initiative. In order to counteract these effect traceability issues we propose Evaluation Viewpoints as the second dimension for evaluation scoping.

#### 4.2.3 Evaluation Viewpoints

According to Zahran [65], a software process improvement initiative has to be backed up by both an organizational and management infrastructure, as well as a process technical infrastructure. The organizational and management infrastructure defines the stakeholders which are usually involved in the improvement initiative, such as executive sponsors, a steering committee, a software engineering process group (SEPG), and software process improvement teams. Besides the viewpoints represented by the previously mentioned SPI stakeholders, the evaluation should also consider the viewpoints of top- and middle-management, product and project management, and software developers which are not directly in charge of the improvement initiative. Daskalantonakis [66] identified six target audiences for the evaluation and use of metrics in software organizations: Software users, Senior Managers, Software Managers, Software Engineers, Software Process Engineers, and Software Quality Assurance. Similarly, Ebert [67] identified four roles with individual goals related to the improvement: practitioners, project managers, department head, and corporate executives. The specific roles encountered in an organization and in an SPI initiative are highly dependent on the structure of the organization and the extent of the process improvement initiative. Hence, we generalize the potential stakeholders into three Evaluation Viewpoints (EVs): Implementer, Coordinator and Sponsor.
The three EVs reflect the different angles from which the process improvement is perceived and, more importantly, which aspects of the improvement matter to whom when conducting the evaluation. The definition of different viewpoints also supports the idea of increasing the visibility of the process improvement, that is, presenting the information to the appropriate stakeholders and alleviating the decision-making process (see Example Box 4). It is important to point out that for a holistic evaluation of the improvement initiative it is necessary to consider and account for all viewpoints and the respective evaluation results without isolating single aspects [24].

Example Box 4
In Example Box 3, the SEPG decided to evaluate the improvement initiative, code inspections, on the Process, Project and Product levels. The SEPG defines the Evaluation Viewpoints (EVs) for the respective levels. For illustrative purposes, the table in this example also contains the Organization and External levels. The first column denotes the Measurement Levels (MLs), i.e. the entities that are affected by the SPI initiative, whereas the remaining columns denote the Evaluation Viewpoints (EVs), i.e. the stakeholders that have an interest in the evaluation of the SPI initiative. The table is read, for example, as follows: "The development team is interested in evaluating the impact of the SPI initiative on the Process ML from the Implementer EV." Note that a specific role can have different interests when evaluating an improvement initiative and therefore represent more than one viewpoint. By looking at the table, the "Product Manager" role subsumes both the Coordinator viewpoint at the Product level and the Implementer viewpoint at the Organization level.
(Table: the Measurement Levels Process, Project, Product, Organization and External set against the Evaluation Viewpoints Implementer, Coordinator and Sponsor.)

The _Implementer_ viewpoint represents all the roles which are dedicated to putting the software development in general, and the process improvement in particular, into practice. The evaluation from this viewpoint is needed to make the effects of changes in behavior visible to the enactors of the process improvement. The rationale behind this argument is that a feedback loop on the effects of the improvement fosters the sustainment of process change. Additionally, if the Implementer is well informed about the improvement and is conscious of its effects, he can serve as an accurate data source for the evaluation of the improvement [68], as well as be an active contributor to the improvement [69, 70].

The _Coordinator_ viewpoint comprises the roles which generally participate in software development and in a software process improvement initiative as coordination and control entities. Their areas of responsibility include managing and leading the Implementers, and steering and promoting the process improvement through strategic (higher level, global) and tactical (lower level, local) decisions. The interests in evaluating the improvement initiative from this viewpoint are several, but in general they boil down to two aspects: (a) to assess if the improvement goals have been achieved, and use the output of the evaluation to drive and guide further improvement activities, and (b) to provide feedback to superiors.
The _Sponsor_ viewpoint represents those roles which fund and motivate the improvement initiative and, in parallel, those who are interested in evaluating the improvement according to its costs and benefits. The motivating roles' focus is towards the evaluation of the improvement process itself in order to assess if it delivered the anticipated benefits. This includes, for example, the SPI steering committee and/or the head of department. On the other hand, the evaluating roles' focus may be less on the improvement process itself and rather on the results which are visible in the environment in which the process change is embedded. This includes, for example, higher level management financing the effort, but also company-external entities like shareholders, customers or regulatory stakeholders. In either case, the evaluation needs to be able to confirm the long-term effects of the process improvement.

### Determination of measures

In order to perform an accurate evaluation, measurements need to provide the required data. The question is how to elicit the set of sufficient measurements that allow an evaluation to express reliably if an improvement goal has been reached or not. The common approach is to derive the required metrics from the improvement goal. For example, if a reduction in cost is targeted, one could measure and evaluate if the expended resources in a project which implements the process improvement were reduced as compared to a previous, similar, project. There are two problems with this approach:

1. Not all the benefits in terms of cost reduction may be visible at the project level, that is, assessing on this level alone would only show a subset of the achieved benefits.
2. If the expenditure of resources in a project is the only assessed dependent variable, it is not possible to evaluate if the improvement did provoke any side-effects. In particular, detrimental influences that are visible only with some delay, and on different Measurement Levels, would not be accounted for in an evaluation based on pure Project level measurements.

Further, using only a single metric as an achievement indicator could raise validity concerns in the subsequent evaluations [71]. One reason for this could be data collection issues, e.g. incomplete data-sets or incorrectly compiled data forms. To address problem 1, one has to reason about selecting the appropriate target audience for the evaluation and then deriving the necessary measurements. This evaluation scoping was discussed in Section 4.2, where we introduced Measurement Levels and Evaluation Viewpoints as scoping instruments. To address problem 2 we propose a method that builds upon the Goal-Question-Metric (GQM) paradigm [39, 40]. GQM is a systematic way to tailor and integrate an organization's objectives into measurement goals and refine them into measurable values. It provides a template for defining measurement goals and guidelines for the top-down refinement of measurement goals into questions and then into metrics, and a bottom-up analysis and interpretation of the collected data [40]. SPI-MEF provides an interface with the conceptual level of GQM to define the appropriate measurement goals for improvement evaluation, as illustrated in Figure 3. We tailor the GQM approach to the context of SPI measurement and evaluation. The rationale for interfacing the GQM facets with SPI-MEF, illustrated also in Example Box 5, is as follows.
The "Object of study" facet corresponds to the implemented SPI initiative, whereas the "Purpose" is to evaluate the impact of the change. The "Focus" corresponds to the notion of success indicator, which is determined by the Measurement Level and the consideration of primary and complementary indicators. Table 2 lists success indicators that are commonly encountered in SPI initiatives. Section 4.3.1 discusses primary and complementary indicators in further detail. The "Point of View" corresponds to the concept of Evaluation Viewpoint. The "Context" facet corresponds to the SPI Target Entities which set, as defined in Section 4.1, the scope of the SPI implementation.

Figure 3: SPI-MEF interface with GQM

#### 4.3.1 Primary and complementary indicators

We define primary indicators as the set of measurements that are used to assess if the improvement goal has been reached. For example, given that the improvement goal is cost reduction, primary measurements could be elicited from the process (e.g. efficiency of the changed/added process) and project level (e.g. effort in man hours). Complementary indicators capture the effects of process improvement that are not directly connected with the expected effect of the initiative. In other words, complementary indicators assess the side-effects that may arise when, through an improvement initiative, the corresponding primary indicator is affected (see Table 1). This is needed in order to control measurement dysfunction that may arise from either wrongly reported data or from sub-optimization of primary indicators. Iversen and Mathiassen [73] reported on a case where the measurement program was threatened due to mistrust in the collected data.

\begin{table}
\begin{tabular}{l l l l}
\hline
 & _Primary_ & \multicolumn{2}{l}{_Complementary_} \\
\hline
Success indicator (Project level) & Cost & Quality & Schedule \\
Example metric & Effort in man hours & Defect density & Project cycle time \\
\hline
\end{tabular}
\end{table}
Table 1: Identifying complementary indicators

The method we propose to identify complementary indicators borrows its central idea from the project management triangle [74, 75], whose respective edges represent cost, time and quality. The aim of the project management triangle is to create the awareness that the entities at the edges are interrelated with each other and changing one will inevitably affect the others. Considering this principle in the context of process improvement evaluation helps to identify complementary indicators. Table 1 exemplifies this idea, where cost is a primary success indicator, and quality of the produced artifacts and project schedule are complementary indicators. Three basic success indicators, cost, time and quality, can be used as a starting point, since those are the commonly targeted improvement goals, e.g. by Basili et al. [76], Debou and Kuntzmann-Combelles [38], Murugappan and Keeni [77], Weiss et al. [78], and Moreau et al. [79]. In [29] we provide an initial set of the success indicators shown in Table 2 that can be refined, depending on the actual improvement goal(s) and the concrete context in which the initiative is conducted.

#### 4.3.2 Metrics baselining

The success indicators and their respective metrics need to be baselined in order to serve as the initial point for evaluating the improvement. There are various ways in which organizations can set the baselines for their metrics.
The most typical way would be creating the baseline from historical data collected from previously conducted processes or projects and already finished products. Some derived metrics that consist of two or more elementary measurements can sometimes be easily acquired from historical data. If there is no historical data available, the baseline can be obtained by collection of data from active projects that are currently running in the organization. The data collected from the active projects would serve as the baseline to evaluate the projects that are going to incorporate the SPI initiative. With the definition of the baseline, an expert in interpreting the metric needs to specify the ranges indicating improvement, stagnation and decline of that metric. The same evaluator assesses, later on in the evaluation (Section 4.5.2), the change of the metric's value with respect to the defined baseline.

\begin{table}
\begin{tabular}{l l l}
\hline
Measurement Level & Success Indicator & What is measured? \\
\hline
Process & Efficiency & The means of the process implementation. \\
 & Effectiveness & The ends of the process implementation, visible in any work product and/or artifact. \\
\hline
Project & Defects & Artifact quality w.r.t. the different phases in the project life-cycle. \\
 & Cost & Investment in terms of resources and effort to conduct the implementation of the project. \\
 & Schedule & Calendar time of project and/or phases therein. \\
 & Productivity & Effort input and size output in project activities. \\
 & Estimation accuracy & Difference between planned and actual outcomes of project success indicators. \\
\hline
Product & Quality & Internal and external quality attributes of the software product. \\
 & Cost & Total cost of product development _and_ maintenance. \\
 & Time to Market & Calendar time between product inception and delivery. \\
\hline
Organization & Economics & Costs and benefits (including intangible assets) \\
 & Employees & Employee satisfaction \\
 & Growth & Organizational growth, revenue and innovation. \\
 & Communication & Collaboration and communication between employees and/or customers. \\
\hline
External & Customer externalities & Any of the above, applied however to the customers' context. \\
 & Society externalities & Effects on the environment of the organization. \\
\hline
\end{tabular}
\end{table}
Table 2: Measurement Levels and success indicators (based on [27])

### Selection of evaluation strategies

SPI evaluation strategies can be classified into four general categories [27], that are, in practice, often applied in combination: basic comparison, statistics-based analysis, survey, and cost-benefit analysis. The fundamental idea of the basic comparison strategy is to quantify the impact of an improvement initiative by assessing the change of measurements relative to a baseline. In statistics-based analysis, statistical tools help to identify and control variation in processes over time; surveys are used to collect information on the improvement from people who are either directly (employees) or indirectly (customers) affected; cost-benefit analysis helps to quantify the financial impact of the SPI initiative. As illustrated in Figure 1, the _Gap analysis of evaluation quality_ concept constrains which evaluation strategies may be eligible for the specific SPI initiative.
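To make the link between metrics baselining (Section 4.3.2) and the basic comparison strategy more concrete, the following minimal Python sketch classifies the change of a metric against its baseline using expert-defined ranges. The metric names, baseline values and thresholds are invented for illustration and are not prescribed by SPI-MEF.

```python
# Illustrative only: classify the change of a metric relative to a baseline,
# using expert-defined ranges for "improvement", "stagnation" and "decline".
from dataclasses import dataclass

@dataclass
class BaselinedMetric:
    name: str
    baseline: float          # value derived from historical or active projects
    lower_is_better: bool    # e.g. defect density improves when it decreases
    stagnation_band: float   # relative band (+/-) the expert treats as "no change"

    def assess(self, observed: float) -> str:
        """Return 'improvement', 'stagnation' or 'decline' for an observed value."""
        relative_change = (observed - self.baseline) / self.baseline
        if abs(relative_change) <= self.stagnation_band:
            return "stagnation"
        improved = relative_change < 0 if self.lower_is_better else relative_change > 0
        return "improvement" if improved else "decline"

# Hypothetical primary and complementary indicators for a cost-reduction goal.
effort = BaselinedMetric("effort in man hours", baseline=1200.0,
                         lower_is_better=True, stagnation_band=0.05)
defect_density = BaselinedMetric("defect density", baseline=0.8,
                                 lower_is_better=True, stagnation_band=0.10)

print(effort.name, "->", effort.assess(1050.0))                # improvement (primary)
print(defect_density.name, "->", defect_density.assess(0.95))  # decline (side-effect)
```

Such a comparison only becomes meaningful once the confounding factors discussed below are controlled, e.g. by comparing projects of the same type and domain.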
Table 3 summarizes the criteria on which the evaluation strategy should be selected. The criterion "Measurement Levels" identifies a strategy depending on the selected success indicators and the corresponding Process, Project, Product, Organization and External level (see Table 2). The "Cost" criterion provides a relative rank of the required resources to perform the corresponding evaluation strategy. The accuracy of the evaluation is influenced by the extent to which the last criterion, "Confounding factors", can be controlled. Confounding factors represent a fundamental threat for the evaluation of a process improvement initiative if any kind of comparison is used to assess its effects. A comparison is said to be confounded if the observed difference between two values (the effect of a treatment) is not solely caused by the treatment, but can be partially attributed to an external factor [80]. Table 4 summarizes typical confounding factors encountered in the evaluation of improvement initiatives. Looking at Table 3, we assess the confounding factors for the "Basic comparison" and "Statistics-based analysis" strategies as controllable. For example, in the case of the "Basic comparison" strategy, it is common to apply the matching technique or linear regression models [27]. On the other hand, controlling confounding factors in "Cost-benefit analysis" or "Survey evaluation" strategies is more challenging. Surveys collect quantitative and qualitative data from human subjects. Hence it is important to create a profile of the surveyed individuals in order to group the acquired data into homogeneous categories. In the cost-benefit analysis strategy, it is crucial to quantify both direct and indirect costs, and tangible and intangible benefits [27]. As we discussed in Section 4.2.2, the traceability of improvement initiatives decreases along the Measurement Levels due to timing and isolation issues. A viable approach to compensate for the effects of multiple improvement initiatives is to let internal experts weigh the contribution of individual initiatives [81]. Confounding factors related to timing and potential solutions are discussed in Evaluation implementation (Section 4.5). \begin{table} \begin{tabular}{l l l l} \hline \hline Strategy & Measurement Levels & Cost & Confounding factors \\ \hline Basic comparison & Process, Project, Product & Medium 1 & controllable \\ \hline Statistics-based analysis & Process, Project, Product & High & controllable \\ \hline Survey & Product, Organization, External & Low & challenging \\ \hline Cost-Benefit analysis & Product, Organization, External & Medium & challenging \\ \hline \hline \end{tabular} \end{table} Table 3: Criteria for selecting evaluation strategies ### Evaluation implementation The goal of the improvement evaluation is to satisfy the information needs of the respective stakeholders defined in evaluation scoping (see Section 4.2). In SPI-MEF, an improvement evaluation is conducted according to a planned schedule, consisting of the analysis of measures collected at a certain Measurement Level, and requires the involvement of roles with the expertise to judge the impact of the improvement initiative. Therefore, each evaluation instance is assigned a time, a Measurement Level and one or more experts that conduct the evaluation. #### 4.5.1 Scheduling The motivation to plan the evaluation schedule is to introduce means to control timing as a potential confounding factor. 
The principal idea behind this is that the effects of a certain improvement initiative may be measurable at different points in time, depending on which Measurement Level is considered in the evaluation. The temporal distance in the Measurement Levels (see Section 4.2.2) supports the idea of a lag factor, which is also referred to as the time lag between the cause (SPI initiative) and the corresponding effect (improvement) [82, 25]. This latency needs to be considered when determining the appropriate time to evaluate. On the other hand, one also has to consider how long a measurement result is valid, i.e. how long can it be of value to support decision making processes and be representative of what is actually assessed¹? Due to this validity decay of measurement results, periodic evaluations are needed in order to make the effects of the improvement visible over time, as exemplified by Herbsleb et al. [83], Jarvinen and van Solingen [47], Savioja and Tukiainen [84], Jarvinen et al. [48], Iversen and Ngwenyama [26], and Moreau et al. [79].

Footnote 1: For example, the standard CMMI appraisal method for process improvement (SCAMPI) [30] defines a degradation factor of 3 years for class A appraisals.

\begin{table}
\begin{tabular}{l l}
\hline
Factor & Description \\
\hline
Project type & New development and enhancement/maintenance projects have different properties and they should not be treated as the same during evaluation, i.e. comparison of success indicators from different project types should be avoided. \\
Development model & Different project development life-cycles such as the waterfall model and the spiral model have different project characteristics and potentially confound the evaluation. \\
Product size and complexity & Product size (lines of code, function points, etc.) and complexity (number of features, cyclomatic complexity, etc.) have to be taken into consideration during the evaluation. \\
Product domain & The product domain difference also affects the evaluation. Front-end applications, server-side systems and embedded software are different types of product domains that should not be put on par during the evaluation. \\
Technology factors & Technological factors such as the programming language and tool support can influence indicators like productivity and effort. \\
Process compliance & The degree to which the standard process is followed in the actual implementation should be considered in the evaluation, as this can give indications to what extent the improvement can actually be attributed to the SPI initiative. \\
Employee factors & The staff working in the project might differ in experience level, and measurement of productivity and efficiency should take staff experience into consideration. In addition, employee turnover in the organization may also affect the evaluation result. \\
Time factors & Time can be seen as a factor that can affect the evaluation result. When conducting a customer survey on product quality, the time that the product has been in use needs to be considered. \\
Multiple improvement initiatives & It is difficult to ensure that a particular improvement is attributed to a specific SPI initiative. Several improvement initiatives that run in parallel would create traceability issues in the evaluation. For example, when calculating the cost saving from a specific improvement initiative, care should be taken not to count the saving twice, as the saving might also be attributed to another improvement initiative. \\
\hline
\end{tabular}
\end{table}
Table 4: Confounding factors (adapted from [27])
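As a rough illustration of how effect latency and validity decay could drive a periodic evaluation schedule, the sketch below computes, for each Measurement Level, the months at which an evaluation would be due. The month values are hypothetical placeholders; as discussed below, an organization would derive them from historical data or expert opinion.

```python
# Illustrative only: sketch a periodic evaluation schedule per Measurement Level,
# based on an assumed effect latency ("lag", in months) and an assumed validity
# period of evaluation results (in months). All numbers are hypothetical.

LEVELS = {
    # level: (lag_months, validity_months)
    "Process": (3, 6),
    "Project": (6, 6),
    "Product": (12, 12),
}

def schedule(level, horizon_months=24):
    """Months (after the initiative starts) at which the level would be evaluated.

    The first evaluation waits until the effect can plausibly be measured (lag);
    follow-up evaluations are planned before the previous result loses validity.
    """
    lag, validity = LEVELS[level]
    return list(range(lag, horizon_months + 1, validity))

for level in LEVELS:
    print(f"{level:8s} -> evaluate at months {schedule(level)}")
# e.g. Process  -> evaluate at months [3, 9, 15, 21]
```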
In SPI-MEF, we use the terms Lag Factor (LF) and Degradation Factor (DF) to designate the improvement effect latency and, respectively, the validity decay of measurement results. DF defines how long an evaluation result may support and be valid for the decision making process. As a consequence, a periodic evaluation schedule is required (see Example Box 7). LF and DF are determined by the Measurement Level, the conducted improvement initiative, the degree to which the changes are actually implemented, and external factors which may stall progress. Dror [25] proposes statistical process control tools and data mining techniques to identify causality links between improvement action and effect. Historical data from organizations could therefore be used to create context-sensitive heuristics of improvement timings, i.e. to define the Lag and Degradation Factor based either on collected data and/or on expert opinion gathered from employees.

#### 4.5.2 Analysis

The aim of the analysis is to provide an evaluation of the degree to which a certain measurement has changed due to the enacted improvement initiative. To this end, the expert who was assigned to each measure when they were determined (see Section 4.3) rates the change compared to the baseline. The analysis performed at this stage serves as an intermediate product that is reused when the holistic view is created, as discussed in Section 4.6.

### Holistic view

By defining a model which assesses the improvement from the viewpoint of the involved stakeholders, an important aspect of improvement evaluation can be addressed, namely increasing the visibility of the improvement initiative as a whole (_Challenge III - Limited visibility_, see Figure 1). Such a representation would be beneficial for several reasons. First, the success of an initiative could be asserted with more confidence since it is assessed considering the involved stakeholders. Second, it could show, given that the appropriate metrics were collected, if the improvement has a positive impact on the organization as a whole or if the change negatively influences aspects which would not have been considered initially. Third, it can be used as an aid to communicate results of the improvement in an efficient way, as the amount of data produced in the individual evaluations is reduced. The major aim of the holistic view concept is to provide an aid to communicate the improvement to the different stakeholders. To achieve this goal, we define improvement indicators that can be represented in a Kiviat diagram (see Example Box 9). The important information that is shared between the stakeholders by looking at these diagrams is that the impact of the improvement may be different, depending on who is assessing it. Interesting cases are given if there is a disagreement on the outcome of the initiative, i.e. the evaluation viewpoints diverge. Such scenarios should give reason for further analysis of the implemented initiative.

#### 4.6.1 Considerations for the model

In order to show the overall or compound impact of an improvement initiative it is necessary to define an appropriate model which is able to aggregate the results of individual evaluations (and metrics) into a representative score. We identified three basic aspects that need to be considered for the construction of such a model:
1. Normalization of the different metrics to enable a meaningful aggregation.
2. Compensation for the different orders of magnitude in the values of the metrics, i.e. consider that a small difference in one metric may have effectively more impact than a larger difference in another.
3. Consideration of the individual viewpoints to include the relative "importance" of improving a specific metric.

The third point has less a technical than a qualitative rationale. The model should take the subjective change, as it was experienced by the involved parties, into account. This means that each metric should be given a weight, defined by the viewpoints which are interested in the result of the evaluation. It is assumed that in this way the evaluation of the improvement initiative gains realism by representing the actual situation and reveals possible imbalances in the change effort, as it was perceived by the involved stakeholders. The first two aspects could be implemented by an impact rating, in which the evaluator maps the change in a metric onto an ordinal scale, which would both normalize the metrics and compensate for the differences in orders of magnitude.

#### 4.6.2 Subjective Value of Improvement

To calculate the improvement for each Evaluation Viewpoint and Measurement Level, that is, the Subjective Value of Improvement (SVI), we use two components. The first component is the Subjective Weight (SW), in which each viewpoint assigns a weight of subjective importance to every metric. This means that the stakeholders of the improvement initiative within the Implementer, Coordinator and Sponsor viewpoints have to agree on a Subjective Weight. The second component is the Impact Rating (IR). Here, the expert who conducted the individual metric evaluations, as presented in Section 4.5.2, rates the impact of the improvement initiative on the respective metric according to an 11-point Likert scale (see Example Box 9). One can also choose a 7- or 9-point Likert scale; however, research suggests that scales with fewer than 6 points generally produce less valid scores, have less discriminating power and are less preferred by their users [85]. Since the Impact Rating is subjective in nature, the organization should discuss and agree upon guidelines on how to perform the rating, with the aim to improve the consistency in the rating between different metric-experts. The Subjective Value of Improvement is then calculated as

\[SVI=\sum_{id}(SW_{id}*IR_{id})\]

where \(id\) refers to the respective metric identified in the determination of measures (see Section 4.3). Since the Impact Rating component is based on previous evaluations, their Degradation Factor (see Section 4.5.1) determines if the evaluation results are actually considered at the point in time when the SVI is calculated. From a measurement theory point of view, the calculation of the Subjective Value of Improvement is questionable as it involves mathematical operations which, in a strict sense, are not applicable to ordinal scales [86]. On the other hand, Stevens [87] pointed out that it can be practical to treat an ordinal scale as an interval scale. Furthermore, several studies have empirically shown that it matters little if an ordinal scale is treated as an interval scale [86]. Since the aim of the Holistic View is to provide an overview of the improvement (the individual evaluations are a more appropriate data source for decision making), the Subjective Value of Improvement has to be seen as an index or score. It gives an indication of the improvement, rather than being a metric, which in its formal definition has to fulfill the representation condition of a measurement [88].
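To illustrate how the SVI formula above could be applied, the following minimal Python sketch aggregates hypothetical Subjective Weights and Impact Ratings for one Evaluation Viewpoint at one Measurement Level. The metric names, weights, ratings and the assumed -5 to +5 scale are illustrative choices, not values prescribed by the framework.

```python
# Illustrative only: compute the Subjective Value of Improvement (SVI) for one
# Evaluation Viewpoint at one Measurement Level, as SVI = sum(SW_id * IR_id).

# Hypothetical Subjective Weights (SW) agreed upon within the viewpoint,
# normalized here so that they sum to 1.0.
subjective_weights = {"inspection efficiency": 0.6, "inspection effectiveness": 0.4}

# Hypothetical Impact Ratings (IR) given by the metric experts on an 11-point
# Likert scale, assumed here to range from -5 (strong decline) to +5 (strong improvement).
impact_ratings = {"inspection efficiency": 3, "inspection effectiveness": 1}

def svi(weights, ratings):
    """Subjective Value of Improvement for one viewpoint and Measurement Level."""
    return sum(weights[m] * ratings[m] for m in weights)

print(f"SVI = {svi(subjective_weights, impact_ratings):.2f}")  # prints: SVI = 2.20
```

Repeating this calculation per viewpoint yields the values that can be plotted on the Kiviat diagram mentioned above, making diverging perceptions of the improvement visible.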
#### 4.6.3 Aggregated Subjective Value of Improvement

If evaluations, as presented in Section 4.5, are conducted on different target entities (e.g. projects or products), the calculation of the Subjective Value of Improvement needs to consider differences in invested resources. Therefore, the Aggregated Subjective Value of Improvement (ASVI) is used. The SVI is weighted by an Investment Unit (IU):

\[ASVI=\sum_{te}\left(\frac{SVI_{te}*IU_{te}}{IU_{Total}}\right)\]

where \(te\) refers to the Subjective Value of Improvement and the Investment Unit of the respective target entities (i.e. projects or products), and \(IU_{Total}\) is the sum of all investments in the target entities. The Investment Unit can be regarded as the resources which were spent in the implementation of the processes, projects or products on which the individual evaluations are based.

### Summary

Sections 4.1 to 4.6 described the framework, SPI-MEF, targeted at evaluating SPI initiatives. SPI-MEF aims at providing an SPI evaluation that addresses the challenges discussed in Section 3. The framework is based on several key concepts (Figure 1) that span from scoping the evaluation, determining the required measures, to analyzing the gathered data. In Section 5 we illustrate how the framework was validated.

## 5 Validation

The aim of the validation is to determine whether the framework is able to support practitioners in the evaluation of SPI initiatives. Section 5.1 describes the design of the validation, whereas Section 5.3 presents its results. Section 5.2 discusses threats to the validity.

Example Box 9: Holistic view
ALPHA decided to conduct a holistic evaluation 12 months after code inspections were introduced, as at least one evaluation for each targeted Measurement Level would have been performed at that time (see Example Box 7 for the evaluation schedule). First, the Subjective Value of Improvement (SVI) is calculated to see how the different Evaluation Viewpoints perceived the improvement. Then, the Aggregated SVI (ASVI) is used to illustrate the impact of the initiative on the three Measurement Levels that were scoped for the evaluation. The Evaluation Viewpoint assessment uses the SVI as an indicator of the improvement at a specific Measurement Level. For this example we show the outcome on the Process level, using the results from evaluation \(e\) (see Figure (a) in Example Box 7) as a basis. The Project Manager, responsible for evaluation \(e\), reassesses the impact of code inspections on feature project A.2, using the Likert scale shown in Figure (b). Representatives from the involved Evaluation Viewpoints (development team, SEPG, SPI steering committee) weigh the individual metrics according to their relative importance, i.e. whether they consider effectiveness or efficiency more critical to fulfill the aim of the initiative. Figure (a) shows that, on the Process level, the Implementer viewpoint perceived the introduced code inspections as more valuable than the Coordinator did. Although all viewpoints identify an improvement, this outcome indicates that the implementation of the initiative can be further improved. Figure (c) is based on evaluations \(d,e,f,g\) (see Figure (a) in Example Box 7), as those are the only evaluations that are within the DF of 6 months. There, the Aggregated SVI is calculated on the Process, Project and Product level. Looking at the Project Level assessment, we can observe a decline in performance. This indicates that the outcome of the initiative is not coherently seen as a success by all stakeholders.

### Research method

We designed a validation process in which expert judgment from researchers and industry experts was collected, analyzed and used to refine SPI-MEF to its final version as it is presented in Section 4. The concrete objectives of this process were: 1. to identify deficiencies in the proposed concepts 2. to assess the applicability of the framework from a practitioner's point of view 3. to elicit improvement opportunities for the framework. As a basis for the validation served a document describing the concepts on which the framework is based upon. #### 5.1.1 Selection of experts We validated the framework by using both researchers and industry experts in order to address both theoretical aspects and the practicality of the framework. Since the assessment of the framework's applicability was deemed critical, we selected researchers with industry experience or working closely with industry. Figure 4 provides an overview how the gathered expert judgment is distributed in industry and academia, and which data collection mechanisms were employed. The number in curly braces indicates how many individual experts are included in the respective category. The researchers' expert judgment was gathered through a semi-structured interview [89], and was, depending on the interviewee's accessibility, conducted in a face-to-face meeting or a telephone call. The group of industry experts was approached either by a semi-structured interview or a self-administered questionnaire [90], again depending on their accessibility. Thirteen researchers and eleven industry experts were selected and contacted, whereas nine and seven subjects of the respective groups agreed to participate in the study. All researchers agreed to provide 45-60 minutes for the interview and industry experts scheduled a 1.5 hour meeting. Table 5 gives an overview of the characteristics of the participating subjects. All but one researcher were at the time of the investigation employed at the Blekinge Institute of Technology; nevertheless, they experienced education in various universities in Sweden, Germany, Australia and Turkey, which allows the assumption that their expertise was not streamlined. The industry experts were employed in four different companies located in Sweden, the U.S., Malaysia and Singapore. The companies' core businesses were telecommunications, electronic and electrical manufacturing, and global communication solutions of software intensive systems. #### 5.1.2 Definition of data collection instruments In this section we describe the design of the interview and the questionnaire. For the design of the interview instrument we followed the guidelines by Kvale [91]. The interview questions address the framework's concepts (see Figure 1) and were assigned a priority. The prioritization of the questions is important as the interview should be designed in a way that it can be finished in the stipulated amount of time while considering all high priority topics. Figure 4: Sample of expert judgment and data collection mechanisms To clarify the purpose of the interview and the contents that need to be validated, a distilled description (10 pages) of the framework's concepts was prepared and sent as preparatory documentation to the interviewees.
We also held a short presentation at the beginning of each interview meeting to provide an introduction to the framework and to give a refresher of its concepts. All interviews were conducted by the same interviewer to maintain consistency in the way the questions were presented. Three note takers recorded the answers during the interview sessions. The self-administered questionnaire was designed following Kasunic's guidelines [92]. The questions were formulated in such a way that the respondents could express their degree of agreement/disagreement (using a Likert scale). Additionally, the respondents were urged to motivate their answers with a few sentences. Furthermore, the wording of the questions was selected very carefully, taking Salant and Dillman's [93] advice into consideration. The quality of the interview instrument, the questionnaire and the prepared supplementary material [29] was improved by piloting the interview and the questionnaire with three Software Engineering students. The understandability and clarity in presenting the concepts of the framework were verified and the questionnaire was assessed regarding question formulation, layout and the overall compilation process.

\begin{table}
\begin{tabular}{c l}
\hline
\multicolumn{2}{c}{Academia} \\
\hline
Years 1 & Research Area \\
\hline
8/7 & Software Measurement / Estimation, SPI, Project Management, Requirements Engineering, Business Process Modeling \\
3/0 & Large Scale Software Management, Software Quality, Product Management, Software Process Management \\
3/3 & Software Product Line Engineering, SPI, Agile and Lean Software Development, Software Measurement \\
3/4 & Value-Based Software Engineering \\
7/3 & Software Verification \& Validation, Search-Based Software Engineering \\
3/6 & Strategic Software Engineering, Software Product Management \\
10/0 & Requirements Engineering, Software Architecture \\
30/10 & Software Architecture / Reuse, Process Engineering and Measurement, SPI \\
12/8 & Verification \& Validation, Automated Software Engineering, Requirements Engineering, Human / Social aspects of Software Engineering, Search-Based Software Engineering \\
\hline
\end{tabular}
\begin{tabular}{c l c}
\hline
\multicolumn{3}{c}{Industry} \\
\hline
Years 2 & Business Unit & Company Size 3 \\
\hline
18 & Multimedia & Large \\
12 & Product Development Excellence & Large \\
5 & Engineering & Small \\
5 & Research and Development Engineering & Large \\
10 & Data Networks Department & Large \\
12 & Enterprise Mobility Solutions & Large \\
10 & Research and Development & Large \\
\hline
\end{tabular}
1 The values denote the experience in academia/industry. 2 The values denote total work experience. 3 The company size is according to the European Recommendation 2003/361/EC.
\end{table}
Table 5: Profile of the industry and academia experts

### Threats to validity

The discussion on the threats to validity of this research is organized according to the categorization proposed by Wohlin et al. [94]. Threats to internal validity (Section 5.2.1) are concerned with the observed relationship between the treatment and the outcome, i.e. the external factors that can influence an independent variable with respect to the causal relationship with a dependent variable. Threats to external validity (Section 5.2.2) are factors that can influence the ability to generalize the results to a wider scope than covered by the study.
Construct validity (Section 5.2.3) is concerned with the relationship between theory and the observed outcomes of the research, that is, with the ability to generalize its results to the theoretical construct which motivate the research. Threats to the conclusion validity (Section 5.2.4) are concerned with factors that affect the ability to draw the correct conclusions from the conducted study. #### 5.2.1 Internal validity Three threats to internal validity, related to the gathering of expert judgment, were identified: instrumentation, maturation, and selection. The instrumentation threat is caused by bad design of artifacts [94] used in the expert judgment elicitation. Those can lead to misunderstandings regarding the discussed topic and weaken the results from the gathered data. To minimize this threat, the preparatory document and the interview questions were piloted first with three Software Engineering students to test whether the artifacts are clear and understandable. Afterwards, the preparatory document and the interview questions were refined. The maturation threat exists if the experts' behavior changes during the elicitation process as the time passes [94]. This can distort the gathered results if the subjects acquire new knowledge during the process, or become detached [94]. This threat is regarded as minor since the interviews with researchers were conducted during a meeting which lasted approximately one hour each. The written questionnaire was compiled and returned by all industry experts within two weeks; since no deadline to return the questionnaire was given to the subjects, the rather quick response indicates that they were committed to the task and had interest in providing useful information. Furthermore, the questionnaire was designed to present the needed information and the questions concisely and precisely such that it can be compiled within approximately one hour. The selection threat is concerned with the varying human performance and potential biases introduced by the selected subjects for the investigation, e.g. higher motivation of volunteers may lead to better results [94]. As the presented profiles in Section 5.1.1 show, both researchers and industry experts have several years of experience in their respective fields. Obviously there are differences in expertise in the specific areas of interest but this was regarded rather as an advantage than a threat since a major goal of the validation was to identify new, not yet considered, aspects for the measurement and evaluation of SPI. #### 5.2.2 External validity The threat of selection and treatment is caused by not having a representative sample of the population [94]. To address this threat, the selection of researchers took also the industrial experience of the subjects into consideration. The industry experts selected in this study were employed in different companies with different core businesses from Europe, the United States and Asia. Nevertheless, there is a moderate threat of selection bias due to the convenience sampling of researchers and industry experts. #### 5.2.3 Construct validity In this category, two threats for this research were identified: mono-operation bias and evaluation apprehension. Mono-operation bias is caused by considering only a single subject, independent variable or case and hence, the study may not fully represent the investigated theory [94]. 
This threat is considered moderate since two groups with different background were considered and for each group (academic and industry experts) more than one subject was involved. The threat of evaluation apprehension is caused by the human tendency to behave differently while being evaluated [94]. This can distort the result of the study since the subjects may perform better than in a regular, unobserved, situation. To tackle this issue, the experts were guaranteed their anonymity and that their answers were only used by the researchers involved in the study. #### 5.2.4 Conclusion validity Three threats were identified in this study that fall under this category: random heterogeneity of subjects, random irrelevancies in experimental setting, and searching for a certain result. Random heterogeneity of subjects is a threat caused by a heterogeneous sample such that individual differences within the sample could affect the study's result [94]. To minimize this threat, the experts were selected based on their competencies and knowledge in software engineering and software process improvement. Random irrelevancies are elements outside of the study setting which can disturb its conduct [94]. This threat is considered as minor since the interviews were conducted in an uninterrupted session and in a quiet environment. There were no discussions about the questions before the interview that could have influenced the interviewee's answers. Searching for results or "fishing" is the tendency of the researchers to search for a certain result or answer and ignore the inconvenient information [94]. To minimize this threat, all answers from the experts, whether they were positive or negative, were recorded and analyzed regardless the researchers' expected outcome. ### Results The subsections 5.3.1 to 5.3.7 summarize the main issues regarding the presented concepts (see Figure 1). The impact on SPI-MEF and the applied refinements are reported in each subsection. In essence, the approach proposed by SPI-MEF for SPI evaluation was taken very positively by both academic researchers and industry practitioners. Both groups agreed that SPI-MEF has the potential to provide a systematic way of evaluating process improvement impact. However, there were several suggestions brought forward to improve the framework in terms of increasing its applicability in practice. #### 5.3.1 Gap analysis of evaluation quality Higher accuracy and better coverage (see Section 4.1) is of course good to achieve. However, it may not be feasible for companies to achieve both simultaneously in the first place since resources may be constrained. Therefore, it is important to know which one is important to consider first. There was divergence in the answers of the interviewees on this issue. Some suggested considering accuracy first while others considered coverage as more important. However, their answers revealed that giving emphasis on accuracy first has some formidable advantages. Achieving accurate and valid results first can increase the confidence on the quality of evaluation which then can motivate to increase the coverage adding more complexity in the evaluation and investing more resources. If the intention of the evaluation is to see a more complete picture of the improvement benefits first and identifying the problem areas, then coverage should get more emphasis than accuracy. The cost of the evaluation was considered as a very important factor. 
The absence of cost considerations may lead organizations to opt for a good enough evaluation and discourage them from expending money to gain high accuracy and coverage to achieve a holistic evaluation. Therefore, the cost factor should be included in this matrix and a discussion on the relation between quality of evaluation and cost should be included in the concept. Impact on SPI-MEFIn addition to the previously present two dimensions of accuracy and coverage, a third dimension covering the cost aspect was added to the framework (see Section 4.1). #### 5.3.2 Evaluation scoping Some confusion arose regarding the roles in each viewpoint and in the interpretation of the categorization of the viewpoints (see Section 4.2.3). For example, it was not clear that the same role can subsume different viewpoints (e.g. a project manager who has the viewpoint of a Coordinator in the Project level can also have the Implementer viewpoint in the Product level). This may be due to the short document provided to the experts as an introduction to the framework which did not suffice to clarify this aspect. _Impact on SPI-MEF:_ To reduce chances of misinterpretation, the example describing the Evaluation Viewpoints in Section 4.2.3 explicitly discusses this point. The extended scenario [29] was enhanced with motivations for the allocation of roles to the different viewpoints. #### 5.3.3 Determination of measures The feedback to the questions regarding the proposed method (see Section 4.3) was twofold: on one hand, the approach was judged as systematic and comprehensive, which indicates that the method can be of practical use and provide appropriate support for practitioners. On the other hand, some experts perceived the approach as quite complex and time consuming which implies caveats in its applicability in terms of training and education of employees, and in justifying the additional resources needed for its implementation. _Impact on SPI-MEF:_ The concerns about the complexity of the method can be addressed by considering that the process of derivation of measures is an iterative one and is indeed scalable to more realistic settings than those which were shown in the example given in the interview material. Adding to the framework, as it was proposed by one interviewee, a palette of goals, questions and metrics on which the user can base his measurement program, was regarded by the authors as inflexible and difficult to maintain. It would be more appropriate to define a step-by-step guide which leads the practitioner to formulate his own goals and questions and then provide a pool from which he can pick the needed metrics (see Section 4.3.1). Clearly, this implies more effort on the part of the user of the framework; however, this approach makes it flexible and applicable in a wider range of scenarios. #### 5.3.4 Primary and complementary measures The introduction of "primary" and "complementary" measurements (see Section 4.3.1) necessitates a precise definition of these new terms. As it was observed by one academic expert, the term "complementary" may induce misunderstandings, and indeed, an industry expert interpreted the measures as the "needed" (primary) and "good to have" (complementary) ones. Clearly, this was not the intended interpretation and several remedies were discussed to avoid this misinterpretation. _Impact on SPI-MEF:_ As a result, a renaming of the terms was discarded, since any naming inherits ambiguities depending on the background of the reader. 
Therefore, in order to minimize the space for interpretation, the definition of the terms "primary" and "complementary" were enhanced and the exemplified measurement derivation was elaborated with more detailed steps. Additionally, it was made very explicit in the framework that "complementary" measures are not optional ("good to have"), but necessary for a complete evaluation (Section 4.3.1). Furthermore, a pool of commonly used metrics, grouped according to measurement levels, was provided in [29]. This should support the practitioner initially in identifying primary and complementary measurements. It should be noted however, that the pool has to be seen as a reference, and it should not be regarded as an exhaustive set of metrics. #### 5.3.5 Confounding factors This concept (see Section 4.4) was specifically put to the industry experts in order to exhibit if they consider it as an important issue in the practical evaluation of SPI. Compared to the other questions, the input to this concept was rather thin, although positive. Indeed, it was deemed as a necessary step to create awareness for confounding factors and consider them appropriately in the construction of baselines, and practical ways to control them in an industrial setting were needed according to the industry experts. _Impact on SPI-MEF:_ In the final framework, a short description of typical confounding factors (Table IV) that need to be taken into consideration for evaluation planning or during the evaluation was included, along with guidelines on how to address them (Section 4.4). #### 5.3.6 Evaluation scheduling Both the academic and industry experts agreed that the concept of Lag Factor (LF) and Degradation Factor (DF), and periodic evaluation is conceptually right (see Section 4.5.1). The main concerns were, however, how to come up with these values in the first place when the initiative is new or when several improvement initiatives are running in parallel. DF was considered harder to define as compared to LF. DF is the key concept that helps to define the time-bounds for periodic evaluation and could also help to determine the optimum interval between successive evaluations which is important to minimize the cost of evaluation. _Impact on SPI-MEF:_ It was suggested to provide some guidelines on how to come up with the values of LF and DF. These could however be misleading as long as empirical evidence or heuristics for LF and DF are not available. Therefore, at the beginning when the framework is introduced in an organization, experienced practitioners and experts in the field of process improvement could help to define these values. Thereafter, organizations can learn and improve their accuracy to determine LF and DF when they gain more and more empirical evidence for appropriate values of LF and DF. #### 5.3.7 Holistic view The scrutiny of the Holistic view concept (see Section 4.6) revealed some important characteristics regarding this approach to present improvement, and which strengths and weaknesses are inherent in this approach. It was confirmed that the target audience for the holistic representation resides in top-level management, for which the reduction in details can be seen as an advantage. The tool is therefore less adequate as decision support for the continuation or further refinement of an improvement initiative (this has to be done at a lower level where details are conserved), but rather expresses the "health" on the initiative and reveals if the expected benefits are achieved. 
The subjective element, "gut feeling", as it is integrated in the model, was judged both positively and negatively. Subjective ratings in improvement assessment are used in industry and are therefore applicable in the "Holistic View". The contribution of the framework would therefore be the formalization of that process. _Impact on SPI-MEF:_ To make the subjective rating in the improvement assessment more homogenized and consistent among the different stakeholders, the framework prescribes creating guidelines on how to perform such a rating (Section 4.6.2). The extended scenario [29] provides an example of how such a guideline can be realized to help homogenize impact ratings. ## 6 Conclusion This paper presents a framework for the measurement and evaluation of software process improvement initiatives (SPI-MEF). SPI-MEF describes and exemplifies the use of the concepts of evaluation quality and scoping, determination of measures, and evaluation scheduling and analysis. The framework's concepts were derived from the best practices gathered in a systematic literature review on SPI measurement and evaluation [27]. Once the initial version of the framework had been created, it was evaluated by sixteen academic and industry experts with a median of 6 years of combined SPI experience in both research and practice. The focus of the evaluation was to validate that the framework integrates the important aspects of SPI evaluation and, at the same time, provides support for practitioners. According to the experts, the contribution of the framework lies in the structured and nevertheless flexible approach. SPI-MEF gives concrete guidelines on how to scope the evaluation _before_ the improvement initiative is implemented. This allows practitioners to increase the visibility of the improvement effort within the company and to plan the resources needed for the evaluation. As SPI-MEF builds upon the widely known GQM paradigm, existing measurement programs in an organization can be re-focused on the evaluation of SPI, reusing existing resources and infrastructure. On the other hand, SPI-MEF also provides guidance to initiate a new measurement and evaluation program. Perception of improvement success varies within the functional structure of an organization. SPI-MEF provides means to capture and communicate improvement outcomes from different viewpoints, facilitating the understanding of the effects of process change. As such, SPI-MEF is a step forward in the ability to determine the success of SPI initiatives, increasing the confidence in the evaluation results. ### Future work Future work, refining and extending SPI-MEF, will include the integration of a cost-model that will further increase the adaptability of the framework, and improve the support for practitioners selecting evaluation strategies. Furthermore, we target a dynamic evaluation [28] of SPI-MEF, instantiating the framework in a specific company context and piloting the implementation within an initiative that aims at improving the alignment between requirements engineering and verification activities.
2303.16360
Multi-phase gas nature in the sub-pc region of the active galactic nuclei II: Possible origins of the changing-state AGNs
Multi-wavelength observations of active galactic nuclei (AGNs) often reveal various time scales of variability. Among these phenomena, "changing-look AGNs" are extreme cases where broad emission lines become faint/bright or even disappear/emerge between multi-epoch observations, providing crucial information about AGN internal structures. We here focus on "changing-state" AGNs, specifically investigating the transition of optical spectra over years to tens of years. Based on the axisymmetric radiation-hydrodynamical simulations (Paper I) for the gas dynamics within the dust-sublimation radius, we investigate the spectral properties of ionized gas exposed to the radiation from an AGN with a 10^7 Msun supermassive black hole. We find significant time-dependent variations in the Balmer emission lines by utilizing post-process pseudo-three-dimensional calculations and the spectral synthesis code CLOUDY. The equivalent width of Halpha and Hbeta changes by a factor of 3, or the emission lines even disappear during 30 years for the same viewing angle. The time-dependent behaviour arises primarily from gas dynamics, particularly the formation of non-steady, radiation-driven outflows within the innermost region of the disc (r <10^-3 pc). The intricate interplay between non-spherical radiation sources at the core of AGNs and the dynamic behavior of gas within the dust sublimation radius gives rise to radiation-driven outflows. This non-steady outflow potentially contributes to the observed variability in Balmer line emissions over multi-year timescales in certain AGNs.
Keiichi Wada, Yuki Kudoh, Tohru Nagao
2023-03-28T23:58:06Z
http://arxiv.org/abs/2303.16360v2
Multi-phase gas nature in the sub-pc region of the active galactic nuclei II: Optical-UV spectra originated in the ionized gas ###### Abstract Through two-dimensional radiation-hydrodynamical simulations, we investigate the spectral properties of ionized gas irradiated by an active galactic nucleus with a supermassive black hole of \(10^{7}M_{\odot}\). For the gas inside the dust-sublimation radius (\(r\sim 10^{-2}\) pc), we conduct post-process pseudo-three-dimensional calculations utilizing the spectral synthesis code Cloudy. We show that we can reproduce various broad emission lines in optical and ultraviolet wavelengths. The line profiles change depending on the viewing angles even for a small range from the rational axis, i.e., 5-30 degrees; most lines, such as H\(\alpha\), are characterized by a double-peaked profile, reflecting that the emissions are originated in the surface of the rotating disk. By contrast, high-ionization emission lines such as C iv\(\lambda 1549\) show a double-peaked profile for a nearly face-on view, as these lines derive from the fast outflowing gas from the disk surface. Our results suggest that some properties of the bright UV-optical emission lines observed in Seyfert-like AGNs can be caused by the radiation-driven fountain flow inside the dust sublimation radius. ## 1 Introduction Type 1 active galactic nuclei (AGNs) are characterized by broad (\(\gtrsim 1000\) km s\({}^{-1}\)) emission lines. These include Balmer lines, C iv\(\lambda 1549\), and Mg ii\(\lambda 2798\). (e.g., Osterbrock & Mathews 1986; Peterson 1997). These lines most likely originate in the photoionized gas derived from the strong radiation emitted by the AGN (e.g. Osterbrock & Ferland 2006; Netzer 2013). The widths of the individual emission lines have often been used to estimate central black hole (BH) mass (e.g., Ferrarese & Ford 2005; Bentz et al. 2009). However, for a reliable estimate of the masses of supermassive BHs, understanding the geometry and kinematics of broad emission line regions (BLRs) and their origins is essential. In terms of low-ionization gas, BLRs are believed to originate in a rotating disk, but radial motions such as outflows and inflows are also assumed to be present (e.g., Gaskell, 1982; Smith and Raine, 1985; Ferland and Persson, 1989; Chiang and Murray, 1996; Gaskell, 2009, and references therein). Recently, the spatial structures of BLRs were partially resolved using a near-infrared interferometer in some nearby AGNs (GRAVITY Collaboration et al., 2018, 2020, 2021). The results are consistent with the size determined by the reverberation mapping (RM) technique (e.g., Blandford and McKee, 1982; Peterson et al., 1993, 2004; Lawther et al., 2018; Baskin and Laor, 2018). The outer edge of the BLR is \(\sim 1/3\) of the dust sublimation radius (Netzer and Laor, 1993; Suganuma et al., 2006; Netzer, 2015, 2020; GRAVITY Collaboration et al., 2022). The velocity-resolved RM (e.g., Barth et al., 2011; Almeyda et al., 2020) can provide clues about the spatial distribution of BLRs in Seyfert 1 galaxies, such as NGC 5548 (Williams et al., 2020). However, it is still unclear whether these two components are or are not physically related. If BLR is extended to the inner edge of the dust torus, the structure and dynamics of the dust sublimation region are critical in understanding the origins of BLRs (Baskin and Laor, 2018). 
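Since the BH-mass estimates mentioned above rest on combining a BLR size with a line width, a minimal numerical sketch of the standard virial estimator, \(M_{\rm BH}\simeq f\,R_{\rm BLR}\Delta V^{2}/G\), may help fix the orders of magnitude; the virial factor \(f\) and the sample numbers below are generic, illustrative assumptions and are not taken from this paper.

```python
G_CGS = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33     # solar mass [g]
PC_CM = 3.086e18     # parsec [cm]

def virial_mass(R_blr_pc, dV_kms, f=1.0):
    """Standard virial BH-mass estimate M_BH = f * R_BLR * dV^2 / G.
    f is the (order-unity) virial factor, an assumed value here."""
    R = R_blr_pc * PC_CM         # BLR radius in cm
    dV = dV_kms * 1.0e5          # line width in cm/s
    return f * R * dV**2 / G_CGS / M_SUN   # result in solar masses

# Illustrative numbers: R_BLR ~ 3e-3 pc and a line width of ~3000 km/s
print(f"M_BH ~ {virial_mass(3e-3, 3000.0):.1e} M_sun")
```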
The failed radiatively accelerated dusty outflow (FRADO) is a type of dynamical model (Czerny and Hryniewicz, 2010; Naddaf et al., 2021), but the multi-dimensional dynamics of dusty gas, which are essential for the line shape and distribution of ionized gas, are not directly solved in the FRADO model (see also Dorodnitsyn and Kallman, 2021). BLR gases are often assumed to be high-density (\(\sim 10^{10}-10^{11}\) cm\({}^{-3}\)) cloudlets. However, the realistic structures and dynamics of BLR "clouds" remain theoretically unclear. Recently, Matthews et al. (2020) demonstrated that biconical disk winds illuminated by an AGN continuum can produce BLR-like spectra. Based on a simple clumpy wind model, they conducted Monte Carlo radiation transfer calculations, and found broad emission lines with equivalent widths and line ratios comparable to those observed in quasars. Although they have succeeded in reproducing spectra resembling those of luminous, type-1 AGNs, their model needs to assume wind properties and geometry with various free parameters. As they pointed out in the summary of the paper, radiative transfer calculations based on hydrodynamic simulations are necessary for the next step. The hydrodynamics of dusty gas under central radiation was recently studied in terms of the "obscuring torus" on a 1-10 pc scale (Wada, 2012; Dorodnitsyn et al., 2012; Wada, 2015; Namekata and Umemura, 2016; Williamson et al., 2020). Multi-dimensional radiation-hydrodynamic calculations previously revealed that outflowing multi-phase gas with dust is formed naturally, and the Type 1 and 2 dichotomies in the spectral energy distribution (SED) can thus be naturally explained (Schartmann et al., 2014). This dynamical model ("radiation-driven fountain") effectively explains the multi-wavelength observations of the nearby Type 2 Seyfert galaxy, the Circinus galaxy, in many aspects: molecular and atomic emission and absorption lines in the central 10 pc (Izumi et al., 2018; Wada et al., 2018; Uzuo et al., 2021; Matsumoto et al., 2022; Izumi and Wada, 2023), the conical shape and line ratio properties of the narrow emission line region (e.g., [Oiii ]\(\lambda\) 5007 (Wada et al., 2018)), and the X-ray spectral energy distribution and lines (Buchner et al., 2021; Ogawa et al., 2022). As the second paper of the series, we here focus on emission lines derived from the gas inside the dust-sublimation radius (\(<0.02\) pc) using a high-spatial-resolution radiation-driven fountain model (Kudoh et al., 2023) (hereafter Paper I). In contrast to Matthews et al. (2020), we investigate gas dynamics in relatively low luminosity AGNs with a moderate black hole (BH) mass, i.e., \(10^{7}M_{\odot}\) in this paper. This is partly because a larger dynamic range should be necessary for more luminous quasar-type AGNs associated with more massive BHs, and the radiation-driven fountain scheme is most relevant to explain the multi-wavelength properties of Seyfert-type AGNs (e.g., Izumi and Wada, 2023). Following Wada et al. (2018), we analyze a snapshot of the hydrodynamic simulation using the photo-ionization code Cloudy (Ferland et al., 2017). The line profiles of the hydrogen recombination lines as well as the high-ionization lines are discussed. This is an attempt to understand the origins of the emission lines of AGNs based on a physics-motivated multi-dimensional model. 
## 2 Numerical Methods ### Physical model in gas, dust, and radiation We use one-snapshot data from a two-dimensional radiation-hydrodynamic simulation in a quasi-steady state (see Paper I for details) and calculate the radiative transfer as a post-process (see §2.2). Here, we briefly summarize the hydrodynamic model. We solve the evolution of a dusty gas disk with mass inflow irradiated by a central source in a computational box of \(r=10^{-4}\sim 50\) pc (Figure 1). This is an extension of the three-dimensional radiation-driven fountain simulations (Wada, 2012, 2015) with a higher resolution. However, we assume an axisymmetric distribution using cylindrical coordinates. The physics included here are radiative heating by X-rays as well as the radiation force acting on both the dusty and ionized gas. The black hole mass is \(M_{\rm BH}=10^{7}M_{\odot}\) and the Eddington ratio is 0.1 (the bolometric luminosity is \(1.25\times 10^{44}\) erg s\({}^{-1}\)). The basic equations are \[\frac{\partial\rho}{\partial t}+\mathbf{\nabla}\cdot[\rho\mathbf{v}]=0, \tag{1}\] \[\frac{\partial\rho\mathbf{v}}{\partial t}+\mathbf{\nabla}\cdot[\rho\mathbf{v}\mathbf{v}+P_{\rm g}\mathbf{I}]=\mathbf{f}_{\rm rad}+\mathbf{f}_{\rm grav}+\mathbf{f}_{\rm vis}, \tag{2}\] \[\frac{\partial e}{\partial t}+\mathbf{\nabla}\cdot[(e+P_{\rm g})\,\mathbf{v}]=-\rho\mathcal{L}+\mathbf{v}\cdot\mathbf{f}_{\rm rad}+\mathbf{v}\cdot\mathbf{f}_{\rm grav}+W_{\rm vis}, \tag{3}\] where the total energy density is \(e=P_{\rm g}/(\gamma-1)+\rho v^{2}/2\), and the specific heat ratio is \(\gamma=5/3\). \(\mathcal{L}\) is the net heating/cooling rate per unit mass. We adopted the gravitational force, \(\mathbf{f}_{\rm grav}=-\rho GM_{\rm BH}\mathbf{e}_{r}/r^{2}\), where \(G\) denotes the gravitational constant and \(r=\sqrt{R^{2}+z^{2}}\) is the distance from the central BH with \(M_{\rm BH}=10^{7}M_{\odot}\). The radiation force is \(\mathbf{f}_{\rm rad}\simeq\int\nabla\cdot F_{\nu}\mathbf{e}_{r}d\nu\), where \(F_{\nu}\) is the radiation flux. We assume an \(\alpha\) viscosity (\(\alpha=0.1\)) to achieve mass accretion through the disk, \(\nu_{\rm vis}=\alpha c_{s}^{2}/\Omega_{\rm K}\), where the viscosity depends on the sound speed \(c_{s}\) and the Keplerian angular speed \(\Omega_{\rm K}\) (Shakura & Sunyaev, 1973). The viscous force in Equation (2) and the viscous heating in Equation (3) are taken from Ohsuga et al. (2005): \[\mathbf{f}_{\rm vis}\equiv\frac{\mathbf{e}_{\varphi}}{R^{2}}\frac{\partial}{\partial R}\left[R^{2}\alpha P_{\rm g}\frac{R^{2}}{v_{\varphi}}\frac{\partial}{\partial R}\left(\frac{v_{\varphi}}{R}\right)\right], \tag{4}\] and \[W_{\rm vis}\equiv\alpha P_{\rm g}\frac{R}{v_{\varphi}}\left[R\frac{\partial}{\partial R}\left(\frac{v_{\varphi}}{R}\right)\right]^{2}. \tag{5}\] We set the viscosity parameter \(\alpha\) as follows to obtain the gas supply around the disk mid-plane: \[\alpha=\left\{\begin{array}{ll}0.1 & n>10^{3}\ {\rm cm}^{-3}\ \&\ T_{\rm g}<10^{3}\ {\rm K}\\ 0.0 & {\rm otherwise}\end{array}\right. \tag{6}\] We consider heating by UV and X-ray (Maloney et al., 1996; Meijerink & Spaans, 2005a; Wada, 2012) and the optically thin radiative cooling (Meijerink & Spaans, 2005b; Wada et al., 2009). We assume that dust grains sublimate above a dust sublimation temperature \(T_{\rm sub}=1500\) K. The dust temperature is determined assuming local thermal equilibrium with the radiation field. 
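As a quick numerical illustration of the viscosity prescription above, the short sketch below evaluates \(\nu_{\rm vis}=\alpha c_{s}^{2}/\Omega_{\rm K}\) together with the density-temperature switch of Eq. (6); the sample cell values and the isothermal sound speed (with mean molecular weight \(\mu=1\)) are assumptions of this sketch, not values quoted in Paper I.

```python
import math

# Physical constants (cgs) and the model BH mass
G, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24
M_SUN, PC_CM = 1.989e33, 3.086e18
M_BH = 1.0e7 * M_SUN

def alpha_switch(n_cm3, T_gas):
    """Eq. (6): alpha = 0.1 only in dense (n > 1e3 cm^-3), cold (T < 1e3 K) gas."""
    return 0.1 if (n_cm3 > 1.0e3 and T_gas < 1.0e3) else 0.0

def nu_vis(R_pc, n_cm3, T_gas, mu=1.0):
    """alpha-viscosity nu = alpha * c_s^2 / Omega_K (Shakura & Sunyaev 1973)."""
    R = R_pc * PC_CM
    c_s = math.sqrt(k_B * T_gas / (mu * m_p))    # isothermal sound speed [cm/s]
    Omega = math.sqrt(G * M_BH / R**3)           # Keplerian angular speed [1/s]
    return alpha_switch(n_cm3, T_gas) * c_s**2 / Omega   # [cm^2/s]

# A dense, cold mid-plane cell vs. a hot, diffuse cell at R = 0.01 pc (illustrative values)
for n, T in [(1.0e11, 5.0e2), (1.0e2, 1.0e5)]:
    print(f"n = {n:.0e} cm^-3, T = {T:.0e} K  ->  nu_vis = {nu_vis(1.0e-2, n, T):.2e} cm^2 s^-1")
```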
We use the public MHD code CANS+ (Matsumoto et al., 2019)1 with an additional module to evaluate the radiation force and radiative heating/cooling. However, we ignore the magnetic field in the present model. The number of computational cells in each direction is set to \((N_{R},N_{z})=(1200,2304)\). The cell sizes in the uniform region give high resolutions, \(\Delta R=\Delta z=5\times 10^{-5}\) pc, for \(R<3.5\times 10^{-2}\) pc and \(|z|<3.26\times 10^{-2}\) pc. On the outside, the cells are stretched to approximately 0.1 pc for a maximum simulation box of \(R=|z|=50\) pc (Figure 1). Footnote 1: [https://github.com/chiba-aplab/cansplus](https://github.com/chiba-aplab/cansplus) ### Radiative transfer using CLOUDY We used the density, temperature, and velocities in the central \(r\leq 0.02\) pc of the hydrodynamic simulation described in Section 2.1 (Figure 2), and the data on the 2D cylindrical grid were remapped onto polar grid cells with \((N_{r},N_{\theta})=(400,41)\) for \(-30^{\circ}\leq\theta\leq 30^{\circ}\), where \(\theta\) is the angle from the equatorial plane. We then ran the spectral synthesis code Cloudy (version 17.03) (Ferland et al., 2017). The SED of the central source was derived from Cloudy's AGN command and is represented as \[f\left(\nu\right)=\nu^{\alpha_{\rm UV}}\exp\left(-h\nu/kT_{\rm BB}\right)\exp\left(-kT_{\rm IR}/h\nu\right)\cos i+a\nu^{\alpha_{\rm X}}\exp\left(-h\nu/E_{1}\right)\exp\left(-E_{2}/h\nu\right), \tag{7}\] where \(\alpha_{\rm UV}=-0.5\), \(T_{\rm BB}=10^{5}\) K, \(\alpha_{\rm X}=-0.7\), \(a\) is a constant that yields the X-ray-to-UV ratio \(\alpha_{\rm OX}=-1.4\), \(kT_{\rm IR}=0.01\) Ryd, \(E_{1}=300\) keV, \(E_{2}=0.1\) Ryd, and \(i\) is the angle from the z axis (i.e., rotational axis). The UV radiation (first term), which derives from the geometrically thin optically thick disk, was assumed to be proportional to \(\cos i\). By contrast, the X-ray component (second term) was assumed to be isotropic (Figure 1). We assume that grains are sublimated (i.e., no grains) in the data used in Cloudy, and the Solar metallicity is assumed. The following are parts of the input file for Cloudy:
abundances "default.abn"
no grains
grains ism function sublimation
filling factor 1.0
no molecules
set nend 2000
set continuum resolution 0.2
The transmitted SED, calculated using Cloudy for the innermost cell, was used as an incident SED for the next outward radial cell, and this procedure was repeated up to the outer edge (i.e., \(r=0.02\) pc) for a given radial ray (see Wada et al. 2018b for details). We confirmed that beyond \(r>0.02\) pc, most emission lines typically seen in BLRs are very weak. Figure 1: Model setup: Anisotropic and isotropic central radiation fields are assumed for the UV and X-ray, respectively. Therefore, the dust sublimation region is not spherical (see Paper I for details). The dusty, cold gas is supplied through the disk and the outflow is launched from the innermost region of the disk (\(r\lesssim 0.1\) pc). \(R_{s}\) is the Schwarzschild radius. Upon completion of all Cloudy calculations, we _observed_ the system (i.e., all the grid cells within \(r=0.02\) pc) along the line of sight, assuming the viewing angle \(i\) (\(i=0\) means face-on). For the azimuthal direction, we assumed that the system was axisymmetric. 
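To make the anisotropy of the incident continuum concrete, the rough sketch below evaluates the two terms of Eq. (7) for the quoted parameters; the normalization \(a\) is fixed here through the conventional 2500 Å-2 keV definition of \(\alpha_{\rm OX}\), and the choice of reference frequencies and the face-on normalization are simplifications of this illustration, not part of the Cloudy setup itself.

```python
import math

# Eq. (7) parameters, with cutoff energies expressed as frequencies [Hz]
H, K_B = 6.626e-27, 1.381e-16
RYD_HZ, KEV_HZ = 3.290e15, 2.418e17
ALPHA_UV, ALPHA_X, ALPHA_OX = -0.5, -0.7, -1.4
NU_BB = K_B * 1.0e5 / H      # cutoff of the T_BB = 1e5 K disk component
NU_IR = 0.01 * RYD_HZ        # kT_IR = 0.01 Ryd
NU_E1 = 300.0 * KEV_HZ       # E_1 = 300 keV
NU_E2 = 0.1 * RYD_HZ         # E_2 = 0.1 Ryd

def f_uv(nu, i_deg):
    """Anisotropic (cos i) disk term of Eq. (7)."""
    return nu**ALPHA_UV * math.exp(-nu/NU_BB) * math.exp(-NU_IR/nu) * math.cos(math.radians(i_deg))

def x_shape(nu):
    """Isotropic X-ray power-law term of Eq. (7), without the normalization a."""
    return nu**ALPHA_X * math.exp(-nu/NU_E1) * math.exp(-NU_E2/nu)

# Fix 'a' so that the 2 keV / 2500 A flux ratio of the face-on SED matches alpha_OX = -1.4
nu_2500, nu_2keV = 3.0e18 / 2500.0, 2.0 * KEV_HZ
a = (nu_2keV / nu_2500)**ALPHA_OX * f_uv(nu_2500, 0.0) / x_shape(nu_2keV)

for i in (5, 30, 60, 85):
    uv, xr = f_uv(nu_2500, i), a * x_shape(nu_2500)
    print(f"i = {i:2d} deg: UV term carries {100.0*uv/(uv+xr):5.1f}% of f_nu at 2500 A")
```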
In addition, the Doppler-shifted emission lines (in which the velocity of each cell was used) from all grid cell spectra were integrated while considering 64 azimuthal directions over \(2\pi\). This is justified because the optical depths of the prominent lines in the Cloudy calculations are smaller than 0.1, and therefore they are not heavily attenuated by the diffuse medium between the observer and the surface of the dense disk, where most of the emission lines originate. Note that we do not assume an optically thin medium in the radial direction in the Cloudy calculations. ## 3 Results Figure 2: Input radiation-hydrodynamic model, where the top and bottom are the same, but for the central 0.2 pc and 0.002 pc regions, respectively. See also Paper I. The left-half panel shows the line-of-sight velocity for the viewing angle, which is \(5^{\circ}\) from the rotational axis. The right-half panel shows the number density of the gas. In the gray region near the \(z\) axis, the velocity exceeds \(10^{5}\) km s\({}^{-1}\), a substantial fraction of the light speed. This region is a transient structure caused by a numerical artifact near the boundary. However, this high velocity region is not used in the spectrum calculations. Figure 3 shows the spectrum between 1000 Å and 7000 Å, assuming a viewing angle \(i\) of 5\({}^{\circ}\) from the rotational axis. Strong hydrogen recombination lines as well as C iv\(\lambda\)1549, C iii\(]\lambda\)1909, and Mg ii\(\lambda\)2798, which are often seen in Type 1 and 2 AGNs, are obtained 2. Footnote 2: It is unclear why the He i\(\lambda\)4471 was stronger than the Balmer lines, as it is not usually prominent in observations. Figure 4 shows the spectra that include H\(\alpha\), H\(\beta\), and Mg ii. The profiles of all of these lines depend on the viewing angle; they are wider for \(i=30^{\circ}\) than for \(i=5^{\circ}\) or \(10^{\circ}\). In H\(\alpha\) and Mg ii, the lines show a double-peaked profile for \(i=30^{\circ}\). This dependence on the viewing angle is a natural consequence of the fact that the emission region for these lines is the upper or lower surface of the rotating disk, as can be seen in the brownish region of the density map shown in Figure 2 (bottom panel), where \(n\gtrsim 10^{11}\) cm\({}^{-3}\). We also observed significant differences in the line profiles of the high-ionization lines (C iv\(\lambda\)1549 and Si iv\(\lambda\)1397) as compared with the hydrogen recombination lines and Mg ii shown in Figure 4. In Figure 5, C iv shows a double-peaked profile for \(i=5^{\circ}\) and a strong systemic component at the rest-frame wavelength. However, a single peak appears only around the systemic velocity for \(30^{\circ}\). Si iv also shows high-velocity components with a systemic component for \(i=5^{\circ}\). For \(i=30^{\circ}\), the profile is a single peak around the systemic component only. Note that the weaker N iv] shows a double-peaked profile for \(i=30^{\circ}\), which is similar to that of Mg ii. These results indicate that C iv and Si iv partially originated in the _outflows_ located at the upper/lower Figure 3: Line spectrum calculated using Cloudy between 1000 and 7000 Å. C iv\(\lambda\)1549, C iii\(]\lambda\)1909, C ii\(]\lambda\)2326, Mg ii\(\lambda\)2798, He i\(\lambda\)4471, \(\lambda\)5876, and the hydrogen recombination lines are all marked. The viewing angle is assumed to be 5\({}^{\circ}\). 
surface of the disk, which are prominently displayed as the high line-of-sight velocity (\(>2000\) km s\({}^{-1}\)) in the velocity map of the central 0.002 pc in Figure 2. ## 4 Discussion and Conclusion Since the 1980s, quasar broad emission lines have been known to blue-shift frequently from the systemic velocity, particularly for the high-ionization broad C iv line (e.g., Gaskell, 1982; Wilkes & Carswell, 1982). This has also been confirmed for Sloan Digital Sky Survey quasars (e.g., Vanden Berk et al., 2001; Richards et al., 2011; Shen et al., 2016). The origin of the velocity shift remains under debate; it could be caused by outflows and obscuration by the disk (Gaskell, 1982; Chiang & Murray, 1996). However, some observations, including those of velocity-resolved reverberation mapping and the line width-time delay relation, are not well explained by the disk wind model (Gaskell & Goosmann, 2016), which implies that the motion is dominated by gravity (Krolik et al., 1991). As described in this study, we found that the hydrogen recombination lines and most other emission lines show wider or double-peaked profiles for larger viewing angles (i.e., closer to edge-on). This is consistent with the notion that BLRs originate in the rotating disk (see e.g., Storchi-Bergmann et al., 2016, and references therein). However, our results also show emission lines originating in the radiation-driven wind. In some disk wind models, two spatially distinct components are required to explain the properties of the BLR (Collin-Souffrin and Lasota, 1988). Accordingly, Yong et al. (2020) proposed that the velocity shift between C iv \(\lambda\)1549 and Mg ii \(\lambda\)2798 can be used to infer the orientation of the nucleus. This is basically consistent with what we found in Figure 5\({}^{3}\). In our present analysis, we assumed that we can observe both the far and near sides of the outflows (see Fig. 2). If the far-side outflow (i.e., the redshifted component) is obscured by the dense disk, we expect that C iv and Si iv are blueshifted with respect to the systemic velocity, or they may show asymmetric profiles. Footnote 3: In the spectra of type-1 AGNs, the Si iv\(\lambda\)1397 line is blended with the O iv]\(\lambda\)1402 line due to their broad nature. However, in typical situations of BLRs (i.e., with the solar or super-solar metallicity), the flux of O iv]\(\lambda\)1402 is less than \(\sim\)30% of the total flux of the Si iv+O iv] blend (see, e.g., Figure 29 in Nagao et al. (2006)). Thus we simply write Si iv \(\lambda\)1397 to denote the Si iv+O iv] blend. As Baldwin et al. (1995) indicated in their "locally optimal cloud" representation, the emission line spectrum can be reproduced by integrating various properties of emitting clouds, and the spectra do not necessarily represent physical conditions such as pressure, gas density, or ionization of individual clouds. They also speculated that a chaotic cloud environment could be the source of the lines, and therefore the lines reflect mostly global properties of the clouds. The present results suggest that this rather chaotic depiction is naturally reproduced by the radiation-driven fountain model. Using the narrow Fe-K\(\alpha\) reverberation mapping for the changing-look AGN NGC 3516, Noda et al. (2023) found that the Fe-K emitting radius in the type-2 phase Figure 5: Line profiles for two viewing angles (5\({}^{\circ}\) and 30\({}^{\circ}\)): Si iv \(\lambda\)1397, N iv] \(\lambda\)1487, C iv \(\lambda\)1549. 
The rest-frame wavelengths are shown by vertical lines. Note that blue- and redshifted components appear in both lines for small viewing angles. is consistent with that of the BLR materials in the type-1 phase. They claimed the possibility that the BLR materials remained at the same location as in the type-1 phase. If the BLR material mainly originates in the surface of the rotating disk, as our results suggest, this observational fact can be naturally understood. However, further investigation based on radiation-hydrodynamic simulations with varying AGN luminosity would be necessary. ## Acknowledgments We thank G. Ferland and the Cloudy team for their regular support. Numerical computations of the radiation-hydrodynamic model were performed on a Cray XC50 at the Center for Computational Astrophysics at the National Astronomical Observatory of Japan and using the Fugaku supercomputer at RIKEN. This work was supported by JSPS KAKENHI Grant Number 21H04496. The work used computational resources of Fugaku provided by RIKEN through the HPCI System Research Project (Project IDs: hp210147, hp210219).
2310.16258
The effective QCD running coupling constant and a Dirac model for the charmonium spectrum
The QCD effective charge extracted from the experimental data is used to construct the vector interaction of a Dirac relativistic model for the charmonium spectrum. The process required to fit the spectrum is discussed and the relationship with a previous study of the vector interaction is analyzed.
M. De Sanctis
2023-10-25T00:28:53Z
http://arxiv.org/abs/2310.16258v1
# The effective QCD running coupling constant and a Dirac model for the charmonium spectrum ###### Abstract The QCD _effective charge_ extracted from the experimental data is used to construct the vector interaction of a Dirac relativistic model for the charmonium spectrum. The process required to fit the spectrum is discussed and the relationship with a previous study of the vector interaction is analyzed. pacs: 12.39.Ki, 12.39.Pn, 14.20.Gk ## 1 Introduction In a series of previous works the author developed a Dirac relativistic quark-antiquark model to study the spectrum of charmonium and, possibly, of other mesons. In particular, in Ref. [1] the relativistic reduced Dirac-like equation (RDLE) of the model was introduced. This equation is written in the coordinate space in a local form. An accurate calculation of the charmonium spectrum was performed using a small number of free parameters in Ref. [2]. Furthermore, in a subsequent work [3], the Lorentz structure of the interaction terms was studied in more detail, developing a covariant form of the same RDLE. In this model, a specific form of the _regularized vector interaction_ has been used. That interaction had been introduced and studied previously in Ref. [4]. We highlight here that a vector interaction alone is not sufficient to give an accurate reproduction of the charmonium spectrum. To this aim, the contribution of a _scalar interaction_ has always been included in the interaction of the RDLE. In this respect, the scalar interaction was studied in more detail in another work [5], also considering the possibility of using a _mass interaction_. In the same work the scalar and mass interactions have been tentatively related to the excitation of the first scalar resonances of the hadronic spectrum. In the following, we shall denote the content of all these works as our previous calculations. 
However, the connection of the effective charge, extracted from the experimental data, with QCD and the quark interaction for the hadronic bound states is not univocally defined and still represents a challenge for theoretical physics. Taking into account the complexity of the problem, in the present work we shall revise the previously developed vector interaction of Refs. [1, 2, 3, 4, 5], deriving from the quantities introduced there the (possibly) related form of \(\alpha_{S}(Q)\). Then we shall use the effective coupling \(\alpha_{g1}(Q)\) of Refs. [6, 7, 8] to construct, with some modifications, the vector potential for the model. Finally, we point out that, by means of our RDLE, a truly _relativistic_ model is constructed. In this model the vector interaction and the scalar (or mass) interaction can be treated _separately_, allowing for a _separate_ study of their structure. In particular, in the present work, we shall focus our attention on the vector interaction. We recall that, on the contrary, in the nonrelativistic studies, the two interactions give rise (at least at the leading order in the nonrelativistic expansion) to a _unique_ potential, in which the two contributions cannot be easily disentangled. The remainder of the paper is organized as follows. In the next Subsect. 1.1 the notation and conventions used in the work are introduced. In Sect. 2, we study the theoretical connection between the running coupling constant, as a function of the momentum transfer \(Q\), and the vector interaction potential. In Sect. 3, we analyze from the (new) point of view of this paper our previous calculations performed with the RDLE. In Sect. 4, we develop the construction of the interaction vector potential by using the experimentally extracted \(\alpha_{g1}(Q)\). 
Finally, in Sect. 5, the charmonium spectrum is calculated and displayed. The role of the different parameters is analyzed and some general considerations about the whole problem are given. ### Notation and conventions The following notation and conventions are used in the paper. * The invariant product between four vectors is standardly written as: \(V^{\mu}U_{\mu}=V^{\mu}U^{\nu}g_{\mu\nu}=V^{0}U^{0}-\mathbf{V}\cdot \mathbf{U}\). * The lower index \(i=1,2\) represents the _particle index_, referred to the quark (\(q\)) and to the antiquark (\(\bar{q}\)). * We shall use, for each quark, the four Dirac matrices \(\gamma_{i}^{\mu}\). * The vertex 4-momentum transfer will be denoted as \(q^{\mu}=(q^{0},\mathbf{q})\). * We shall neglect the retardation contributions, setting \(q^{0}=0\) for the time component of the 4-momentum transfer. This approximation is consistent with the use of the Center of Mass Reference Frame for the study of the \(q\bar{q}\) bound systems. * In consequence, the _positive_ squared four momentum transfer \(Q^{2}\) takes the form \(Q^{2}=-q_{\mu}q^{\mu}=\mathbf{q}^{2}\), that is \(Q=|\mathbf{q}|\). * The quantities \(\alpha(Q)\), \(G(Q)\), \(V^{V}(r)\) and \(U^{V}(r)\) that will be introduced in the paper, are used, _with no label_, in general expressions. * To indicate the model to which these quantities are referred, a specific label is added: \(Coul\) for the pure Coulombic case, \(pr\) for the previous calculations with the RDLE and \(g1\) for the effective charge extracted from the experimental data. The quantity \(\alpha_{V}(0)\) will be also introduced in Sect. 5. * The subindex \(X\) will be used to denote, for the parameters \(\bar{V}_{X}\) and \(r_{X}\), the scalar (\(X=S\)) or mass (\(X=M\)) character of the corresponding interaction. * Finally, throughout the work, we use the standard natural units, that is \(\hbar=c=1\). ## 2 The vector interaction in momentum and coordinate space Our RDLE [1, 2] has been formulated in the coordinate space. In order to introduce into this model the momentum dependent running coupling constant \(\alpha_{S}(Q)\), it is strictly necessary to establish the connection between the coordinate space and the momentum space interaction. We write, _in general_, the momentum dependence of the vector strong interaction (apart from the standard \(1/Q^{2}\) factor) in the form \[\alpha(Q)=\alpha(0)G(Q)\] where \(\alpha(0)\) is a _truly_ constant, adimensional quantity that "represents the strength" of the vector interaction. Furthermore, \(G(Q)\) is a decreasing, positive, function of the momentum transfer \(Q\) that satisfies the condition \(G(0)=1\). The momentum dependence of \(\alpha(Q)\) can be related, at a fundamental level, to the running of the QCD coupling constant, identifying \(\alpha(Q)\) with the strong coupling constant \(\alpha_{S}(Q)\). In phenomenological quark models, as, for example, in our previous calculations, we can say that the function \(G(Q)\) takes phenomenologically into account the structure of the interacting, nonpoint-like, quarks. Its physical meaning, within different models, will be analyzed in more detail in the following of the paper. By means of Eq. (1), the tree-level vector interaction in the momentum space, for a \(q\bar{q}\) system, can be written, in general, as \[{\cal W}^{V}(Q)=-\frac{4}{3}\frac{4\pi}{Q^{2}}\alpha(0)G(Q)\gamma_{1}^{\mu} \gamma_{2}^{\nu}g_{\mu\nu} \tag{2}\] where \(4/3\) represents the color factor in the \(q\bar{q}\) case; \(\alpha(0)\) and \(G(Q)\) have been introduced in Eq. 
(1). Performing the Fourier transform one obtains the corresponding expression in the coordinate space \[{\cal W}^{V}(r)=\int\frac{d^{3}q}{(2\pi)^{3}}\exp{(i\mathbf{q}\cdot \mathbf{r})}{\cal W}^{V}(Q) \tag{3}\] Multiplying the previous expression by \(\gamma_{1}^{0}\gamma_{2}^{0}\) from the left, one obtains, the two-body vector interaction \(W_{(2)}^{V}\) introduced in Eq. (10) of Ref. [2] for the calculations in the Hamiltonian Dirac form. In particular, the two-body interaction potential in the coordinate space is given by the following Fourier transform \[V^{V}(r)=-\frac{4}{3}\int\frac{d^{3}q}{(2\pi)^{3}}\exp{(i\mathbf{q} \cdot\mathbf{r})}\frac{4\pi}{Q^{2}}\alpha(0)G(Q)\, \tag{4}\] where \(V^{V}(r)\) is the vector (two-body) interaction potential, denoted as \(V^{int}(r)\) in Eqs. (12) and (14) of Ref. [2]. In the first place, we recall that, in the case of a constant \(G(Q)\), one goes back to a standard Coulombic interaction. More precisely, for \(G_{Coul}(Q)=1\), one would obtain in the coordinate space the pure Coulombic potential \[V^{V}_{Coul}(r)=-\frac{4}{3}\frac{\alpha_{Coul}(0)}{r}. \tag{5}\] This potential is not able to reproduce with good accuracy the charmonium spectrum. Furthermore, the choice \(G(Q)=G_{Coul}(Q)=1\) is not in agreement with the QCD phenomenology, being completely ignored the running of the coupling constant. In the following Sect. 3 we shall discuss \(G_{pr}(Q)\), corresponding to the potential \(V_{pr}^{V}(r)\) that was introduced in our previous works [2, 5]. In Sect. 4 we shall study the case of \(\alpha_{g1}(Q)\) extracted from the experimental data. In any case, the interaction potential in the coordinate space is obtained by means of the Fourier transform of Eq. (4). ## 3 The quantity \(G_{pr}(Q)\) of our previous calculations The impossibility of reproducing accurately the charmonium spectrum with a pure Coulombic potential required to use, in Ref. [2], a model of the vector interaction that was previously introduced in Ref. [4]. In this model the quarks are considered as _extended_ sources of the chromo-electric field. After many trials with different analytic functions, an accurate reproduction of the charmonium spectrum has been obtained with a Gaussian color charge distribution for each quark: \[\rho(x)=\frac{1}{(2\pi d^{2})^{3/2}}\exp\left(-\frac{\mathbf{x}^{2}}{ 2d^{2}}\right). \tag{6}\] This distribution gives, in the momentum space, the following vertex form factor \[F(Q)=\exp(-\frac{Q^{2}d^{2}}{2}) \tag{7}\] Considering one form factor for each quark vertex, one obtains for the function \(G(Q)\) introduced in Eq. (2), the following expression, specific of our previous calculations: \[G_{pr}(Q)=[F(Q)]^{2}=\exp(-Q^{2}d^{2}). \tag{8}\] For this model, developed in our previous calculations, we have the (true) constant \(\alpha_{pr}(0)=\alpha_{V}\) that was introduced in Refs. [2, 5]. As anticipated at the beginning of the previous section, we can say that, _within this model_, the quantity \(\alpha_{pr}(Q)=\alpha_{pr}(0)G_{pr}(Q)\)_defines_ an effective strong running coupling constant \(\alpha_{S}(Q)\). Furthermore, we observe that \(\alpha_{pr}(Q)\), with \(G_{pr}(Q)\) of Eq. (8), is a function without singularities that "freezes" (_i.e._ goes to a constant limit) as \(Q\to 0\). By performing the Fourier transform defined in Eq. (4), with \(G_{pr}(Q)\) of Eq. (8), one obtains the interaction potential in the following analytic form \[V_{pr}^{V}(r)=-\frac{4}{3}\frac{\alpha_{pr}(0)}{r}\mbox{erf}\left(\frac{r}{2d }\right). \tag{9}\] In Eq. 
(17) of Ref. [2] the same result, denoted there as \(V^{int}(r)\), was obtained by means of a different procedure completely developed in the coordinate space. Note that the potential of Eq. (9) is _regular_ for \(r\to 0\). More precisely, we have \[V_{pr}^{V}(0)=-\frac{4}{3}\frac{\alpha_{pr}(0)}{d}\frac{1}{\sqrt{\pi}}. \tag{10}\] This result was given in Eqs. (13) and (16) of Ref. [2]. We recall that also a positive constant term, denoted as \(\bar{V}_{V}\), is frequently introduced in quark models to improve the reproduction of the experimental spectra. In our previous calculations, as shown in Eq. (13) of Ref. [2], we fixed this constant in the following way: \[\bar{V}_{V}=-V_{pr}^{V}(0). \tag{11}\] With this assumption, the constant \(\bar{V}_{V}\) represents the positive zero-point quark self-energy that, added to the interaction term of Eq. (9), gives a total vector potential that is vanishing at \(r=0\) and approaches the maximum value \(\bar{V}_{V}\) as \(r\rightarrow\infty\). As discussed above, the parameters of the vector interaction, in our previous calculations, are \(d\) and \(\alpha_{pr}(0)\). Their numerical values were obtained by fitting the resonance masses of the charmonium spectrum. The following numerical values were obtained: \(d=(0.1526)\ 0.1511\ fm\) corresponding to \(\lambda=1/d=(1.293)\ 1.306\ GeV\) and \(\alpha_{pr}(0)=(1.864)\ 1.838\) where the first values (in brackets) are those of Table II of Ref. [2] and the second ones are those of Table II of Ref. [5]. In the latter case an updated set of charmonium resonance masses [13] were used to determine the values of \(d\) and \(\alpha_{pr}(0)\). In the remainder of this work, we shall consider only the second group of values. Incidentally, these results can be compared with HLF QCD that gives, for the effective running coupling constant _exactly_ the same analytic expression: \[\alpha_{HLF}(Q)=\alpha_{HLF}(0)\exp[-\frac{Q^{2}}{(2\kappa)^{2}}]. \tag{12}\] The numerical value is \(2\kappa=1.046\pm 0.048\ GeV\), as given in Ref. [9]. This value has the same order of magnitude as \(\lambda\) of our model. ## 4 The use of \(\alpha_{g1}(Q)\) In this section we analyze the possibility of using the quantity \(\alpha_{g1}(Q)\), extracted from the experimental data, to construct the vector interaction potential. In the first place, considering the results of Refs [6, 7, 8], we write \[\alpha_{g1}(Q)=\alpha_{g1}(0)G_{g1}(Q) \tag{13}\] where one would have \(\alpha_{g1}(0)=\pi\) (this numerical value will be discussed in the following). Then, in order to perform (numerically) the Fourier transform of Eq. (4), required for the calculation of the vector potential, we parametrize \(G_{g1}(Q)\) with a continous analytic function, in the following way: \[G_{g1}(Q)=aA(Q)+(1-a)B(Q) \tag{14}\] where the two momentum dependent functions \(A(Q)\) and \(B(Q)\) satisfy the condition \[A(0)=1,\ \ B(0)=1. \tag{15}\] In more detail, we take these functions in the form: \[A(Q)=\exp(-Q^{2}d_{a}^{2}) \tag{16}\] and \[B(Q)=\frac{1+c_{0}\alpha_{b}\ln(x_{b})}{1+c_{0}\alpha_{b}\ln(x_{b}+Q^{2}(\eta( Q))^{2})} \tag{17}\] with \[\begin{array}{l}\eta(Q)=\eta_{0}+bQ\,\\ c_{0}=(11-n_{f}\frac{2}{3})\frac{1}{4\pi}\.\end{array} \tag{18}\] The total function of Eq. (14) has been fitted to the experimentally extracted data [6, 7, 8], from \(Q=0\) to \(Q=50\ GeV\), obtaining the following values for the parameters of Eqs. 
(14 - 18): \(a=0.35415\), \(d_{a}=0.1611\ fm\), \(\alpha_{b}=1.395\), \(x_{b}=0.9164\), \(\eta_{0}=0.7385\ GeV^{-1}\), \(b=1.479\ GeV^{-2}\) and \(n_{f}=6\). In the parametrization displayed above, \(A(Q)\) of Eq. (16) is related to the low momentum behavior of \(\alpha_{g1}(Q)\), while \(B(Q)\) of Eq. (17) takes into account the high momentum logarithmic terms, peculiar to perturbative QCD. However, we point out that our parametrization is not meant to have a specific physical meaning but has been introduced essentially to perform the numerical calculation. The experimentally extracted data, the corresponding fit for \(G_{g1}(Q)\) and \(G_{pr}(Q)\) of Eq. (8) are shown in Fig. 1. In this figure, the sources of the experimentally extracted data are not differentiated. For more details regarding this point, the reader is referred to the works [6, 7, 8, 9]. The coordinate space potentials are obtained by means of Eq. (4). In particular, for the experimentally extracted data, we use the parametrization of \(G_{g1}(Q)\) given in Eq. (14) with the functions \(A(Q)\) and \(B(Q)\) defined in Eqs. (16) and (17), respectively. The calculation is performed analytically for \(A(Q)\) and numerically for \(B(Q)\). In order to display graphically the coordinate space potentials, we divide the potentials by \(\alpha(0)\), introducing the following coordinate space function \[U^{V}(r)=\frac{V^{V}(r)}{\alpha(0)}. \tag{19}\] This function is plotted in Fig. 2. In more detail, in this figure, we display: * \(U^{V}_{g1}(r)\), obtained from the fit of the experimentally extracted data; * \(U^{V}_{pr}(r)\), given by Eq. (9); * \(U^{V}_{Coul}(r)\), given by the pure Coulombic potential of Eq. (5). Figure 1: The function \(G(Q)\) introduced in Eqs. (1) and (2). The points with error bars, in red, represent the experimentally extracted \(g1\) data; the blue continuous line represents \(G_{g1}(Q)\), that is the fit of Eq. (14) to these data. The green continuous line represents \(G_{pr}(Q)\) of our previous calculations, given by Eq. (8). We note that, as \(r\to\infty\), the three functions have the same Coulombic behavior. As \(r\to 0\), \(U^{V}_{pr}(r)\) takes the finite value determined by Eq. (10); numerically, this value is \(U^{V}_{pr}(0)=-0.9824\ GeV\). This regularization of the potential is given by the rapidly decreasing function \(G_{pr}(Q)\). On the other hand, \(U^{V}_{g1}(r)\) diverges as \(r\to 0\), with a slower rate than \(U^{V}_{Coul}(r)\). In this respect, we observe that the function \(B(Q)\) of \(G_{g1}(Q)\) does not decrease sufficiently fast, as \(Q\to\infty\), to regularize the corresponding coordinate space potential when \(r\to 0\). Figure 2: The coordinate space function \(U^{V}(r)\) of Eq. (19). The blue line, \(U^{V}_{g1}(r)\), is obtained from the fit of the experimental data; the green line, \(U^{V}_{pr}(r)\), is given by the potential of the previous model; the black line, \(U^{V}_{Coul}(r)\), represents the pure Coulombic case. ## 5 The charmonium spectrum We can now try to reproduce the charmonium spectrum with the vector potential given by \(U_{g1}^{V}(r)\). The technique for solving the RDLE and the fit procedure are exactly the same as in Refs. [2, 5]. For the charmonium spectrum we use here the experimental data [13]. 
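As a cross-check of the construction just described, the sketch below evaluates Eq. (4) numerically for the parametrization of Eqs. (14)-(18) with the parameter values quoted above, and verifies the machinery against the analytic erf potential of Eq. (9) obtained from \(G_{pr}(Q)\); the reduction of Eq. (4) to a one-dimensional radial integral, the quadrature grid, the 60 GeV cutoff and the \(\hbar c\) unit conversion are implementation choices of this illustration.

```python
import math
import numpy as np

HBARC = 0.19733                       # GeV*fm (natural units, hbar = c = 1)

# Fitted parameters of G_g1(Q), Eqs. (14)-(18); Q in GeV throughout
A_MIX = 0.35415
D_A = 0.1611 / HBARC                  # fm -> GeV^-1
ALPHA_B, X_B = 1.395, 0.9164
ETA0, B_PAR = 0.7385, 1.479           # GeV^-1, GeV^-2
N_F = 6
C0 = (11.0 - 2.0*N_F/3.0) / (4.0*math.pi)

def G_g1(Q):
    A = np.exp(-(Q*D_A)**2)                                                              # Eq. (16)
    eta = ETA0 + B_PAR*Q                                                                 # Eq. (18)
    B = (1.0 + C0*ALPHA_B*math.log(X_B)) / (1.0 + C0*ALPHA_B*np.log(X_B + (Q*eta)**2))   # Eq. (17)
    return A_MIX*A + (1.0 - A_MIX)*B                                                     # Eq. (14)

# Eq. (4) for spherically symmetric G(Q): U(r) = V(r)/alpha(0) = -(4/3)(2/pi) Int dQ G(Q) sin(Qr)/(Qr)
Q = np.linspace(1.0e-4, 60.0, 120001)      # quadrature grid and cutoff chosen for this sketch
DQ = Q[1] - Q[0]
def U_of_r(G, r):
    f = G(Q) * np.sin(Q*r) / (Q*r)
    return -(4.0/3.0) * (2.0/math.pi) * float(np.sum(0.5*(f[1:] + f[:-1])) * DQ)

# Consistency check: G_pr(Q) = exp(-Q^2 d^2) must reproduce the erf potential of Eq. (9)
D_PR = 0.1511 / HBARC
G_pr = lambda Q: np.exp(-(Q*D_PR)**2)
for r_fm in (0.05, 0.2, 0.5):
    r = r_fm / HBARC                       # fm -> GeV^-1
    print(f"r = {r_fm:4.2f} fm:  U_pr numeric = {U_of_r(G_pr, r):7.4f}  "
          f"erf formula = {-(4.0/3.0)*math.erf(r/(2.0*D_PR))/r:7.4f}  U_g1 = {U_of_r(G_g1, r):7.4f}  [GeV]")
```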
For the quality of the fit, as in [5], we define \[\Theta=\sqrt{\frac{\sum_{k}(E_{k}^{th}-M_{k}^{exp})^{2}}{N_{d}}}\, \tag{20}\] where \(E_{k}^{th}\) and \(M_{k}^{exp}\) respectively represent the result of the theoretical calculation and the experimental value of the mass, for the \(k\)-th resonance and \(N_{d}=16\) is the number of the fitted resonances. We point out that the model, to reproduce accurately the spectrum, necessarily includes also a scalar (\(S\)) [1, 2], or mass (\(M\)) [5], interaction. We have started the analysis trying to _fix_ the vector interaction strength at the value \(\alpha_{g1}(0)=\pi\), as given in Refs. [6, 7, 8]. But this choice did not allow to reproduce accurately the charmonium spectrum. In this respect, many trials have been performed modifying the form of the scalar or mass potentials. We have also tried to modify the form of \(G(Q)\) but, in any case, the fit of the charmonium spectrum refused the value \(\alpha_{g1}(0)=\pi\) of the vector interaction strength. Subsequently, this quantity, that we denote from now on as \(\alpha_{V}(0)\), has been left as a _free parameter of the fit_. This choice has allowed an acceptable reproduction of the charmonium spectrum, as shown in Table 1, where the theoretical and the experimental values of the resonance masses are displayed. The values of the parameters used for the interaction are given in Table 2. In particular, for the mass of the quark we have taken the same value of the previous works [2],[5], that is \(m_{q}=1.27\ GeV\). This value represents the "running" charm quark mass in the \(\overline{MS}\) scheme [13]. As discussed before, \(\alpha_{V}(0)\) is determined by the fit to the spectrum. Comparing the results obtained for \(\alpha_{V}(0)\) with \(\alpha_{g1}(0)=\pi\), we have: \(\alpha_{V}(0)=0.65\ \alpha_{g1}(0)\) and \(\alpha_{V}(0)=0.62\ \alpha_{g1}(0)\), when the scalar or mass interaction are used, respectively. As discussed in the introduction, the nonunivocal definition of the effective charge, that affects particularly the low \(Q\) region, can explain why the value \(\alpha_{g1}(0)=\pi\) is not adequate for obtaining a suitable bound state quark interaction for our calculation. With respect to Ref. [5], here the additional constant of the vector interaction \(\bar{V}_{V}\) is considered as a completely free parameter: the vector interaction obtained from \(G_{g1}(Q)\) does not allow to relate \(\bar{V}_{V}\) to the quark vector self-energy. Following the phenomenological model discussed in [5], we have _fixed_ the constant \(\bar{V}_{X}\), for both the scalar (\(X=S\)) and the mass (\(X=M\)) interaction at the value \(\bar{V}_{X}=0.7350\ GeV\). Also for the distance parameters \(r_{X}\), the same values of [5] have been used, as shown in Table 2. Analyzing in more detail the obtained results for the spectrum, we note that the quality of the fit is slightly worse here than in Ref. [5]. For the parameter \(\Theta\) defined in Eq. (20), we have here \(\Theta=36.0\ MeV\) and \(\Theta=38.0\ MeV\) for the Scalar and Mass interaction, respectively. In Ref. [5], the corresponding values were \(\Theta=13.4\ MeV\) and \(\Theta=12.8\ MeV\). The quality of the fit can be improved if the parameters \(V_{X}\) and \(r_{X}\) are left as free parameters. We decided to fix these parameters at the same values of Ref. [5] to show that the vector potential obtained from \(G_{g1}(Q)\) is compatible with the model for the scalar and mass interactions studied in Ref. [5] without changing their parameters. 
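For reference, the quantity of Eq. (20) is simply the root-mean-square deviation over the \(N_{d}=16\) fitted resonances; a short implementation, with placeholder inputs rather than the actual mass table of this work, could read:

```python
import numpy as np

def fit_quality_theta(E_th, M_exp):
    """Theta of Eq. (20): RMS deviation between computed and measured masses."""
    E_th, M_exp = np.asarray(E_th, float), np.asarray(M_exp, float)
    return np.sqrt(np.mean((E_th - M_exp) ** 2))

# hypothetical usage over the N_d = 16 fitted charmonium resonances:
# theta = fit_quality_theta(model_masses, experimental_masses)
```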
For completeness, we also note that, as in [5], the model is unable to reproduce the resonance \(\chi_{c0}(3915)\). The new experimental data [13] give, for this resonance, a mass of \(3921.7\pm 1.8\ MeV\). Our model, taking the quantum numbers \(2^{3}P_{0}\), gives the mass values of \(3857\ MeV\) and \(3846\ MeV\), for the \(S\) and the \(M\) interactions, respectively. Our model and other quark models give the wrong ordering for the masses of this resonance and its partner \(\chi_{c1}(3872)\). We conclude this paper with the following considerations. The momentum dependence of the experimentally extracted QCD effective charge \(\alpha_{g1}(Q)\) gives a vector interaction potential that is compatible with our quark model based on a RDLE. However, to fit the spectrum accurately, the vector interaction strength must be reduced with respect to \(\alpha_{g1}(0)\). Moreover, the additional constant \(\bar{V}_{V}\) must be added to the vector potential. Finally, a scalar or mass interaction is also strictly necessary to reproduce the charmonium spectrum in detail. Further investigation is necessary to establish a deeper connection between the effective bound state quark interaction and the phenomenology related to the QCD analysis. **Acknowledgements** The author gratefully thanks Prof. A. Deur and the other authors of Refs. [6, 7, 8] for providing a complete numerical table of the experimentally extracted \(\alpha_{g1}(Q)\). The data of this table are those shown in Fig. 1.
2307.04293
Inverse of the Gaussian multiplicative chaos: an integration by parts formula
In this article, we study the analogue of the integration by parts formula from "Hitting times for Gaussian processes" in the context of GMC and its inverse.
Tomas Kojar
2023-07-10T01:02:26Z
http://arxiv.org/abs/2307.04293v1
# Inverse of the Gaussian multiplicative chaos: an integration by parts formula ###### Abstract. In this article, we study the analogue of the integration by parts formula from [4] in the context of GMC and its inverse. ###### Contents * 1 Introduction * 2 Main result * 3 Setup for Malliavin calculus for the inverse * 4 Integration by parts formula * 5 Formula for the shifted GMC ## Part 3 Further directions and Appendix * 6 Further research directions * 7 Moments of the maximum and minimum of modulus of GMC * 8 Properties of the covariance of truncated field
2308.14605
A Generalization of Continuous Relaxation in Structured Pruning
Deep learning harnesses massive parallel floating-point processing to train and evaluate large neural networks. Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks. This performance improvement, which often requires heavy compute for both training and evaluation, eventually needs to translate well to resource-constrained hardware for practical value. Structured pruning asserts that while large networks enable us to find solutions to complex computer vision problems, a smaller, computationally efficient sub-network can be derived from the large neural network that retains model accuracy but significantly improves computational efficiency. We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal. In addition, we demonstrate efficient and stable convergence up to 93% sparsity and 95% FLOPs reduction without loss of inference accuracy using continuous relaxation, matching or exceeding the state of the art for all structured pruning methods. The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations. We achieve this with routine automatable operations on classification and segmentation problems using CIFAR-10, ImageNet, and CityScapes datasets with the ResNet and U-NET network architectures.
Brad Larson, Bishal Upadhyaya, Luke McDermott, Siddha Ganju
2023-08-28T14:19:13Z
http://arxiv.org/abs/2308.14605v1
# A Generalization of Continuous Relaxation in Structured Pruning ###### Abstract Deep learning harnesses massive parallel floating-point processing to train and evaluate large neural networks. Trends indicate that deeper and larger neural networks with an increasing number of parameters achieve higher accuracy than smaller neural networks. This performance improvement, which often requires heavy compute for both training and evaluation, eventually needs to translate well to resource-constrained hardware for practical value. Structured pruning asserts that while large networks enable us to find solutions to complex computer vision problems, a smaller, computationally efficient sub-network can be derived from the large neural network that retains model accuracy but significantly improves computational efficiency. We generalize structured pruning with algorithms for network augmentation, pruning, sub-network collapse and removal. In addition, we demonstrate efficient and stable convergence up to 93% sparsity and 95% FLOPs reduction without loss of inference accuracy using continuous relaxation, matching or exceeding the state of the art for all structured pruning methods. The resulting CNN executes efficiently on GPU hardware without computationally expensive sparse matrix operations. We achieve this with routine automatable operations on classification and segmentation problems using the CIFAR-10 [14], ImageNet [5], and CityScapes [3] datasets with the ResNet [11] and U-NET [24] network architectures. ## 1 Introduction An established trend in the deep learning literature is the improvement of model accuracy with increased computation (see Figure 1). This becomes challenging for model inference at the edge, where data is abundant but compute resources are minimal due to power, hardware size, and cost constraints. Our focus is to reduce the inference computation of scientific and medical instruments, such as microscopes and CT scanners, with structured pruning. For example, transmission electron microscope (TEM) data can arrive as \(4096\times 4096\) resolution images at up to \(400\) frames per second [1], well past the achievable processing rates of large neural networks with the most advanced GPU. Figure 1: Classification results of various models trained on ImageNet-22K. We (CRISP) maintain accuracy on ResNet-101 while decreasing total FLOPs with higher sparsity. Structured pruning restructures model weights at training time. A key assumption of pruning is that larger models provide larger search spaces for pruning algorithms to find the sub-networks with optimal performance. The availability of large, transfer-learning models and their potential to be fine-tuned to certain tasks makes the structured pruning of larger models efficient. Despite its published successes [18], it is uncommon in practice to use structured pruning to improve model computational efficiency. Barriers to broadly adopting structured pruning as a standard part of CNN model optimization include (1) the need to manually customize models and model training [9], (2) published reports of poor convergence stability, and (3) unexplored behavioral differences between original and pruned models. Here, we generalize the formulation of continuous relaxation in structured pruning into an automatable procedure applicable to common convolutional neural network (CNN) architectures.
Where other structured pruning methods have surpassed the published performance of continuous relaxation in structured pruning, we simply and improve the continuous relaxation in structured pruning formulation to achieve stable and direct convergence to state of the art structured pruning performance. Finally, we evaluate the accuracy of image segmentation vs object size of original and pruned models. This study demonstrates similar model behavior between original and pruned models, our desired behavior. Structured pruning did not reduce intersection over union (IoU) accuracy irrespective of object size. Our contributions include1: Footnote 1: All code, models, and example pipelines are available at [https://github.com/sherlj5352/crisp](https://github.com/sherlj5352/crisp). 1. an automatable structured pruning and layer collapse algorithm applicable across CNN architectures, 2. simplified stiffening and multi-step pruning providing direct and stable convergence to state-of-the-art (SOTA) computational efficiency without loss of accuracy, 3. IoU vs object size metric demonstrating structured pruning maintained consistent model performance across of object sizes. ## 2 Related Work Neural network pruning methods can be divided into two categories - unstructured [7, 20, 23] and structured [9, 10, 13]. Unstructured pruning removes parameters to optimize model sparsity. Specialized sparse matrix operations implemented in software or hardware are needed to realize a partial speedup from unstructured pruning. Structured pruning provides a direct speedup with widely available GPU hardware and software deployments. Since its focus is computational efficiency, structured pruning removes entire network structures such as blocks, layers, and convolutional channels that maintain full matrix operations. [9]. Structured pruning literature generally optimizes sparsity, in which the objective is to reduce the model size [22]. Conversely, CNN floating point operation (FLOP) reduction applies greater weight to removing operations performed on larger channels that require more computations [28]. FLOP reduction provides a hardware-independent metric of computational efficiency that can be quickly computed during model training. FLOP reduction is advantageous, especially for edge devices with numerous machine learning algorithms that vie for inference compute time. Flop reduction enables a greater batch size, reduced hardware requirements that free up space for instrument design, easier access to high-resolution data inference and a heightened efficiency of heterogeneous workflows. We discuss related works along three threads: 1. the achievement of high accuracy and sparsity through unstructured pruning and its impact on FLOPs, 2. structured pruning search metrics, and regularization with sparsity approaching that of unstructured pruning and direct improvement in FLOPs, 3. barriers to generalize and automate structured pruning. The earliest works in neural network pruning show that randomly-initialized neural networks contain subnetworks that, when trained in isolation, achieve performance comparable to the original network [7]. To isolate the subnetworks, Lottery Ticket Rewinding (LTR) iteratively trains a network, prunes the smallest network parameters, [26], and rewinds the remaining weights back to the original initialization. This and subsequent works [19, 35] demonstrate that greater than 97% sparsity can be achieved through unstructured pruning on the classic CIFAR-10 benchmark of ResNet18. [22]. 
The achievement of high sparsity exposed the bloated nature of large neural networks. However, their parameter removal did not translate into a proportionate improvement of computational efficiency [15]. Software acceleration of the sparse matrix operations have achieved a respectable fraction of the efficiency. [8, 29, 31]. Hardware-accelerated sparse matrix operations are likely to enable future performance improvements but also do not currently provide a speedup proportional to the sparsity achieved. [21]. To the aforementioned methods, structured pruning provides a simpler inference-time speedup and the ability to minimize computation (i.e. FLOPs) as an objective. The smallest element that can be pruned to accomplish this is dependent on the architecture. For CNNs, the convolution channels are the most frequent unit for structured pruning as it provides sufficient granularity and search efficiency. [32]. Earlier structured pruning approaches were inspired by unstructured pruning to identify the convolution channels to prune [30]. Then, the gradient [22] and continuous relaxation methods grew to prominence. [10, 12]. Both approaches utilize gradient descent to select the pruned channels. The objective of gradient pruning, in short, is to remove channels with the smallest gradient adjustments. Continuous relaxation is our method of choice - for which an additional relaxation parameter per convolutional channel is required. The variable is used to continuously approximate the necessity of the channel. Continuous relaxation algorithms also include a method to progressively stiffen the approximation during optimization so the trained and relaxed model is akin to the pruned model [9]. With these approaches, structured pruning has been successfully applied to classification, semantic segmentation, and depth estimation with an upwards of 80% FLOPs reduction while maintaining inference accuracy. Structured pruning literature generally describe processes that require domain experts to prepare models for pruning [10]. It may also require an iterative train and prune cycle with a rigorous selection process for the pruned model. [34]. We are developing a structured pruning algorithm that runs across a range of CNNs and does not require domain expert guidance, thus making it generalizable. We intend to integrate the pruning system within an automated model deployment pipeline in a production setting. [6][27] ## 3 Method We first discuss the mechanics of network augmentation and pruning as a graph coloring problem. We then formalize the problem for CNNs and discuss how to identify which channels to prune. Finally, we devise a search strategy to minimize FLOPs while preserving accuracy. ### Subgraph Identification and Augmentation To generalize the network augmentation, training, and pruning procedures, we consider the CNN to be a Directed Acyclic Graph (DAG) as illustrated in Figure 2 (a). Graph nodes are the network operators and edges are the tensor outputs of one node and inputs go to the next node. Modeled on ResNet, each block in Figure 2 (a) contains a 2D convolution, batch normalization, and ReLU operator. Table 1 outlines the input and output tensor size for common neural network operators. Most operators preserve the size of the input. For example, a ReLU activation output is identical to its input, convolutional and fully connected layers are the exception. To ensure tensor shape consistency, we introduce sub-graphs to the DAG. Each sub-graph is: 1. bound by convolutional layers and, 2. 
consists of operators that must necessarily maintain equivalent input and output shapes throughout the pruning process. Figure 3 (a) showcase the nuances of a ResNet DAG. Its subgraphs are represented by distinct colored edges and their associated convolutional units. The residual sub-graph spans across ResNet blocks as the element-wise addition of the identity function requires equivalent tensor shapes. Each sub-graphs are candidates for augmentation, relaxation and pruning. An augmented subgraph \(f_{a}(x,w,\sigma(s))\) is the product of the subgraph \(f_{s}(x,w)\) with the relaxation function \(\sigma(s)\) as shown in Equation1 \[f_{a}(x,w,\sigma(s))=f_{s}(x,w)\cdot\sigma(s) \tag{1}\] where the sigma function \(\sigma(s)\) inputs the relaxation parameter \(s\in\mathbb{R}\) and outputs a value from 0 to 1. The relaxation function enables a continuous optimizer to explore the discrete problem of enabling or disabling each relaxed channel as part of model training. To prune a neural network, the relaxation function is converted to a binary mask \(m\) using a threshold value \(\tau\) and \(\sigma(s)\): \[m=\begin{cases}1&\text{if }\sigma(s)>\tau\\ 0&\text{else}\end{cases} \tag{2}\] The binary mask is applied to each operator within a sub-network retaining only the candidate channels. After pruning, the unpruned relaxation function outputs \(\sigma(s)\) can be retained as a channel bias or better, merged into adjacent operators reducing computation. If all channels are pruned then the entire sub-graph is removed. In addition, incoming and outgoing operators are removed from the graph until a non-zero path exists. This procedure can be generalized to any deep learning architecture with at least two consecutive convolutional layers. Sub-graphs that contain unknown operators may be excluded from pruning. ### Problem Formulation We start with an accuracy objective that searches for convolution weights \(w\) and relaxation weights \(s\) to minimize the objective function \(\mathcal{L}(f(x,w,\sigma(s)))\). Second, we add in the structural pruning objective, and scale it with an architecture weighting factor \(\mu\|\sigma(s)-t\|_{1}\), which approaches zero as the model structure \(\sigma(s)\) approaches the target structure \(t\). The model structure is expressed as two forms: \[\sigma_{p}(s)=\frac{1}{P}\cdot\sum_{i}p_{i}\cdot\frac{1}{1+e^{-as}} \tag{3}\] \[\sigma_{q}(s)=\frac{1}{Q}\cdot\sum_{i}q_{i}\cdot\frac{1}{1+e^{-as}} \tag{4}\] Where \(p_{i}\) is the operator parameters (see Supplementary Section), \(P\) is the total model parameters, \(q_{i}\) the operator FLOPs, \(Q\) being the total model FLOPs, and \(a\) as the logistic growth rate or steepness of curve. The model structure is based on the logistic function. We formulate the logistic function to approach \(0\) for small \(s\) and \(1\) for large \(s\) providing a bounded, differentiable range \((0,1)\) from the continuous domain of \(s\in\mathbb{R}\). \(\sigma_{p}(s)\) weighs model parameters for comparison with other structured and unstructured pruning algorithms' sparsity whereas \(\sigma_{q}(s)\) weighs model FLOPs to improve computational efficiency. Third, we add in the stiffening objective scaled by its weighting factor to ensure that the relaxed model before and after pruning is similar. This is achieved when the relaxation of the pruning weights is far from 0. We observe smoother convergence and improved pruning performance when model stiffening is an optimization objective. 
The stiffening function is: \[\delta(s)=\frac{1}{N}\sum_{i}\exp\left(\frac{-s^{2}}{2b^{2}}\right) \tag{5}\] This is derived from the Gaussian probability density function: \[\exp\left(\frac{-s^{2}}{2b^{2}}\right)\propto\mathcal{N}(s|0,b^{2}) \tag{6}\] where \(N\) : Number of pruning weights \(b\) \begin{table} \begin{tabular}{|c|c|c|} \hline Operator & Input & Output \\ \hline \hline Concatenation \(\oplus\) & \(T_{i,n}=(b,c_{n})\oplus d\) & \(T_{o}=(b,\sum c_{n})\oplus d\) \\ \hline ReLu & \(T_{i}=(b,c)\oplus d\) & \(T_{o}=T_{i}\) \\ \hline Batch Norm & \(T_{i}=(b,c)\oplus d\) & \(T_{o}=T_{i}\) \\ \hline Sum \(u+v\) & \(T_{i,u}=(b,c)\oplus d\) & \(T_{o}=(b,c)\oplus d\) \\ \hline Element-wise & \(T_{i,u}=(b,c)\oplus d\) & \(T_{o}=(b,c)\oplus d\) \\ Product \(u\cdot v\) & \(T_{i,v}=(b,c)\oplus d\) & \(T_{o}=(b,c)\oplus d\) \\ \hline Convolution \(\oplus\) & \(T_{i}=(b,c_{i})\oplus d_{i}\) & \(T_{o}=(b,c_{o})\oplus d_{o}\) \\ \hline Fully Connected & \(T_{i}=(b,c_{i})\) & \(T_{o}=(b,c_{o})\) \\ \hline \end{tabular} \end{table} Table 1: Neural Network Operators with input and output dimension. \(b\): batch size, \(c\): number of channels, \(d\): array of image spatial dimensions, \(k\): channel mask vector, \(m\): array of convolution kernel dimensions, \(T\): tensor size vector Figure 2: **CRISP method (a)** Directed acyclic graph representation of a ResNet pruning network. The graph edges (colored arrows) and their surrounding convolutions represent individual subgraphs. Throughout the pruning process, each subgraph is designed to maintain channel consistency between convolutional outputs and inputs. Note that the green subgraph calibrates the pruned channels of every residual unit. **(b)** Training and relaxation. The CNN weights are trained while the relaxation parameters learn the importance of individual convolutional channels. In this example, the golden sub-graph does not relax, the blue and green sub-graphs become partially relaxed and the violet sub-graph relaxes completely. **(c)** Pruning. All convolutional channels with a relaxation parameter close to zero are pruned. The violet sub-graph is entirely pruned. **(d)** The final network after pruning and sub-graph collapse. Gaussian Standard Deviation. which defines a Gaussian function for each of the pruning weights. Thus our final objective function for a given CNN \(f\), model weights \(w\) and relaxation parameters \(s\) becomes: \[\mathcal{T}(x,w,s)=\mathcal{L}(f(x,w,\sigma(s)))+\mu\|\sigma(s)-t\|_{1}+\lambda \cdot\delta(s) \tag{7}\] \[\min_{w,s}\sum_{x\in X}\mathcal{T}(x,w,s) \tag{8}\] such that, \(\mathcal{L}:\): accuracy loss, \(\mathcal{T}:\): total loss, \(f:\): convolutional neural network, \(X:\): set of input tensors, \(V:\): set of ground truth label tensors, \(w:\): network weights, \(s:\): pruning weights, \(\sigma:\): architecture minimization function, \(t:\): target architecture, \(\mu:\): architecture weighting factor, \(\|\cdot\|_{1}:\) L1 Norm, \(\lambda:\): stiffening weighting factor, \(\delta:\): stiffening function Since our objective is to minimize model size without reducing model accuracy, \(\mu\) and \(\lambda\) are chosen to be close to the minimum value of the objective function. This results in an structured pruning without reducing model accuracy. A high \(\lambda\) at initialization inhibits sparsity because it enforces \(s\) to stay on the same side as the critical point of our Gaussian stiffening function, \(\delta\). 
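To make the pieces of Eqs. (1)-(8) concrete, the following PyTorch-style sketch collects the per-channel relaxation gate, the FLOPs-weighted architecture term \(\sigma_{q}(s)\), the Gaussian stiffening \(\delta(s)\), the combined objective of Eq. (7) and the threshold mask of Eq. (2). Function names and the default values of \(a\), \(b\) and \(\tau\) are illustrative placeholders rather than the released CRISP implementation, and the scheduling of \(\mu\) and \(\lambda\) discussed in this section is left to the surrounding training loop.

```python
import torch

def relax(s, a=4.0):
    """Per-channel relaxation gate of Eq. (1): sigma(s) in (0, 1); 'a' is the
    logistic steepness (the default here is a placeholder, not a paper value)."""
    return torch.sigmoid(a * s)

def architecture_term(s, flops_per_channel, a=4.0):
    """FLOPs-weighted structure sigma_q(s) of Eq. (4), normalized by total FLOPs."""
    q = flops_per_channel
    return (q * relax(s, a)).sum() / q.sum()

def stiffening(s, b=1.0):
    """Gaussian stiffening delta(s) of Eq. (5); vanishes once |s| moves away from 0."""
    return torch.exp(-s ** 2 / (2.0 * b ** 2)).mean()

def total_loss(task_loss, s, flops_per_channel, target_t, mu, lam, a=4.0, b=1.0):
    """Combined objective of Eq. (7): accuracy + architecture + stiffening terms."""
    arch = torch.abs(architecture_term(s, flops_per_channel, a) - target_t)
    return task_loss + mu * arch + lam * stiffening(s, b)

def prune_mask(s, tau=0.5, a=4.0):
    """Binary channel mask of Eq. (2): keep channels whose gate exceeds tau."""
    return relax(s, a) > tau
```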
However, waiting a few iterations in training allows the initial search to begin, and increasing \(\lambda\) later induces "confidence" in our determined \(s\)-values as it pushes to positive or negative infinity. In layman's terms, we do not want the search to be confident of the initialized \(s\)-values. Increasing \(\lambda\) later results in a faster convergence of \(s\). ### Search Algorithm 1 formulates a reliable and efficient algorithm to search for the minimum FLOPs that preserves model accuracy. ``` Training set \(X\) and label set \(V\) \(f(x,w,\sigma(s))\)\(\triangleright\) network with subnetworks \(w=w_{0}\)\(\triangleright\) initialize/transfer network weights \(s=s_{0}\)\(\triangleright\) initialize/transfer pruning weights \(W\)\(\triangleright\) workflow definition \(j=j_{max}\)\(\triangleright\) workflow steps \(i=i_{max}\)\(\triangleright\) training epochs while\(j\geq 0\)do if\(W_{j,prune}\)then for each\(s\in\mathcal{S}\)do\(\triangleright\) For each subgraph \(m=\sigma(s)>\tau_{j}\)\(\triangleright\) Prune mask \(P(f_{s}(x_{s},w_{s},\sigma(s_{s})))\)\(\triangleright\) Prune the subgraph endfor endif if\(W_{j,train}\)then while\(i\geq 0\)do\(\min_{w,s}\sum_{x\in X}\mathcal{T}(x,w,s)\)\(\triangleright\) Train \(i=i-1\) endwhile endif if\(W_{j,test}\)then \(r=\sum_{i}\|f(x_{i},w,\sigma(s)-v_{i}\|\)\(\triangleright\) Test Accuracy endif \(j=j-1\) endwhile \(f(x,w,\sigma(s))\)\(\triangleright\) network with pruned sub-networks \(w\)\(\triangleright\) final trained weights \(s\)\(\triangleright\) final pruning weights \(r\)\(\triangleright\) test results ``` **Algorithm 1** Continuous relaxation in structured pruning Algorithm 1 formulates a reliable and efficient algorithm to search for the minimum FLOPs that preserves model accuracy. Since continuous relaxation evaluates channels as fractionally present, it is feasible to perform structured pruning in a single step [9, 33]. However, Algorithm 1 implements a workflow iteration "while \(j\geq 0\) do". This addition to continuous relaxation is inspired by early pruning literature [22, 26]. Concerned about pruning weights early that become important only late in training, we introduced the workflow dependent \(\tau_{j}\) with an initially small prune threshold (e.g. 0.01) that we ramp progressively towards 0.5. Experimentally, we have not achieved superior single-step pruning results to multi-step pruning following this approach. Both converge towards a similar pruned structure with multi-step pruning having a smoother convergence. In addition, early pruning speeds model training as the model size decreases. The workflow definition variable \(W\) allows us to train and test on the first pass followed by prune, train, and test on subsequent passes. ## 4 Experiments The premise of CRISP is to 1) train everyday neural networks into high accuracy while minimizing computation and 2) demonstrate that this can be achieved automatically in a model development pipeline. To establish this, we first determine the highest sparsity attainable for various pruning algorithms and benchmark CRISPs performance in comparison. We observe that all the models remain under the theoretical limit specified by LTR. Then we investigate the stability of the training convergence to determine the generalizability of the algorithm. Our data shows a repeatedly robust convergence when optimizing for standard cross-entropy loss, mean intersection-over-union (mIoU) and FLOPs. 
Finally, the intent of CRISP is to deploy into real-world systems where class objects vary by size and shape tremendously, especially due to data drift. To verify the performance of pruned models on complex class objects, we carried out a size-based analysis on the human class of the Cityscapes dataset. We perform our experiments on standard datasets to ensure a fair comparison to existing works so that future users can utilize the benchmarks to decide the best methods for their use cases. We use PyTorch framework for experiments on NVIDIA RTX 5000 and NVIDIA RTX 6000 for CIFAR10 datasets, and NVIDIA A6000 and NVIDIA A100 for CityScapes [4] and ImageNet dataset. All the parameters, models, and code are available at [https://github.com/sherlj5352/crisp](https://github.com/sherlj5352/crisp). ### Accuracy vs Sparsity Comparison First, we search for the highest sparsity attainable. With data provided by structured and unstructured pruning algorithms alike, Figure 3 explores the trade-off between network accuracy and sparsity of the ResNet-18 networks on top-1 accuracy of the CIFAR-10 dataset. Although unstructured pruning achieves a lower speedup at the same sparsity, it provides a theoretical upper-limit for structured pruning sparsity. Random channel pruning (Random-S in the figure) provides the lower bound where channels are randomly pruned followed by iterative model retraining and testing. For all the pruning methods, we start from the same baseline performance of 91.5% which is the off-the-shelf performance of the ResNet-18 model on CIFAR-10. Our goal is to maintain model accuracy (reduction of less than 1%) as sparsity increases. We note that existing methods and CRISP exhibit an elbow-shape as the accuracy plummets at high sparsity. CRISP has its elbow point at 96.9% sparsity with the accuracy preserved at 90.25%, akin to the structured pruning state-of-the-art. We define the preservation of accuracy to be within a 1% threshold of the off-the-shelf model. Early-CroP-S [22] (at 95% sparsity and 91% accuracy). At 94.6% sparsity, CRISP performs with 90.87% accuracy. ### Sparsity vs FLOPs Whereas a sparsity objective treats all pruning weights equally, the FLOPs objective favors CNN channels that process and propagate data of larger resolutions. With the application of structured pruning on the U-NET [25] image segmentation of CityScapes [3] we seek to answer the following questions: (a) What is the trade-off in performance when optimizing for sparsity vs FLOPs? (b) What are the structural difference between models pruned for sparsity vs for FLOPs? Figure 4 shows the training convergence of three training sequences: (a) no pruning: train only to minimize standard cross-entropy loss, (b) CRISP sparsity: a 5-step CRISP training using the sparsity objective, (c) CRISP FLOPs: a 5-step CRISP training using the FLOPs objective. The scatterplot shows the cross entropy loss of the validation dataset evaluated periodically during training on left Y-axis. The black line associated with the right Y-axis displays the sparsity in Figure 4 (b) and the FLOPs removal in Figure 4 (c). Each color represent one step of the iterative training and pruning workflow. As outlined in Section 3.3, the first CRISP training step (purple scatterplot in Figure 4 (b) and (c)), seeks to minimize cross entropy, architecture and stiffening losses. Every subsequent step prunes the model, then resumes its training using the same loss objectives. 
Figure 3: Model accuracy vs sparsity with structured pruning. Performance bounded by LTR unstructured pruning and random structured pruning. The cross-entropy loss tends to jump immediately after pruning, especially at higher sparsity. However, the shift is small and overall accuracy is maintained in the subsequent training steps. This is shown in Figure 4 (b) and (c), where convergence is smooth. For comparison, we trained three U-NET models that achieved 75% mIoU, of which two were pruned to 91% sparsity. The pruned model in Figure 4 (b) was trained for sparsity and has double the FLOPs of Figure 4 (c), which was trained to minimize FLOPs. The FLOPs objective has a significant performance benefit. The architecture reduction line plot in Figure 4 (b) and (c) illustrates our strategy of stepping up \(\mu\) and \(\lambda\) in the second training step (see Supplementary Material cityscape-scrisp.yaml for training parameters). Significantly, this reduction followed by pruning of CNN channels does not affect the cross entropy loss, which continues to slowly decrease. Figure 5 illustrates the structure of the pruned U-Net models from Figure 4 (b) and (c), trained for sparsity (a) and FLOPs (b), after the final step of CRISP. In Figure 5, from the left (input) to the right (output), each column corresponds to one of the U-Net subnetworks as defined in Section 3.1. The original U-Net forms an isosceles triangle. When pruned for both sparsity and FLOPs, the vast majority of the U-Net is carved away. While the sparsity objective heavily reduced the peak and its adjacent layers of the U-Net in Figure 5 (a), the FLOPs-reduction U-Net is well-pruned in layers toward the edge of the triangle in Figure 5 (b). This is intuitive because the U-Net peak has been downsized by 8x. There is a greater reward for the FLOPs objective to remove channels from the outer layers that process and propagate larger resolution data. Figure 5 indicates a clear structural difference between pruning for sparsity and FLOPs, although they share a common overall pattern. The pattern of a compressed, rather than symmetric, decoder is found in manually-optimized networks similar to the DeeplabV3 architecture [2]. ### IoU vs Object Instance Size Finally, we analyze the effect of pruning on the segmentation of objects of varying size. Effectively, we ask, "does the segmentation of pruned models change when object sizes change?". We leverage the 'human' class of CityScapes, with instances that vary greatly in shape and size, and train one standard U-Net and two pruned U-Nets that converge to similar IoUs (about 0.74). We present the results of all 'human' instances in the validation and test sets.
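The size-stratified analysis described above can be sketched in a few lines of NumPy: per-instance IoU is computed for matched predicted and ground-truth masks and then grouped by ground-truth pixel area. This is a generic illustration of the procedure, not the authors' evaluation code; instance matching and the choice of size bins are left to the caller.

```python
import numpy as np

def instance_iou(pred_mask, gt_mask):
    """IoU between one predicted and one ground-truth binary instance mask."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else np.nan

def iou_by_object_size(pred_masks, gt_masks, bin_edges):
    """Mean per-instance IoU grouped by ground-truth pixel area (object size)."""
    sizes = np.array([g.sum() for g in gt_masks])
    ious = np.array([instance_iou(p, g) for p, g in zip(pred_masks, gt_masks)])
    bins = np.digitize(sizes, bin_edges)
    return {b: float(np.nanmean(ious[bins == b])) for b in np.unique(bins)}
```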
Structured pruning provides a direct increase Figure 4: Training monitoring test loss and pruning ratio through (a) no pruning: 75.4% mIoU, Params 31.04e6, 218.9 GFLOPs, (b) sparsity objective: 75.4% mIoU, 91.0% Sparsity, 33.7 GFLOPs, and (c) FLOPs objective: 75.6% mIoU, 91.2% Sparsity, 15.5 GFLOPs to computational efficiency within a constrained search space. We attained a 96.9% sparsity with a minimal (\(<~{}1\%\)) loss in accuracy with CRISP, on the same ResNet-18 network and CIFAR-10 image classification, an improvement over the current structured state-of-the-art Early-CroP-S algorithm. Sparsity, however, does not represent computational efficiency. Reformulating our objective for FLOPs, we are able to achieve a 93% FLOPs reduction for CIFAR-10 image classification and CityScapes image segmentation thereby demonstrating CRISPs' extension to more complex computer vision problems. We believe that progressive pruning is beneficial to deploying large models in real-world systems where objects of different sizes show up at test time. We validate performance across objects of different sizes of an important class category and show that the performance does not vary with object size both quantitatively and qualitatively. On ImageNet we estimate 92% sparsity and 93% FLOPs reduction with 77% accuracy, on CityScapes we achieve 75.6% mIoU, 91.2% sparsity with 92.9% FLOPs reduction. We hope to shine a light on structured pruning through our efforts. In addition to the direct impact on model efficiency, we laid out a general approach to identify, relax, and prune sub-graphs applicable to a wide variety of neural networks used in computer vision. By combining model accuracy, pruning, and model stiffening into continuous relaxation and optimization regularization we only have one process to perform, which can be integrated into a general model training pipeline. This process has stable convergence to the target maximum sparsity without significant loss of accuracy. We believe this is a necessary foundation to provide structured pruning as an automated step of a production model optimization pipeline. Our future work includes the automation of CRISP within a CI/CD pipeline, the inference time speedups of CRISP models in a production setting, the pruning of other commonplace neural network structures, and exploring the relationship between network pruning and dataset covariant shift. We believe that this technique can be generally applied to large-scale data center applications and other constrained edge devices as well. Datacenter inference could benefit from reduced energy consumption and server requirements while providing headroom for future capabilities. As is the case with all algorithms, we suggest teams working to incorporate CRISP and other optimizations to be socially considerate and aware of potential target impacts. Figure 5: The relaxation \(\sigma(s)\) heatmaps of the U-Net models trained with **(a)** Sparsity and **(b)** FLOPs objectives. The columns from left to right are the U-Net subnetworks from model input to output, respectively. From top to bottom are the architecture minimization function output of the subnetwork. The color of each convolutional output is scaled by the relaxation function. Blue indicates a relaxation value of 0 and is to be pruned, while red represents 1 as a channel to be kept. The black background is negative space. See the Supplementary Section for a larger figure.
2305.14821
Emerging solutions from the battle of defensive alliances
Competing strategies in an evolutionary game model, or species in a biosystem, can easily form a larger unit which protects them from the invasion of an external actor. Such a defensive alliance may have two, three, four or even more members. But how effective can be such formation against an alternative group composed by other competitors? To address this question we study a minimal model where a two-member and a four-member alliances fight in a symmetric and balanced way. By presenting representative phase diagrams, we systematically explore the whole parameter range which characterizes the inner dynamics of the alliances and the intensity of their interactions. The group formed by a pair, who can exchange their neighboring positions, prevail in the majority of the parameter region. The rival quartet can only win if their inner cyclic invasion rate is significant while the mixing rate of the pair is extremely low. At specific parameter values, when neither of the alliances is strong enough, new four-member solutions emerge where a rock-paper-scissors-like trio is extended by the other member of the pair. These new solutions coexist hence all six competitors can survive. The evolutionary process is accompanied by serious finite-size effects which can be mitigated by appropriately chosen prepared initial states.
Attila Szolnoki, Xiaojie Chen
2023-05-24T07:21:30Z
http://arxiv.org/abs/2305.14821v1
# Emerging solutions from the battle of defensive alliances ###### Abstract Competing strategies in an evolutionary game model, or species in a biosystem, can easily form a larger unit which protects them from the invasion of an external actor. Such a defensive alliance may have two, three, four or even more members. But how effective can be such formation against an alternative group composed by other competitors? To address this question we study a minimal model where a two-member and a four-member alliances fight in a symmetric and balanced way. By presenting representative phase diagrams, we systematically explore the whole parameter range which characterizes the inner dynamics of the alliances and the intensity of their interactions. The group formed by a pair, who can exchange their neighboring positions, prevail in the majority of the parameter region. The rival quartet can only win if their inner cyclic invasion rate is significant while the mixing rate of the pair is extremely low. At specific parameter values, when neither of the alliances is strong enough, new four-member solutions emerge where a rock-paper-scissors-like trio is extended by the other member of the pair. These new solutions coexist hence all six competitors can survive. The evolutionary process is accompanied by serious finite-size effects which can be mitigated by appropriately chosen prepared initial states. ## Introduction If I am stronger than the enemy of my neutral partner then the mentioned intruder can be blocked. Similarly, my own enemy may hopefully be beaten by my neutral partner. The only crucial point is neutral partners should be capable to exchange their positions hence giving a chance for the proper defender to meet with the external intruder. This mechanism establishes the simplest two-member defensive alliance when biological species, or strategies in an evolutionary game model, compete for space [1, 2]. Practically similar indirect collaboration can also emerge in larger groups where members are not neutral, but they are in a predator-prey relationship to each other [3, 4]. The latter formations are based on nonhierarchical, or in other words, intransitive competition of members [5, 6, 7, 8]. The simplest version is when the relation of three actors can be characterized by the well-known rock-scissors-paper game [9, 10, 11, 12, 13, 14]. While the members are basically enemies, but they can protect indirectly each other from an external invader. It is worth stressing that the mentioned cyclic dominance can be observed in very different biological systems, including bacterias, plants, or animals [15, 16, 17, 18, 19, 20]. For simplicity, in the remaining of this paper we refer the actors as species, but our abstract model could be valid in other research areas, too. For example, cyclic dominance, or intransitive relation between actors can also be detected when different strategies compete in an evolutionary game model [21, 22, 23]. A simple example is when cooperator, defector and loner strategies beat each other cyclically, hence providing a stable coexistence for all competitors [24]. Further examples of cyclic relation were also found when more sophisticated strategies, like conditional cooperators, or so-called informed strategies are present [25]. Evidently, larger loops with four-, five-, six-, or even more species are also possible to form more subtle alliances [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. 
The fight can step onto a higher level when the external invader is not a single species, but also an alternative alliance. In this case the inner dynamics of an alliance could be a decisive factor to beat the alternative group. For instance, when two three-member groups compete then the faster inner invasion makes a cyclic loop fitter than the other trio where invasions are less intensive [37]. It could also be crucial how homogeneous the invasion rates within a group, because highly heterogeneous rates make a trio vulnerable no matter the average invasion in the loop is high [38]. The conclusions could be more complicated when alliances with different sizes compete. The simplest symmetrical model could be when a two-member and a four-member groups fight in a six-species system. In this work we study the symmetric model of six species where the relation of all actor can be described by three parameters. The first parameter characterizes the intensity of cyclic invasion among the four-member group, while the second parameter determines the frequency of neighboring site exchange of the pair. Last, the third parameter decides the intensity of interaction between the two competing alliances. Our principal goal is to explore the complete parameter space and find the possible winner at each parameter combinations. In this way we can identify principles which generally determine the vitality of an alliance when competing formations are also on the stage. ## Model Defensive alliances require a spatial setting of competing actors, therefore in our minimal model the species are distributed on a square lattice where each lattice point is occupied by one of the \(s_{i}\)\(i\in\{l,2,3,4,5,6\}\) six species. Periodic boundary conditions are applied, hence all players can interact with one of the four nearest neighbors. By forming a four-member loop, species _1, 2, 3_ and \(4\) invade cyclically each other with probability \(\alpha\). The other group is formed by species \(5\) and \(6\) who are neutral, but they exchange their neighboring position with probability \(\beta\). Last, the two alliances invade each other in a balanced way. More precisely, species \(1\) and species \(3\) invades species \(5\), while the latter invades species \(2\) and species \(4\). Similarly, species \(2\) and species \(4\) invades species \(6\), while the latter invades species \(1\) and species \(3\). The invasions between the alliances happen with probability \(\gamma\). Summing up, the microscopic rules are defined in the following way: \[\begin{split}& s_{i}s_{i+1}\xrightarrow{\alpha}s_{i}s_{i}\text{ if }i\in\{1,2,3,4\}\text{ in a cyclic manner}\\ & s_{5}s_{6}\xrightarrow{\beta}s_{6}s_{5}\\ & s_{1}s_{5}\xrightarrow{\gamma}s_{1}s_{1},s_{3}s_{5} \xrightarrow{\gamma}s_{3}s_{3},s_{2}s_{6}\xrightarrow{\gamma}s_{2}s_{2},s_{4 }s_{6}\xrightarrow{\gamma}s_{4}s_{4},\text{ and }s_{5}s_{2}\xrightarrow{\gamma}s_{5}s_{5},s_{5}s_{4} \xrightarrow{\gamma}s_{5}s_{5},s_{6}s_{1}\xrightarrow{\gamma}s_{6}s_{6},s_{6}s _{3}\xrightarrow{\gamma}s_{6}s_{6}\,.\end{split} \tag{1}\] To give a clear description about the relation of the species we present the food-web of all participants in Fig. 1. This graph highlights that parameter \(\alpha\) and parameter \(\beta\) characterize the inner dynamics of the alliances while parameter \(\gamma\) determines the interaction strength between these formations. 
Importantly, the relation between these groups is balanced, namely species \(5\) is a predator of two members of the quartet and simultaneously the prey of the remaining two members of the loop. The same can be said about the relation of species \(6\) and the quartet. It is also worth noting that in the absence of species \(6\) the external invader species \(5\) would be crowded out by the cyclic loop of the quartet. Naturally, species \(6\) alone would also be vulnerable against the four-member formation. This is the well-known mechanism by which a cyclic loop can defend itself against an external intruder. But now, both species \(5\) and species \(6\) are present simultaneously, and more importantly, they can exchange their positions. Our main goal is to reveal the effectiveness of their defensive alliance when they fight against another formation. At this point it is important to note that a conceptually similar system was studied recently [39]. Our present model, however, possesses the most symmetric setup of possible interactions between the competing alliances. In this way the conditions for a fair competition are established. As a consequence, four additional three-member loops emerge in the food-web due to these external interactions. For example, species \(1\), \(2\), and \(6\) form a rock-scissors-paper-like group, but there are three similar formations, including _2+3+5_, _3+4+6_, and _1+5+4_. They are selected from the members of the original competing alliances. As we will show later, these possibilities have serious consequences for the evolution and give new solutions a chance to emerge. Monte Carlo (MC) simulations are carried out in the full three-dimensional \((\alpha,\beta,\gamma)\) parameter space. During an elementary step we randomly choose a node and a nearest neighbor in the \(L\times L\) lattice. If they are occupied by different species then we execute the possible change with the given probability defined by Eq. (1). A full MC step contains \(L\times L\) elementary steps. Figure 1: Food-web of the minimal symmetric model where a two-member and a four-member alliance meet. Solid black arrows denote the cyclic invasion among species _1, 2, 3_ and \(4\) with probability \(\alpha\). Species \(5\) and \(6\) can exchange their positions with probability \(\beta\). The mutual invasion between the mentioned groups happens with probability \(\gamma\), as indicated by the dashed grey arrows. We need to vary the linear system size between \(L=400\) and \(L=3600\) to avoid serious finite-size effects in the vicinity of phase transition points. It is also important to stress that the traditionally used random initial state, in which the species are distributed randomly when we launch the evolution, does not always give reliable information about the stationary state which is valid in the large-size limit. Therefore we also use alternative initial states where the evolution starts from a prepared initial state [40, 41, 42, 43]. The details are given in the next section. In general, a solution which is considered stable should be obtained starting from different initial states where all competing species are present. Naturally, the relaxation time to reach the mentioned state or the minimal system size could be very different and may also depend on how far we are from a specific phase transition point. In our present model the necessary relaxation time is between \(10^{4}\) and \(2\cdot 10^{6}\) MC steps.
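The simulation protocol described above can be condensed into a short NumPy sketch of a single Monte Carlo step. The species are encoded as integers 1 to 6, the invasion and site-exchange rules follow Eq. (1) and the food-web of Fig. 1, and the parameter values in the usage lines are taken from the figure captions below; variable and function names are our own illustrative choices, not the authors' simulation code.

```python
import numpy as np

CYCLIC = {(1, 2), (2, 3), (3, 4), (4, 1)}        # species i invades i+1 cyclically (prob. alpha)
CROSS  = {(1, 5), (3, 5), (2, 6), (4, 6),        # inter-alliance invasions (prob. gamma)
          (5, 2), (5, 4), (6, 1), (6, 3)}

def mc_step(lat, alpha, beta, gamma, rng):
    """One full Monte Carlo step (L*L elementary steps) of the rules in Eq. (1);
    the lattice is an L-by-L array with periodic boundary conditions."""
    L = lat.shape[0]
    for _ in range(L * L):
        x, y = rng.integers(L), rng.integers(L)                  # random focal site
        dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        xn, yn = (x + dx) % L, (y + dy) % L                      # random nearest neighbor
        a, b = lat[x, y], lat[xn, yn]
        if a == b:
            continue
        if {a, b} == {5, 6}:                                     # neutral pair: site exchange
            if rng.random() < beta:
                lat[x, y], lat[xn, yn] = b, a
        elif (a, b) in CYCLIC or (a, b) in CROSS:                # focal species may invade neighbor
            if rng.random() < (alpha if (a, b) in CYCLIC else gamma):
                lat[xn, yn] = a
        elif (b, a) in CYCLIC or (b, a) in CROSS:                # neighbor may invade focal site
            if rng.random() < (alpha if (b, a) in CYCLIC else gamma):
                lat[x, y] = b

rng = np.random.default_rng(1)
lat = rng.integers(1, 7, size=(400, 400))                        # random initial state
for _ in range(100):                                             # the text quotes 1e4 to 2e6 MC steps for relaxation
    mc_step(lat, alpha=0.9, beta=0.08, gamma=0.2, rng=rng)
```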
## Results ### Strong interaction between alliances We first present the system behavior in the large \(\gamma\) region when the interaction between the pair and the quartet alliances is intensive. If we fix the value of \(\gamma\) then two free parameters remain which determine the inner dynamics of the competing alliances. If the \(\beta\) mixing rate between species 5 and species 6 is high then their couple prevail independently of the value \(\alpha\). If we lower the value of \(\beta\) then the alternative alliance can win for large \(\alpha\) values. The appropriate order parameter, which characterizes the transition between the mentioned solutions, is the sum of \(\rho_{5}\) and \(\rho_{6}\) portions of the pair species. Evidently, this sum is equal to 1 in the "pair" solution and becomes zero in the "cyclic" solution. The left panel of Fig. 2 illustrates that the "pair" solution is replaced by cyclic solution of the quartet at \(\alpha=0.529\) via a discontinuous transition if \(\beta=0.04\). For smaller mixing rate, as it is shown for \(\beta=0.025\), a new phase emerges for intermediate \(\alpha\) values. In this case all six species coexist. If we increase the value of \(\alpha\) further then this solution gradually disappears and is replaced by the cyclic solution via a continuous phase transition. Last, if we further decrease the \(\beta\) value then the "pair" solution completely disappears. The complete behavior is summarized in the phase diagram shown on the right panel of Fig. 2 where the stable solutions are depicted on the \(\beta-\alpha\) parameter plane. We note that a relatively small mixing rate is enough for the pair alliance to beat all other solutions. More precisely, if \(\beta>0.07\) then the mixture of species 5 and species 6 prevail even if the invasion rate among the quartet species is maximal. The latter can only win in the right-down corner of the parameter plane when \(\alpha\) is high, hence the quartet is strong, and \(\beta\) is small, hence the pair alliance is weak. The left-down corner of the diagram represents the case when both the pair and the cyclic alliances are weak. Here a new solution emerges where all six species are present. Later, we will describe the features of this new solution in detail. But first, let us briefly comment the technical difficulties of identifying the phase transition points in this system. It is a standard protocol that we start the simulation from an initial state where the spatial distribution of competing species is random. Figure 2: Representative system behavior at \(\gamma=0.9\) when the interaction between the competing alliances is strong. Left panel depicts how the order parameter changes as we vary the value of \(\alpha\) for three representative mixing rates. The values of \(\beta\) are shown in the legend. The order parameter, which characterizes the stable solutions, is the \(\rho_{5}+\rho_{6}\) sum of the portions of the pair species. For \(\beta=0.04\) the transition is discontinuous, while for \(\beta=0.01\) we can see a continuous transition. For intermediate \(\beta=0.025\) a discontinuous transition is followed by a continuous one. The right panel summarizes the system behavior on the \(\beta-\alpha\) parameter plane. Here “all” marks the phase when all six competing strategies survive. Red solid line marks the positions of continuous, while blue dashed line denotes the location of discontinuous transition points. If we use such a starting state then we can observe a serious finite-size effect now. 
More precisely, the evolutionary outcome is ambiguous, sometimes the system terminates into the "pair" solution, but the alternative final state, where the cyclic quartet remains, may also be observed at the same parameter values. To illustrate this uncertainty, in the left panel of Fig. 3 we present a statistics about the possible destinations. More precisely, we plot the probability to reach the cyclic four-strategy state for different \(\alpha\) values at a fixed \(\beta\). Importantly, the alternative evolutionary outcome is to reach the "pair" state. To obtain these numbers we have executed 200-2000 independent runs depending on the lattice size at each \(\alpha\) values. The plot shows clearly that we cannot really trust on the numerical results obtained for smaller system sizes. For instance, at \(\alpha=0.51\), which is quite far from the proper transition point, both possible destinations are almost equally likely for \(L=100\) system size. The transition point, where "pair" solution is replaced by the alternative cyclic alliance, is at \(\alpha=0.538(2)\). But we note that the outcome is still ambiguous for \(L=3200\). Importantly, this finite-size problem can be mitigated significantly if we use an alternative initial state where both alliances are present from the very beginning and they can compare their vitality in a fair way. The right panel of Fig. 3 shows such an initial state where we divided the available space into two halves and each sector is occupied by one of the alliances. In this way the the possible solutions can compete properly, hence the final evolutionary outcome is less ambiguous. In particular, at the previously mentioned parameters we can already obtain reliable data for the location of the phase transition point by using \(L=800\) linear system size. To understand the origin of the evolutionary uncertainty, we monitor a representative pattern formation in Fig. 4. Initially, all six species are randomly distributed as shown in panel (a). When we launch the evolution then the pair of species 5 and species 6 can easily form their alliance which becomes effective against a randomized pattern. As panel (b) illustrates, they gradually crowd out the remaining four species who try to fight against them separately. Just a single seed emerges where all members of the cyclic solution are present simultaneously, hence they can form the alternative alliance. This moment is shown by a white circle in panel (b). In the absence of this formation, the _5+6_ pair would win, as it is demonstrated in panel (c) where the majority of available space is occupied by this solution. The rival alliance, however, can resist, and more importantly, they can reverse the direction of the war and gradually invade the whole space. Notably, this invasion could be a slow process. For example, panel (d) of Fig. 4 was taken after 50,000 MC steps, but the final destination to reach the full cyclic solution is unavoidable and was recorded after 300,000 MC steps. The presented example highlights the difficulty when we look for the winning solution. As we pointed out, it could be misleading to start the evolution from a random state because this setup does not necessarily give equal chance for all possible solutions. Importantly, to produce Fig. 4 we used relatively large system size, where \(L=800\), still, the chance of emerging the proper winner solutions was low. 
For example, if we zoom in on a smaller area of size \(100\times 100\), or even \(400\times 400\), the system can easily evolve to the "pair" solution. Of course, the "cyclic" solution can also be reached at smaller sizes, if we are lucky enough to observe a surviving seed of that solution. This explains the ambiguity at smaller system sizes. Evidently, as we increase the system size, this uncertainty becomes less likely, because somewhere in a larger population there is always a chance for the cyclic solution to emerge and test its vitality against the pair solution.

Figure 3: The left panel shows the fixation probability of the state where the cyclic four-species alliance prevails, in dependence on \(\alpha\) at fixed \(\beta=0.04\). The initial state is always a random distribution of all six species. In the case of the alternative destination the system evolves onto the “pair” solution. The linear system sizes are indicated in the legend. The right panel illustrates a prepared initial state, obtained at \(L=200\), where both alliances are ready to compete from the very beginning. The usage of this initial state mitigates the finite-size effects significantly. The colors representing the species agree with the code used in Fig. 1.

In some cases, however, depending on the parameter values, we may need an unusually large system size to give an equal chance to all competing solutions. To illustrate the severity of the problem, at certain parameter values even an \(L=3200\) linear system size can produce erratic destinations. Luckily, this problem can practically be avoided if we apply a prepared initial state in which both competing alliances are present from the very beginning, so that their competition gives more reliable information about the proper winner. In the following we discuss the so-called "all" phase, in which all competing species coexist. As we pointed out, this solution emerges when neither the "pair" nor the "cyclic" alliance is strong enough to dominate the system. In the related parameter region a small system size may have another consequence. In particular, when the system size is small, it may happen that two of the species die out, while the portions of the remaining four species remain stable. These four-member solutions, however, are different from the "cyclic" solution. One of them contains species \(1\), \(2\), \(5\), and \(6\). If we check the food web shown in Fig. 1, we can easily identify a three-member cyclic loop here, formed by species \(1\), \(2\), and \(6\). Normally, this alliance would easily destroy the external species \(5\), but the site exchange between species \(5\) and \(6\) saves the intruder. A similar quartet is formed by species \(2\), \(3\), \(5\), and \(6\). Furthermore, the loop of species \(3\), \(4\), and \(6\) can also coexist with species \(5\). The last quartet is formed by species \(1\), \(4\), \(5\), and \(6\). As the left panel of Fig. 5 illustrates, these combinations form stable solutions in the mentioned parameter region. The six-species solution can be considered a mixture of these four-member solutions. A representative pattern of the stationary state is shown in the right panel of Fig. 5. The domains of these four-species solutions are constantly growing and shrinking, thereby giving all six species a chance to survive.

Figure 4: We start the evolution from a random initial state, as shown in panel (a). After 800 MC steps, shown in panel (b), the cyclic alliance emerges.
The seed of this solution is marked by a white circle. The remaining part of the available space is conquered by the “pair” solution, as illustrated by panel (c), taken after 2200 MC steps. Slowly but surely, the cyclic alliance invades the rival group. An intermediate stage of this process, obtained after 50,000 MC steps, is shown in panel (d). Finally, not shown here, the four-member group prevails. Other parameters are \(\alpha=0.9\), \(\beta=0.08\), \(\gamma=0.2\), \(L=800\). The color code designating the species is the same as used earlier.

Figure 5: The left panel shows the possible four-member solutions formed by species selected from both alliances. They are numbered for easier reference. Solution #1 is formed by species _1_+_2_+_5_+_6_, solution #2 by species _1_+_4_+_5_+_6_, solution #3 by _2_+_3_+_5_+_6_, and solution #4 by _3_+_4_+_5_+_6_. The right panel depicts a representative pattern of the “all” phase where all six species are present. As marked by white ellipses, the pattern can be considered a mixture of the four-species solutions specified on the left side. Parameters are \(\alpha=0.1\), \(\beta=0.01\), \(\gamma=0.9\), \(L=400\). We use the same color code to designate species on the lattice.

In the mentioned panel we marked these domains, and their interactions also explain why the six-species solution can be fragile at small system sizes. More precisely, if the typical size of the domains is comparable to the linear size of the lattice, one type of domain may go extinct. For example, solution #2 has the smallest portion in the right panel of Fig. 5. If this domain diminishes, the delicate balance between the four solutions is broken. This effect is strongly related to the original four-member loop between species \(1\), \(2\), \(3\), and \(4\). If solution #2 diminishes, the invasion activity between species \(1\) and \(4\) decreases significantly. As a typical reaction in a cyclically dominant system [44], the decrease of invasion strength indirectly increases the relative portions of species \(2\) and \(3\). In parallel, solutions #4 and #1 also go extinct, hence the system terminates in the #3 solution. We stress, however, that the coexistence of the mentioned solutions is stable if the system size is large enough. Accordingly, the phase named "all" is a proper and stable composition of all six species, albeit its stability is based on the balance of four four-member solutions.

### Weak interaction between alliances

Next, we present the typical system behavior in the case when \(\gamma\) is small, hence the interaction between the original alliances is weak. The resulting phase diagram is shown in the right panel of Fig. 6. We can see that the system behavior is basically similar to what we observed for strong interaction, when \(\gamma\) is large. Namely, if the mixing rate between the pair species is large enough, they prevail and the other solutions have no chance to survive. Furthermore, the rival cyclic alliance can only win if its inner invasion is intensive enough and \(\beta\) is low. For small \(\beta\) and \(\alpha\) parameter pairs, however, the six-member solution can win. Similarly, the character of the phase transitions is the same as reported for \(\gamma=0.9\). An illustration is shown in the left panel of Fig. 6, where the system passes from the "pair" solution to the "all" phase and finally terminates in the "cyclic" phase as we increase \(\alpha\).
There is, however, a striking difference between the phase diagrams obtained for large and small \(\gamma\) values. Namely, the region ruled by the "pair" alliance is significantly larger for \(\gamma=0.2\). Note that this solution is always dominant if \(\beta>0.015\), independently of the value of \(\alpha\), which determines the power of the cyclic alliance. For comparison, this threshold value is almost five times larger for \(\gamma=0.9\). This simply means that a weak interaction between the competing alliances does not allow the cyclic solution to exploit its defensive mechanism, and the simpler site-exchange dynamics is more efficient. The relatively weak interaction between the alliances also makes the simulations difficult. To illustrate the difficulties: if we launch the evolution from a random initial state, even an \(L=3200\) linear system size can be inadequate to identify the solution that is valid in the large-size limit. Even this linear system size can give different destinations at the same \(\alpha-\beta\) parameter pairs if the evolution is started from a random configuration.

Figure 6: The right panel shows the phase diagram obtained at \(\gamma=0.2\). The topology of the diagram is conceptually similar to the diagram shown in Fig. 2. As previously, the blue dashed (red solid) line marks the positions of discontinuous (continuous) phase transitions. The left panel shows a cross section obtained at \(\beta=0.004\). As we increase \(\alpha\), the system changes from the “pair” to the “all” to the “cyclic” solution via a discontinuous and a continuous phase transition, respectively. As previously, the order parameter is the sum of the portions of species \(5\) and \(6\).

The quantitative differences between the presented phase diagrams motivated us to check how the system behavior changes if we gradually vary the interaction strength between the alliances. First, we show a representative phase diagram obtained at a large value, \(\alpha=0.9\). Here, the inner invasion among the members of the quartet is intensive, which makes their defensive alliance effective. As the left panel of Fig. 7 indicates, the presence of the strong cyclic solution makes the diagram simpler: either the mentioned alliance or the rival pair wins, but there is no chance for the six-member solution to emerge. This conclusion is in harmony with our previous observations, because the latter solution can only survive if none of the competing alliances is strong enough. The new diagram also confirms our previous finding about how the interaction strength alters the relation of the rival alliances. Namely, as we increase \(\gamma\), the cyclic solution becomes even stronger and can counterbalance the pair solution at relatively large \(\beta\) values. The opposite is also true: as we lower \(\gamma\), hence weakening the interaction between the competing alliances, the pair solution becomes dominant even at very small \(\beta\) values. Similarly to the previous cases, the usage of a prepared initial state proves to be very useful. To illustrate this, we present the fixation probability of reaching the cyclic solution in dependence on \(\beta\) at a fixed \(\gamma=0.6\) value. As the right panel of Fig. 7 illustrates, here already the \(L=400\) linear system size provides acceptable accuracy, but the application of the \(L=800\) system size makes our prediction precise, yielding the critical mixing rate \(\beta_{c}=0.04295\pm 0.00005\).
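As an illustration of this measurement protocol, the sketch below shows how the prepared half-and-half initial state, the \(\rho_{5}+\rho_{6}\) order parameter, and the fixation probability can be organized in code. It is a minimal, illustrative sketch only: the elementary Monte Carlo update implementing the \(\alpha\), \(\beta\), and \(\gamma\) rules of the food web in Fig. 1 is not reproduced here and is passed in as a user-supplied function, and all names are ours rather than taken from the actual simulation code.

```python
import numpy as np

PAIR = (5, 6)           # the two-member, site-exchanging alliance
QUARTET = (1, 2, 3, 4)  # the cyclically invading quartet

def prepared_initial_state(L, rng):
    """Half-and-half initial state: left half hosts the quartet,
    right half hosts the pair, both randomly distributed."""
    lattice = np.empty((L, L), dtype=np.int8)
    lattice[:, : L // 2] = rng.choice(QUARTET, size=(L, L // 2))
    lattice[:, L // 2 :] = rng.choice(PAIR, size=(L, L - L // 2))
    return lattice

def order_parameter(lattice):
    """rho_5 + rho_6: fraction of sites occupied by the pair species."""
    return np.isin(lattice, PAIR).mean()

def fixation_probability(mc_sweep, L, n_runs, max_sweeps, seed=0):
    """Fraction of independent runs that end in the cyclic (quartet) state.
    `mc_sweep(lattice, rng)` must implement one Monte Carlo sweep of the
    model's alpha/beta/gamma dynamics (not reproduced here). Runs that do
    not fixate within `max_sweeps` are counted as non-cyclic."""
    rng = np.random.default_rng(seed)
    cyclic_wins = 0
    for _ in range(n_runs):
        lattice = prepared_initial_state(L, rng)
        for _ in range(max_sweeps):
            mc_sweep(lattice, rng)
            rho_pair = order_parameter(lattice)
            if rho_pair == 0.0:   # the quartet has fixated
                cyclic_wins += 1
                break
            if rho_pair == 1.0:   # the pair has fixated
                break
    return cyclic_wins / n_runs
```

Estimating a transition point then amounts to scanning \(\beta\) (or \(\alpha\)) and locating where this probability jumps between 0 and 1, as in Figs. 3 and 7.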
To complete our survey of the parameter space, we also explore the phase diagram for a small \(\alpha\) value, where the cyclic alliance is weak. Our results are summarized in the left panel of Fig. 8. If we compare this diagram with the previous one, several interesting conclusions can be drawn. First, we note that the horizontal range of the phase diagram is just one-fifth of the range presented in Fig. 7. This simply means that the pair solution becomes dominant already at a very small mixing rate.

Figure 8: Left panel: phase diagram obtained at \(\alpha=0.2\), when the invasion within the cyclic alliance is moderate. The blue dashed (red solid) line represents discontinuous (continuous) phase transitions. Right panel: cross section of the diagram obtained at \(\beta=0.014\). As we increase the value of \(\gamma\), the change of the \(\rho_{5}+\rho_{6}\) order parameter marks four consecutive transitions between the “pair” \(\rightarrow\) “cyclic” \(\rightarrow\) “all” \(\rightarrow\) “pair” \(\rightarrow\) “all” phases.

Figure 7: Left panel: phase diagram obtained at \(\alpha=0.9\), when the invasion within the cyclic alliance is intensive. The solutions which dominate the \(\beta-\gamma\) parameter plane are separated by a discontinuous phase transition line. Right panel: fixation probability of reaching the four-member cyclic solution in dependence on \(\beta\) at fixed \(\gamma=0.6\). The simulations were launched from a prepared initial state, shown in the right panel of Fig. 3. The linear system sizes are shown in the legend.

Previously we concluded that large \(\gamma\) values help the cyclic solution. But the latter is weak now, hence the upper-left corner of the \(\beta-\gamma\) parameter plane is occupied by the new solution in which all six species are present. In general, decreasing \(\gamma\) should support the pair solution, but the mixing rate is extremely small now. Therefore, the cyclic quartet can prevail in the lower-left region, because the uniform \(\alpha\) invasion rate makes it stronger against the trios in the new four-member solutions, where unequal \(\alpha\) and \(\gamma\) invasion rates are present in the loop [38]. This is again a nice example illustrating that a larger loop can be fitter if it uses homogeneous invasion rates. One of the key messages of our work is that new solutions or subtle system behavior can only be expected in the parameter region where none of the competing alliances is strong enough to dominate the evolution. To illustrate this, we present a cross section of the last phase diagram at a fixed \(\beta=0.014\) value. The \(\rho_{5}+\rho_{6}\) order parameter is shown in the right panel of Fig. 8. Here we can detect four consecutive phase transitions by changing only the interaction strength between the alliances. Starting from the "pair" solution at small \(\gamma\), we first reach the "cyclic" solution via a discontinuous phase transition. By increasing \(\gamma\) further we gradually enter the six-species phase, which is followed by the "pair" solution via a sudden change. If we increase \(\gamma\) even further, we reach the "all" phase again. In this way we can detect two reentrant transitions, for two different phases ("pair" and "all"), by changing a single control parameter.

## Discussion

Our principal goal was to identify the features which may determine the fitness of a defensive alliance when it fights against a rival group.
A similar research question was raised previously [37, 39], but our present work focuses on a case in which two groups of unequal size compete in a symmetrical way. We stress that the defensive mechanisms by which each group protects its members from an external invader are different. While in the smaller unit the actors exchange their positions at a certain rate, the members of the quartet invade each other cyclically. Importantly, we assume that the members of these two groups can invade each other in a balanced way, which results in a three-parameter model. To answer the original question we systematically explored the whole parameter space. We found that the two-member alliance is extremely effective: it practically rules the parameter space independently of the other parameters. The rival alliance only has a chance to win if the mixing rate within the pair is very low. Importantly, the cyclic solution needs an intensive inner invasion flow to win. This observation confirms previous findings obtained from the competition of three-member loops. Interestingly, the cyclic solution has a better chance of winning if the interaction between the alliances is intensive. In the other extreme case, when members of different groups hardly invade each other, the site exchange becomes even more dominant. More subtle system behavior can be identified in the parameter region where neither of the alliances is strong. This happens if the parameters that determine the site exchange and the inner invasion are both small. In this case new four-member solutions emerge, which are based on a three-member loop extended by the external member of the pair. According to the defensive alliance principle, which is valid for a three-member loop, the external species should be defeated, but the site exchange with its partner prevents the original mechanism from working, hence establishing a four-member solution. We can identify four such solutions, which interact via the original invasion loop of the cyclic alliance. In this way they form a delicate balance, hence all six species can coexist stably. When we searched for the stable solution at specific parameter values, we detected serious finite-size effects, especially when the evolution was launched from a random initial state. These difficulties can practically be avoided if we apply prepared initial states in which the competing solutions can fight from the very beginning. It is worth stressing that our abstract model cannot be applied directly to a real-life system, but the mechanisms and principles we revealed could be basic elements for understanding more complex system behaviors in which many species are fighting. Furthermore, for simplicity, we used the term "species" to describe the actors of this system, but such interactions are not restricted to biological systems [45, 46, 47]. There are other evolutionary game models, mostly motivated by human society, where competing strategies behave in a similar way [48, 49, 50, 51].
2302.11726
Chung's Law of the Iterated Logarithm for a Class of Stochastic Heat Equations
We establish a Chung-type law of the iterated logarithm for the solutions of a class of stochastic heat equations driven by a multiplicative noise whose coefficient depends on the solution, and this dependence takes us away from the Gaussian setting. Based on the literature on small ball probabilities and the technique of freezing coefficients, the limiting constant in Chung's law of the iterated logarithm can be evaluated almost surely.
Jiaming Chen
2023-02-23T01:18:10Z
http://arxiv.org/abs/2302.11726v2
# Chung's law of the iterated logarithm for a class of stochastic heat equations

###### Abstract.

We establish a Chung-type law of the iterated logarithm for the solutions of a class of stochastic heat equations driven by a multiplicative noise whose coefficient depends on the solution, a dependence which takes us away from the Gaussian setting. Based on existing results from the literature on small ball probabilities, we provide both upper and lower bounds for the limiting term almost surely.

Key words and phrases: Small ball probability, Chung's law of the iterated logarithm

2020 Mathematics Subject Classification: Primary, 60H15; Secondary, 60G17

## 1. Introduction

Let \(\{B_{t}\}_{t\geq 0}\) be a one-dimensional Brownian motion. [1] proved that with probability one, \[\liminf_{t\to\infty}\left(\frac{\log\log t}{t}\right)^{\frac{1}{2}}\sup_{0\leq s\leq t}|B_{s}|=\frac{\pi}{\sqrt{8}}.\] Generally speaking, Chung's law of the iterated logarithm (Chung's LIL) characterizes the lower envelope (\(\liminf\)) of the local oscillations of the sample path. Chung's LIL has already been applied to certain stochastic integrals [2], to rates of convergence in the functional form [13], to the iterated Brownian motion [12], and to a hypoelliptic Brownian motion in the Heisenberg group [14]. Furthermore, [11] establishes Chung's LIL for a class of anisotropic Gaussian random fields with stationary increments, which is applicable to space-time Gaussian random fields and to solutions of stochastic partial differential equations (SPDEs). Recently, [11] proved Chung's LIL for a wider class of Gaussian random fields that need not have stationary increments. These results, however, are still confined to the Gaussian setting. The purpose of this paper is to study Chung's LIL at the origin for a class of non-Gaussian processes. This is mainly motivated by [1], since it is well known that a key step in establishing a Chung's LIL is a small ball probability estimate. Small ball probability problems have a long history, and one can see [10] for an overview of known results on Gaussian processes and references on other processes. In short, we investigate the probability that a stochastic process \(X_{t}\) starting at \(0\) stays in a small ball for a long time, i.e., \[P\left(\sup_{0\leq t\leq T}|X_{t}|<\varepsilon\right)\] where \(\varepsilon>0\) is small. Let us summarize the major steps. We construct a sequence of events involving the solution \(u\), use the small ball probability estimate of [1], and apply the first Borel-Cantelli lemma to derive the lower bound on the limiting term. The upper bound on the limiting term, however, cannot be obtained directly from the second Borel-Cantelli lemma because of the solution-dependent coefficient: we are unable to build a sequence of independent events. Instead, we use a freezing technique. To be precise, given the upper bound for a linear SPDE with a constant coefficient in [1], we can approximate the upper bound for the non-Gaussian process, provided we can properly control the limiting term of their difference.

## 2. Main Result

We consider a class of stochastic heat equations given by \[\partial_{t}u(t,x) =\partial_{x}^{2}u(t,x)+\sigma(u(t,x))\dot{W}(t,x),\] \[u(0,x) =u_{0}(x)\equiv 0, \tag{2.1}\] on the circle with \(x\in[0,J]\) and endpoints identified, \(t\in\mathbb{R}^{+}\), where \(\dot{W}(t,x)\) is a space-time white noise. We will need the following two assumptions on the function \(\sigma:\mathbb{R}\to\mathbb{R}\).
**Hypothesis 2.1**.: _There exists a constant \(\mathcal{D}\geq 0\) such that for all \(t\geq 0\), \(x\in[0,J]\), \(u,v\in\mathbb{R}\),_ \[|\sigma(u(t,x))-\sigma(v(t,x))|\leq\mathcal{D}|u-v|. \tag{2.2}\] **Hypothesis 2.2**.: _There exist constants \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}>0\) such that for all \(t\geq 0\), \(x\in[0,J]\), \(u\in\mathbb{R}\),_ \[\mathcal{C}_{1}\leq\sigma(u(t,x))\leq\mathcal{C}_{2}. \tag{2.3}\] In fact, (2.1) is not well-posed since the solution \(u\) is not differentiable and \(\dot{W}\) exists as a generalized function. However, under the assumptions (2.2) and (2.3), we define the mild solution \(u(t,x)\) to (2.1) in the sense of Walsh [20]: \[u(t,x)=\int_{0}^{J}G(t,x-y)u_{0}(y)dy+\int_{0}^{t}\int_{0}^{J}G(t-s,x-y)\sigma( u(s,y))W(dyds), \tag{2.4}\] where \(G:\mathbb{R}^{+}\times[0,J]\to\mathbb{R}\) is the fundamental solution of the heat equation \[\partial_{t}G(t,x) =\frac{1}{2}\partial_{x}^{2}G(t,x),\] \[G(0,x) =\delta_{0}(x).\] We are now ready to state the main result of this paper. **Theorem 2.1**.: _Under the assumptions (2.2), (2.3), if \(u(t,x)\) is the solution to (2.1) with \(u_{0}(x)\equiv 0\), then there is \(\mathcal{D}_{0}(J,\mathcal{C}_{1},\mathcal{C}_{2})>0\) and positive constants \(\kappa_{1}\) and \(\kappa_{2}\) depending only on \(\mathcal{D}\), \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) in (2.2) and (2.3) such that for any \(\mathcal{D}<\mathcal{D}_{0}\), we have, almost surely,_ \[\kappa_{1}\leq\liminf_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4} \\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)|}{f(r)}\leq\kappa_{2},\] _where \(f(r):=r\left(\log\log\left(\frac{1}{r}\right)\right)^{-\frac{1}{6}}\)._ Here we make a crucial observation that could be of independent interests. **Remark 2.1**.: _When \(\mathcal{D}=0\), equation (2.1) becomes a linear heat equation with a constant coefficient, and our result matches the Theorem 7.4 in [10]. It is however still necessary to prove that the limiting term is a constant with probability one, which requires a general zero-one law. We face the main difficulty of not being able to find a sequence of independent random fields due to the solution dependent coefficient._ The rest of the article is organized as follows. In Section 3, we provide some preliminary results in the literature to prove the main theorem. In Section 4, we conclude the paper by giving the lower and upper bounds. Throughout the entire paper, \(C\) and \(C^{\prime}\) denote positive constants whose values may vary from line to line. The dependence of constants on parameters will be denoted by mentioning the parameters in parentheses. ## 3. Preliminary [1] estimated the small ball probability for the solution of equation (2.1), and discovered a useful lemma and remark that give critical tail bounds on the noise term. **Theorem 3.1**.: _[_1_, Theorem 1.1(a)]_ _Consider the solution to (2.1) and let the assumptions (2.2) and (2.3) hold. 
Then there is a \(\mathcal{D}_{0}(J,\mathcal{C}_{1},\mathcal{C}_{2})>0\) and positive constants \(\textbf{C}_{0},\,\textbf{C}_{1},\,\textbf{C}_{2}\) and \(\textbf{C}_{3}\) depending only on \(\mathcal{C}_{1},\mathcal{C}_{2}\) and \(\varepsilon_{0}\) such that for any \(\mathcal{D}<\mathcal{D}_{0}\) and all \(0<\varepsilon<\varepsilon_{0},T>1\) we have_ \[\textbf{C}_{0}\exp\left(-\textbf{C}_{1}\frac{TJ}{\varepsilon^{6}}\right)\leq P\left(\sup_{\begin{subarray}{c}t\in[0,T]\\ x\in[0,J]\end{subarray}}|u(t,x)|<\varepsilon\right)\leq\textbf{C}_{2}\exp\left(-\textbf{C}_{3}\frac{TJ}{\varepsilon^{6}}\right). \tag{3.1}\] **Lemma 3.1**.: _[_1_, Lemma 3.4]_ _There exist constants \(\textbf{K}_{1}\), \(\textbf{K}_{2}>0\) such that for all \(\varepsilon,\lambda>0\) we have_ \[P\left(\sup_{\begin{subarray}{c}0\leq t\leq\varepsilon^{4}\\ x\in[0,\varepsilon^{2}]\end{subarray}}|u(t,x)|>\lambda\varepsilon\right)\leq\textbf{K}_{1}\exp\left(-\frac{\textbf{K}_{2}\lambda^{2}}{\mathcal{C}_{2}^{2}}\right). \tag{3.2}\] **Remark 3.1**.: _One can obtain (3.2) by letting \(\alpha=1\) and \(u_{0}\equiv 0\) in [1, Lemma 3.4]. It was also pointed out in [1, Remark 3.1] that if \(|\sigma(u(t,x))|\leq C_{1}\varepsilon\), then one can bound the right-hand side of (3.2) by \(\textbf{K}_{1}\exp\left(-\textbf{K}_{2}\frac{\lambda^{2}}{C_{1}^{2}\varepsilon^{2}}\right).\)_ A recent paper [10] established a general framework that is useful for studying the regularity properties of the sample functions of anisotropic Gaussian random fields and can be applied directly to the solutions of linear SPDEs. **Theorem 3.2**.: _[_10_, Theorem 7.4]_ _Suppose \(u(t,x)\) is the solution to (2.1) when \(\sigma\) is constant. Then, almost surely,_ \[\liminf_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)|}{f(r)}\leq\textbf{C}_{1}^{\frac{1}{6}}, \tag{3.3}\] _where \(f(r)=r\left(\log\log\left(\frac{1}{r}\right)\right)^{-\frac{1}{6}}\) and \(\textbf{C}_{1}\) is the positive finite constant given by Theorem 3.1._ **Remark 3.2**.: _[10] established a harmonizable representation for the solution of linear heat equations with constant coefficients. When \(\dot{W}\) is a space-time white noise, we obtain \(Q=6\) in [10, Theorem 7.4]._ ## 4. Proof of Theorem 2.1 We start by showing the lower bound in Theorem 2.1. Let \(r_{n}=a^{-n}\) where \(a>1\), and choose a constant \(c\) such that \(0<c<\frac{\textbf{C}_{3}^{1/6}}{a}\). We consider the sequence of events \[A_{n}=\left\{\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}\frac{|u(t,x)|}{f(r_{n-1})}\leq c\right\}.\] For any small \(r>0\) we can find \(n\) such that \(r_{n}\leq r\leq r_{n-1}\), and then \(f(r)\leq f(r_{n-1})\) since \(f\) is increasing for \(0<r<\frac{1}{e}\).
If \(P(A_{n})\) is summable, we can use the first Borel-Cantelli lemma and get, almost surely, \[\liminf_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)|}{f(r)}\geq c>0.\] Applying Theorem 3.1 locally, we obtain that for any \(\mathcal{D}<\mathcal{D}_{0}\) and all \(0<\varepsilon<\varepsilon_{0}\), \[P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|u(t,x)|\leq\varepsilon\right)\leq\textbf{C}_{2}\exp\left(-\textbf{C}_{3}\frac{r_{n}^{6}}{\varepsilon^{6}}\right).\] Hence, for all large enough \(n\), letting \(\varepsilon=cf(r_{n-1})\), we have \[P(A_{n})\leq\textbf{C}_{2}\exp\left(-\textbf{C}_{3}\frac{r_{n}^{6}\log\left((n-1)\log a\right)}{c^{6}r_{n-1}^{6}}\right)=\textbf{C}_{2}((n-1)\log a)^{-\frac{\textbf{C}_{3}}{a^{6}c^{6}}},\] which is summable given that \(\frac{\textbf{C}_{3}}{a^{6}c^{6}}>1\). By letting \(c\uparrow\frac{\textbf{C}_{3}^{\frac{1}{6}}}{a}\) along a rational sequence, and since \(a\downarrow 1\) is arbitrary, we conclude that \[\liminf_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)|}{f(r)}\geq\textbf{C}_{3}^{\frac{1}{6}}\quad a.s., \tag{4.1}\] which proves the lower bound upon letting \(\kappa_{1}=\textbf{C}_{3}^{\frac{1}{6}}\). As mentioned in the introduction, it may not be feasible to apply Theorem 3.1 directly to show the upper bound via the second Borel-Cantelli lemma, because of the solution-dependent coefficient. Instead of seeking a sequence of independent events, we freeze the coefficient, replacing \(\sigma(u)\) with \(\sigma(u_{0})\); that is, we approximate the solution \(u(t,x)\) by a Gaussian random field, at least in a random time region. We let \(u_{g}(t,x)\) satisfy the stochastic heat equation \[\partial_{t}u_{g}(t,x)=\partial_{x}^{2}u_{g}(t,x)+\sigma(u_{0}(x))\dot{W}(t,x),\] \[u_{g}(0,x)=u_{0}(x)\equiv 0,\] so that \(u_{g}(t,x)\) is a solution of a linear SPDE with a constant coefficient. We conclude from Theorem 3.2 that \[\liminf_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u_{g}(t,x)|}{f(r)}\leq\mathbf{C}_{1}^{\frac{1}{6}}\quad a.s. \tag{4.2}\] We next show that the difference can be well controlled if the time region is suitably chosen. From the definition of the mild solution (2.4), we denote \[D(t,x) =u(t,x)-u_{g}(t,x)\] \[=\int_{[0,J]\times[0,t]}G(t-s,x-y)[\sigma(u(s,y))-\sigma(u_{0}(y))]W(dyds),\] and \[\widetilde{D}_{n}(t,x)=\int_{[0,J]\times[0,t]}G(t-s,x-y)[\sigma(u(s\wedge\tau_{n}^{g},y))-\sigma(u_{0}(y))]W(dyds),\] where \(\tau_{n}^{g}\) is a stopping time defined as \[\tau_{n}^{g}=\inf\left\{t:|u(t,x)-u_{0}(x)|>g(r_{n})\text{ for some }x\in[0,r_{n}^{2}]\right\}.\] Clearly, on the event \(B_{n}=\{\tau_{n}^{g}\geq r_{n}^{4}\}\), we have \[\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|D(t,x)|=\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|\widetilde{D}_{n}(t,x)|,\] which leads to \[P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|D(t,x)|>\lambda f(r_{n})\right)\leq P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|\widetilde{D}_{n}(t,x)|>\lambda f(r_{n})\right)+P(B_{n}^{c}). \tag{4.3}\]
We choose \(g(r_{n})=\lambda r_{n}\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{\frac{1}{2}}\), and apply Lemma 3.1 and Remark 3.1 to get \[P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|\widetilde{D}_{n}(t,x)|>\lambda f(r_{n})\right) \leq\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}\lambda^{2}f(r_{n})^{2}}{\lambda^{2}D^{2}r_{n}^{2}\left[\log\log\left(\frac{1}{r_{n}}\right)\right]r_{n}^{2}}\right)\] \[=\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}}{D^{2}r_{n}^{2}\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{\frac{4}{3}}}\right),\] and \[P(B_{n}^{c}) \leq P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|u(t,x)-u_{0}(x)|>\lambda r_{n}\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{\frac{1}{2}}\right)\] \[\leq\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}\lambda^{2}\log\log\left(\frac{1}{r_{n}}\right)}{\mathcal{C}_{2}^{2}}\right).\] Recall that \(r_{n}=a^{-n}\), so we derive \[\sum_{n=1}^{\infty}P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|\widetilde{D}_{n}(t,x)|>\lambda f(r_{n})\right)<\infty,\] and, selecting \(\lambda^{2}>\frac{\mathcal{C}_{2}^{2}}{\mathbf{K}_{2}}\), we obtain \[\sum_{n=1}^{\infty}P(B_{n}^{c}) \leq\sum_{n=1}^{\infty}\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}\lambda^{2}\log(n\log a)}{\mathcal{C}_{2}^{2}}\right)\] \[=\sum_{n=1}^{\infty}\mathbf{K}_{1}(n\log a)^{\left(-\frac{\mathbf{K}_{2}\lambda^{2}}{\mathcal{C}_{2}^{2}}\right)}<\infty.\] By letting \(\lambda\downarrow\frac{\mathcal{C}_{2}}{\sqrt{\mathbf{K}_{2}}}\) along a rational sequence and using (4.3), we find that \[\sum_{n=1}^{\infty}P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|D(t,x)|>\lambda f(r_{n})\right)<\infty,\] which, by the first Borel-Cantelli lemma, implies \[\limsup_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)-u_{g}(t,x)|}{f(r)}\leq\frac{\mathcal{C}_{2}}{\sqrt{\mathbf{K}_{2}}}\quad a.s. \tag{4.4}\]
Alternatively, if we choose \(g(r_{n})=\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{-\frac{2}{3}}\), then from Lemma 3.1 and Remark 3.1 we obtain \[P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|\widetilde{D}_{n}(t,x)|>\lambda f(r_{n})\right) \leq\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}\lambda^{2}f(r_{n})^{2}}{D^{2}\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{-\frac{4}{3}}r_{n}^{2}}\right)\] \[=\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}\lambda^{2}\log\log\left(\frac{1}{r_{n}}\right)}{D^{2}}\right),\] and \[P(B_{n}^{c}) \leq P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|u(t,x)-u_{0}(x)|>\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{-\frac{2}{3}}\right)\] \[\leq\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}}{\mathcal{C}_{2}^{2}r_{n}^{2}\left[\log\log\left(\frac{1}{r_{n}}\right)\right]^{\frac{4}{3}}}\right).\] Let \(\lambda^{2}>\frac{D^{2}}{\mathbf{K}_{2}}\); then we have \[\sum_{n=1}^{\infty}P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|\widetilde{D}_{n}(t,x)|>\lambda f(r_{n})\right) \leq\sum_{n=1}^{\infty}\mathbf{K}_{1}\exp\left(-\frac{\mathbf{K}_{2}\lambda^{2}\log(n\log a)}{D^{2}}\right)\] \[=\sum_{n=1}^{\infty}\mathbf{K}_{1}(n\log a)^{\left(-\frac{\mathbf{K}_{2}\lambda^{2}}{D^{2}}\right)}<\infty,\] and \[\sum_{n=1}^{\infty}P(B_{n}^{c})<\infty.\] By letting \(\lambda\downarrow\frac{D}{\sqrt{\mathbf{K}_{2}}}\) along a rational sequence and using (4.3), we find that \[\sum_{n=1}^{\infty}P\left(\sup_{\begin{subarray}{c}0\leq t\leq r_{n}^{4}\\ x\in[0,r_{n}^{2}]\end{subarray}}|D(t,x)|>\lambda f(r_{n})\right)<\infty,\] which, by the first Borel-Cantelli lemma, implies \[\limsup_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)-u_{g}(t,x)|}{f(r)}\leq\frac{D}{\sqrt{\mathbf{K}_{2}}}\quad a.s. \tag{4.5}\] Finally, from inequalities (4.4) and (4.5), we conclude \[\limsup_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)-u_{g}(t,x)|}{f(r)}\leq\frac{\min\{D,\mathcal{C}_{2}\}}{\sqrt{\mathbf{K}_{2}}}\quad a.s., \tag{4.6}\] and, additionally using inequality (4.2), we end up with \[\liminf_{r\to 0^{+}}\sup_{\begin{subarray}{c}0\leq t\leq r^{4}\\ x\in[0,r^{2}]\end{subarray}}\frac{|u(t,x)|}{f(r)}\leq\mathbf{C}_{1}^{\frac{1}{6}}+\frac{\min\{D,\mathcal{C}_{2}\}}{\sqrt{\mathbf{K}_{2}}}\quad a.s., \tag{4.7}\] which gives the upper bound upon letting \(\kappa_{2}=\mathbf{C}_{1}^{\frac{1}{6}}+\frac{\min\{D,\mathcal{C}_{2}\}}{\sqrt{\mathbf{K}_{2}}}\).

## 5. Acknowledgment

The author would like to thank his advisor Professor Carl Mueller for insightful discussions and constructive comments.
2308.04416
Legal Summarisation through LLMs: The PRODIGIT Project
We present some initial results of a large-scale Italian project called PRODIGIT, which aims to support tax judges and lawyers through digital technology, focusing on AI. We have focused on the generation of summaries of judicial decisions and on the extraction of related information, such as the identification of legal issues and decision-making criteria, and the specification of keywords. To this end, we have deployed and evaluated different tools and approaches to extractive and abstractive summarisation. We have applied LLMs, and particularly GPT4, which has enabled us to obtain results that proved satisfactory, according to an evaluation by expert tax judges and lawyers. On this basis, a prototype application is being built which will be made publicly available.
Thiago Dal Pont, Federico Galli, Andrea Loreggia, Giuseppe Pisano, Riccardo Rovatti, Giovanni Sartor
2023-08-04T16:59:48Z
http://arxiv.org/abs/2308.04416v1
# Legal Summarisation through LLMs: The PRODIGIT Project ###### Abstract We present some initial results of a large-scale Italian project called PRODIGIT which aims to support tax judges and lawyers through digital technology, focusing on AI. We have focused on generation of summaries of judicial decisions and on the extraction of related information, such as the identification of legal issues and decision-making criteria, and the specification of keywords. To this end, we have deployed and evaluated different tools and approaches to extractive and abstractive summarisation. We have applied LLMs, and particularly on GPT4, which has enabled us to obtain results that proved satisfactory, according to an evaluation by expert tax judges and lawyers. On this basis, a prototype application is being built which will be made publicly available. **Keywords:** Large language models, automated summarisation, keyword extraction, sentence classification, tax law cases ## 1 Introduction The law is typically a natural-language-based domain, and natural-language texts are pervasive in the law. First, natural language is the medium that legislation (including administrative regulations of all kinds) uses to express legal prescriptions, which humans (both experts and laypeople) are assumed to understand and comply with. Legislative and regulatory bodies have produced complex and evolving networks of natural language texts, which have complex structures and interconnections and use diverse terminologies to express technical and non-technical content. Second, natural language is used in judicial proceedings and opinions. In a proceeding, the parties to a legal case rely on natural language to express their arguments, motions, and claims, as do witnesses in their testimonies. In their opinions, judges use natural language to report the facts of the case, summarise the arguments made by the parties, and express the reasons behind interpretations, rulings, and decisions. Natural language is normally used by private parties to express contracts and other agreements as well as the accompanying documents. Finally, natural language is the usual medium for a host of legally relevant documents, at any level of complexity, that are relevant for the creation, interpretation, and application of the law, such as doctrinal scholarship, legal theories, and case commentaries. Natural language in a legal text may use complex syntactical structures and rich terminologies, whose dense meaning results from the combination of common sense, technical knowledge, and past legal interpretations. The complexity and density of legal language have so far been a key obstacle to the deployment of AI technologies. This has been the case, on the one hand, for the deployment of symbolic AI approaches, which have struggled to capture, through the chosen formalisms, the variety, ambiguity, and meaning density of legal language. The more such formalisms have tried to reproduce the richness of natural language, the more work-intensive the knowledge-representation exercise has been and the more debatable its outcomes. On the other hand, the complexity of legal language has also limited the application of machine-learning NLP-based approaches. Most commonly, a supervised approach has been adopted where the links between input texts and the targets being predicted have to be specified by manually tagging large training sets of legal documents. 
The preparation of such training sets is very labour-intensive, and the information to be extracted is limited to the connections emerging from the tags in the training set. This scenario may now be changed by the advent of large language models, also called "foundational models"[1]. The largest and best performing family of such models so far is the GPT family, developed by Open AI, the latest releases of which are GPT3.5 and GPT4 (both embedded in ChatGPT) [2]. Other large language models have recently been produced, an example being Google's Bard. As is known, these models are very large neural networks, operating on hundreds of billions of parameters (links in the network). They have been constructed by training such networks on enormous sets of natural language documents. Based on that, a network learns to predict the texts (sequences of words) that are likely to meaningfully complement the input contexts (the prompts) that are provided by users (for an introduction to LLMs, see [3]). The results achievable through large language models have been surprisingly good. The "stochastic parrots" - as these systems have been called given their ability to express well-formed and seemingly meaningful language without having a real understanding of what is being said [4] - can draft complex texts that in many cases have sufficiently good quality for different uses relevant to the law (translation, summarisation, document analysis, draft generation, completion). It is true that in some cases such systems "hallucinate", i.e., they provide content (e.g., legal citations or reasoning) that does not match reality, or give answers that violate basic logic. However, even if they only mimic the text generation of cognisant humans, their performance is considerable and in many cases surprisingly good, or at any rate satisfactory for many practical applications. In the following, we shall discuss how LLMs have been used in the PRODIGIT project, an initiative developed in Italy by the Presidential Council of Tax Justice (Consiglio di Presidenza della Giustizia Tributaria, CPGT), in cooperation with the Ministry of the Economy and Finance (Ministero dell'Economia e delle Finanze, MEF), and funded by the National Operative Project for Governance and Institutional Capacity Programme 2014-2020. The general aim of the project is to provide support to tax judges and lawyers through digital technologies, in particular through AI techniques. In the project, LLMs have been used for two main purposes: (1) to prepare summaries and headnotes of judicial decisions and extract related information; and (2) to provide semantic tools for searching and analysing the case law. In the following, we shall first describe the context of the project (Section 2) and the dataset used (Section 3) and will then address the summary-generation function (Sections 4 to 7). Following this analysis, we present the evaluation procedure based on surveys carried out with legal experts (Section 8). Finally, we consider related work on the use of LLMs in the legal domain (Section 9) and provide some considerations for future work (Section 10). ## 2 Tax Law Adjudication Italian tax law adjudication involves three levels: (1) tax courts of first instance, (2) tax courts of second Instance, and (3) the Supreme Court of Cassation. The process begins with the tax courts of first instance, where taxpayers can file complaints against the decisions made by central and local Italian tax authorities. 
These courts have jurisdiction over cases related to most tax matters, including income tax, value-added tax (VAT), corporate tax, and local taxes. Both parties (taxpayers and tax authorities) present their arguments and evidence during the first instance proceedings, after which the court issues a legally binding judgement. If dissatisfied, either the taxpayer or the tax authority or tax office can appeal to a tax court of second instance, which have jurisdiction over an entire region. These have the power to confirm, modify, or overturn the first-instance decision. Further appeals can be made to the Supreme Court of Cassation, the highest judicial authority, on matters of law (the Court of Cassation cannot re-examine the facts of the case). Tax law judges are a mixed group, including both professional and non-professional judges. The latter are usually lawyers or accountants who serve part-time in a judicial capacity and are paid based on the number of cases they decide. The quality of their decisions is often said to vary significantly, and to be on average lower than in other parts of the judiciary. Italy, like other countries, faces a very large number of tax-related cases. This is determined by many factors, among which the large taxpayer base, the relatively low level of tax compliance, and the uncertainties in tax law, as determined by the complexity of the tax regime and the frequent changes in tax law provisions. The 2022 report provided by the Department of Finance1 shows that the number of complaints received by tax courts of first instance in 2022 was about 145,972, while appeals brought to courts of second instance were 41,051. The Court of Cassation receives about 10,531 appeals against second-instance decisions. Footnote 1: MEF Dipartimento delle Finanze, Relazione sul Contenzioso della Giustizia Tributaria, giugno 2023 [https://www.finanze.gov.it/export/sites/finanze/galleries/Documenti/Contenzioso/Relazione-monitoraggio-contenzioso-2022.pdf](https://www.finanze.gov.it/export/sites/finanze/galleries/Documenti/Contenzioso/Relazione-monitoraggio-contenzioso-2022.pdf) Italy has made some efforts to modernise its tax adjudication system by embracing digital solutions. One significant development is the introduction of electronic filing and communication systems, referred to as a "telematic tax process" (_Processo Tributario Telematico_), which has allowed for the almost complete digitisation of all stages of the judicial process. Through this system, taxpayers can submit appeals, supporting documents, and relevant information, and judges can read, write, and deliver their measures electronically, eliminating the need for physical paperwork, time-consuming procedures, and administrative burdens. In particular, with regard to the decision-making phase, the platform for preparing a judicial decision in digital form (referred to as a "digital judicial decision" - _Provvedimento Giurisdizionale Digitale_, or PGD) has been fully operational in all tax courts since 1 December 2021 and allows the judge to draft, sign, and file decisions fully electronically. At present, the PGD enables the electronic drafting of collegiate (panel) judgements, as well as orders and judgements issued by a sole judge. New technologies, including machine learning and LLMs, are starting to be used to address certain aspects of tax administration. 
For instance, data analytics and AI tools are currently employed by the Italian Tax Administration to analyse large bodies of financial and transactional data and detect potential discrepancies or irregularities.2 Hopefully, this will allow for a more efficient and targeted approach to tax audits and investigations. Tax adjudication lags behind tax administration in the use of AI technologies. The PRODIGIT project aims to make AI technologies available in tax adjudication as well, so as to provide judges and professionals with better, more targeted information and help them efficiently address the complexities of tax law. Footnote 2: Bloomberg Tax, Italy Turns to AI to Find Taxes in Cash-First, Evasive Culture, available at [https://news.bloombergtax.com/daily-tax-report-international/italy-turns-to-ai-to-find-taxes-in-cash-first-evasive-culture](https://news.bloombergtax.com/daily-tax-report-international/italy-turns-to-ai-to-find-taxes-in-cash-first-evasive-culture) ## 3 The PRODIGIT Dataset The PRODIGIT project aims to provide tools that can be applied to the whole of Italian case law in the tax domain. However, for the purpose of experimentation and prototyping, a restricted domain was selected, namely, decisions concerning the registration and recordation tax (_imposta di registro_). This tax concerns the registration and recordation of deeds and other legally relevant documents and applies in particular to various kinds of contracts (such as those involving the transfer of real estate). It has the dual purpose of raising tax revenue and of compensating the state for the service it provides to private individuals, namely, keeping track of particular deeds and financial transactions so as to give them legal certainty. It is governed by Presidential Decree No. 131/1986 (_Testo Unico dell'imposta di registro_). We only considered decisions that were produced in a native digital format through the online platform provided to tax judges. In the future, the dataset will be expanded to also include the vast amount of past decisions that are currently only available as scanned images of paper documents. We started with a collection of approximately 1,500 decisions addressing certain selected topics within the domain of the registration tax. Of these decisions, 750 were delivered by tax courts of first instance and 712 by tax courts of second instance. The decisions span the period from 2021 to 2023, with most issued in 2022. They were delivered by the tax courts of different Italian regions and provinces. The decisions have a standard structure consisting of the following parts: 1. _Introduction_, reporting (i) the number of the decision, (ii) the composition of the judicial panel, and (iii) the parties and their attorneys (if present), the latter of which had previously been anonymised; 2. _Development of the Proceeding_, reporting (a) the facts related to the tax administrative process and, when delivered at the second instance (on appeal), the procedural facts related to the first-instance (trial court) proceedings (e.g., the parties' requests, claims, and arguments, as well as the first-instance decisions by the tax court); and (b) the requests by the parties, often presented with the related claims and arguments, possibly formulated as appeals against the first-instance decision; 3. _Grounds of the Decision_, stating the reasons in fact and in law supporting the court's decision; 4.
_Final Ruling_, stating whether the complaint or appeal has been accepted or denied, and allocating the costs of the proceedings. A dataset of 17,000 decisions from various areas of tax law is currently being normalised - i.e., anonymised, segmented into relevant partitions, and corrected to fix typos and garbled text - and will be used in future developments of the project. ## 4 Summarisation of Tax Law Decisions The first task addressed in the PRODIGIT project concerns the summarisation of judicial decisions. In this section, we introduce the concept of summarisation and present the running example that will be used in the following section ### Summarisation in the Legal Domain Summarisation is the process of condensing a large set of input information into a shorter document, the summary, which still contains the most significant information, or at any rate the information that is relevant to the task at hand. In general, summarisation is subject to the need to jointly satisfy conflicting requirements as best as possible: providing a summary that is as short as possible but still includes as much of the relevant information as possible. Summarisation is very important in the legal domain, where the amount of available legal materials overwhelms the human capacity to process them. By providing summaries of decisions, judges and lawyers are given a chance to determine more quickly whether a precedent is relevant to the issue at hand, and decide whether it is worth their while engaging with the text in its entirety. Moreover, summarisation may highlight the key points of a lengthy decision, enabling lawyers to focus on them. Legal documents are particularly challenging for summarisation compared to other types of texts. These challenges relate to multiple aspects, such as the length of the documents, the hierarchical and interconnected structure of their parts, their complex technical vocabulary, and the ambiguity of natural legal language, as well as the importance of citations to legal sources. In Italian legal culture, we can distinguish two kinds of summary accounts (or statements) of judicial decisions. The first account consists of so-called "maxims" (_massime_ in Italian). A maxim (_massima_) specifies the most significant principles stated in leading judicial decisions. There exists an office in the Italian Supreme Court (the _Ufficio Massimario_) that is tasked with preparing maxims from the case law of that court. The highly qualified judges working in this office identify what decisions deserve a maxim, since they introduce principles that are particularly important, establishing new law or solving a previously unsettled issue. These principles are given compact linguistic formulations (the maxims) which are published in an online collection. In the tax domain, the function of preparing maxims has until recently been carried out by regional bodies and will be entrusted to a national body in the future. Maxims are important in the Italian legal system since they are often taken as authoritative statements of the law, and are used in arguments to support interpretive and other claims. There is indeed a debate among Italian lawyers on the extent to which maxims effectively contribute to a knowledge of the law and to legal certainty, capturing with precision the underlying rationale or _ratio decidendi_ of important cases. In any event, this is an important and persisting aspect of Italian legal culture. 
The second account consists, more modestly, in providing summaries, i.e., abstracts of legal cases, to be used to "triage" retrieved cases and identify the points in them that are most relevant. In other words, a summary enables lawyers to decide whether they should engage with the whole case (or a section of it) and points them to its most significant aspects. It is this second kind of account (the summary) that we aim to provide with PRODIGIT. We do not undertake to replace the production of maxims, a task that, as noted, requires advanced legal skills, the maxims themselves playing a distinctive role within the institutional arrangements of the Italian legal system. However, we believe that automated summarisation, particularly in the form of the automated extraction of "decision-making criteria" (see Section 7), may provide useful support to the office tasked with preparing the maxims (the _Ufficio Massimario_). We experimented with both extractive and abstractive summarisation, considering that both approaches are potentially useful in the legal domain and that they present complementary advantages and disadvantages. Extractive summarization selects the most meaningful sentences in the input text and combines them to form the summary. No change is made to the textual content of the extracted sentences. The extractive approach has the advantage of ensuring that all content in the summary is obtained from the input document, without any spurious addition. Moreover, it enables the reader to move from the selected sentences to their position in the original document, so as to obtain context for such sentences when needed. On the other hand, the extractive approach may fail to capture all relevant content, or may do so only at the cost of reproducing large parts of the original text, thus defeating the very purpose of summarization. Abstractive summarization generates a new text which aims to provide a synoptic statement of the content of the input documents, without reproducing their wording. The abstractive approach - when it does its job well - has the advantage of providing a short text that, in an appropriate linguistic form, still captures the salient content of a much larger document. But it may not work well: it carries the risk of misleading readers by "hallucinating", i.e., generating content that is not found in the original text. Both extractive and abstractive summarization can find their use in the legal domain. The extractive approach is most appropriate in those cases in which the input documents are well structured, being divided into relatively short sentences, each of which delivers a separate message. This is usually the case with decisions by the highest courts, which generally address separately the different issues submitted by the parties (the claims contesting lower-level decisions), presenting in an orderly manner the reasons for deciding the case in one way or the other. These high courts see it as part of their mission to state binding, or at least persuasive, principles that should guide lower courts, and they often expressly state these principles in separate, well-recognisable sentences. Unfortunately, this was not the case with the decisions we considered, namely, first-instance and appeal decisions in the tax domain, which often include long sentences addressing different issues and introducing new elements that concern issues discussed in previous sentences.
The abstractive approach may be more appropriate when the legal reasoning is developed in long, sprawling sentences, having mixed content, or when a long decision needs to be summarised into a short and clear account, regardless of the way in which this content is expressed in the original document. In our experiments, we used both extractive and abstractive approaches and submitted their outcome to the evaluation of legal experts (see Section 8). ### Running Example As a running example, we use Decision No. 7683 issued on 14 September 2022 by the Court of Second Instance of Sicily. The case concerns the application of the first-time home buyer tax relief on the purchase of a house by a person who already owns another property. The buyers argued that they were entitled to the relief since that property was unsuitable for housing. The tax office had nevertheless refused to grant the relief, so the buyer attacked this decision in front of the tax court of first instance, winning the case, i.e., finding that the buyer was entitled to the relief. The tax administration appealed the decision, and the court of second instance upheld the first-instance decision on the ground that the property already owned by the buyer was unsuitable for housing according to an expert home-inspection report. This was argued based on the case law of the Supreme Court, according to which the law on the registration tax has to be interpreted in such a way that the tax relief is to be denied only to those who own a house concretely suitable to be used as a dwelling. In the following, we shall use this case both for extractive and abstractive summarisation, which we will illustrate by presenting some portions of (the English translations of) the outputs of our experiments. The entire text of the case and some summaries being produced are available in the Appendix A, in an English translation. The original outcomes of our experiments, in Italian, as well as further outputs, are available in this online repository. The implementation was conducted within the IBM Cloud Pak for Data environment3. This platform offers comprehensive packages essential for AI-based solutions. To access OpenAI models, we utilized the Microsoft Azure API4. Footnote 3: [https://www.ibm.com/products/cloud-pak-for-security](https://www.ibm.com/products/cloud-pak-for-security) - Accessed on August 1st, 2023 Footnote 4: [https://azure.microsoft.com/](https://azure.microsoft.com/) - Accessed on August 1, 2023 ## 5 NLP Summarisation Tools Natural Language Processing is a flourishing field with methods ranging from classical statistical approaches to very large ML-based models capturing subtle semantic features. In our research, we tested both special-purpose NLP tools for extractive summarization and the recent large language models, which we deployed for both extractive and abstractive summarization. ### Special-Purpose NLP Tools Extractive summarization techniques have been around for several decades and are widely used, including in the legal domain. As noted above, the goal of these methods is to extract the most significant sentences from a text with multiple paragraphs, under the assumption that such selected sentences can sum up the meaning of the entire text, or at least convey its most significant legal content. As a first option, we tackled extractive summarization using the following established techniques. 
#### Latent Semantic Analysis (LSA) LSA uses singular value decomposition to identify the underlying relationships between words in a document. It assigns a weight to each sentence based on its semantic similarity to the entire document. Sentences with greater weights are considered more important and are included in the summary. The goal is to create summaries with wide coverage of the document's main content while avoiding redundancy [5]. #### Lex-Rank Lex-Rank uses a graph-based approach to identify the most important sentences in a document. It creates a graph where each sentence is a node, and the edges between the nodes represent the similarity (cosine) between sentences. The most important sentences are those that have the highest centrality scores, which are calculated using PageRank [6]. Lex-Rank is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents [7]. #### TextRank This is a general-purpose graph-based ranking algorithm for NLP. Essentially, it runs PageRank on a graph designed for summarization. It builds a graph using some set of text units as vertices. Edges are based on the measure of semantic or lexical similarity between the text unit vertices. Unlike in PageRank, the edges are typically undirected and can be weighted to reflect degrees of similarity. #### Luhn Luhn uses a statistical approach to identify the most important sentences in a document. The approach assigns a score to each sentence based on the frequency of important words in the sentence. One advantage of Luhn is that it is a simple and interpretable algorithm that can be easily implemented. #### Natural Language Toolkit (NLTK) NLTK is a platform for building Python programs to work with human language data. Its summarizer applies a variation of the TextRank algorithm. It creates a graph where each sentence is a node, and the edges between the nodes represent the similarity between sentences. It assigns a score to each sentence based on its centrality in the graph. The NLTK summarizer is easy to use and can generate high-quality summaries. However, it may struggle to handle documents with complex language or a large number of irrelevant sentences. ### Large Language Models Besides exploring task-specific techniques, we also explored the applicability of more general-purpose tools like Large Language Models (LLMs), among which are IT5 and GPT. LLMs use transformer architectures based on a combination of large neural networks with attention mechanisms to track relationships between the words in a text. Transformers are pre-trained on large amounts of text in a self-supervised fashion, i.e., they are automatically trained from the input text without human intervention. In this way, they learn the statistical connections between words in their sequences, information that is used to develop the basic ability of these models, namely, to effectively predict the next words given an initial text. Though LLMs can be seen as language-to-language machines, all internal processing is numeric. Hence LLM input stages provide embedding functions, i.e., mappings from text chunks to high-dimensional numeric vectors that reflect the semantic features of such text chunks. In more advanced LLMs, the core statistical engine may be wrapped in further layers, which provide the system with additional capacities, such as following instructions, producing output in a prescribed format, and preventing the delivery of inappropriate output.
Though, in principle, pre-trained LLMs can be fine-tuned to specific text corpora, so far we have not engaged in any further training. This is due to the fact that good results could be obtained without fine-tuning the general GTP4 model, and also by the fact that we did not have a large set of tax decisions summarised by humans. Some summaries have been created in the past, but their number is limited and their styles and quality vary greatly. Moreover, the improvement obtained by fine-tuning large language models so far appears to be limited and probably will be even more limited in the future, when the capabilities of state-of-the-art tools improve. However, we plan to engage in some fine-tuning experiments in future developments of this project. We will assess whether this direction is worth taking on the basis of outcomes of these experiments. In the following, we give further details on the LLM we employed. #### _It5_ The IT5 model family offers a sequence-to-sequence transformer model for the Italian language. Based on the Transformer-XL architecture, this model has been pre-trained on a vast dataset of over 5 million web pages, enabling it to capture long-term dependencies in the text. The model provides more coherent and consistent text compared to earlier transformer-based models. Additionally, it has the capability to generate text from a given prompt, making it well-suited for tasks such as summarization, question-answering, and dialogue generation [8]. One can use IT5 for such tasks by accessing the model available at _Huggingface_ platform5, which is straightforward to use with the Python programming language. Of the available models for IT5, we used both _large_ and _small_ ones. Footnote 5: [https://huggingface.co/efederici/sentence-it5-base](https://huggingface.co/efederici/sentence-it5-base) #### _Gpt_ GPT, short for Generative Pre-trained Transformer, is an auto-regressive language model that employs deep learning techniques to produce text that resembles human language. This model is based on a large-scale transformer architecture and has been trained using a vast dataset of webpages. GPT uses a deep neural network to generate text that is highly similar to human writing. The model is trained in a self-supervised learning task, which involves predicting the next word in a sequence of words, given all the previous words. As a result, GPT can produce coherent, fluent, and practically indistinguishable text from human-written text. It can be leveraged for a wide range of natural language processing tasks, such as answering questions, summarising text, and translating languages. Those interested in using GPT can easily access the API offered by OpenAI. We used GPT both in version 3.5 [9] and version 4 [2], the latter being the latest and, according to its developers, far more capable than its predecessor in tasks like sentiment analysis and text classification. GPT-4 can also process a more significant number of input and output tokens, thus accessing more sophisticated tasks. ## 6 Extractive Summarisation We applied all the special-purpose NLP tools described in Subsection 5.1 and the generative models from Subsection 5.2 to the task of extractive summarization. ### Extractive Summarisation through Special-Purpose NLP Tools All special-purpose NLP tools were applied to judicial decisions in order to obtain summaries to be evaluated by legal experts. Unfortunately, the results were far from satisfactory (see Section 8). 
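For concreteness, the following is a minimal sketch of how such special-purpose extractive summarizers can be run over the text of a decision. It assumes the open-source sumy library (which provides LSA, Lex-Rank, TextRank and Luhn implementations) and the availability of Italian sentence-tokenization data; the specific toolkit we actually used is not detailed here, so the snippet should be read as an illustration rather than as the code we ran.

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.summarizers.lex_rank import LexRankSummarizer
from sumy.summarizers.text_rank import TextRankSummarizer
from sumy.summarizers.luhn import LuhnSummarizer


def extractive_summaries(decision_text: str, num_sentences: int = 5) -> dict:
    # Split the decision into sentences (assumes Italian tokenization data is installed).
    parser = PlaintextParser.from_string(decision_text, Tokenizer("italian"))
    summarizers = {
        "LSA": LsaSummarizer(),
        "LexRank": LexRankSummarizer(),
        "TextRank": TextRankSummarizer(),
        "Luhn": LuhnSummarizer(),
    }
    # Each summarizer selects the num_sentences most relevant sentences, left unchanged.
    return {
        name: [str(sentence) for sentence in summarizer(parser.document, num_sentences)]
        for name, summarizer in summarizers.items()
    }
```

Since the selected sentences are returned verbatim, the length and readability of the output depend directly on the sentences of the original decision, which helps explain why these summaries turned out to be lengthy for the decisions we considered.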
In general, we obtained lengthy summaries, in which the relevant legal information was, on the one hand, scattered across the different paragraphs of the summary and, on the other hand, incomplete. In the following, we provide an English translation of a sentence selected by all the examined techniques: **Example 1**.: _Even after this legislative innovation and, therefore, in relation to the current text, the prevailing jurisprudence of the Supreme Court (see lastly Cass. Civ. No. 20981/2021) has adhered to the interpretative option according to which the mere ownership of a real estate asset is not an obstacle to the recognition of the concession, which is instead due to the taxpayer who does not own a property that can be used as a dwelling (in this sense also Cass., Sec. 5, Order No. 19989 of 27/07/2018, according to which "on the subject of tax concessions for the first home, pursuant to art. 1, note II bis, of the tariff attached to the d.p.r. no. 131 of 1986, in the text (applicable "ratione temporis") amended by art. 3, paragraph 131, of Law no. 549 of 1995, the concept of "suitability" of the pre-owned house - an obstacle to the enjoyment of the benefit (and expressly provided for in the previous legislation) - must be considered intrinsic to the notion of "dwelling house" itself, to be understood as accommodation that is concretely suitable, both from an objective-material and legal point of view, to meet the housing needs of the interested party"; as well as Cass., Sec. 5, Judgment No. 2565 of 02/02/2018, which ruled that "on the subject of first home concessions... "the suitability" of the pre-owned dwelling must be assessed both from an objective point of view - actual uninhabitability - and from a subjective point of view - building inadequate in size or qualitative characteristics -, in the sense that the benefit also applies in the case of the availability of accommodation that is not concretely suitable, in terms of size and overall characteristics, to meet the housing needs of the interested party. " and in the same sense also Cass., Sec. 6-5, Order No. 5051 of 24/02/2021, Cass., Sec. 6-5, Order No. 18091 of 05/07/2019, and Cass., Sec. 6-5, Order No. 18092 of 05/07/2019)._ ### Extractive Summarisation through LLMs Although generative models are not intended to be used for extractive tasks, we attempted to use them for this purpose, too. To this end, we designed a prompt meant to push generative models toward literal extraction, and we tested it with both GPT-3 and GPT-4. In a preliminary phase, IT5 was also tried, but the results were unsatisfactory, since the model appeared to be unable to follow the instructions for creating the summaries. Instructions in our prompt were divided into three sections: * In the first section, the concept of an "extractive summary" goal was declared and defined. * In the second section, we described a method for obtaining a summary, i.e., breaking up the document into sentences, assigning a score to each sentence according to its semantic relevance, and identifying the most relevant sentences. * A final section indicated the format in which the output had to be produced, i.e., as a list of sentences and associated scores. We fed out the prompt to GPT 3.5 and 4. Here below we can see the prompt we used and its translation into English. 
```
Fra graffe e' presente il contenuto di una sentenza.
Voglio costruire un riassunto estrattivo, cioe' un testo composto da un sottoinsieme delle frasi presenti nel testo in input.
Per fare un riassunto estrattivo dovresti prima suddividere il testo in frasi, valutare l'importanza semantica, classificarle in base a questa importanza, selezionare le K frasi piu' rilevanti.
In questo caso, K e' uguale a 5.
Le frasi devono essere presentate nello stesso ordine in cui appaiono nel testo originale, secondo la seguente struttura.
[1] Frase 1 [Punteggio 1]
...
[K] Frase K [Punteggio K]

{...}

Between braces is the content of a decision.
I want to build an extractive summary, i.e., a text composed of a subset of the sentences present in the input text.
To make an extractive summary you should first split the text into sentences, assess their semantic importance, rank them according to this importance and select the K most relevant sentences.
In this case, K is equal to 5.
The sentences must be presented in the same order in which they appear in the original text, according to the following structure.
[1] Sentence 1 [Score 1]
...
[K] Sentence K [Score K]

{...}
```
The extractive summary obtained through this prompt was shorter than the outcome provided by the specific NLP tools. Following is the output produced by feeding GPT-4 with the grounds of the decision in Subsection 4.2. **Example 2**.: _The fundamental preliminary question concerns the evaluation of the suitability or otherwise of the property pre-purchased by the appellant, with respect to the housing needs of himself and his family. It appears from the same that the property in question consists of a small room and a bathroom (depicted in the attached photographic documentation), having a total cadastral area of 25.00 sqm and a usable living area of 17.90 sqm (as shown in the attached plan). The technician also ascertained that the property shows traces of rising damp, mold and fungi, concluding that it is not suitable for meeting the most basic housing needs for a family unit composed of a father and two school-aged children. Based on the aforementioned assessment, supported by plans and photographs, it must therefore be considered established that the property purchased by the respondent_1 on 19/3/2003, must be considered unsuitable for meeting the housing needs of the aforementioned and his family, so that the first judge correctly considered such purchase not preclusive with respect to the tax benefits claimed. Therefore, the appeal filed must be rejected and the contested judgment must be confirmed._ Note that in many cases, unlike the extractive tools, GPT constructed the extractive summaries by selecting and combining phrases or sentence fragments (see, for example, the summary produced by GPT-3 using the same decision in the appendix). In this way, shorter and more informative summaries could be obtained. ## 7 Abstractive Summarisation We relied on IT5 and on both versions of GPT to produce abstractive summaries of two kinds: (1) "flowing-text" summaries and (2) "issue-based" summaries. ### Flowing-Text Summaries Flowing-text abstractive summaries have no prescribed structure. We tested IT5, GPT3, and GPT4 to generate such summaries of each of the two sections of our decisions: "development of the proceedings" (_svolgimento del processo_) and "reasons of the decision" (_motivi della decisione_).
For this purpose, a very simple prompt, which we used for all of the three models, was enough:
```
Fai un sommario del seguente testo tra parentesi graffe

{...}

Make a summary of the following text within brackets

{...}
```
The final part of one of the flowing-text summaries obtained in this way for the running example reads as follows:

**Example 3**.: _[...] submitted is rejected and the contested judgment is confirmed, with the costs of this phase charged to the office._

### Issue-Based Summaries We exploited the capacities of GPT for providing issue-based summaries. The idea is to distinguish the issues addressed by the judges and to provide a separate summary analysis for each of them. This approach was motivated by the hypothesis that this style of summarization may make it easier for lawyers to identify and examine the aspects of the case that are relevant to them. This hypothesis was confirmed by the expert evaluations, which were most favourable for this kind of summary (see Section 8). GPT-3.5 and GPT-4 were instructed on what to look for and list as output by a suitably designed prompt. We also tested IT5, but this experiment was unsuccessful, since IT5 appeared to be unable to follow the instructions for creating issue-based summaries. The set of instructions we used for GPT is devoted to the description of the intended output in two directions: a formal requirement and a conceptual requirement. According to the formal requirement, the output consists of a sequence of question/answer pairs, where the questions are denoted as QD1,..., QDn, and the answers are denoted as PD1,..., PDn. The prompt also makes it possible to switch between a more human-readable list and a JSON structure.
According to the conceptual requirement, the answers consist in the specification of principles, where a principle is defined as the application or interpretation of an explicit norm, regulation, or previous decision. After introducing _principles_, the prompt continues by stating that a _question_ is something that is answered by means of a _principle_. To make the model focus on essential but independent issues, we added a few prescriptions, namely, two _principles_ must be very different from each other; the number of _principles_ in a text is usually 1 or 2 with more _principles_ appearing only in lengthy texts. We also specified that a _principles_ are to be reported explicitly, and that _questions_ are to be stated in general terms, i.e., without reference to the specific case at hand. The adopted prompt is shown below, in the original Italian version and in an English translation (the variant requesting the output in a JSON structure is omitted): 1 Elenca in una lista nel formato 2 3QD1: testo 4PD1: testo 5 6QD2: testo 7PD2: testo 8 9... 10 11QDn: testo 12PDn: testo 13 14i principi di diritto (PD) ele questioni diritto (QD). 15 16Le QD sono le domande a cui i PD rispondono. 17Le QD non contengono nessun riferimento al caso di specie e agli attori della vicenda. 18 19I PD sono le interpretazioni di una o piu norme contenute nel testo tra parentesi graffe. 20Per ogni PD specifica i riferimenti alle norme. 21Il numero di PD in un testo e di solito 1 o 2. 22I PD non contengono nessun riferimento al caso di specie e agli attori della vicenda. 23Due PD devono essere molto diversi tra di loro. 24In testi lunghi il numero di PD puo essere maggiore di 2. 25 26{...} In the following, you can find one of the principles extracted using GPT4: **Example 4**.: _QD2: What is the current interpretation of the legislation on tax concessions on a first home in relation to the suitability of a pre-owned dwelling?_ _PD2: The suitability of the pre-owned dwelling must be assessed both from an objective point of view (actual uninhabitability) and from a subjective one (inadequate building in terms of size or qualitative characteristics), meaning that the benefit also applies in the case of the availability of a dwelling that is not concretely suitable, in terms of size and overall characteristics, to meet the housing needs of the interested party (Cass., sect. 5, order n. 19989 of 27/07/2018, Cass., sect. 5, judgment n. 2565 of 02/02/2018)._ We also experimented with expanding the prompt to include instructions for identifying the original text addressing the summarised issue, and for extracting keywords. Line 14 of the original prompt was substituted with 1 i principi di diritto (PD), le questioni diritto (QD), le parole chiave (KW) e le basi testuali (BT). While Line 25 was expanded into 1 E BT sono le porzioni del testo fra parentesi graffe piu rilevenati per l estrazione di un PD e di una QD. 2 Le BT non devono contenere variazioni rispetto al testo fra parentesi graffe. 3 Per ogni QD e PD, restituisci massimo tre BT. 4 5 Le KW identificano i temi fondamental del testo fra parentesi graffe, cioe i concetti giuridici impiegati, gli oggetti disciplini e le materie trattate. A keyword (KW) refers to relevant legal concepts and subjects contained in the text. The keyword-related portion of the prompt follows the extraction of legal principles, as they should provide contextual information to narrow down the generation to the most relevant keyword with substantive legal meaning. 
The prompt also includes instructions for connecting the extracted principles to the relevant part of the original decisions, from which they were extracted (BT). Such a reference allows us to easily verify the correspondence between the extracted principles and the original text. The model is requested to extract three fragments at most, without the introduction of any variation relative to the original text. Following are the text and keywords associated with the principles in the example above: **Example 5**.: _BT1: [on the subject of tax concessions for the first home, pursuant to art. 1, note ii bis, of the tariff attached to the d.p.r. n. 131 of 1986, in the text (applicable "ratione temporis") amended by art. 3, paragraph 131, of the law n. 549 of 1995] **BT2: [the concept of "suitability" of the pre-owned home - an obstacle to the enjoyment of the benefit (and expressly provided for in the previous legislation) - must be considered intrinsic to the very notion of "house of residence", to be understood as a dwelling concretely suitable, both from an objective-material and legal point of view, to meet the housing needs of the interested party] **BT3**: [the concessions under examination respond to the reasonable rationale of favoring the purchase of a dwelling in the place of residence or work for the benefit of those who do not have possession of another house of residence objectively suitable to meet their needs]_ _**KW**: [tax concessions, first home, housing suitability, housing needs, uninhabitability, inadequacy, property ownership, legislation, jurisprudence]_ ## 8 Evaluation The standard automated tools for accessing the quality of summaries, such as ROUGE, do not apply satisfactorily to abstractive summarisation. Thus we submitted questionnaires to tax law experts, asking them to evaluate the process. The questionnaires were previously submitted to the ethical committee of the PRODIGIT project, which reviewed them, proposed refinements and clarifications on the questions and the methodology, and finally approved the revised questionnaires. Based on the indications of the ethical committee, the evaluation only concerned the comparison between different automated systems, to the exclusion of summaries written by humans. This limitation responds to the following considerations: on the one hand, having human-prepared summaries for all tax law decisions is not a viable option, this due to the size of that case base; on the other hand, on the proposed approach, automated summarisation can be combined with human intervention at the validation and revision stage. We performed two different evaluations: The first one was devoted to choosing a subset of models that were performing better, this in order to reduce the number of models to be considered. The second one was focused on the resulting small set of models, aiming at collecting from domain experts how these models perform in the context of tax law. The questionnaires encompassed various evaluation criteria, including: * **Satisfaction**: Degree of satisfaction with the overall quality of the summary. * **Correctness**: Accuracy in capturing the source documents' key points, legal nuances, and essential information. * **Form**: Coherence, readability, and adherence to legal writing conventions. * **Completeness**: Coverage of important details and comprehensive representation of the source content. 
The domain experts reviewed the summaries, rated them under each criterion, and provided corresponding scores in the \([1,5]\) range. They also had the option of expressing their comments. They were encouraged to provide their insights, suggestions, and concerns regarding the quality of the summaries. The evaluations were blind, in the sense that the reviewers were not told by what models the summaries were generated. By incorporating the feedback and ratings from domain experts through the questionnaires, we obtained a holistic evaluation of the generated summaries. This evaluation process enabled us to gauge the strengths and weaknesses of each model, identify areas for improvement, and make informed decisions about the quality and effectiveness of the generated summaries in accordance with the domain-specific requirements. In the next sections, we report and describe the evaluation of the summaries. We report the average scores for each metric considered in Figures 1, 2, and 3. Each graph corresponds to a model and shows the average value for each metric. ### First Evaluation As observed, the first evaluation had the purpose of selecting the most promising approaches, which would then be subject to the second evaluation. It was carried out by a limited number (12) of experts in tax law. #### Extractive Summaries Figure 1 and Table 1 show that the special-purpose NLP tools perform similarly. This is because the extractive nature of the task does not allow for outputs that differ too much in form. The correctness of all the models is quite high, as is to be expected, since they consisted of phrases literally extracted from the original text. Completeness is weak, as the information from other sentences was completely omitted. As a consequence, the experts' satisfaction was on average low. Similar - though slightly better - outcomes can be seen for the extractive summaries produced with generative tools (GPT3 and GPT4). These models as well appear to have omitted much relevant information, especially when dealing with long decisions. On the basis of this evaluation, we decided to discard all extractive approaches and limit the second evaluation to the abstractive methods. However, as shown in Section 7.2, we incorporated an extractive aspect in the issue-based summary so as to enable the user to link the abstracted principles to their textual basis. #### Abstractive Summaries In Figure 2 and Table 2, we report the average scores for abstractive summaries. In this case, the IT5 models, which are the baseline, were evaluated as quite poor, while GPT3 and GPT4 performed very well in all the dimensions. On the basis of this evaluation, we decided, for the second stage, to keep the GPT4 summaries, in both the flowing-text and issue-based versions. We also kept IT5 as a baseline. We chose to omit GPT3, given its similarity to its successor, namely, GPT4.
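To make the reporting below concrete, the following sketch shows how per-criterion averages and standard deviations of the kind shown in Tables 1-3 can be computed from the collected ratings. The tabular layout used here (one row per individual judgment, with model, criterion and score columns) is an assumption made for the example, not necessarily the actual format of our questionnaire data.

```python
import pandas as pd

# Hypothetical layout: one row per individual expert judgment.
ratings = pd.DataFrame([
    {"model": "GPT4 issue-based", "criterion": "Completeness", "score": 4},
    {"model": "GPT4 issue-based", "criterion": "Correctness", "score": 5},
    {"model": "IT5", "criterion": "Completeness", "score": 2},
    # ... one entry per expert, decision, model and criterion
])

# Mean and standard deviation per model and criterion, rounded as in the tables below.
report = ratings.groupby(["model", "criterion"])["score"].agg(["mean", "std"]).round(2)
print(report)
```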
\begin{table} \begin{tabular}{r r r r r r r r} & **LexRank** & **LSA** & **TextRank** & **Luhn** & **NLTK** & **GPT4** & **GPT3** \\ \hline Form & 2.85 (1.23) & 3.00 (1.11) & 3.09 (1.00) & 2.73 (0.96) & 3.30 (0.90) & 3.10 (0.54) & 2.80 (0.75) \\ Completeness & 3.15 (1.51) & 2.85 (1.29) & 2.82 (1.27) & 2.45 (0.99) & 2.40 (1.43) & 3.00 (0.89) & 3.00 (1.10) \\ Correctness & 3.69 (1.26) & 3.54 (1.39) & 3.91 (1.00) & 3.73 (1.14) & 3.50 (1.20) & 3.60 (1.11) & 3.80 (1.17) \\ Satisfaction & 2.69 (1.32) & 2.77 (1.19) & 2.45 (1.08) & 2.36 (0.88) & 2.50 (1.36) & 2.80 (0.87) & 2.70 (0.64) \\ \hline \end{tabular} \end{table} Table 1: Evaluation of domain experts in the first round of questionnaires on extractive summaries. Standard deviation in brackets.

Figure 1: Evaluation score for extractive summaries

\begin{table} \begin{tabular}{r c c c c c c} & **IT5Small** & **IT5Large** & **GPT3** & **GPT4** & **GPT3 item** & **GPT4 item** \\ \hline Form & 1.75 (0.97) & 1.75 (1.30) & 4.12 (0.60) & 4.25 (0.66) & 3.12 (1.36) & 3.62 (1.32) \\ Completeness & 1.62 (1.32) & 1.50 (1.00) & 4.00 (0.71) & 4.00 (0.71) & 2.75 (1.20) & 3.75 (1.39) \\ Correctness & 1.62 (0.99) & 1.88 (1.54) & 4.25 (0.66) & 4.25 (0.66) & 3.38 (1.32) & 3.62 (1.41) \\ Satisfaction & 1.62 (1.32) & 1.50 (1.00) & 4.12 (0.33) & 4.00 (0.71) & 2.62 (1.11) & 3.38 (1.32) \\ \hline \end{tabular} \end{table} Table 2: Evaluation of domain experts in the first round of questionnaires on abstractive summaries. Standard deviation in brackets.

Figure 2: Evaluation score for abstractive summaries (first round)

### Second Evaluation: Abstractive Summaries The second evaluation was focused on IT5Small, flowing-text GPT4, and issue-based GPT4. This evaluation involved around 80 experts, drawn from a pool of judges, lawyers, and others. Each evaluator assessed, for 5 decisions, the summaries produced by the 3 models. We received on average 50 answers for each model. Figure 3 reports the average values and Table 3 reports average values and standard deviation. From Figure 3 it appears that GPT4 scored better than IT5. It also appears that the issue-based summary outperformed the flowing-text summary in terms of completeness, correctness, and general satisfaction. It can also be seen from Table 3 that there is relatively high dispersion in the assessments (according to the indicated standard deviation). We plan to study this aspect to understand whether it relates to the nature of cases, to the language used in the judicial opinions, or to idiosyncrasies of the evaluators.

Figure 3: Evaluation score for abstractive summaries (second round)

\begin{table} \begin{tabular}{r c c c} & **IT5** & **GPT4** & **GPT4 items** \\ \hline Form & 2.11 (1.10) & 3.69 (1.06) & 3.69 (1.14) \\ Completeness & 2.15 (1.06) & 3.20 (1.25) & 3.75 (1.02) \\ Correctness & 2.51 (1.17) & 3.54 (1.15) & 3.75 (1.05) \\ Satisfaction & 2.03 (1.04) & 3.30 (1.32) & 3.54 (1.12) \\ \hline \end{tabular} \end{table} Table 3: Evaluation of domain experts in the second round of questionnaires. Standard deviation in brackets.

## 9 Related Work Since the focus, and the most significant results, of our work concern applying LLMs to the summarization task, the related work most relevant to our project concerns, on the one hand, the use of LLMs in the legal domain and, on the other, automated summarisation. ### Large Language Models in the Law Large language models (LLMs), such as BERT, GPT, or XLM-RoBERTa, have already demonstrated considerable potential in various legal tasks. Notable areas for the deployment of LLMs have been judgement prediction and statutory reasoning. The study by [10] introduces legal prompt engineering (LPE) to enhance LLM performance in tasks involving predicting legal judgements. This method has proven effective across three multilingual datasets, highlighting the model's potential in handling the complexity of legal language and reasoning across multiple sources of information. Another study by [11] investigates GPT-3's capacity for statutory reasoning, revealing that dynamic few-shot prompting enables the model to achieve high accuracy and confidence in this task. Advancements in prompting techniques have played a crucial role in the success of LLMs in legal reasoning tasks. The paper by [12] introduces Chain-of-Thought (CoT) prompts, which guide LLMs in generating coherent and relevant sentences that follow a logical structure, mimicking a lawyer's analytical approach. The study demonstrates that CoT prompts outperform baseline prompts in the COLIEE entailment task based on Japanese Civil Code articles. LLMs have also been employed to understand fiduciary obligations, as explored in [13]. This study employs natural language prompts derived from US court opinions, illustrating that LLMs can capture the spirit of a directive, thus facilitating more effective communication between AI agents and humans using legal standards. The potential of LLMs in legal education has been examined in studies such as [14] and [15]. In [14], the authors task ChatGPT with writing law school exams without human assistance, revealing potential concerns and insights about LLM capabilities in legal assessment. On the other hand, the paper by [15] addresses the ethical use of AI language models like ChatGPT in law school assessments, proposing ways to teach students appropriate and ethical AI usage. The role of LLMs in supporting law professors and providing legal advice has also been investigated. The study in [16] suggests that LLMs can assist law professors in administrative tasks and streamline scholarly activities. Furthermore, LLMs have been explored as quasi-expert legal advisers in [17], showcasing the possibility of using AI models to support individuals seeking affordable and prompt legal advice. The potential impact of LLMs on the legal profession has been a subject of debate, as discussed in [18]. This paper evaluates the extent to which ChatGPT can serve as a replacement for litigation lawyers by examining its drafting and research capabilities. Finally, the study by [19] proposes a legal informatics approach to align AI with human goals and societal values. By embedding legal knowledge and reasoning in AI, the paper contributes to the research agenda of integrating AI and law more effectively. In conclusion, LLMs have shown promising results in various legal tasks, with the advancement of prompting techniques playing a crucial role in their success. However, challenges remain in ensuring the ethical use of LLMs and addressing their potential impact on the legal profession. ### Legal Text Summarization The summarisation of legal texts has been a forefront task in legal informatics for some years. In 2004 a seminal contribution [20] provided the extractive summarisation of a legal dataset of 188 judgements from the House of Lords Judgement (HOLJ) website from 2001-2003.
However, only recently have researchers started to produce promising results, thanks to state-of-the-art NLP, machine learning techniques, and, lately, LLMs. Existing research on legal summarization mostly applies extractive methods. A wide range of approaches to this effect exist, from classical algorithms [5, 6, 7] to domain-specific methods. Among the latter, there are works based on nature-inspired methods, i.e., algorithms emulating natural processes [21], using optimization approaches that adapt to challenging circumstances [22]; graph-based methods, where sentences are selected based on the construction and search over similarity graphs [7; 23; 24]; and citation-based methods relying upon the set of citing sentences within documents to build summaries [25]. Finally, there are also machine-learning-based models in which classifiers predict which sentences to include in the summary [26]. Kanapala et al. [22] focused on a domain-specific automatic summarization system based on a nature-inspired method. The authors framed legal document summarization as a binary optimization problem, utilizing statistical features such as sentence length, position, similarity, term frequency-inverse sentence frequency, and keywords. The authors used the gravitational search algorithm (GSA) as the optimization technique for generating summaries. GSA adjusted the weights assigned to sentence features, capturing the importance and relevance of sentences within legal documents. To evaluate their method, the authors compared it with other approaches, including genetic algorithms, particle swarm optimization, TextRank, latent semantic analysis (LSA), MEAD, SumBasic, and MS-Word summarizer. They utilized the FIRE-2014 dataset, which consisted of 1,000 Supreme Court judgements from 1950 to 1989. The proposed algorithm outperformed the other methods based on ROUGE evaluation metrics. Merchant and Pande [27] presented an automated text summarization system designed to help lawyers and citizens conduct comprehensive research for their legal cases. The researchers used LSA, a natural language processing technique, to capture concepts within individual documents. Two approaches were used - a single-document untrained approach and a multi-document trained approach - depending on the type of case (criminal or civil). The data used in the study was collected from the Indian official government websites and included Supreme Court, High Court, and District Court cases. The evaluation of the model resulted in an average ROGUE-1 score of 0.58. The system received the approval of professional lawyers. Licari et al. [28] introduce a method for automatically extracting legal holdings from Italian cases using Italian-LEGAL-BERT and present a benchmark dataset called ITA-CaseHold for Italian legal summarization. They introduced HM-BERT, an extractive summarization tool based on Italian-LEGAL-BERT. HM-BERT selects relevant sentences using a similarity function based on unigram and bigram overlap. The model achieved prominent results in terms of ROUGE scores, and the extracted holdings were validated by experts. The paper acknowledges limitations such as potential redundancy in sentence selection and the challenge of explaining HM-BERT's decisions. Recently, in connection with the availability of transformer models, some attempts at abstractive summarisation have been developed. Schraagen et al. 
[29] applied two abstractive models to a Dutch legal domain dataset and evaluated their performance using ROUGE scores and evaluation by legal experts. The study presents a hybrid model based on reinforcement learning and a transformer-based BART model trained on a large dataset of Dutch court judgements. The results show promising transferability of the models across domains and languages, with ROUGE scores comparable to state-of-the-art studies on English news articles. However, human evaluation shows that handwritten summaries are still perceived as more relevant and readable. Furthermore, summarisers struggle to include all necessary elements in the summary, leading to the omission of important details. The authors suggest that the abstractive summarisation process can be improved by incorporating domain-specific constraints, such as focusing on citations of legal sources and structuring summaries into facts, arguments, and decisions. Prabhakar et al. [30] presented a method using T5 to generate abstractive summaries of Indian legal judgements. The system uses a dataset of 350 judgements of the Honourable Supreme Court of India, compiled with the assistance of a lawyer. The generated summaries are evaluated using the ROUGE score, with a Rouge-L precision of 0.54955. Feijo et al. [31] addressed the problem of "hallucination" in abstractive text summarisation, focusing specifically on legal texts. They proposed a novel method, called _LegalSumm_, which aimed to improve the fidelity and accuracy of the generated summaries. To achieve this, the authors created multiple "views" of the source text and trained summarisation models to generate independent versions of the summaries. They also introduced an entailment module to evaluate the fidelity of candidate summaries to the source text. The authors demonstrated the effectiveness of their approach by showing significant improvements in ROUGE scores across all evaluation metrics. As well as contributing to the field of legal summarisation, the study provides a basis for further advances in the production of reliable and accurate summaries. Koniaris et al. [32] addressed the challenge of automatic summarization of Greek legal documents. To overcome the lack of suitable datasets in the Greek language, the authors developed a metadata-rich dataset of selected judgements from the Supreme Civil and Criminal Court of Greece, along with their reference summaries and category tags. They adopted state-of-the-art methods for abstractive (BERT) and extractive (LexRank) summarization and conducted a comprehensive evaluation using both human and automatic metrics, such as ROUGE. The results showed that extractive methods had average performance, while abstractive methods generated moderately fluent and coherent text but received low scores in relevance and consistency metrics. They identified the need for better metrics to evaluate legal document summaries' coherence, relevance, and consistency. The authors suggested future research directions involving better datasets and improved evaluation metrics, as well as exploring advanced techniques such as deep learning with various neural network architectures to enhance the quality of generated summaries. Huang et al. [33] proposed a two-stage legal judgement summarization model to address the challenges posed by lengthy legal judgements and their technical terms. They leveraged raw legal judgements with varying granularities as input information and treated them as sequences of sentences. 
Key sentence sets were selected from the full texts to serve as the input corpus for the summary generation. Additionally, the authors incorporated an attention mechanism by extracting keywords related to technical terms and specific topics in the legal texts, which were integrated into the summary-generation model. Experimental evaluations on the CAIL2020 and LCRD datasets demonstrated that their model, based on recurrent neural networks and attention mechanisms, outperformed baseline models (Lead-3, TextRank, and others), achieving an overall improvement of 0.19-0.41 in ROUGE scores. The results indicated that their method effectively captured essential and relevant information from lengthy legal texts and generated improved legal judgement summaries. ## 10 Conclusion We have presented some preliminary results obtained in the early development of the PRODIGIT project, a tax law initiative aimed at supporting judges, lawyers, and other legal practitioners. We have focused on the summarisation task, one of the main goals of PRODIGIT, which aims to support the summarisation of all Italian tax law decisions. We first introduced tax law adjudication in the Italian legal system, discussed the significance of summarisation for judges and practitioners, and described the database we used for our experiments. We then introduced the tools and approaches we used in our experiments, which range from single special-purpose NLP tools, based on classical statistical approaches, to the most recent LLMs. We described our experiments with regard to all such tools, providing examples of the obtained outputs and discussing the limitations and potentialities of each approach. In particular, we provided an in-depth account of our use of generative LLMs, which yielded clearly superior results. In this regard, we listed the prompts used to obtain such results. The most interesting approach we developed is that of "issue-based summarisation", i.e., outlining the legal issues examined by the judges and the corresponding legal criteria (principles). Issue-based summarisation was complemented by the extraction of issue-relative keywords and textual fragments. For this purpose an appropriate prompt had to be devised to direct generative tools toward providing legally meaningful information. We think that this is the most significant development our work provides in relation to the literature cited in Section 9. We have submitted the results of our experiments to an extensive evaluation by legal experts, from which emerged a clear preference for the outcomes delivered by generative tools. In particular, the issue-based summarisation delivered by GPT4 appeared to be the preferred approach, considering its linguistic quality, completeness, and correctness, as well as general satisfaction. This comparative assessment is also a significant innovative contribution to the theory and practice of legal summarisation, which cannot avoid facing the challenge of LLMs. We think that some interesting lessons emerge from our experience. The first takeaway is that the most advanced LLMs can provide very good results in automated summarisation, clearly outperforming earlier NLP tools. Different kinds of summaries can be obtained by carefully designing the corresponding prompt. We have observed that high-quality outcomes can be obtained even without fine-tuning large LLMs. The extent to which fine-tuning can provide improvements in performance is an important issue for further research. 
The second takeaway is that extensive human evaluation is needed to assess the outcomes of summarisation in the legal domain. As noted, we submitted the results of our experiment to expert evaluation, based on questionnaires reviewed by an ethical committee. This evaluation provided us with a clear comparative assessment of the summaries, on which the implementation of summarisation within PRODIGIT will be based. We do not think that, at the state of the art, any automated methods can be deployed to test the quality of summarisation, particularly in the abstractive case. In addition to the two-step formal evaluation described in Section 8, in multiple rounds we submitted our preliminary results to the judgement of expert tax lawyers involved in the project. This enabled us to refine our methods, and in particular to refine our prompts, in order to approximate the desired outcomes before the formal evaluation. A third takeaway is that summarisation provides outcomes of varying quality for different input texts. In fact, Italian tax law decisions vary greatly in length, in the language used, and in the clarity of reasoning. In some cases, making sense of their content may be a challenge for human readers; thus it is not surprising that automated summarisers give different results. It is therefore important to preserve adequate human supervision over the outcomes of automated summarization. Under the PRODIGIT project, an application is being developed to enable expert lawyers to review and possibly revise the automatically generated summaries. Their feedback will support further improvement of automated summarisation. Based on the successful experiments described in this paper, the PRODIGIT project will take steps toward making the results of summarisation publicly available. The idea is to include the automatically generated summaries in a publicly accessible database of all Italian tax law decisions: we will first provide the summaries of decisions in the registration-tax area, and if the public's response is positive, the exercise will be extended to all domains of tax law. We are also experimenting with using summaries - in the issue-based versions - for the purpose of indexing and searching the case law. The extracted information - legal issues and keywords - is being used to construct a conceptual graph through which to access the case-law database.
2302.09169
Quantum Algorithm for Multiplicative Linear Logic
This paper describes a quantum algorithm for proof search in sequent calculus of a subset of Linear Logic using the Grover Search Algorithm. We briefly overview the Grover Search Algorithm and Linear Logic, show the detailed steps of the algorithm and then present the results obtained on quantum simulators.
Lorenzo Saraiva, Edward Hermann Haeusler, Vaston Costa
2023-02-17T22:47:28Z
http://arxiv.org/abs/2302.09169v1
# Quantum Algorithm for Multiplicative Linear Logic ###### Abstract This paper describes a quantum algorithm for proof search in sequent calculus of a subset of Linear Logic using the Grover Search Algorithm. We briefly overview the Grover Search Algorithm and Linear Logic, show the detailed steps of the algorithm and then present the results obtained on quantum simulators. ## 1 Introduction Quantum computing has provided us with algorithms that have a better time complexity than any classical counterpart, one of those being the Grover's Search Algorithm(GSA)[1] for searching an element in an unordered database. The GSA is used in several contexts, including SAT, kmeans, genetic algorithms and pixel identification. In this work, we use the GSA to help in searching proofs of a subset of multiplicative linear logic to improve complexity compared to classic algorithms. We show the construction of the quantum circuit from the linear logic sequent to the end result and present our conclusions. ## 2 Background The GSA is one of the most famous quantum algorithms, and its goal is to search for an element in an unordered database. Assuming a database with \(n\) qubits that contains \(N=2^{n}\) elements in the superposition, it has time complexity of \(\sqrt{N}\), which outperforms any classical algorithm. The general steps of the Grover algorithm main iteration on \(n\) qubits are as follows: * In this step an operator \(A\) is applied to the database qubits to bring them from the initial state \(\left|0\right\rangle^{\otimes n}\) to the desired state \(\left|\Psi\right\rangle\). This state is usually the equal superposition state, and \(A=H^{\otimes n}\). * In this step an oracle \(O\) is applied to the prepared state \(\left|\Psi\right\rangle\). The oracle will flip the phase of the searched value \(x_{t}\) so that: \[\begin{array}{l}O\left|x_{s}\neq x_{t}\right\rangle=\left|x_{s}\right\rangle \\ O\left|x_{s}=x_{t}\right\rangle=\left|-x_{s}\right\rangle\end{array}\] * In this step an operator \(D\) is used to amplify the amplitude of the state marked by the oracle. For such, an inversion about the mean (IAM) is performed. Generalizing, the Grover iteration can be described as: \[G=AOA^{T}D\] The Grover iteration has to be repeated \(\lfloor\pi\sqrt{N}/4\rfloor\) times in order to maximize the probability of measuring the desired state. Our work follows Alsing's entangled database search [1] using the GSA. The main feature of Alsing's algorithm is that, instead of using \(A=H^{\otimes n}\) to prepare the equal superposition state, it chooses \(A\) in order to encode an arbitrary list of pairs \(\{s,t\}\). Thus, the algorithm's input is an entangled database with two sides, each side having one part of the pair. Every entry on the left side is entangled to an entry on the right side. In the GSA, it is necessary to know the searched value to construct the oracle. On Alsing's, on the other hand, one can construct the oracle based on a known entry \(s_{1}\) on the left side, apply the GSA, and then measure the right side, recovering the unknown value \(t_{1}\) entangled with \(s_{1}\). ## 3 Problem Description Linear logic is an extension of classical and intuitionistic logic that emphasizes the role of formulas as resources. For that reason, it does not allow the rules of contraction and weakening to apply to all formulas but only those formulas marked with special marks[1]. 
Due to the (formal) similarity between the logical rules that deal with these marks and the modalities in systems like S4, these marks might be considered as modalities. The absence of contraction and weakening allows Linear Logic to have two different versions of conjunction and disjunction: an additive and a multiplicative one. The classical \(\land\) (and), for example, is divided between the additive version, \(\&\) (with), and the multiplicative version, \(\otimes\) (tensor). Linear logic also has a sequent calculus proof system. In this context, the algorithm for finding a cut-free proof in the multiplicative-only version of Linear Logic has a worst-case time complexity of \(2^{k}\), where \(k\) is the number of atomic formulas. The subset of intuitionistic linear logic that deals only with the multiplicative connectives is called (intuitionistic) multiplicative linear logic (IMLL). In this work, we will be using a subset of IMLL, IMLL-\(\otimes\), using only the tensor connective.

Figure 1: Oracle circuit for k=2 and searching for the value associated with 0

Consider a linear logic sequent with \(k=4\) atomic clauses \[A\otimes(B\otimes(C\otimes D))\vdash D\otimes(B\otimes(A\otimes C)).\] We want to find the successive splits that verify that this is a valid sequent. We have two rules that can be applied, \(\otimes\)-Left and \(\otimes\)-Right. In the classical algorithm, we apply the successive splits until we reach a valid axiom. * Apply one of the possible rules until only axioms are left. * If the axioms are all valid, the sequent is valid; if not, restart. Since there are \(2^{k}\) possible splits, the algorithm has time complexity of \(O(2^{k})\), where \(k\) is the number of atomic clauses.

```
Require: k copies of the database of 2n qubits and k pairs, where n = ceil(log k)
numIterations <- floor(pi * sqrt(N) / 4)
for i < N do
  for j < numIterations do
    buildOracle(n, target)
    appendDiffuser(A)
    measure()
  end for
end for
```
**Algorithm 1** General Description

## 4 Solution Steps Our quantum algorithm's input is an entangled database with two sides, each holding one part of a pair. Every entry on the left side is entangled to an entry on the right side, and they are both unordered. We will call the left side of each pair the _search_ part, or \(s\), and the right side the _target_, or \(t\). To be able to perform the algorithm in \(\sqrt{k}\) steps, it is necessary to have \(k\) copies of the paired database, where \(k\) is the number of unique atomic clauses. The complexity of building this database is not taken into account. Our algorithm also shows an explicit dynamic construction for the Grover oracle depending on the searched value. ### Preparing the entangled database Before starting the algorithm, one must construct an entangled database that accurately represents the sequent. In order to do so, we will need \(k\times 2n\) qubits. Then, the pairs \(\ket{a}\ket{b}\) will be encoded as \(\ket{Na+b}\), where \(a\) and \(b\) are the positions of the clause in each side of the sequent. Assume we have \(k=8\) and \(n=3\), where \(n\) is the number of qubits necessary to represent a solution space of \(k\) values. Then our entangled database with \(8\) solutions will have \(2\) groups of \(n\) qubits, each representing \(8=k=2^{n}\) values. We will treat both groups of \(3\) qubits as a single array and prepare the resulting encoding in the superposition, using \(k\) of the \(k^{2}\) total possibilities that can be stored in \(2n\) qubits.
Then, we will need \(k\) copies of the register, one for each clause. The construction of the database is not explicitly shown but its complexity is \(O(n)\) or \(O(\log k)\)[Alsing and McDonald 2011], taking \(O(k\log k)\) in total. This process is not strictly part of the algorithm, which only receives a pre-constructed entangled database. 1 Footnote 1: In our case, the entangled state, it is necessary to store the gate sequence \(A\) used to encode a copy of the entangled database state so it can be used later in the IAM step of the GSA. ### \(\otimes\)-Left The first step of the algorithm itself is to apply the \(\otimes\)-Left rule until it cannot be applied anymore, so we have a sequent of the form: \[A^{1},A^{2},...,A^{N}\vdash B\otimes\Delta\] Where \(\Delta=A\otimes(\Delta)\) or \(\Delta=A\) Now, we can use our entangled database to find out the correct split for the leftmost atomic clauses of the right side. ### Grover Search Now that we have the entangled database of \(k\) copies of \(2n\) qubits, we can perform the GSA. We start by picking the leftmost atomic clause of the right side. The first step is to construct the oracle dynamically for the chosen element on the left side. The construction of this Oracle takes into account the binary representation of the chosen clause position. We apply the necessary X-Gates to leave all the search space qubits in \(|1\rangle\) and apply a multi-controlled Toffoli Gate with a prepared qubit as a target to perform the phase kickback, as can be seen in 1. This oracle is applied only on the search, that is, the first \(n\) qubits of the first copy of the \(2n\) qubits. Then, the Grover operator for amplitude amplification is applied to all \(2n\) qubits \(\sqrt{k}\) times, and the measurement to the right side of the \(2n\) qubits. An example of this circuit for \(n=2\) qubits is shown in 1. It is important to note that in the IAM step of the GSA, it is necessary to apply the \(A\) operator, which takes \(\log k\) steps, making the overall complexity of the GSA step \(O(\sqrt{k}\log k)\). This process finds the corresponding entry of a pair, but we need to find the \(k\) corresponding pairs. The issue is that measuring the qubits destroys the prepared superposition corresponding to the pairs. Therefore, we need the \(k\) copies of the prepared \(2n\) qubit entangled database - so we perform the GSA for each pair on a different copy of the database, in \(k^{1.5}\) steps in total - \(k\) times \(\sqrt{k}\) steps. ## 5 Example We want to find out if \[A\otimes(B\otimes(C\otimes D))\vdash D\otimes(B\otimes(A\otimes C)))\] is a valid sequent in linear logic. We have \(k=4\) and consequently \(n=\log_{2}^{4}=2\), thus \(2\) qubits are used for each side, and \(2n\) for each copy in total. We will construct of the entangled representation of this sequent. \begin{tabular}{|c|c|c|} \hline A & 0 & 2 \\ B & 1 & 1 \\ C & 2 & 3 \\ D & 3 & 0 \\ \hline \end{tabular} Using the formula \(|ka+b\rangle\), with \(k=4\), we have \[\begin{array}{|c|c|}\hline\text{A}&\text{k0 + 2 = 2}\\ \text{B}&\text{k1 + 1 = 5}\\ \text{C}&\text{k2 + 3 = 11}\\ \text{D}&\text{k3 + 0 = 12}\\ \hline\end{array}\] Thus we have the state of the quantum database as \[\sqrt{\tfrac{1}{4}}(|2\rangle+|5\rangle+|11\rangle+|12\rangle)\] or \[\sqrt{\tfrac{1}{4}}(|0010\rangle+|0101\rangle+|1011\rangle+|1100\rangle)\] We perform the Grover search on any of the sides and are able to recover the value on the other side. But first, let's go back to the sequent. 
The rules for \(\otimes\) are shown in 2. Because \(\otimes\) is a binary operator, we cannot search for values inside the parentheses and apply the rules directly, so we treat the sequent as

\[A\otimes\Delta^{1}\vdash D\otimes\Delta^{2}.\]

For that reason, we first apply \(\otimes\)-Left successively, until every atomic clause on the left stands alone:

\[\begin{array}{l}A,B,C,D\vdash D\otimes\Delta^{2}\\ \hline A,B,C\otimes D\vdash D\otimes\Delta^{2}\\ \hline A,B\otimes\Delta^{3}\vdash D\otimes\Delta^{2}\\ \hline A\otimes\Delta^{1}\vdash D\otimes\Delta^{2}\\ \hline\end{array}\otimes\text{-Left}\]

Now, we run the quantum algorithm for every entry on the right side and apply the results to the sequent, following the order of appearance. \(D\) is encoded as the pair \((3,0)\), but the algorithm only knows the \(0\), the position of the value we are querying, which appears in the term \(\sqrt{\tfrac{1}{4}}\,|1100\rangle\). We apply the Oracle to the last two qubits, which represent the \(0\) part of the pair, using the circuit shown in Figure 1, and then measure the first two qubits, obtaining the result \(3\) with high probability. This process is done for every clause of the right side, so we then apply the splits following the indexes \((3,1,0,2)\), which yields the derivation

\[\dfrac{D\vdash D\qquad\dfrac{B\vdash B\qquad\dfrac{A\vdash A\qquad C\vdash C}{A,C\vdash A\otimes C}\;\otimes\text{-Right}}{A,B,C\vdash B\otimes(A\otimes C)}\;\otimes\text{-Right}}{A,B,C,D\vdash D\otimes(B\otimes(A\otimes C))}\;\otimes\text{-Right}\]

which, stacked on top of the \(\otimes\)-Left steps above, completes the proof.

## 6 Results

From a given entangled database state, our algorithm has time complexity of \(O(k^{1.5}\log k)\) and a space complexity of \(\log k\) qubits, where \(k\) is the number of atomic clauses in the sequent. Even when taking into account the construction of the database, which takes \(O(k\log k)\) steps, we are still left with a time complexity of \(O(k^{1.5}\log k+k\log k)=O(k^{1.5}\log k)\), which outperforms the classical algorithm. Additionally, it is important to note that when \(k>4\) we need a controlled-NOT gate with more than two control qubits. For that, we concatenate Toffoli gates, introducing \(n-2\) additional ancillary qubits [14]. We ran our circuit on several simulators provided by IBM, such as the _qasm_simulator_ and _simulator_mps_. Each execution consisted of 1,000 runs of the circuit. We tested the implementation of the algorithm up to 64 atomic clauses with high precision, using \(2\times\log k=12\) qubits as the search space.

## 7 Future Work

While this solution uses the GSA to gain an advantage when searching for the matching pairs, it has some weaknesses. The first is the fact that the quantum database has to be prepared anew for each execution, since the quantum state is destroyed in the process. Another issue is that the algorithm is not fully quantum: while the index matching is found with the GSA, the splits are applied classically, taking into account the position of each atomic clause, and one could argue that this adds a complexity overhead. For that reason, a different, fully quantum approach is currently being developed, where each qubit value will represent the side taken by an atomic clause in a specific split.
## 8 Full Quantum approach This algorithm also uses the GSA to help in proof search for IMLL, but there is considerable difference between this and the first one. Now, we don't use Alsing's entangled database nor do we need to prepare a specific quantum state prior to the execution. The algorithms uses \((k-1)+\log k\) qubits, where \(k\) is the number of atomic clauses in the right side of the sequent. The first \((k-1)\) qubits represent the side picked by a clause in each of the \((k-1)\) splits and the last \(\log k\) qubits act as an index for the clauses. A qubit measured \(0\) means a clause will go to the left in a split and \(1\) means it will go the right. Starting with the simplest case: \[A,B\vdash A\otimes B\] For \(k=2\) We will need \((k-1)+\log k=2\) qubits. The quantum state that represents the correct splits is \(|00\rangle+|11\rangle\). The \(|00\rangle\) state is the \(A\) going to the right side and the \(|11\rangle\) is the \(B\) going to the left side. The GSA Oracle will mark both these states as correct ones. These states are defined by the right side of the sequent. Let's go over a slightly more complicated example: \[A,B,C,D\vdash D\otimes(B\otimes(A\otimes C)))\] We'll look at the right side to define the states that will be marked by the Oracle. First, \(D\) will go to the left side and all everybody else to the right. \(D\) will have no future splits, and in the case we fill the rest of its correspondent state with \(0\)s. Thus, one of the Oracle correct states is \(|010|11\rangle\). Applying a similar process we can construct the other three: \(|110|00\rangle\), \(|100|01\rangle\) and \(|111|10\rangle\), for A, B and C respectively. Now we just apply the GSA a sufficient time to measure the four possibilities and we'll have recovered the splits necessary to form a valid sequent. This has a time complexity of \(\sqrt{\frac{2^{k+\log k}}{k}}\). This can be simplified: \[2^{k+\log k}=2^{k}\times 2^{\log k}=2^{k}\times k\] \[\sqrt{\frac{2^{k}\times k}{k}}=\sqrt{2^{k}}\] Which is the expected quadratic speedup from the GSA. ## 9 Adding Linear Implication Following the full quantum approach, the next step is to add linear implication to the connectors accepted by the algorithm. This comes with some challenges. First, we can no longer use the right side as a fixed reference for the oracle to apply the successive splits based on the \(\otimes\)-Right rule - if we add linear implication, now the atomic clauses can switch sides depending on the rule, and the initial sequent no longer needs to have a balanced number of atomic clauses in each side. So, instead of only specifying the splits of the left side to follow a fixed order of the right side, we need to handle every atomic clause. Also, we have four options of "places to go"when applying the \(\multimap\)-Left rule: left side of the left sequent, right side of the left sequent, left side of the right sequent and right side of the right sequent. This is also an issue with the \(\otimes\)-Right, since now we have to explicitly say where each clause will go. To solve this, each step will use 2 qubits instead of one. The first qubit of the pair represents which sequent the clause will go, 0 for left, 1 for right. The second will represent which side of sequent the clause will go, again 0 for left, 1 for right. When applying the \(\multimap\)-Right, it will count as everybody going to the left sequent. 
Let's go over a simple example:

\[A^{1},A^{2}\multimap B^{1}\vdash C^{1}\multimap B^{2},C^{2}\]
\[A^{1},A^{2}\multimap B^{1},C^{1}\vdash B^{2},C^{2}\]
\[A^{1}\vdash A^{2}\qquad B^{1},C^{1}\vdash B^{2},C^{2}\]

Thus, the correct states for the oracle will be:

\[A^{1}=|0000|000\rangle\]
\[A^{2}=|0001|001\rangle\]
\[B^{1}=|0010|010\rangle\]
\[B^{2}=|0111|011\rangle\]
\[C^{1}=|0010|100\rangle\]
\[C^{2}=|0111|101\rangle\]

There are a few interesting things to point out here. The first is the increase in the number of qubits. The complexity of the previous solution was \(\sqrt{2^{k}}\), where \(k=n/2\) and \(n\) is the total number of atomic clauses. This solution, on the other hand, has complexity \(\sqrt{\frac{2^{c+\log n}}{n}}\). Simplifying in a similar way:

\[2^{c+\log n}=2^{c}\times 2^{\log n}=2^{c}\times n\]
\[\sqrt{\frac{2^{c}\times n}{n}}=\sqrt{2^{c}}\]

so \(\sqrt{2^{c}}\) is the final complexity, again the expected quadratic speedup from the GSA over the classical search space.
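Returning to the tensor-only encoding of Section 8, the basis states the oracle must mark can be read off the right-hand side of the sequent. The Python sketch below derives them from a nested-pair representation of the right-hand side. The padding convention for clauses that have already been split off (filling the remaining side bits with 0, as the text in Section 8 suggests) is an assumption; it reproduces three of the four states listed for the worked example, but gives \(|000|11\rangle\) rather than \(|010|11\rangle\) for \(D\), so the authors' exact convention may differ.

```python
def marked_states(antecedent, rhs):
    """Tensor-only ('full quantum') encoding: given the antecedent atoms (which
    fix each clause's index) and the right-hand side as a nested pair tree,
    e.g. ('D', ('B', ('A', 'C'))) for D(x)(B(x)(A(x)C)), return the basis
    states to be marked.  Each state is k-1 side bits (0 = left premise,
    1 = right premise, 0-padded once the clause is split off) followed by
    ceil(log2 k) index bits."""
    k = len(antecedent)
    idx_bits = max(1, (k - 1).bit_length())
    sides = {}

    def walk(tree, prefix):
        if isinstance(tree, str):            # an atomic clause: record its path
            sides[tree] = prefix
        else:                                # a tensor node: left / right premise
            walk(tree[0], prefix + [0])
            walk(tree[1], prefix + [1])

    walk(rhs, [])
    states = {}
    for a in antecedent:
        bits = sides[a] + [0] * ((k - 1) - len(sides[a]))
        states[a] = "".join(map(str, bits)) + "|" + format(antecedent.index(a), f"0{idx_bits}b")
    return states

print(marked_states(("A", "B", "C", "D"), ("D", ("B", ("A", "C")))))
# {'A': '110|00', 'B': '100|01', 'C': '111|10', 'D': '000|11'}
```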
2310.09378
The normal Casimir force for lateral moving planes with isotropic conductivities
We consider the two planes at zero temperature with isotropic conductivity that are in relative lateral motion with velocity $v$ and inter-plane distance $a$. Two models of conductivity are taken into account -- the constant and frequency-dependent Drude models. The normal (perpendicular to planes) Casimir force is analysed in detail for two systems -- i) two planes with identical conductivity and ii) one of the planes is a perfect metal. The velocity correction to the Casimir energy $\Delta_v\mathcal{E} \sim v^2$ for small velocity for all considered cases. In the case of the constant conductivity $\eta$, the energy correction is $ \Delta_v\mathcal{E} \sim \frac{\eta}{a^3} \left(\frac{v}{\eta}\right)^2$for $v\ll \eta \ll 1$.
N. Emelianova, N. Khusnutdinov
2023-10-13T19:54:25Z
http://arxiv.org/abs/2310.09378v2
# The normal Casimir force for lateral moving planes with isotropic conductivities ###### Abstract We consider the two planes with isotropic conductivity that are in relative lateral motion with velocity \(v\) and inter-plane distance \(a\). Two models of conductivity are taken into account - the constant and frequency-dependent Drude model. The normal (perpendicular to planes) Casimir force is analysed in detail for two systems - i) two planes with identical conductivity and ii) one of the planes is a perfect metal. The velocity correction to the Casimir energy \(\Delta_{v}\mathcal{K}\sim v^{2}\) for small velocity for all considered cases. In the case of the constant conductivity \(\eta\), the energy correction is \(\Delta_{v}\mathcal{K}\sim\frac{\eta}{a^{2}}\left(\frac{v}{\eta}\right)^{2}\)for \(v\ll\eta\ll 1\). ## 1 Introduction The Casimir effect [1] was first considered for perfect conductive plates, and nowadays, it is extended to many non-ideal and new materials [2, 3]. The relative motions of bodies give an additional contribution to the Casimir force between bodies (see the recent review on the dynamic Casimir effect [4] and Refs. [2, 5, 6]). The relative motions are lateral (parallel to the planes), perpendicular to the planes, or, in general, their combinations. The Casimir effect for perpendicularly and uniformly moving slabs has been considered firstly in Refs. [7, 8] for electromagnetic and massless scalar fields. It is a direct consequence of the Quantum Field Theory with the moving boundaries [9]. In the non-relativistic case, the velocity correction to the Casimir pressure is quadratic \(\sim v^{2}\) for both fields but with opposite signs. It is positive for the massless scalar field and it is negative for the electromagnetic case. The lateral relative motion of the planes gives two different Casimir pressures in the perpendicular directions. One of them is normal to the planes as it is for perpendicular motion, and the second is along planes; it is called a quantum, non-contact, or Casimir friction. The normal force was considered in Ref. [10] for layers in stratified dielectric media with magneto-electric and non-reciprocal coupling. For a system of three layers, the force may be attractive or repulsive depending on the velocity directions of extreme layers. In the non-relativistic case, the force becomes repulsive if extreme layers have the same velocity directions with respect to the middle layer and it is attractive for opposite directions of velocities. The velocity correction to the Casimir energy has the same order \(\sim v^{2}\). For relativistic velocities, it may be attractive as well as repulsive. Quantum friction is more difficult for analysis problems and it is currently under discussion these days [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], up to complete negation of quantum friction existence [16]. Two dielectric planes at different temperatures with lateral relative motion have been considered in Ref. [11, 12]. The quantum friction force was calculated in the framework of the Rytov fluctuation theory [25]. It was shown that it is proportional to the first power of velocity \(v\), but may get different signs - it may slow down or accelerate planes. Mkrtchian in Ref. [13] considered two conductive planes with relative lateral motion and calculated the force and viscosity of vacuum for two models of the plane's impedance. 
The dependence of the force on the inter-plane distance crucially depends on the model selected but in any case the velocity correction to the usual Casimir force \(\sim v\) which is the usual dependence for Casimir friction. The quantum friction for planes with temporally dispersive conductivity was analysed in Ref. [14]. For the case of constant conductivity the quantum friction \(\sim\nu^{3}\) at small velocity and \(\sim v^{-1}\ln v\) for high velocity. Volokitin and Persson [15] developed a general theory for quantum friction in the framework of fluctuation electrodynamics. The absence of quantum friction as a whole was claimed in Ref. [16], but the normal force exists (the discussion see in Ref. [17]). The quantum friction in the framework of scattering theory was considered in Ref. [18, 19]. It was shown that the quantum friction threshold exists - it is zero if the relative velocity is smaller than the speed of light in the materials of slabs. The origin of quantum friction was connected to the quantum Cherenkov radiation: the super-luminally moving object spontaneously emits photons. It is closely connected with super-radiance: the rotating body amplifies incident waves [26]. In Refs. [20, 21], quantum friction was calculated for two graphene sheets in the framework of the effective action approach. It was noted the threshold for velocity - the friction is zero if the relative velocity is smaller than the velocity of Fermi. It is correlated with [18, 19]: the Dirac electron in graphene is described by the Dirac equation with velocity Fermi instead of the velocity of light. The threshold was confirmed by different calculations in Ref. [24]. The quantum friction in the framework of the nonperturbative approach was considered in Ref. [22]. The friction force is associated with electromagnetic instability - the kinetic energy of the relative motion is transformed into an exponentially growing coherent radiation. In Ref. [24], the general approach was developed for two conductive planes with relative lateral motion. In the framework of this approach, the normal and tangential forces may be calculated. The normal force is reduced to the usual Casimir force for planes with tensorial conductivities [27] where the specific form for the tensor of moving plane was used. Quantum friction may appear as an imaginary part of the energy calculated for some complex frequencies, which are noted in earlier papers Refs. [28, 29, 30]. In this paper, we consider the normal force for lateral moving planes with isotropic conductivities. As noted in Refs. [31, 32], the Ohm law for the moving plane should be considered carefully. The Lorentz transformation even for a scalar constant conductivity is not simple because it is a coefficient between 3-vectors of the electric current and an electric field. The transformation law of conductivity tensor was discussed in Refs. [32] by using a linear response tensor [33]. In the case of graphene, the role of linear response tensor plays the polarization tensor [24, 34]. To obtain the isotropic conductivity of a moving plane we use the approach suggested in Ref. [24] for graphene and take the limit, where the velocity Fermi and mass gap tend to be zero. In this limit, the conductivity tensor becomes isotropic in the co-moving frame of the plane. The conductivity in the laboratory is not diagonal and depends on the velocity. Throughout the paper the units \(\hbar=c=1\) are used. 
## 2 The Casimir energy of a moving plane We use the approach for the Casimir effect of lateral moving graphene developed in Ref. [24]. We consider two parallel conductive planes with isotropic conductivities and inter-plane distance \(a\). The first plane is at rest in the laboratory frame and the second one is in a lateral motion with velocity \(v\). The fluctuating electric field produces a current in the plane according to the Ohm law. This current contributes to the boundary condition and ultimately changes the energy spectrum. The second conductive and moving plane is described by the same Ohm law in its co-moving frame. To solve the scattering problem in the laboratory frame we need for conductivity of the second plane in the laboratory frame. This question in relation to the Ohm law was considered in Ref. [31, 32] by using the linear response tensor method [33]. A similar approach was used in Ref. [24] where the polarization tensor plays the role of the linear response tensor. In the framework of the scattering matrix approach [27] the Casimir energy \(\mathcal{E}\) and pressure \(\mathcal{P}\) for real frequencies may be represented in the following form \[\mathcal{E}=-\frac{1}{2\mathrm{i}}\iint\frac{\mathrm{d}^{2}k}{(2\pi)^{3}}\left( I_{-}-I_{+}\right),\,\mathcal{P}=\iint\frac{\mathrm{d}^{2}k}{(2\pi)^{3}}\left(J_{-}+J_ {+}\right), \tag{1}\] where \[I_{\pm} =\int_{k}^{\infty}\mathrm{d}\omega\ln\det\left[1-e^{\pm 2iak_{3}} \mathcal{B}(\pm k_{3})\right],\] \[J_{\pm} =\int_{k}^{\infty}\mathrm{d}\omega k_{3}\frac{e^{\pm 2iak_{3}}( \mathrm{tr}\mathcal{B}(\pm k_{3})-2e^{\pm 2iak_{3}}\det\mathcal{B}(\pm k_{3}))}{\det \left[1-e^{\pm 2iak_{3}}\mathcal{B}(\pm k_{3})\right]}, \tag{2}\] \(\mathcal{B}(\pm k_{3})=r_{1}^{\prime}(\pm k_{3})r_{2}(\pm k_{3})\), and \(k_{3}=\sqrt{\omega^{2}-k^{2}}\). These formulas take into account the propagating waves, only, because \(\omega\geq k\). The subscript \(1(2)\) means that all reflection matrices related to the rest (moving) plane. The scattering matrix for each part of system has the following form \[\mathcal{G}=\begin{pmatrix}\mathbf{r}&\mathbf{t}^{\prime}\\ \mathbf{t}&\mathbf{r}^{\prime}\end{pmatrix},\] with corresponding index and argument. The reflection matrices for conductive plane were found in Ref. [27] \[\mathbf{r}_{i}=\mathbf{r}_{i}^{\prime}=-\frac{\omega^{2}\mathbf{\eta}_{i}-\mathbf{k}\otimes( \mathbf{k}\mathbf{\eta}_{i})+\mathbf{I}\omega k_{3}\det\mathbf{\eta}_{i}}{\omega^{2}\, \mathrm{tr}\mathbf{\eta}_{i}-\mathbf{k}\mathbf{k}\mathbf{\eta}_{i}+\omega k_{3}(1+\det\mathbf{ \eta}_{i})}, \tag{3}\] where \(\mathbf{\eta}_{i}=2\pi\mathbf{\sigma}_{i}\), and \(\mathbf{\sigma}_{i}\) is the conductivity tensor of the plane \(i=1,2\) (\(i=1\) is at rest and \(i=2\) is moving in laboratory frame). In domain \(\omega<k\) there are evanescent and waveguide modes [35], but as shown in this paper, by rotation of the contour of integration, the contribution of these modes are cancelled with energy of boundary states of corresponding modes. After the rotation of the contour to the imaginary axis, the two contributions survived [24]. The first contribution with integration along the imaginary frequency \(\omega=\mathrm{i}\xi\) reads \[\mathcal{E}^{\perp}=\iint\frac{\mathrm{d}^{2}k}{2(2\pi)^{3}}\int_{-\infty}^{+ \infty}\mathrm{d}\xi\ln\det\left[1-e^{-2\alpha k_{E}}\mathcal{B}(\mathrm{i}k_ {E})\right], \tag{4}\] where \(k_{E}=\sqrt{\xi^{2}+k^{2}}\). The corresponding force is perpendicular to plane's as usual Casimir force. 
This expression may be simplified by using eigenvalues of matrix \(\mathcal{B}\) and represented as a sum of TE and TM contributions. The matrices \(\mathbf{r}_{1}^{\prime}\) and \(\mathbf{r}_{2}\) are not commutate and therefore the eigenvalues of \(\mathcal{B}\) are not a product of the eigenvalues of \(\mathbf{r}_{1}^{\prime}\) and \(\mathbf{r}_{2}\). The eigenvalues of \(\mathcal{B}\) may be found in closed but complicated forms [24], which correspond to the contributions of TM and TE modes separately. Instead of this approach, we use the expression for energy in the form [27] directly via conductivity matrices \(\mathbf{\eta}_{i}\) \[\mathcal{E}^{\perp} =\int\frac{\mathrm{d}^{2}k}{2(2\pi)^{3}}\int_{-\infty}^{\infty} \mathrm{d}\xi\ln\left(1+e^{-4ak_{E}}\frac{\xi^{2}k_{E}^{2}}{b_{1}b_{2}}\det \mathbf{\eta}_{1}\det\mathbf{\eta}_{2}\right.\] \[-\left.e^{-2ak_{E}}\left[\frac{\xi^{2}k_{E}^{2}}{b_{1}b_{2}} \left[(1-\det\mathbf{\eta}_{1})(1-\det\mathbf{\eta}_{2})+\det(\mathbf{\eta}_{1}-\mathbf{\eta} _{2})\right]-\frac{\xi k_{E}}{b_{1}}-\frac{\xi k_{E}}{b_{2}}+1\right]\right), \tag{5}\] where \(b_{i}=\xi^{2}\,\mathrm{tr}\mathbf{\eta}_{i}+(\mathbf{k}\mathbf{k}\mathbf{\eta}_{i})+\xi k_{E} \big{(}1+\det\mathbf{\eta}_{i}\big{)}\). The second contribution \(\mathcal{E}_{\parallel}\) gives a contribution to the force along planes, the Casimir friction. In Ref. [24], the normal force was considered for moving graphene with the following conductivity in a co-moving frame \[\mathbf{\eta}_{1}=\eta_{\mathrm{gr}}\frac{\tilde{k}}{\omega}\left(\mathbf{I}+\nu_{ F}^{2}\frac{\mathbf{k}\otimes\mathbf{k}}{\tilde{k}^{2}}\right)\widetilde{\Phi}\left( \frac{\tilde{k}}{2m}\right), \tag{6}\] where \(\eta_{\mathrm{gr}}=2\pi\sigma_{\mathrm{gr}}=\pi e^{2}/2\), \(\nu_{F}\) is the Fermi velocity, and \[\widetilde{\Phi}(y)=\frac{2\mathrm{i}}{\pi y}\left\{1-\frac{y^{2}+1}{y}\, \mathrm{arctanh}\,y\right\},\ \tilde{k}=\sqrt{\omega^{2}-\nu_{F}^{2}\mathbf{k}^{2}}. \tag{7}\] The general structure of the moving plane's conductivity tensor in the laboratory frame has the following form [24] \[\mathbf{\eta}_{2}=i_{1}\mathbf{I}+i_{2}\mathbf{k}\otimes\mathbf{k}+i_{3}(\mathbf{k}\otimes\mathbf{v} +\mathbf{v}\otimes\mathbf{k}), \tag{8}\] where \[i_{1} =\frac{\eta_{\mathrm{gr}}\widetilde{\Phi}^{\prime}}{\omega k^{2} \tilde{k}^{\prime}}\left(\mathbf{k}^{2}\tilde{k}^{\prime 2}+\frac{1-v_{F}^{2}}{1-v^{2}}(( \mathbf{k}\mathbf{v})^{2}-\mathbf{k}^{2}\mathbf{v}^{2})k_{3}^{2}\right),\] \[i_{2} =\frac{\eta_{\mathrm{gr}}\widetilde{\Phi}^{\prime}}{\omega k^{2} \tilde{k}^{\prime}}\left(\mathbf{k}^{2}\nu_{F}^{2}+\frac{1-v_{F}^{2}}{1-v^{2}} \mathbf{v}^{2}k_{3}^{2}\right),\] \[i_{3} =\frac{\eta_{\mathrm{gr}}\widetilde{\Phi}^{\prime}}{\mathbf{k}^{2} \tilde{k}^{\prime}}\frac{1-v_{F}^{2}}{1-v^{2}}\left(\mathbf{k}^{2}-\omega(\mathbf{k} \mathbf{v})\right), \tag{9}\] and \[\tilde{k}^{\prime}=\sqrt{\tilde{k}^{2}+\frac{1-v_{F}^{2}}{1-v^{2}}(\omega^{2} \mathbf{v}^{2}+(\mathbf{k}\mathbf{v})^{2}-2\omega(\mathbf{k}\mathbf{v}))}, \tag{10}\] is the Lorentz transformation of \(\tilde{k}\) (7). ## 3 The case of an isotropic conductivity The isotropic conductivity case may be obtained by the formal limits \(v_{F}\to 0\) and \(\widetilde{\Phi}\to 1\) (\(m\to 0\)) in Eq. (6) and changing \(\eta_{\mathrm{gr}}\) to conductivity of corresponding plane \(\eta_{i}\). After these limits the conductivity tensor for the plane at rest becomes diagonal \(\mathbf{\eta}_{1}=\eta_{1}\mathbf{I}\). 
Taking the limits we obtain \(\tilde{k}^{\prime}=\gamma\omega_{v}\), where \(\omega_{v}=\omega-\mathbf{k}\mathbf{v}\), \(\gamma=1/\sqrt{1-v^{2}}\) is relativistic factor, and \[\mathbf{\eta}_{2}=i_{1}^{\prime}\mathbf{I}+i_{2}^{\prime}\mathbf{k}\otimes\mathbf{k}+i_{3 }^{\prime}(\mathbf{k}\otimes\mathbf{v}+\mathbf{v}\otimes\mathbf{k}),\] (11a) where \[i_{1}^{\prime}=\frac{\eta_{2}\gamma\left(\mathbf{k}^{2}\omega_{v}^{2}+((\mathbf{k} \mathbf{v})^{2}-\mathbf{k}^{2}\mathbf{v}^{2})k_{3}^{2}\right)}{\mathbf{k}^{2}\omega\omega_{v}},\ i_{2}^{\prime}=\frac{\eta_{2}\gamma\mathbf{v}^{2}k_{3}^{2}}{\mathbf{k}^{2}\omega \omega_{v}},\ i_{3}^{\prime}=\frac{\eta_{2}\gamma}{\mathbf{k}^{2}\omega_{v}}(\mathbf{k }^{2}-\omega(\mathbf{k}\mathbf{v})). \tag{11b}\] The \(\omega_{v}\gamma\) is frequency of photon in laboratory which was emitted in co-moving frame. The straightforward calculations at the imaginary axis \(\omega=\mathrm{i}\xi\) give \[b_{1} =\left(\eta_{1}\xi+k_{E}\right)\left(\xi+\eta_{1}k_{E}\right),\ b _{2}=\frac{\xi}{\gamma\xi_{v}}\left(\eta_{2}\gamma\xi_{v}+k_{E}\right)\left( \gamma\xi_{v}+\eta_{2}k_{E}\right),\] \[\det(\mathbf{\eta}_{1}-\mathbf{\eta}_{2}) =(\eta_{1}-\eta_{2})^{2}+\eta_{1}\eta_{2}\left(\gamma\nu^{2}\frac {k_{E}^{2}}{\xi\xi_{v}}+2(1-\gamma)\right),\ \det\mathbf{\eta}_{i}=\eta_{i}^{2}, \tag{12}\] where \(\xi_{v}=\xi+\mathrm{i}\mathbf{k}\mathbf{v}\). Then we use the polar coordinates for \(\mathbf{k}\) in Eq. (5), \(\mathbf{k}\mathbf{v}=k\nu\cos\varphi\) and transform coordinates of plane \(k\in[0,\infty)\), \(\xi\in(-\infty,\infty)\) to polar coordinates \(\xi=k_{E}\cos\theta,k=k_{E}\sin\theta\) where \(\theta\in[0,\pi]\). After these changes, the dependence of \(k_{E}\) is survived in exponents only. By changing the variable \(ak_{E}=y\) we observe that the energy depends on the inter-plane distance as \(1/a^{3}\) for constant conductivities, as expected [36]. Thus, the energy has the following form (\(x=\cos\theta\)) \[\mathcal{E}_{1,2}^{\perp}=\mathrm{Re}\int_{0}^{\infty}\frac{y^{2}\mathrm{d}y}{(2 \pi a)^{3}}\int_{0}^{1}\mathrm{d}x\int_{0}^{\pi}\mathrm{d}\varphi E_{1,2},\ \mathcal{P}_{1,2}^{\perp}=\frac{3}{a}\mathcal{E}_{1,2}^{\perp}, \tag{13}\] where \[E_{1,2}=\ln\left(1+e^{-4y}\frac{x^{2}\eta_{1}^{2}\eta_{2}^{2}}{\beta_{1}\beta_{ 2}}\right.\] \[-e^{-2J}\left(\frac{x^{2}}{\beta_{1}\beta_{2}}\left[(1-\eta_{1}\eta_{2})^{2}+\eta_ {1}\eta_{2}\left(\gamma\nu^{2}\frac{k_{E}^{2}}{\xi\xi_{\nu}}+2(1-\gamma)\right) \right]-\frac{x}{\beta_{1}}-\frac{x}{\beta_{2}}+1\right)\right), \tag{14}\] where \[\beta_{1}=\left(\eta_{1}x+1\right)\left(x+\eta_{1}\right),\ \beta_{2}=\frac{x}{ \gamma x_{\nu}}\left(\eta_{2}\gamma x_{\nu}+1\right)\left(\gamma x_{\nu}+\eta _{2}\right),\ x_{\nu}=x-\mathrm{i}\nu\sqrt{1-x^{2}}\cos\varphi. \tag{15}\] If the first plane (at rest) is a perfect conductor we tend \(\eta_{1}\to\infty\) and obtain \[E_{\mathrm{id},2}=\ln\left(1+e^{-4\gamma}\frac{x\eta_{2}^{2}}{\beta_{2}}-e^{- 2J}\left(\frac{x\eta_{2}^{2}}{\beta_{2}}-\frac{x}{\beta_{2}}+1\right)\right), \tag{16}\] and for two ideal planes \[E_{\mathrm{id},\mathrm{id}}=2\ln\left(1+e^{-2J}\right), \tag{17}\] the energy does not depend on the velocity. Without a relative movement, \(\mathbf{v}=0\), we return to the results obtained in Ref. [36]: \[E=\ln\left(1-e^{-2J}\frac{\eta_{1}\eta_{2}}{(\eta_{1}+x)(\eta_{2}+x)}\right)+ \ln\left(1-e^{-2J}\frac{\eta_{1}\eta_{2}x^{2}}{(x\eta_{1}+1)(x\eta_{2}+1)} \right)=E_{\mathrm{tr}}+E_{\mathrm{te}}, \tag{18}\] the sum of TM and TE contributions. 
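Equations (13)-(15) are straightforward to evaluate numerically. The sketch below transcribes them for two planes with the same constant conductivity \(\eta\), using Gauss-Legendre quadrature and a hard cutoff on the \(y\) integral; the quadrature sizes, the cutoff and the sample values of \(v\) and \(\eta\) are illustrative assumptions, and convergence should be checked before any quantitative use.

```python
import numpy as np

def casimir_energy(v, eta, a=1.0, n_y=120, y_max=40.0, n_x=80, n_phi=40):
    """Normal Casimir energy of two lateral-moving planes with equal constant
    conductivity eta, Eqs. (13)-(15) (units hbar = c = 1).  Gauss-Legendre
    quadrature in y, x and phi; the y integral is truncated at y_max."""
    gamma = 1.0 / np.sqrt(1.0 - v ** 2)

    def gl(n, lo, hi):                                   # nodes/weights on (lo, hi)
        t, w = np.polynomial.legendre.leggauss(n)
        return 0.5 * (hi - lo) * t + 0.5 * (hi + lo), 0.5 * (hi - lo) * w

    y, wy = gl(n_y, 0.0, y_max)
    x, wx = gl(n_x, 0.0, 1.0)
    phi, wphi = gl(n_phi, 0.0, np.pi)
    Y, X, P = np.meshgrid(y, x, phi, indexing="ij")

    Xv = X - 1j * v * np.sqrt(1.0 - X ** 2) * np.cos(P)          # x_v of Eq. (15)
    b1 = (eta * X + 1.0) * (X + eta)                              # beta_1
    b2 = X / (gamma * Xv) * (eta * gamma * Xv + 1.0) * (gamma * Xv + eta)
    brk = (1.0 - eta ** 2) ** 2 + eta ** 2 * (gamma * v ** 2 / (X * Xv)
                                              + 2.0 * (1.0 - gamma))
    E = np.log(1.0 + np.exp(-4.0 * Y) * X ** 2 * eta ** 4 / (b1 * b2)
               - np.exp(-2.0 * Y) * (X ** 2 / (b1 * b2) * brk
                                     - X / b1 - X / b2 + 1.0))
    W = wy[:, None, None] * wx[None, :, None] * wphi[None, None, :]
    return np.real(np.sum(W * Y ** 2 * E)) / (2.0 * np.pi * a) ** 3

# relative velocity correction, cf. Fig. 1 (sample values of v and eta)
eta = 0.1
E0 = casimir_energy(0.0, eta)
Ev = casimir_energy(0.05, eta)
print((Ev - E0) / E0)
```

At \(v=0\) the integrand reduces to the TM/TE sum of Eq. (18), which provides a convenient consistency check of the transcription.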
Let us consider the constant conductivities case \(\eta_{1}=\eta_{2}=\eta\). For small velocity and conductivity, and \(v\ll\eta\ll 1\) we obtain from Eq. (13): \[\mathcal{E}_{i,k}^{\perp}\approx e_{i,k}\frac{\eta}{a^{3}}\left(\frac{\nu}{ \eta}\right)^{2},\mathcal{P}_{i,k}^{\perp}\approx p_{i,k}\frac{\eta}{a^{4}} \left(\frac{\nu}{\eta}\right)^{2}, \tag{19}\] where \[e_{1,2}=4.8\cdot 10^{-4},\ e_{\mathrm{id},2}=6.1\cdot 10^{-4},\ p_{1,2}=1.4 \cdot 10^{-3},\ p_{\mathrm{id},2}=1.8\cdot 10^{-3}. \tag{20}\] Numerical evaluations of (13) are shown in Fig. 1 for two systems: \((1,2)\) - two conductive planes with constant conductivity \(\eta\) (solid lines) and \((\mathrm{id},2)\) - the first plane at rest is a perfect metal (dashed lines). We calculated velocity correction to the energy: \(\Delta_{\nu}\mathcal{E}_{\perp}/\mathcal{E}_{0}=(\mathcal{E}_{\perp}-\mathcal{ E}_{0})/\mathcal{E}_{0}\), where \(\mathcal{E}_{0}\) is energy without relative movement. The velocity correction is negative for both systems and it reveals quadratic behaviour \(\sim\nu^{2}\) for small velocity \(\nu\ll\eta\ll 1\). The absolute value of correction is greater for the first system. The above derivation is applicable for isotropic but frequency-dependent conductivity, \(\eta=\eta(\omega)\). Let us consider the simple case of Drude conductivity with \[\eta_{1}=\frac{\eta\Gamma}{\Gamma+\xi}\,\ \eta_{2}=\frac{\eta\Gamma}{\Gamma+ \gamma\xi_{\nu}}, \tag{21}\] Figure 1: The plots of the velocity correction \(\frac{\Delta_{\nu}\mathcal{E}_{\perp}}{\mathcal{E}_{0}}=\frac{\mathcal{E}_{ \perp}-\mathcal{E}_{0}}{\mathcal{E}_{0}}\) for \(\eta=0.01,0.1\). For small velocity and conductivity \(v\ll\eta\ll 1\) (left and middle panels) we have quadratic correction according with (19). The solid lines are for two planes with the same conductivity (14) and dashed lines are for system with a plane at rest with a perfect conductivity (16). and parameters are taken for graphene, \(\Gamma=6.365\) eV and \(\eta=\eta_{\text{gr}}=e^{2}/4\)[37]. After changing the integrand variables as were made above, the Casimir energy acquires an additional dependence on the inter-plane distance through conductivity: \[\eta_{1}=\frac{\eta_{\text{gr}}(a\Gamma)}{(a\Gamma)+yx}\,\ \eta_{2}=\frac{\eta_{ \text{gr}}(a\Gamma)}{(a\Gamma)+\gamma yx_{v}}. \tag{22}\] For \(a=100\) nm and \(\Gamma=6.365\) eV, one has, \(a\Gamma=3.225\). For \(a\Gamma\gg 1\) the conductivities \(\eta_{1}=\eta_{2}=\eta_{\text{gr}}\), as should be the case - the constant conductivity model is valid for large inter-plane distances. The numerical evaluations of Casimir energy are plotted in Fig. 2. We observe that the greater the inter-plane distance the closer energy is to the case of constant conductivity (blue lines) as should be the case. Let us compare the obtained results with the case of graphene analysed in Ref. [24]. For two graphene sheets the energy as a function of velocity at the beginning is positive and then becomes negative with maximum for \(v_{c}=v_{F}+(ma)/2\). The region with positive energy disappears for \(m=v_{F}=0\). The case of two planes with constant conductivity reveals a negative value of energy for all values of velocity. The velocity correction \(\sim v^{2}\) for both cases. Concerning inter-plane distance we have different pictures. For two planes with constant conductivity, the energy is \(\sim 1/a^{3}\) for any distance while for graphenes it has this dependence for large distances only. 
It is expected because the constant conductivity model for graphene is valid for large inter-plane distances. For the case of a system perfect conductor/graphene the energy is zero up to a specific velocity, while for the above-considered case, we have quadratic behaviour at the beginning. A similar conclusion is valid for the case of the Drude model of conductivity (21). For large inter-plane distances, both models are very close. For small distances, the weak dependents on the distance takes place. ## 4 Conclusion In this paper we considered the normal (perpendicular to the planes) Casimir force for two conductive planes with an isotropic conductivity that is laterally moving with relative velocity \(v\). The main problem is connected with calculating the conductivity of a moving plane in a laboratory frame. In a co-moving frame, the isotropic conductivity is a coefficient in the Ohm law, \(\mathbf{J}^{\prime}=\sigma^{\prime}\mathbf{E}^{\prime}\), where \(\mathbf{E}^{\prime}\) and \(\mathbf{J}^{\prime}\) are fluctuations of the electric field and corresponding density of current. Transformation of this relation to the laboratory frame (where the first plane is at the rest) is not trivial. The simple way [31, 32] to solve this problem is to start from the linear relation be Figure 2: The plots of velocity correction \(\frac{\Delta_{x}\mathcal{E}_{z}}{\mathcal{E}_{y}}=\frac{\mathcal{E}_{z}- \mathcal{E}_{0}}{\mathcal{E}_{y}}\) for Drude model (\(a=10,100\) nm) and constant conductivity with \(\eta=\eta_{\text{gr}}\). Left panel: the region of small velocity \(v\ll 1\). Right panel: the whole range of \(v\). The solid lines are for two planes with the same conductivity and dashed lines are for system where a plane at rest is a perfect conductor. tween the density of current and electromagnetic vector potential, \(J^{\mu}=\Pi_{\nu}^{\mu}A^{\nu}\) which is usual for plasma physics [33]. The \(\Pi_{\nu}^{\mu}\) is the linear response tensor. The same approach was used in Ref. [34] where the polarization tensor played the role of the linear response tensor \(\Pi_{\nu}^{\mu}\). The transformation of the conductivity has no simple form [32] even for a constant conductivity case. A similar approach was used in Ref. [24] where the linear relation for current and electromagnetic potential, and invariance of the boundary condition were used. To obtain the case of isotropic conductivity we used expressions obtained for graphene in which we take limits \(v_{F}\to 0\) for Fermi velocity and \(m\to 0\) for mass gap. With these limits the conductivity tensor in the co-moving frame becomes diagonal. In the laboratory frame, it has the form (11). Having these tensors, the Casimir energy may be calculated by expression (5), firstly obtained in Ref. [27]. Analysis of expressions obtained for two conductive planes (14), and for system (perfect conductivity)/(constant conductivity) is difficult due to two small dimensionless parameters: velocity of plane \(v\) and conductivity \(\eta=2\pi\sigma\) (dimensionless for 2D systems). For the case \(v\ll\eta\ll 1\) the energy \(\sim\eta(v/\eta)^{2}\)[19]. The quadratic dependence is usual for normal force and different directions of motion [7, 8, 10, 24]. The inter-plane distance dependence of energy is \(1/a^{3}\) for any distance as usual for constant conductivity case [36] because the constant conductivity model is valid for large distances where the Casimir regime is satisfied. 
For the Drude model of conductivity, the behaviour of the system is very close to the constant-conductivity case, with only a weak dependence on the inter-plane distance (see Fig. 2).

## Acknowledgments

NK was supported in part by grants 2022/08771-5 and 2021/10128-0 of the Sao Paulo Research Foundation (FAPESP).
2310.07355
IMITATE: Clinical Prior Guided Hierarchical Vision-Language Pre-training
In the field of medical Vision-Language Pre-training (VLP), significant efforts have been devoted to deriving text and image features from both clinical reports and associated medical images. However, most existing methods may have overlooked the opportunity in leveraging the inherent hierarchical structure of clinical reports, which are generally split into `findings' for descriptive content and `impressions' for conclusive observation. Instead of utilizing this rich, structured format, current medical VLP approaches often simplify the report into either a unified entity or fragmented tokens. In this work, we propose a novel clinical prior guided VLP framework named IMITATE to learn the structure information from medical reports with hierarchical vision-language alignment. The framework derives multi-level visual features from the chest X-ray (CXR) images and separately aligns these features with the descriptive and the conclusive text encoded in the hierarchical medical report. Furthermore, a new clinical-informed contrastive loss is introduced for cross-modal learning, which accounts for clinical prior knowledge in formulating sample correlations in contrastive learning. The proposed model, IMITATE, outperforms baseline VLP methods across six different datasets, spanning five medical imaging downstream tasks. Comprehensive experimental results highlight the advantages of integrating the hierarchical structure of medical reports for vision-language alignment.
Che Liu, Sibo Cheng, Miaojing Shi, Anand Shah, Wenjia Bai, Rossella Arcucci
2023-10-11T10:12:43Z
http://arxiv.org/abs/2310.07355v4
# IMITATE: Clinical Prior Guided Hierarchical Vision-Language Pre-training

###### Abstract

In the field of medical Vision-Language Pre-training (VLP), significant efforts have been devoted to deriving text and image features from both clinical reports and associated medical images. However, most existing methods may have overlooked the opportunity in leveraging the inherent hierarchical structure of clinical reports, which are generally split into 'findings' for descriptive content and 'impressions' for conclusive observation. Instead of utilizing this rich, structured format, current medical VLP approaches often simplify the report into either a unified entity or fragmented tokens. In this work, we propose a novel clinical prior guided VLP framework named IMITATE to learn the structure information from medical reports with hierarchical vision-language alignment. The framework derives multi-level visual features from the chest X-ray (CXR) images and separately aligns these features with the descriptive and the conclusive text encoded in the hierarchical medical report. Furthermore, a new clinical-informed contrastive loss is introduced for cross-modal learning, which accounts for clinical prior knowledge in formulating sample correlations in contrastive learning. The proposed model, IMITATE, outperforms baseline VLP methods across six different datasets, spanning five medical imaging downstream tasks. Comprehensive experimental results highlight the advantages of integrating the hierarchical structure of medical reports for vision-language alignment.

Self-supervised Learning, Vision-Language Pre-training, Chest X-ray Image Analysis

## I Introduction

Self-supervised learning has made significant progress in representation learning from a single modality such as image or text [5, 6, 7, 8, 9]. To link the representations between different modalities, vision-language pre-training (VLP) has been introduced to align the vision and language content, typically in large datasets [2, 10]. In the medical domain, as a frontline triaging and diagnosis tool, chest X-ray (CXR) scans are often accompanied by text reports as the result of the standard clinical procedure, providing a rich source of paired image-text data for VLP. The challenge in medical VLP arises from the structure of CXR text reports, which consist of two parts, 'Findings' and 'Impressions'. The 'Findings' section describes the image content, e.g. 'the lungs are well inflated', whereas the 'Impressions' section concludes the report, e.g. 'clear lungs'. Conventional VLP methods align high-level visual features with the entire medical report, without distinguishing between the descriptive and conclusive sections in the report [1, 2, 3, 4]. To better utilise the hierarchical information in the medical report, we propose a novel clinical prior guided VLP framework, IMITATE, which aims to perform VLP via hIerarchical MultI-level conTrAsTive lEarning. As depicted in Fig. 1, our framework aligns different levels of visual features separately with the descriptive and conclusive parts of the medical report. We hypothesize that low-level visual features embody more descriptive properties of images corresponding to the descriptive part of the report, while high-level visual features contain more semantic information corresponding to the conclusive part of the report.
Apart from aligning between visual and textual features, we also align between visual features of different views to enhance the model's invariance to view variation. To perform the alignment, current VLP approaches perform one-to-one alignment between each image-te Fig. 1: (a) Conventional VLP approaches [1, 2, 3, 4] align the high-level visual feature with the entire medical report via a classic contrastive loss (\(\mathcal{L}^{CL}\)). (b) IMITATE leverages clinical prior knowledge to perform hierarchical alignment between multi-level visual features from medical images and descriptive and conclusive textual features from medical reports. Moreover, it utilizes a clinically-informed contrastive loss (\(\mathcal{L}^{CICL}\)), which takes into account clinical correlations among different image-report pairs. \(E_{v}\) and \(E_{t}\) denotes the vision and text encoders respectively. \(\overleftarrow{\mathcal{B}}E_{t}\) denotes a frozen text encoder. \(\mathcal{P}(\cdot)\) indicates the hierarchical aggregation block. ignoring the clinical similarity across different pairs. This can be problematic especially in the medical domain, because different patients may share similar symptoms, which makes their imaging scans or medical reports similar. We need to be cautious in defining the contrastive loss for different patients. To address this issue, we introduce a new alignment loss function named Clinical-Informed Contrastive Loss (CICL). This function integrates clinical correlations among patients into the alignment loss mechanism. Unlike traditional approaches that use a binary affinity matrix as the target [1, 2, 4], CICL constructs the affinity matrix based on the similarity among different image-report pairs. We compare the proposed method with the state-of-the-art (SOTA) VLP approaches and evaluate them on a variety of downstream tasks, including supervised medical image classification, semantic segmentation, object detection and zero-shot image classification. We show that our method significantly outperforms the SOTA methods on six public CXR datasets. Overall, the contributions of this work are three-fold: 1. We address the alignment challenge in medical VLP via the hierarchical alignment between multi-level visual features from medical images and the descriptive and conclusive textual features from medical reports. 2. We propose a new clinical-informed contrastive loss for visual-textual alignment, which incorporates the similarity among different patients into the alignment. 3. We achieve SOTA results across 6 datasets encompassing 5 distinct medical image tasks. Notably, IMITATE stands out by attaining superior performance on RSNA segmentation even with just 1% of data for fine-tuning. This accomplishment surpasses the performance of other baseline methods that require 100% data for fine-tuning. ## II Related Work ### _General vision-Language Pre-training_ Joint training of Vision and Language ModelsIn an effort to overcome the limitations of single-modality learning and better utilize the relationship between image and text data, VLP has been introduced [12, 13, 14], which learns and aligns representations from visual and textual input. Recent methods such as CLIP [2], ALIGN [15], Florence [16], LiT [17] and ALBEF [18] have shown significant progress in improving the representation learning for natural images and languages, although they require substantial training data and computational resources [2, 19, 20]. 
Alternative methods have been proposed to reduce the training cost or data need in VLP. For instance, BeiT3 [21] employs masked inputs and multi-modal data reconstruction to avoid the comparison of image-text pairs. A-FLIP [22] utilizes partially masked images as input to decrease the computational cost. SLIP [23] introduces an additional image contrastive branch to improve CLIP [2]. The utilization of multi-level text in VLP can be found in PyramidCLIP [24]. However, PyramidCLIP does not fully embody self-supervised learning, as it employs an additional object detector to extract regional visual features during the VLP stage. This detector is trained on annotated data complete with bounding box labels. Also, PyramidCLIP uses randomly cropped images to align with the text, which, according to [25], can be inappropriate for medical imaging where anatomical correspondence is important when matching between the image and text. Frozen Language Model in VLPUP requires tremendous computational resources which can be prohibitively expensive. To address this issue, [26, 27] freeze the language model for VLP and achieve competitive results on the visual question answering task. Furthermore, a recent work of [28] freezes both the language and vision models and designs an additional trainable block to align visual and language embeddings. These methods demonstrate the potential of using a frozen language model in VLP. In this paper, we investigate the frozen language model in medical VLP. ### _Medical Vision-Language Pre-training_ Research in medical VLP is limited due to the complexity of the medical reports and the scarcity of large-scale medical image-text datasets. ConVIRT [1] pioneered VLP within the CXR domain, leveraging a bidirectional contrastive loss to align image-text pairs. GLoRIA [3] proposed a global-local VLP technique that seeks alignment between image regions and individual text tokens, fostering enhanced representation learning. MGCA [4] adopted a crafted prototype-level alignment strategy to align semantic relationships between CXR images and their associated reports. Notably, these methodologies [3, 4] attempt to align comprehensive medical reports to high-level visual features, while fragmenting the report into word-level tokens for local alignment. However, this token-level alignment might compromise the medical context, potentially leading to misalignments. For instance, within the 'Impressions' section, terms like 'compatible' or 'acute' lack direct visual correlates, rendering local alignment potentially ambiguous. While MedKLIP [29] and KAD [30] utilize domain-specific knowledge from external datasets to enhance textual information extraction, one might argue about their dependence on these external resources for vision-language alignment. Furthermore, Med-UniC [31], which integrates multi-lingual medical data for VLP, aims to analyze and mitigate language biases originating from different communities in the realm of medical VLP. MRM [32] shifts from the alignment task to a reconstruction task that uses masked visual and textual tokens. These studies enhance the performance of medical VLP in various ways; however, they do not account for a clear distinction between the descriptive and conclusive segments within a medical report. Moreover, the potential similarity inherent in medical data is overlooked during the execution of vision-language alignment, which in turn adversely affects the cross-modal representation learning. 
## III Method ### _Overview_ Our framework overview is illustrated in Fig. 2. It is composed of a ResNet50 [33] vision encoder, denoted by \(E_{v}\), followed by a hierarchical aggregation block \(\mathcal{P}(\cdot)\); and a BioClinicalBERT [34] text encoder denoted by \(E_{t}\) (as described in Sec. III-C). During the training stage, only \(E_{v}\) and \(\mathcal{P}(\cdot)\) are trained, while \(E_{t}\) is frozen to prevent disturbance from the language side and also minimize the training expense. The proposed framework aims to both optimize the vision-to-vision (V-V) branch between visual features of different views and the vision-to-language (V-L) branch between visual and textual features. Given a set of \(N\) image-report pairs \(\mathcal{X}=\{(x_{v,i},x_{t,i})\}_{i=1}^{N}\), where \((x_{v,i},x_{t,i})\) denote the paired image and text, each report \(x_{t,i}\) includes two parts: 'Findings' and 'Impressions', such that \(x_{t,i}\) can be split into \(x_{F,i}\) and \(x_{I,i}\). Therefore, each image-report pair can be further represented by \((x_{v,i},x_{F,i},x_{I,i})\). For simplicity, unless needed, we omit the subscript \(i\) in later text. ### _Semantic Difference in Hierarchical Medical Report_ As illustrated in Tab. I, the 'Findings' section presents descriptive content, while the 'Impressions' section provides a conclusive remark. To graphically observe the semantic distinctions between the two sections, we randomly sample 2,000 medical reports from the MIMIC-CXR dataset [35] and extract the text embedding of the 'Findings' and 'Impressions' sections following [4] using a pre-trained text encoder. Fig. 3 visualizes the text embeddings using the first two components after applying the principal component analysis (PCA) [36]. As depicted in Fig. 3, the text embeddings corresponding to the 'Findings' and 'Impressions' sections demonstrate discernible differences, thereby signaling divergent semantic content between these two sections. Although subtle overlap is observed, this can be attributed to the fact that both sections originate from the same medical report. In light of this, aligning images with entire medical reports runs the risk of introducing misalignment errors. To address this challenge, we devise a sophisticated hierarchical alignment strategy that both mitigates the issue of misalignment and more effectively capitalizes on the inherent hierarchical structure of medical reports. Fig. 2: Overview of the proposed framework. (a) Each image is augmented to two different views (\(x_{v}^{1},x_{v}^{2}\)) and provided as input to a vision-to-language (V-L) alignment branch and a vision-to-vision (V-V) alignment branch. (b) The V-V branch aligns the visual features of two augmented views. MHSA indicates the multi-head self-attention mechanism. \(p_{v}\) denotes a non-linear projector for visual features. CLS indicates the special token to aggregate multi-level visual features. The dashed black line indicates the feature channel dropping mechanism. (d) The V-L branch aligns different levels of visual features to the text features from the Findings and Impressions sections of the report. \(p_{t}\) denotes a non-linear projector for textual features. The [CLS] token serves to aggregate multi-level visual features to \(z_{v,m}\) and facilitate hierarchical alignment between visual features and \(z_{t,F}\). \(\mathcal{E}_{t}\) denotes a frozen pre-trained language model. 
\begin{table} \begin{tabular}{p{284.5pt}} **INDICATION:**_Patient Name_ with cough / acute process?** \\ \hline **FindINGS:** Single frontal view of the chest provided. \\ The cardiomediastinal silhouette is normal. \\ No free air below the right hemidiaphragm is seen. \\ \hline **IMPRESSIONS:** No acute intrathoracic process. \\ \end{tabular} \end{table} TABLE I: An exemplar CXR report. ### _Hierarchical Vision-Language Alignment_ To align visual features separately to the 'Findings' and 'Impressions' of a medical report, we extract multi-level features from the medical image and develop the hierarchical alignment scheme, which includes the hierarchical feature aggregation and intermediate channel dropping mechanism to make the learning more effective and efficient. **Hierarchical feature aggregation** As illustrated in Fig.2, the hierarchical feature aggregation mechanism, \(\mathcal{P}(\cdot)\), employs multi-head attention (MHSA) [37], positional embedding [37], and a [CLS] token to aggregate diverse feature levels. This architecture is micticulously devised to enhance the understanding of the distinct sections, 'Findings' and 'Impression', in medical reports. The 'Findings' section predominantly reflects low-level image features, e.g., intensity, texture and size, while the 'Impression' encapsulates image semantics, e.g., disease and symptom manifestations [35, 38]. In the proposed method, multi-level visual features, extracted from the output of each ResBlock [33] in the vision encoder, are designated as middle-level features, aligning with the detailed nature of the 'Findings' section. In contrast, the terminal output from the vision encoder, denoted as \(E_{v}\), is treated as the high-level visual feature \(z_{v,h}\), aligning with the summarization text in the 'Impressions' section. We use \((h,w,c)\) to denote the size of visual features from different levels, where \(h\) and \(w\) represent the height and width of the feature map, and \(c\) denotes the number of channels. To standardize the shape of these features, we pool and flatten all feature maps to a size of \((16,16,c)\). Then, we treat these pooled and flattened features as a sequence, which has a shape of \((256,c)\). \(c\) is also the number of tokens of a sequence. To aggregate the sequence visual features, we concatenate a [CLS] token and a positional embedding to the features following [39]. The multi-level features have channel sizes of 256, 512, 1024 and 2048 from the shallower to the deeper blocks, respectively. We introduce an intermediate feature channel dropping scheme to squeeze the numbers of feature channels for them, detailed in the next subsection. Afterward, we concatenate all flattened feature maps with the positional embedding and the [CLS] token. They are embedded using MHSA similar to that in a transformer [37]. To aggregate the sequence, only the [CLS] token embedding from the MHSA output layer is utilized for later alignment. **Intermediate feature channel dropping** If the multi-level (middle-level) features are used directly for subsequent alignment via MHSA, there might be a significant increase in computational cost. This is because the computational complexity is \(\mathcal{O}(N^{2})\) for MHSA [37]. Here, \(N\) represents the total length of the visual feature sequence from all ResBlocks [33]. To alleviate this issue, We propose to randomly drop the feature channels of these middle-level features. The drop ratio is set to \(0.85\) for the shallowest level and \(0.9\) for the other levels. 
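A minimal PyTorch sketch of one possible realization of the aggregation block \(\mathcal{P}(\cdot)\) described above is given below: each ResBlock output is pooled to \(16\times 16\), a fraction of its channels is randomly dropped, the surviving channels are flattened into 256-dimensional tokens, a [CLS] token and positional embeddings are added, and multi-head self-attention is applied, with the [CLS] output taken as \(z_{v,m}\). The token dimension, number of attention heads and parameter initialization are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HierarchicalAggregation(nn.Module):
    """Sketch of P(.): pooled ResBlock outputs -> channel dropping -> tokens
    -> [CLS] + positional embedding -> MHSA -> [CLS] embedding as z_{v,m}."""
    def __init__(self, channels=(256, 512, 1024, 2048),
                 drop_ratios=(0.85, 0.9, 0.9, 0.9), dim=256, heads=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((16, 16))
        self.keep = [int(c * (1.0 - r)) for c, r in zip(channels, drop_ratios)]
        n_tokens = 1 + sum(self.keep)
        self.cls = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.pos = nn.Parameter(torch.randn(1, n_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):                       # feats: list of (B, C_l, H_l, W_l)
        tokens = []
        for f, k in zip(feats, self.keep):
            f = self.pool(f).flatten(2)             # (B, C_l, 256): one token per channel
            idx = torch.randperm(f.size(1), device=f.device)[:k]
            tokens.append(f[:, idx])                # intermediate channel dropping
        seq = torch.cat(tokens, dim=1)              # (B, sum(keep), 256)
        cls = self.cls.expand(seq.size(0), -1, -1)
        seq = torch.cat([cls, seq], dim=1) + self.pos
        out, _ = self.attn(seq, seq, seq)           # multi-head self-attention
        return out[:, 0]                            # [CLS] embedding -> z_{v,m}

# Toy usage with ResNet50-like feature maps for a 224x224 input:
feats = [torch.randn(2, c, s, s) for c, s in zip((256, 512, 1024, 2048), (56, 28, 14, 7))]
print(HierarchicalAggregation()(feats).shape)       # torch.Size([2, 256])
```

In the full model this [CLS] embedding is further passed through the non-linear projector \(p_{v}\) before the contrastive alignment.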
Above we have the multi-level visual features ready through the hierarchical feature aggregation module. Below we introduce how to get the high-level visual features, as well as the descriptive and conclusive textual features. We organize the alignment in two branches, i.e., V-L and V-V branches. **V-L branch** In the V-L branch, as depicted in Fig. 2c, given the 'Findings' and 'Impressions' sections of the text report, the text encoder \(E_{t}\) is utilized to extract text embedding \((z_{t,F},z_{t,I})\) correspondingly. Given the medical image, the last layer output of the vision encoder \(E_{v}\) is considered as the high-level visual feature \(z_{v,h}\), and the features from intermediate layers of \(E_{v}\) are aggregated and projected to the multi-level visual feature \(z_{v,m}\) using the above mentioned hierarchical aggregation block \(\mathcal{P}(\cdot)\). Hence, one image-report pair \((x_{v},x_{F},x_{I})\) is embedded to \((z_{v,m},z_{v,h},z_{t,F},z_{t,I})\), where the subscripts \(v\) and \(t\) denote visual and textual respectively. **V-V branch** In the V-V branch, as illustrated in Fig. 2b, we augment each medical image into two different views, \(x_{v}^{1}\) and \(x_{v}^{2}\), to promote the learning of invariant image features. The augmentation is implemented following the methods in [3, 4]. Then, the two augmented images are embedded to \((z_{v,m}^{1},z_{v,h}^{1},z_{v,m}^{2},z_{v,h}^{2})\) via \(E_{v}\). Finally, each image-report pair \((x_{v},x_{F},x_{I})\) is embedded into \((z_{v,m}^{1},z_{v,h}^{2},z_{v,m}^{2},z_{v,h}^{2},z_{t,F},z_{t,I})\). Next, we perform hierarchical alignment between these visual and textual features. **Hierarchical feature alignment** After obtaining all visual and textual features, we perform the hierarchical alignment by using the clinical-informed contrastive loss (CICL), which includes three parts: * In the V-L branch, we minimize the CICL between \((z_{v,h}^{1}\) and \(z_{t,I})\), \((z_{v,h}^{2}\) and \(z_{t,I})\) to align the high-level visual feature from the image and the conclusive textual feature from the report. * In the V-L branch, we minimize the CICL between \((z_{v,m}^{1}\) and \(z_{t,F})\), \((z_{v,m}^{2}\) and \(z_{t,F})\) to align the multi-level visual feature from the image and the descriptive textual feature from the report. Fig. 3: 2D PCA visualization of text embedding from ‘Findings’ and ‘Impression’. * In the V-V branch, we minimize the CICL between \((z^{1}_{v,m}\) and \(z^{2}_{v,m})\), \((z^{1}_{v,h}\) and \(z^{2}_{v,h})\) to align the visual features across multiple views. ### _Clinical-Informed Contrastive loss_ Contrastive loss is the most commonly used optimization strategy for VLP [1, 2, 3]. Given the visual and textual features, \(z_{v}\) and \(z_{t}\), the contrastive loss is defined as below: \[\mathcal{L}^{CL}=\sum_{i=1}^{B}\Bigg{(}-\log\frac{\exp\left(sim\left(z^{i}_{v},z^{j}_{t}\right)\right)}{\sum_{j=1}^{B}\exp\left(sim\left(z^{i}_{v},z^{j}_{t} \right)\right)}\Bigg{)}, \tag{1}\] where \(B\) denotes the size of a mini-batch, \(i,j\) denote the sample index in the mini-batch. Two trainable non-linear projectors \(p_{v}\) and \(p_{z}\) are utilized to map the visual embedding \(z^{i}_{v}\) and the text embedding \(z^{i}_{t}\) into the same latent space for the alignment. 
In our case, we use the \(\{z^{i}_{1},z^{j}_{2}\}\) to represent visual or textual embedded features: \[\{z^{1}_{1},z^{j}_{2}\}\in\{z^{1,i}_{v,m},z^{1,i}_{v,h},z^{2,i}_{ v,m},z^{2,i}_{v,h},z^{i}_{t,F},z^{i}_{t,I}\}\times\] \[\{z^{1,j}_{v,m},z^{1,j}_{v,h},z^{2,j}_{v,m},z^{2,j}_{v,h},z^{j}_ {t,F},z^{j}_{t,I}\},\] The similarity of two components \(sim(z^{1}_{1},z^{j}_{2})\) is computed as, \[sim(z^{i}_{1},z^{j}_{2})=p_{1}(z_{1})^{T}p_{2}(z_{2}). \tag{2}\] where \(\{p_{1},p_{2}\}\in\{p_{v},p_{z}\}^{2}\), representing visual or text projectors. In contrastive learning, sample \(i\) and sample \(j\) denote two patients. Although they are two different patients, they may present similar symptoms and in this case, it implies the textual descriptions of their reports will exhibit higher similarity, so as are their CXR images. Conventional approaches [1, 3, 4, 23] do not consider this factor in the alignment and assume that two different patients have dissimilar visual and textual features. Our idea is to leverage the clinical information in the medical report to adaptively adjust the contrastive loss. To do so, we first compute an empirical report correlation matrix \(R\in\mathbb{R}^{B\times B}\): \[R_{i,j}=\frac{\sum_{k=1}^{B}(z^{i,(k)}_{t}-\overline{z}^{i}_{t})(z^{j,(k)}_{t }-\overline{z}^{j}_{t})}{\sqrt{\sum_{k=1}^{B}(z^{i,(k)}_{t}-\overline{z}^{i}_ {t})^{2}}\sqrt{\sum_{k=1}^{B}(z^{j,(k)}_{t}-\overline{z}^{j}_{t})^{2}}}, \tag{3}\] where \(z^{i,(k)}_{t}\) denotes the \(k\)th element of the latent code \(z^{i}_{t,F}\) or \(z^{i}_{t,I}\). \(\overline{z}^{i}_{t}=\sum_{k=1}^{B}z^{i,(k)}_{t}/B\) is the averaged vector. To avoid ill-defined empirical correlation matrix and centralize the correlation coefficients, we smooth \(R_{i,j}\) as \[R^{i,j}_{\text{smooth}}=\begin{cases}1&\text{if }i=j\\ 1-\exp^{(-\lambda R^{i,j})}&\text{otherwise}\end{cases}, \tag{4}\] where \(\lambda\) denotes a regularization coefficient, empirically set to 0.2. This hyperparameter is extensive ablated in Sec. IV-D3 and Fig. 4. The key component of \(\mathcal{L}^{CICL}\) consists of approximating the smoothed text correlation based on the similarity of visual and textual components. We introduce the clinical-informed contrastive loss as \[\mathcal{L}^{CICL}=\sum_{i=1}^{B}\sum_{j=1}^{B}\mathcal{L}_{CrossEntropy}(sim (z^{i}_{1},z^{j}_{2}),\ R^{i,j}_{\text{smooth}}).\] ### _Total Loss_ As mentioned in Sec. III-C, \(\mathcal{L}^{CICL}\) is applied different terms in the V-V and V-L branches, leading to the final loss function, \[\mathcal{L}_{total}=\mathcal{L}^{CICL}(z^{1}_{v,h},z_{I})+ \mathcal{L}^{CICL}(z^{1}_{v,m},z_{F})\] \[+\mathcal{L}^{CICL}(z^{2}_{v,h},z_{I})+\mathcal{L}^{CICL}(z^{2}_{v,m},z_{F})\] \[+\mathcal{L}^{CICL}(z^{1}_{v,h},z^{2}_{v,h})+\mathcal{L}^{CICL}(z^{ 1}_{v,m},z^{2}_{v,m}).\] ## IV Experiments and Analysis ### _Vision-Language Pre-training Configuration_ **Dataset** Our method, IMITATE, is pre-trained on the MIMIC-CXR dataset [38, 40]. The preprocessing of this dataset adheres to practice described in [1, 3, 4], including image resizing, pixel value normalization, and text tokenization. To refine the dataset further, lateral views and reports comprising fewer than three tokens were excluded, resulting in a pre-training dataset with \(213,384\) image-text pairings for MIMIC-CXR [40]. **Implementation** The original CXR images from the MIMIC-CXR dataset [40] are resized to \(256\times 256\) and randomly cropped to \(224\times 224\), following the procedure in [1, 3, 4]. 
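A sketch of this loss in PyTorch is shown below: the smoothed report correlation of Eqs. (3)-(4) serves as a soft target for each row of similarity logits, combined through a soft-label cross-entropy. The temperature, the clamping of negative correlations and the row normalization of the target are assumptions, since the formula above leaves the exact normalization of the cross-entropy open; the inputs are assumed to be features already passed through the projectors \(p_{v}\) and \(p_{t}\).

```python
import torch
import torch.nn.functional as F

def smoothed_report_correlation(z_t, lam=0.2):
    """Eqs. (3)-(4): Pearson correlation between report embeddings in the batch,
    smoothed as 1 - exp(-lam * R) off-diagonal and set to 1 on the diagonal."""
    with torch.no_grad():
        z = z_t - z_t.mean(dim=1, keepdim=True)
        z = z / z.norm(dim=1, keepdim=True).clamp_min(1e-8)
        R = z @ z.t()                               # (B, B) correlation matrix
        R_s = 1.0 - torch.exp(-lam * R)
        R_s.fill_diagonal_(1.0)
    return R_s

def cicl(z1, z2, z_report, lam=0.2, tau=0.07):
    """Clinical-informed contrastive loss (sketch): soft-label cross-entropy
    between row-softmaxed similarity logits and the smoothed report correlation.
    tau is an assumed temperature, not a value from the paper."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # sim(z1_i, z2_j)
    target = smoothed_report_correlation(z_report, lam).clamp_min(0.0)
    target = target / target.sum(dim=1, keepdim=True)
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Toy usage with random projected features and report embeddings:
B, D = 8, 128
z1, z2, z_rep = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
print(cicl(z1, z2, z_rep))
```

The total objective of the next subsection simply sums this loss over the six feature pairings listed in Sec. III-C.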
All images are normalized to the range \([0,1]\). For data augmentation during pre-training, we apply horizontal flip, random rotation in the range \([0^{\circ},180^{\circ}]\), and auto contrast using the PyTorch vision library\({}^{1}\). We employ BioClinicalBERT [34] to derive text embeddings from medical reports. To conduct a fair comparison with existing methods [1, 3, 4, 29, 30], we utilize the same ResNet50 [33] backbone as our vision encoder. Adhering to practices outlined in [3, 4], the proposed model is pre-trained for 50 epochs. This training utilizes an early-stopping strategy and is conducted on 16 A100-40GB GPUs in parallel, each handling a batch size of 128. For optimization, we employ the AdamW optimizer, configuring a learning rate of \(4e^{-5}\) and a weight decay of \(5e^{-2}\). Throughout this phase, a combination of a linear warm-up and a cosine annealing scheduler is employed [41]. Footnote 1: [https://pytorch.org/vision/stable/transforms.html](https://pytorch.org/vision/stable/transforms.html)

### _Downstream Tasks_

We evaluate our framework on five downstream tasks:

**Medical Image Linear Classification** This task is executed on the CheXpert [42], RSNA [43], and COVIDx [44] datasets. In line with prior research [1, 3, 4], we restrict updates to the parameters of a randomly initialized linear layer designated for classification, keeping the pre-trained vision backbone frozen. Our evaluation metrics comprise the area under the ROC curve (AUC) for CheXpert and RSNA, and accuracy (ACC) for COVIDx, as endorsed by [3, 4]. We freeze our visual backbone and perform fine-tuning only on the final linear layer for 50 epochs with early stopping. The learning rate of 5e-4 is maintained, and a default batch size of 8 is used. Our approach also involves utilizing the AdamW optimizer to manage the learning rate schedule, incorporating a \(\beta_{1}\) value of 0.9, a \(\beta_{2}\) value of 0.999, and a weight decay rate of 1e-6.

**Medical Image Fine-tuned Classification** Following [29, 30], we employ the ChestX-ray14 dataset for our experiments, as originally presented in [45]. This dataset encompasses 112,120 frontal-view X-ray images, obtained from a cohort of 30,805 patients. These images were curated between the years 1992 and 2015 by the National Institutes of Health and were annotated for 14 prevalent diseases. To maintain a consistent evaluation criterion with earlier methods, we utilize the official test set partition as outlined in [29, 30, 45]. During the fine-tuning process, we update all model parameters, including both the backbone structures and the linear classifier. All images are resized to \(256\times 256\) resolution and undergo data augmentation as suggested in [30]. For the optimization, we employ the AdamW optimizer with a learning rate of \(1\times 10^{-4}\) and a batch size of 64. The number of epochs is set to 50.

**Medical Image Semantic Segmentation** This task is performed on the RSNA [43] and the SIIM [46] datasets, following the data preprocessing in [3, 4]. Similar to [3, 4], the U-Net [47] fine-tuning settings are adopted for segmentation. All pre-trained models are used as frozen encoders, and only the decoders of the U-Net are updated during the fine-tuning. The segmentation performance is evaluated using Dice scores following [3, 4, 29]. Following the preprocessing steps outlined in [3, 4], to generate the segmentation mask for pneumonia regions, we resize all images and masks to \(512\times 512\), applying the same data augmentation techniques as in [4].
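As described above, the pre-trained vision encoder is kept frozen and only the U-Net decoder is updated during segmentation fine-tuning. A minimal sketch of that setup is shown below; the `.encoder`/`.decoder` attribute names follow the segmentation_models_pytorch convention and are an assumption, and the optimizer settings anticipate the values given in the next paragraph.

```python
import torch

def freeze_encoder_and_build_optimizer(unet, lr=5e-4, weight_decay=1e-6):
    """Freeze the pre-trained encoder of a U-Net-style model and optimize only
    the remaining (decoder and head) parameters."""
    for p in unet.encoder.parameters():
        p.requires_grad = False
    unet.encoder.eval()  # also keep BatchNorm statistics of the frozen encoder fixed
    trainable = [p for p in unet.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr, weight_decay=weight_decay)
```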
For fine-tuning, we use the AdamW [48] optimizer with a learning rate of \(5\times 10^{-4}\) and weight decay of \(1\times 10^{-6}\), and optimize the segmentation model using the combined loss \(\alpha\times\)FocalLoss + DiceLoss with the coefficient \(\alpha\) set to 10 [4]. We fine-tune the segmentation task for 50 epochs and stop early if the loss does not decrease on the validation set for 10 steps. We use a batch size of 16 for RSNA segmentation and 8 for SIIM segmentation.

**Medical Image Object Detection** We conduct pneumonia detection on the RSNA dataset [43] and foreign object detection on the Object-CXR dataset [49], adhering to the preprocessing standards outlined by [4]. Consistent with [4], we employ YOLOv3 [50] as the detection framework. Within this architecture, our pre-trained vision encoder acts as the backbone, and during fine-tuning, only the detection head is optimized. Evaluation metrics for the detection task are based on the Mean Average Precision (mAP) with IoU thresholds spanning from 0.4 to 0.75. Following [4], we normalize all pixel intensities of the images in RSNA Pneumonia [43] to the range [0,1] and do not apply data augmentation during the fine-tuning stage for a fair comparison. For all data fractions, the batch size is 16. We choose AdamW [48] as the optimizer with the learning rate set to \(5\times 10^{-4}\) and a weight decay of \(1\times 10^{-6}\). The detection model is trained for 50 epochs, and early stopping is applied when the validation loss does not decrease for 10 steps. Other details follow [4].

**Medical Image Zero-shot Image Classification** Following [29, 30], we execute this task utilizing the RSNA [43] and SIIM [46] datasets. To fairly compare with previous methods, we adopt the official test set split from [29, 30]. To alleviate potential biases stemming from human-crafted prompts, our positive prompts are structured as '_disease_' and negative prompts as 'No _disease_'. The original image undergoes a two-step process. Firstly, it is resized to dimensions of \(256\times 256\) and then center cropped to \(224\times 224\). Subsequently, all pixel values are normalized within the range of \([0,1]\), following [29, 30]. The resulting image is then passed through the image encoder to generate an image embedding. Concurrently, the prompts are fed into the text encoder to obtain text embeddings. To evaluate the classification, we measure the cosine similarity between the image and text embeddings for each prompt associated with a specific class. Our results are reported as the macro average of AUC, F1, and ACC scores across the spectrum of all diseases. All data split information and train/valid/test set details are in Tab. II. For all downstream tasks, except zero-shot classification, we train with \(1\%,10\%,100\%\) of the training set.

### _Results_

In this section, we evaluate the performance of our method on five medical-image downstream tasks, in comparison to 8 SOTA medical VLP methods.

#### Iv-C1 Medical Image Linear Classification

To assess the quality of the visual representation derived from IMITATE, we employ linear classification as described in [4, 51, 52]. Our evaluations span three CXR image datasets: CheXpert [42], RSNA [43], and COVIDx [53]. Tab. III reports the results achieved in the supervised image linear classification task [3, 4]. Evidently, IMITATE consistently outperforms competing methods across all three datasets and their respective data fractions.
It is worth noting that despite MedKLIP [29] incorporating extra annotated data with disease-level annotations during its VLP stage, IMITATE continually outperforms MedKLIP [29] in all datasets and experimental settings. This underscores the potency of IMITATE in enhancing disease prediction. \begin{table} \begin{tabular}{c c c c c c} Task & \multicolumn{2}{c}{Dataset} & \multicolumn{2}{c}{Split} & \multicolumn{2}{c}{Train} & Valid & Test \\ \hline Linear & CheXpert [42] & [42] & 186,027 & 5,000 & 202 \\ Classification & RSNA [43] & [4, 43] & 16.010 & 5,337 & 5,337 \\ & COVIDx [44] & [4, 44] & 23988 & 5998 & 400 \\ \hline Semantic & RSNA [43] & [3, 4] & 16,010 & 5,337 & 5,337 \\ Segmentation & SIIM [46] & [3, 4] & 8,433 & 1,807 & 1,807 \\ \hline Object & RSNA [43] & [3, 4] & 16,010 & 5,337 & 5,337 \\ Detection & Object-CXR [49] & [4] & 6,400 & 1,600 & 1,000 \\ \hline Fine-grained & ChestX-ray14 [45] & [30] & 77,872 & 8,652 & 25,956 \\ Classification & & & & & \\ \hline Zero-shot & RSNA [43] & [29] & / & / & 5,337 \\ Classification & SIIM [46] & [29] & / & / & 1,807 \\ \hline \end{tabular} \end{table} TABLE II: Data Split Details, ‘/’ indicates that no training/validation data is required in the zero-shot classification task. #### Iv-C2 Medical Image Fine-tuned Classification In a comprehensive evaluation of various medical VLP methods on the ChestX-ray14 dataset [45], we evaluate the performance of our proposed approach, IMITATE, in fine-grained classification tasks and present the results in Tab. IV. The assessment metrics included AUC scores for 14 individual diseases and a macro-averaged mean AUC. IMITATE consistently excels across all evaluated subsets of training data, specifically 1%, 10%, and 100%. This superior performance is perhaps due to its innovative VLP strategy: hierarchical alignment with different sections of medical reports. This feature allows IMITATE to learn multi-level knowledge during pre-training, thereby considerably enhancing its effectiveness in fine-grained classification tasks. Notably, even when compared with methods like MedKLIP [29] and KAD [30], which use additional annotated data for pre-training, IMITATE maintains a significant lead in both the mean and disease-specific AUC scores. As the fraction of training data increases, all methods generally improve. However, IMITATE consistently ranks highest, emphasizing its potential for more accurate disease diagnosis. #### Iv-C3 Semantic Segmentation and Object Detection As demonstrated in Tab. V, IMITATE consistently exhibits superior performance compared to all SOTA methods across various data fractions in all four tasks. Remarkably, IMITATE achieves a Dice score of 70.5% on RSNA segmentation with only 1% data fine-tuning, surpassing the performance of all other SOTA methods fine-tuned on 100% data except for MedKLIP [29]. Moreover, when fine-tuning with just 1% of data, IMITATE achieves a 3.9% mAP on the Object-CXR dataset. In contrast,other methods struggle to even touch a 1% mAP under the same data fraction. The remarkable enhancements achieved by IMITATE across diverse downstream tasks underscore the advantages of leveraging hierarchical alignment with structured medical reports during pre-training. This approach yields more informed and general medical representations that are better suited for a wide range of downstream applications beyond high-level tasks. 
#### Iv-C4 Zero-shot Image Classification

In order to assess the efficacy of VLP in establishing connections between vision and language, we conduct zero-shot classification experiments on the RSNA and SIIM datasets [43, 46]. The results for both datasets are presented in Tab. VI. IMITATE surpasses other SOTA methods on all three averaged metrics and performs well on both datasets. This outcome underscores the efficacy of IMITATE in the vision-language task.

### _Ablation Studies_

#### Iv-D1 Improvement of Hierarchical Alignment

We first investigate the effect of hierarchical alignment on pre-training by conducting pre-training with various loss combinations. As indicated in Tab. VII, aligning only high-level features in the V-V or V-L branch leads to poor performance, while IMITATE consistently improves results across all four datasets. Additionally, we observe that hierarchical alignment in the V-L branch produces better outcomes than hierarchical alignment in the V-V branch. This suggests that hierarchical alignment is more advantageous for vision-language than for vision-vision contrasting. Combining all losses yields consistently superior results, indicating that joint hierarchical vision-vision and vision-language alignments are beneficial for VLP.

#### Iv-D2 Alignment with different Parts of Medical Reports

After showing the importance of hierarchical vision-language alignment, we experiment with structured reports to study the effect of report hierarchy on hierarchical alignment. The results of IMITATE aligned with different parts of reports during pre-training are presented in Tab. VIII. Notably, pre-training IMITATE with hierarchical medical reports yields the highest performance across the four tasks. This outcome underscores the efficacy of aligning with the various report segments according to their inherent structure. On the contrary, the lowest performance is observed when aligning with the reversed target, i.e., aligning the high-level visual features with the 'Findings' section and the multi-level features with the 'Impression' section. Likewise, aligning solely with the 'Impression' section, or concatenating 'Findings' and 'Impressions' into a single alignment target, does not show any significant improvement, likely due to the ambiguity stemming from the absence of the hierarchical structure of the medical reports.
#### Iv-D3 Impact of Clinical-Informed Contrastive Loss In this section, we investigate the impact of \(\mathcal{L}^{CICL}\) and different \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{CheXpert (AUC)} & \multicolumn{4}{c}{RSNA (AUC)} & \multicolumn{4}{c}{COVIDx (ACC)} \\ Method & 1\% & 10\% & 100\% & 1\% & 10\% & 100\% & 1\% & 100\% \\ \hline Random Init & 56.1 & 62.6 & 65.7 & 58.9 & 69.4 & 74.1 & 50.5 & 60.3 & 70.0 \\ ImageNet Init & 74.4 & 79.7 & 81.4 & 74.9 & 74.5 & 76.3 & 64.8 & 78.8 & 86.3 \\ \hline ConVIRT [1] & 85.9 & 86.8 & 87.3 & 77.4 & 80.1 & 81.3 & 72.5 & 82.5 & 92.0 \\ GLoRIA [3] & 86.6 & 87.8 & 88.1 & 86.1 & 88.0 & 88.6 & 67.3 & 77.8 & 89.0 \\ GLoRIA-MIMIC [3] & 87.1 & 88.7 & 88.0 & 87.0 & 89.4 & 90.2 & 66.5 & 80.5 & 88.8 \\ MGCA [4] & 87.6 & 88.0 & 88.2 & 88.6 & 89.1 & 89.9 & 72.0 & 83.5 & 90.5 \\ MRM [32] & 88.5 & 88.5 & 88.7 & 91.3 & 92.7 & 93.3 & 66.9 & 79.3 & 90.8 \\ MedKLIP\({}^{*}\)[54] & 86.2 & 86.5 & 87.7 & 87.3 & 88.0 & 89.3 & 74.5 & 85.2 & 90.3 \\ \hline **IMITATE** & **89.1** & **89.5** & **89.7** & **91.7** & **92.9** & **93.5** & **76.8** & **87.6** & **93.1** \\ \hline \hline \end{tabular} \end{table} TABLE III: Linear classification results for CheXpert, RSNA, and COVIDx datasets with 1%, 10%, and 100% training data. The best results are highlighted in bold. Methods with + use extra annotated data for pre-training. smooth kernels with \(\lambda\) on the effectiveness of IMITATE. The outcomes are presented in Tab. IX. Clinical-Informed Contrastive LossWe observe a significant reduction of performance when using \(\mathcal{L}^{CL}\) instead of \(\mathcal{L}^{CICL}\) in contrastive learning. This indicates that the clinical prior is a crucial component when using \(\mathcal{L}^{CICL}\). Smooth KernelFurthermore, we evaluate the results for various smooth kernels in Eq. (4) and find that the smoothed Exponential kernel outperforms the others, as shown in Tab. IX. The Gaussian and Laplacian kernels convert negative correlation coefficients to positive values, which can disrupt prior knowledge. The Sigmoid kernel preserves the coefficient range in \([-1,1]\) but may lead to strong penalization during pre-training, resulting in substandard performance. Gaussian, Laplacian, and Sigmoid kernels all exhibited poorer results than the smoothed Exponential kernel (Eq. (4)), which shrinks the coefficient range but does not convert negative values to positive ones, thereby preserving most prior knowledge and leading to superior performance. Hyperparameter Sensitivities AnalysisWe evaluate the sensitivity of pre-training to different values of \(\lambda\) on various downstream tasks. As shown in Fig. 4, all pre-trained models with \(\lambda\leq 0.4\) outperform the best baseline on three downstream tasks, while \(\lambda\geq 0.5\) led to worse performance. 
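For reference, the kernels compared in this ablation can be written as simple functions of the report correlation \(R\) and the coefficient \(\lambda\). The exact Gaussian, Laplacian, and sigmoid forms are not spelled out in the text, so the ones below are plausible stand-ins for illustration; only the smoothed exponential follows Eq. (4).

```python
import numpy as np

def smoothing_kernels(R, lam=0.2):
    """Candidate kernels for smoothing the report correlation R (values in [-1, 1])."""
    return {
        "smoothed exponential": 1.0 - np.exp(-lam * R),        # Eq. (4): keeps the sign of R
        "gaussian":  np.exp(-(R ** 2) / (2.0 * lam ** 2)),     # always positive
        "laplacian": np.exp(-np.abs(R) / lam),                 # always positive
        "sigmoid":   np.tanh(lam * R),                         # keeps sign, stays in (-1, 1)
    }

R = np.linspace(-1.0, 1.0, 5)
for name, values in smoothing_kernels(R).items():
    print(f"{name:>20}: {np.round(values, 3)}")
```

Evaluating these on a grid of correlations makes the qualitative argument above visible: the Gaussian and Laplacian kernels map negative correlations to positive values, whereas the smoothed exponential shrinks the range while preserving the sign.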
\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Semantic Segmentation (Dice)} & \multicolumn{4}{c}{Object Detection (mAP)} \\ \hline & & \multicolumn{3}{c}{SIMM} & \multicolumn{3}{c}{RSNA} & \multicolumn{3}{c}{RSNA} & \multicolumn{3}{c}{Object CXR} \\ Method & 1\% & 10\% & 100\% & 1\% & 10\% & 100\% & 1\% & 100\% & 1\% & 10\% & 100\% \\ \hline Random & 9.0 & 28.6 & 54.3 & 6.9 & 10.6 & 18.5 & 1.0 & 4.0 & 8.9 & - & 0.5 & 4.4 \\ ImageNet & 10.2 & 35.5 & 63.5 & 34.8 & 39.9 & 64.0 & 3.6 & 8.0 & 15.7 & - & 2.9 & 8.3 \\ \hline ConVIRT [1] & 25.0 & 43.2 & 59.9 & 55.0 & 67.4 & 67.5 & 8.2 & 15.6 & 17.9 & - & 8.6 & 15.9 \\ GLoRIA [3] & 35.8 & 46.9 & 63.4 & 59.3 & 67.5 & 67.8 & 9.8 & 14.8 & 18.8 & - & 10.6 & 15.6 \\ GLoRIA-MIMIC [3] & 37.4 & 57.1 & 64.0 & 60.3 & 68.7 & 68.3 & 11.6 & 16.1 & 24.8 & - & 8.90 & 16.6 \\ MGCA [4] & 49.7 & 59.3 & 64.2 & 63.0 & 68.3 & 69.8 & 12.9 & 16.8 & 24.9 & - & 12.1 & 19.2 \\ MedKILP\({}^{*}\)[54] & 50.2 & 60.8 & 63.9 & 66.2 & 69.4 & 71.9 & 8.9 & 16.3 & 24.5 & - & 7.1 & 11.6 \\ \hline **IMIATE** & **53.9** & **61.7** & **64.5** & **70.5** & **71.4** & **73.8** & **15.3** & **19.7** & **26.4** & **3.9** & **12.7** & **20.3** \\ \hline \hline \end{tabular} \end{table} TABLE V: Results of semantic segmentation on SIIM and RSNA datasets and object detection on RSNA and Object-CXR datasets. The best results for each setting are highlighted in bold, and the ’-’ denotes mAP values smaller than 1%. Methods with \(\star\) use extra annotated data for pre-training. \begin{table} \begin{tabular}{c|l|c|c c c c c c c c c c c c} \hline \hline & & & & & & & & & & & & & & & & & & & \\ Training data fraction & Method & & & & & & & & & & & & & & & & & & \\ \hline \multirow{4}{*}{\(1\%\)} & Random & 58.1 & 55.7 & 57.7 & 63.6 & 61.6 & 55.0 & 60.2 & 57.1 & 58.2 & 60.8 & 63.3 & 53.4 & 63.7 & 56.8 & 46.0 \\ & ImageNet & 63.5 & 66.2 & 64.2 & 72.1 & 57.0 & 59.0 & 58.5 & 60.0 & 62.6 & 62.4 & 66.8 & 61.5 & 70.7 & 63.1 & 64.5 \\ & CoVIRT & 64.9 & 66.0 & 78.2 & 78.9 & 61.1 & 59.6 & 65.5 & 60.8 & 68.8 & 65.7 & 60.7 & 65.8 & 68.0 & 62.7 & 46.6 \\ & GLoRIA & 59.7 & 59.7 & 56.7 & 74.1 & 64.6 & 55.9 & 55.7 & 61.1 & 60.7 & 66.5 & 66.9 & 55.0 & 55.8 & 59.2 & 43.6 \\ & BioVIL & 57.9 & 55.5 & 56.4 & 72.2 & 65.0 & 56.7 & 54.6 & 62.6 & 56.0 & 65.7 & 68.1 & 51.6 & 51.3 & 59.2 & 36.0 \\ & MedKILP\({}^{*}\) & 60.9 & 65.5 & 59.0 & 74.5 & 64.3 & 55.0 & 61.1 & 60.9 & 59.9 & 65.9 & 68.2 & 53.5 & 64.8 & 59.3 & 40.0 \\ & KAD\({}^{*}\) & 78.7 & 77.0 & 88.2 & 82.9 & 69.2 & 75.1 & 69.7 & 73.5 & 86.1 & 72.7 & 81.3 & 89.3 & 74.3 & 69.2 & 93.8 \\ & **IMITATE** & **80.2** & **79.6** & **89.5** & **83.2** & **70.5** & **77.4** & **71.9** & **74.8** & **76.7** & **73.7** & **82.4** & **90.5** & **75.8** & **71.3** & **94.6** \\ \hline \multirow{4}{*}{\(10\%\)} & Random & 69.1 & 68.2 & 76.6 & 74.6 & 67.4 & 62.3 & 58.0 & 63.6 & 72.8 & 67.8 & 78.0 & 64.7 & 71.5 & 65.3 & 77.1 \\ & ImageNet & 72.6 & 70.9 & 79.8 & 76.9 & 68.4 & 69.3 & 65.6 & 63.0 & 79.3 & 67.1 & 76.7 & 74.9 & 72.9 & 71.1 & 81.0 \\ & CoVIRT & 77.1 & 74.0 & 84.3 & 81.1 & 69.3 & 74.8 & 70.0 & 67.1 & 82.8 & 70.1 & 81.4 & 87.1 & 76.7 & 71.9 & 89.3 \\ & GLoRIA & 74.3 & 72.1 & 80.8 & 80.0 & 68.7 & 73.3 & 67.5 & 65.8 & 77.9 & 67.6 & 79.7 & 79.9 & 78.7 & 69.3 & 78.7 \\ & BioVIL & 72.7 & 70.3 & 78.5 & 79.0 & 66.6 & 71.8 & 67.1 & 66.5 & 76.7 & 68.4 & 79.9 & 76.1 & 74.8 & 65.3 & 76.3 \\ & MedKILP\({}^{*}\) & 74.8 & 72.9 & 80.2 & 79.3 & 69.8 & 71.9 & 68.1 & 66.6 & 79.6 & 69.6 & 81.1 & 79.5 & 75.6 & 71.3 & 81.9 \\ & KAD\({}^{*}\) & 80.7 & 
77.6 & 88.9 & 83.3 & 71.8 & 73.9 & 73.7 & 87.2 & 75.0 & 83.3 & 90.3 & 80.7 & 72.3 & 95.3 \\ & **IMITATE** & **82.2** & **78.8** & **90.3** & **84.6** & **73.2** & **80.6** & **73.8** & **75.3** & **88.8** & **76.7** & **84.3** & **91.5** & **82.6** & **73.9** & **97.4** \\ \hline \hline \multirow{4}{*}{\(100\%\)} & Random & 79.0 & 75.0 & 87.9 & 81.5 & 69.1 & 79.8 & 72.6 & 70.3 & 82.6 & 73.1 & 83.9 It is crucial to note that excessive values of \(\lambda\) can lead to bias due to the lack of control over the prior knowledge. Therefore, our framework's performance is stable for various downstream tasks when the strength of \(\lambda\) is constrained to a small range. This finding suggests that the strength of clinical prior knowledge should be controlled within a certain range since the reports' correlation should only be considered as a weak constraint. **IMITATE compared with unfrozen variants** Tab. III,V,V present the performance of all downstream tasks when using IMITATE pre-trained with a frozen language model. In this section, we sequentially unfreeze the last six layers of the language model to evaluate the effectiveness of the trainable language model on downstream tasks for four datasets. The outcomes are reported in Tab. X. We observed that the performance did not improve as we increased the number of unfrozen layers, while the training cost increased. This suggests that the trainable language model could be ablated to reduce training costs significantly. Furthermore, using a frozen language model can alleviate perturbations to visual feature learning from language embeddings. ### _Visualizing Qualitative Results_ To delve deeper into the learned visual knowledge from IMITATE, we utilized Grad-CAM [57] to produce saliency maps for CXR images derived from the model in its pre-trained state. We select two CXR images showcasing two prevalent diseases, _Edema_ and _Lung Opacity_. Notably, each of these images comes with ground truth annotation pinpointing the region of concern, as documented in [58]. As evident from Fig. 5, IMITATE boasts an impressive capability to accurately delineate the clinical regions of concern in the CXR images, outperforming its counterpart MGCA [4]. This is particularly noteworthy considering that IMITATE achieves this precision without relying on any external prompts or the need for additional model fine-tuning. ## V Conclusion and Discussion This study introduces a novel VLP framework that imitates the human understanding of paired image-text in a hierarchical manner. This framework, named IMITATE, aligns CXR images and hierarchical medical reports at multiple levels. IMITATE utilizes hierarchical alignment and different parts of medical reports to enhance image representation by incorporating hierarchical information from medical report structures. Notably, this operation requires no additional data or manual pre-processing. Moreover, we propose Clinical-Informed Contrastive Loss, which explicitly integrates clinical prior knowledge through smoothed medical report correlation. To best of our knowledge, IMITATE is the first framework to align hierarchical information from structured medical reports to multi-level visual features in medical images. 
Furthermore, we incorporate the clinical similarity into \begin{table} \begin{tabular}{c|c|c|c|c} \hline & CheXpert & SIIM & RSNA & ChestXray-14 \\ & AUC(\%) & Dice(\%) & mAP(\%) & AUC(\%) \\ & 1\% & 1\% & 1\% & 1\% \\ \hline Imp & 87.5 & 33.6 & 12.2 & 77.5 \\ Find\&Imp & 88.2 & 35.4 & 13.4 & 78.4 \\ reversed & 83.4 & 29.8 & 12.7 & 67.6 \\ IMITATE & **89.1** & **53.9** & **15.3** & **80.2** \\ \hline \end{tabular} \end{table} TABLE IX: Ablation of different smooth kernels. ’w/o \(\mathcal{L}^{CICL}\)’, indicates that only using the original contrastive loss [2] as Eq. (1) for VLP. \begin{table} \begin{tabular}{c|c|c|c|c} \hline & \begin{tabular}{c} TheNumber \\ Parameters (M) \\ \end{tabular} & \begin{tabular}{c} CheXpert \\ AUC(\%) \\ \end{tabular} & \begin{tabular}{c} SIIM \\ Doce(\%) \\ \end{tabular} & \begin{tabular}{c} RSNA \\ mAP(\%) \\ \end{tabular} & \begin{tabular}{c} ChestXray-14 \\ AUC(\%) \\ \end{tabular} \\ \hline \begin{tabular}{c} VWH \\ VPH,VLH \\ VPH,VLM \\ VPH,VLM \\ IMITATE \\ \end{tabular} & 88.4 & 35.4 & 8.5 & 67.2 \\ \begin{tabular}{c} VPH,VLH,VSM \\ 88.7 \\ \end{tabular} & 87.3 & 12.1 & 73.4 \\ VPH,VLH,VLM & 88.8 & 38.2 & 12.6 & 74.6 \\ IMITATE & **89.1** & **53.9** & **15.3** & **80.2** \\ \hline \end{tabular} \end{table} TABLE VIII: Ablation study of the different parts of reports for IMITATE. Find/Imp indicates the ‘Findings’ and ‘Impression’ part of medical reports. Find&Imp notes the concatenation of these two parts as one. ’reversed’ indicates switching two parts of reports for alignment. \begin{table} \begin{tabular}{c|c|c|c|c} \hline & \begin{tabular}{c} CheXpert \\ AUC(\%) \\ \end{tabular} & \begin{tabular}{c} SIIM \\ Doce(\%) \\ \end{tabular} & \begin{tabular}{c} RSNA \\ mAP(\%) \\ \end{tabular} & \begin{tabular}{c} ChestXray-14 \\ AUC(\%) \\ \end{tabular} \\ \hline \begin{tabular}{c} Imp \\ Find\&Imp \\ reversed \\ \end{tabular} & 87.5 & 33.6 & 12.2 & 77.5 \\ Find\&Imp & 88.2 & 35.4 & 13.4 & 78.4 \\ \begin{tabular}{c} FullyRefmp \\ reversed \\ \end{tabular} & 83.4 & 29.8 & 12.7 & 67.6 \\ IMITATE & **89.1** & **53.9** & **15.3** & **80.2** \\ \hline \end{tabular} \end{table} TABLE VIII: Ablation study of the different parts of reports for IMITATE. Find/Imp indicates the ‘Findings’ and ‘Impression’ part of medical reports. Find&Imp notes the concatenation of these two parts as one. ’reversed’ indicates switching two parts of reports for alignment. the contrastive loss. These contributions address a critical limitation in existing VLP approaches that ignore clinical similarity among patients. Furthermore, hierarchical alignment and \(\mathcal{L}^{CICL}\) provide more reasonable learning targets for the visual modality, resulting in significant improvements in the performances of all downstream tasks with a 50% reduction in trainable parameters compared to other SOTA methods. We believe that this framework will benefit the medical domain, as hierarchical medical report generation is a standard procedure without extra cost. Additionally, it will inspire the general VLP domain, as hierarchical information commonly exists worldwide, such as title and content, caption, and description, among others.
2301.11276
BayesSpeech: A Bayesian Transformer Network for Automatic Speech Recognition
Recent developments using End-to-End Deep Learning models have been shown to have near or better performance than state-of-the-art Recurrent Neural Networks (RNNs) on Automatic Speech Recognition tasks. These models tend to be lighter weight and require less training time than traditional RNN-based approaches. However, these models take a frequentist approach to weight training. In theory, network weights are drawn from a latent, intractable probability distribution. We introduce BayesSpeech for end-to-end Automatic Speech Recognition. BayesSpeech is a Bayesian Transformer Network where these intractable posteriors are learned through variational inference and the local reparameterization trick without recurrence. We show how the introduction of variance in the weights leads to faster training time and near state-of-the-art performance on LibriSpeech-960.
Will Rieger
2023-01-16T16:19:04Z
http://arxiv.org/abs/2301.11276v1
# BayesSpeech: A Bayesian Transformer Network for Automatic Speech Recognition

###### Abstract

Recent developments using End-to-End Deep Learning models have been shown to have near or better performance than state-of-the-art Recurrent Neural Networks (RNNs) on Automatic Speech Recognition tasks. These models tend to be lighter weight and require less training time than traditional RNN-based approaches. However, these models take a frequentist approach to weight training. In theory, network weights are drawn from a latent, intractable probability distribution. We introduce BayesSpeech for end-to-end Automatic Speech Recognition. BayesSpeech is a Bayesian Transformer Network where these intractable posteriors are learned through variational inference and the local reparameterization trick without recurrence. We show how the introduction of variance in the weights leads to faster training time and near state-of-the-art performance on LibriSpeech-960.

## 1 Introduction

In the majority of neural networks, randomness is usually introduced through perturbation of the input or by randomly removing nodes from the network (Hinton et al., 2012). There has been great success using these methods across a variety of domains, including Automatic Speech Recognition (Park et al., 2019). Models continue to evolve; however, data augmentation methods rarely take large leaps in terms of the features they can help express. Newer models keep growing larger and require incredible amounts of compute to properly train. We especially see this in the field of Automatic Speech Recognition. Newer models such as Jasper (Li et al., 2019), the Conformer (Gulati et al., 2020), LAS (Chan et al., 2016), and the Transformer (Vaswani et al., 2017) all require training for multiple days across multiple GPUs. Creating deeper models can certainly help attain better performance on the domain task. But what if we approach these models differently and try to leverage their probabilistic nature?
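The abstract above refers to learning the weight posteriors through variational inference with the local reparameterization trick. The sketch below shows a generic mean-field Bayesian linear layer in that style (PyTorch); the initializations, the Gaussian prior, and the deterministic bias are assumptions for illustration, not the actual BayesSpeech layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Mean-field Gaussian linear layer trained with the local reparameterization trick."""

    def __init__(self, in_features, out_features, prior_std=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(0.05 * torch.randn(in_features, out_features))
        self.w_rho = nn.Parameter(torch.full((in_features, out_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))  # bias kept deterministic for simplicity
        self.prior_std = prior_std

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)                 # positive posterior std
        act_mu = x @ self.w_mu + self.bias               # mean of the pre-activations
        act_var = (x ** 2) @ (w_sigma ** 2)              # variance of the pre-activations
        eps = torch.randn_like(act_mu)
        # Local reparameterization: sample the activations rather than the weights.
        return act_mu + (act_var + 1e-8).sqrt() * eps

    def kl(self):
        """KL(q(w) || p(w)) between the factorized Gaussian posterior and a N(0, prior_std^2)
        prior; added to the task loss when training with variational inference."""
        w_sigma = F.softplus(self.w_rho)
        return 0.5 * ((w_sigma ** 2 + self.w_mu ** 2) / self.prior_std ** 2
                      - 1.0
                      - 2.0 * torch.log(w_sigma / self.prior_std)).sum()
```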
2302.10438
Stability and retention force factor for binary-nanofluid sessile droplets on a inclined substrate
We investigate the retention force factor of sessile droplets of pure (ethanol) and binary (water-ethanol) fluids laden with alumina nanoparticles placed on a critically inclined substrate. It is observed that while the critical angle of an ethanol droplet increases with an increase in nanoparticle concentration, for water-ethanol binary droplets it reaches a plateau and decreases slightly after 0.6 wt.\% nanoparticle loading. The effect of composition and concentration of nanoparticles on the retention force factor is studied, and correlations are proposed for the retention force factor and critical angle for pure and binary droplets. Infrared images of evaporating droplets of pure and binary fluids reveal richer hydrothermal waves in droplets with nanoparticle loading than in droplets without loading, and these waves are more intense in pure ethanol droplets. On an inclined substrate, the body force caused the droplets to elongate more toward the receding side, which led to an earlier breakup of the droplet at the receding side. To the best of our knowledge, our study is the first attempt to investigate the retention force factor for droplets loaded with nanoparticles on an inclined substrate.
Pallavi Katre, Sayak Banerjee, Saravanan Balusamy, Kirti Chandra Sahu
2023-02-21T04:34:45Z
http://arxiv.org/abs/2302.10438v1
# Stability and retention force factor for binary-nanofluid sessile droplets on a inclined substrate ###### Abstract We investigate the retention force factor of sessile droplets of pure (ethanol) and binary (water-ethanol) fluids laden with alumina nanoparticles placed on a critically inclined substrate. It is observed that while the critical angle of an ethanol droplet increases with an increase in nanoparticles concentration, for water-ethanol binary droplets, it reaches to plateau and decreases slightly after 0.6 wt.% nanoparticle loading. The effect of composition and concentration of nanoparticles on the retention force factor is studied, and correlations are proposed for the retention force factor and critical angle for pure and binary droplets. Infrared images of evaporating droplets of pure and binary fluids reveal richer hydrothermal waves in droplets with nanoparticles loading than in droplets without loading, and these waves are more intense in pure ethanol droplets. On an inclined substrate, the body force caused the droplets to elongate more toward the receding side, which led to an earlier breakup of the droplet at the receding side. To the best of our knowledge, our study is a first attempt to investigate the retention force factor for the droplets loaded with nanoparticles on an inclined substrate. Keywords: Wetting dynamics, inclined substrate, sessile droplet, binary mixture, nano-fluid, thermal Imaging, machine learning ## 1 Introduction Evaporation of droplets is fascinating from a scientific and practical applications perspective due to its relevance in a wide range of applications, such as inkjet printing [1, 2, 3, 4, 5, 6, 7], coating technology [8], combustion, hot-spot cooling, agriculture and microfluidics [9], to name a few. In addition to the above-mentioned applications, droplet evaporation has applications in biological systems like fabrication of DNA microarrays [10, 11], determining the lifetime of respiratory droplets [12] and disease detection [13, 14, 15]. The addition of nanoparticles to the base fluid (also called "nanofluids") boosts the thermal conductivity and heat transfer rate. A nano-fluid droplet exhibits the "coffee-ring" effect due to the deposition of nanoparticles near the contact line of the droplet. To examine the interesting physics, some researchers have examined the evaporation of droplets containing nanoparticles on horizontal substrates [16, 17, 18, 19, 20, 21] and inclined substrates [22, 23, 24]. The stability and retention force factor of pure fluid droplets on inclined substrates have also been investigated [25, 26]. To the best of our knowledge, however, this is the first time we are reporting the droplet stability for pure fluid and binary fluid containing nanoparticles on inclined substrates. First, we review the research on pure (single-component) droplets. Janardan and Panchagulla [27] studied the shape of a sessile droplet on an inclined hysteretic substrate. The moving and sliding angles were calculated for different surfaces. They found that while the loss of global equilibrium and the onset of motion induces the sliding angle, the loss of local equilibrium causes the moving angle. The Bond number and the initial static contact angle were used to establish correlations between advancing and receding angles. The critical sliding angle and the sliding resistance of the droplet on a grooved surface were calculated by Ding et al. [28]. 
The geometric variations of a rolling droplet for different droplet sizes were investigated by Yilbas et al. [29]. They found that increasing the droplet size increases the drag, shear, and adhesion forces along the contact line. A scaling law was used to describe the dependency of sliding velocity on the inclination angle, and droplet volume [30]. Droplet shape and wetting behaviour have been predicted for various inclination angles, and the critical inclination angle was found to be a function of droplet size [31, 32]. The relationship between surface-tension forces and contact-angle hysteresis was also estimated using the retentive-force factor (\(k\)) [25]. They found that the aspect ratio of a droplet has a negligible effect on the retention force. In both axisymmetric and asymmetric droplets, the initial width is almost constant for the same droplet volume, resulting in a steadily increasing \(k\) with tilting [33]. For any solid/liquid combination and droplets with various shapes, a theory has been developed to calculate the retention force factor [26, 34]. It was found that the retention force factor values for pure droplets range from 1 to 3.14 [25]. The majority of research on pure sessile droplets containing colloidal particles focused on the mechanism underlying deposition patterns. The sessile droplet dispersed with colloids produces a coffee-ring effect, whereas coffee-eyes are observed in the case of pendent droplets [23]. This is because the bulk flow advection in a pendant drop is directed toward the contact line, whereas interface-mediated transport is directed toward the apex of the drop. Gravity affects both the suspended particles and the droplet when it is placed on an inclined substrate, disrupting the symmetry of the particle deposition. The main causes of this asymmetric deposition include gravitational sedimentation, particle movement over the asymmetrically curved liquid-air interface, and droplet splitting at the very end of the evaporation process. Li et al. [22] investigated the impact of gravity on the deposition pattern of sessile and pendant droplets with sub-micron and micro-size particles. The morphology of particle deposition is governed by gravitational sedimentation, interfacial shrinkage, and outward capillary flow. The competition between gravitational sedimentation and interface shrinkage in the first stage decides whether the liquid-air interface can trap the particles. The second stage involves the capillary flow, which moves the particles towards the edge. The evaporation and wetting dynamics of binary fluid droplets on horizontal and inclined surfaces were also studied by a few researchers [35, 36, 37, 38, 39, 40]. Yonemoto et al. [41] investigated the critical angle of inclination of the substrate as a function of surface energy density on a low-surface-energy solid for a water-ethanol binary mixture. The relationship between adhesion and gravitational force was analysed using a model with a particular contact area. According to Edwards et al. [42], Gravitational force predominates the flow in the evaporation of microlitre-sized droplets. An equation which describes the motion of droplets due to gravitational, capillary, and Marangoni stresses resulting from the dependence of surface tension on local temperature was developed using lubrication theory on a heated inclined substrate. Mamalis et al. 
[43] considered a self-rewetting mixture of a binary fluid droplet and visualised the thermal patterns on the droplet placed on a heated inclined substrate. The presence of unique temperature patterns on evaporating droplets indicates the existence of thermocapillary/solutal effects owing to the internal flows. The interaction of the Marangoni stresses produced by the contact line caused it to move in the opposite direction to gravity. A few researchers have also investigated the binary component droplets laden with nanoparticles. The evaporation of a binary fluid droplet of water and ethanol laden with graphite nanoparticles was investigated by Zhong, and Duan [18]. It was observed that the evaporation behaviour deviates from the constant evaporation rate because the droplet containing more nanoparticles and ethanol evaporates more quickly. Increasing the concentration of nanoparticles increases the rate of evaporation. The droplet containing graphite nanoparticles has a higher pinning effect and a higher initial contact angle throughout the drying process than a pure water droplet. Three different flow regimes were observed during the evaporation process [19]. The final deposition pattern was found to be the result of the relative weightings of stage 1 (when the nanoparticles migrate to the contact line) and stage 2 (when the Marangoni flow drives the nanoparticles to move inward). This behaviour is reinforced with an increasing load of ethanol. Parsa et al. [20] employed infrared thermography and optical microscopy to observe the three distinct flow patterns for water-ethanol binary droplets laden with copper oxide (CuO) at different substrate temperatures. On a non-heated substrate, a uniform deposition pattern was seen. However, on a heated substrate, dual rings and stick-slip were observed. The dynamics of the droplet as it evaporates were found to be similar to a water-butanol droplet without nanoparticles. As the gradient of surface tension decreases, convection currents and the chaotic motion of nanoparticles are also reduced. At room temperature, the difference in evaporation rate between a droplet containing a nanoparticle and one without nanoparticles is higher and diminishes as the substrate temperature rises. Katre et al. [44] investigated the evaporation dynamics of a water-ethanol binary mixture with alumina nanoparticles at different substrate temperatures. They observed that the droplet containing 0.6 wt.% loading is pinned for most of its lifetime, whereas the wetting diameter of the droplet without loading decreases monotonically. The infrared images reveal that a droplet with nanofluid loading exhibits significantly richer thermal patterns than a droplet without nanoparticle loading. The droplet with nanoparticle loading shows vigorous mixing and a faster evaporation rate due to its pinning effect as well as thermo-capillary and thermo-solutal convection. The deposition pattern after the complete droplet evaporation shows that the nanoparticles were deposited near the triplet contact line, indicating the coffee ring effect. This symmetry is broken when the droplet is placed on an inclined substrate [45]. In this case, the deposition of particles was more significant near the advancing side of the droplet, and an uneven stick-slip pattern was observed on the receding side. 
The aforementioned review of the literature reveals that while many researchers have studied the evaporation dynamics, shapes, and deposition patterns of pure and binary droplets with and without nanoparticles, only a few studies have discussed the stability and retention force factor of droplets on an inclined substrate; albeit only for pure fluids and without nanoparticles. The droplet parameters and retention force play an important role in studying droplet stability at a critical inclination angle. In the present study, we investigate the stability and retention force factor for pure ethanol (E 100% + W 0%) and ethanol (E) and water (W) binary fluid droplets with alumina (Al\({}_{2}\)O\({}_{3}\)) nanoparticles of varying concentrations (wt.%). In the present study, for binary fluid, we choose (E 80% + W 20%) composition based on our earlier investigations [44, 45] for different compositions that show rich convection patterns for the (E 80% + W 20%) droplet. The correlations for the retention force factor and critical angle for pure and binary droplets are proposed. We also discuss the thermal patterns of the evaporating droplets by performing infrared thermography. The rest of the paper is organised as follows. The experimental setup and post-processing method are described in section 2. The results obtained from our experiments and the proposed correlations are discussed in section 3. Finally, we conclude the study in section 4. ## 2 Experimental Methodology ### Experimental set-up The evaporation of sessile droplets of pure ethanol (E 100% + W 0%) and binary mixture (E 80% + W 20%) loaded with different concentrations (wt.%) of alumina nanoparticles has been investigated using shadowgraphy and infrared (IR) imaging techniques. The experimental setup is shown schematically in figure 1. We use a customised goniometer (Make: Holmarc Opto-Mechatronic) for our experiments. It consists of a motorised pump with a syringe which produces droplets of required volume, a multilayer metal block, a proportional-integral-derivative (PID) controller which maintains the substrate temperature, an infrared (IR) camera (Make: FLIR, Model: X6540sc) and a metal-oxide-semiconductor (CMOS) camera (Model: DS-CBY501E-H). A light source is placed opposite to CMOS camera. The entire assembly was enclosed inside the goniometer box to minimise environmental disturbances from the outside. The goniometer box is maintained at 22\({}^{\circ}\)C temperature and 50 \(\pm\) 5% relative humidity. The relative humidity is measured using a hygrometer (Make: HTC, Model: 288-ATH) that is installed inside the goniometer box. The CMOS camera records the side view of the sessile droplet at a frame rate of 10 frames per second (fps) with a spatial resolution of \(1280\times 960\) pixels. The IR camera captured the top view of the droplet with a spectral range of \(3\;\mu\mathrm{m}\;-\;5\;\mu\mathrm{m}\), which displays the temperature profile on the liquid-air interface of the droplet. The thermal images are recorded at 50 fps with a spatial resolution of \(640\times 512\) pixels. The multilayer block consists of two electrical heaters controlled by a PID controller embedded in a stainless steel foundation and an aluminium plate of size 100 mm \(\times\) 80 mm \(\times\) 15 mm painted with black paint to minimise reflection in the infrared imaging system. We use a PTFE (polytetrafluoroethylene) tape having a thickness of \(100\mu\mathrm{m}\) as the substrate which is applied on a black painted aluminium plate. 
The stability of the PTFE tape is verified for the temperature range examined in this work. The roughness of the PTFE substrate, measured using a digital microscope, is found to be between 4.74 and 18.16 \(\mu\)m [45].

Figure 1: Schematic of the experimental setup (customized goniometer). It consists of a heater placed in a stainless steel block, an aluminium plate with PTFE tape coated in black paint, a CMOS camera and light source arrangement and an infrared (IR) camera.

Prior to each experiment, the PTFE tape is cleaned with an isopropanol solution, dried using compressed air, and then placed on the metal plate. The substrate is maintained at a temperature \(T_{s}=50^{\circ}\)C, which is checked using a K-type thermocouple before placing the droplet on the substrate. To prepare the binary solution of (E 80% + W 20%), deionized water and absolute ethanol (99.9% purity) are mixed with a stirrer to create a homogeneous mixture on a volume basis. The nanoparticles are then added by weight percentage (wt.%). In our experiments, we use alumina (Al\({}_{2}\)O\({}_{3}\)) nanoparticles with a mean diameter of 20-30 nm purchased from Sisco Research Laboratories Pvt. Ltd. to prepare mixtures of various compositions laden with nanoparticles. To ensure a homogeneous distribution in the solution, the mixture is ultrasonically treated for an hour (Make: BRANSONIC, CPX1800H-E). A motorised pump that regulates the volume flow rate is connected to a chromatographic syringe with a capacity of 100 \(\mu\)l and a piston size of 1.58 mm from Unitek Scientific Corporation. The syringe is fitted with a 21G needle with an aperture diameter of 0.51 mm, which produces droplets of consistent size. Pure and binary droplets of volume (\(3\pm 0.3\)) \(\mu\)l loaded with different nanoparticle concentrations are placed on a critically inclined substrate. The critical angle (\(\alpha\)) of inclination is the angle above which a droplet starts to slide. It is to be noted that \(\alpha\) depends on the composition of the fluid and the concentration of the nanoparticles. To find the critical angle for a given condition, experiments are conducted with an initial inclination angle of 15\({}^{\circ}\) and increments of 5\({}^{\circ}\) until the droplet starts to slide down at a particular angle. To obtain the exact value of the critical angle, an increment of 2\({}^{\circ}\) is then used between the angles at which the droplet slides and does not slide.

### Post-processing

The side view obtained from the CMOS camera is processed using an in-house developed Matlab(r) program. The gradient is improved by utilising an unsharp masking approach to sharpen the image and a median filtering technique to remove random noise. After being filtered, the image is transformed into a binary image using an appropriate threshold that distinguishes the droplet boundary from the surrounding area. The reflection due to light is then eliminated, and the droplet contour is traced using a Matlab(r) tool. The detailed post-processing method can be found in our previous study [46]. To process the infrared images, the droplet contour is extracted using the U-net machine learning model, as discussed in our previous work [44]. It is usual practice to use edge detection and intensity thresholding to separate the droplet contour from the background. Additionally, the U-net architecture uses data augmentation by elastically deforming the annotated input images, which enables the network to make greater use of the available annotated images.
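The shadowgraphy post-processing described above (unsharp masking, median filtering, thresholding, and contour tracing) was implemented in Matlab; an approximate transcription to Python/OpenCV is sketched below. The filter sizes and the automatic (Otsu) threshold are assumptions, the reflection-removal step is omitted, and the exact settings are those detailed in [46].

```python
import cv2

def extract_droplet_contour(gray):
    """Approximate Python/OpenCV version of the side-view post-processing pipeline.
    gray: 8-bit grayscale side-view image of the droplet."""
    # Unsharp masking: sharpen by subtracting a blurred copy of the image.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
    sharp = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    # Median filtering to remove random noise.
    denoised = cv2.medianBlur(sharp, 5)
    # Binarize with an automatically selected (Otsu) threshold; the droplet appears dark.
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Keep the largest connected boundary as the droplet contour.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea)
```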
The U-net-based machine learning model only requires a few annotated images. Therefore a total of 40 manually annotated grey-scale infrared images are used to train the network on a computer with a GPU (NVIDIA Quadro P1000). The network extracts the binary masks and droplet boundaries from the infrared images. Finally, the background is removed using a Matlab(r) function, and the thermal profiles of the evaporating droplets are analysed. ## 3 Results and discussion We investigate the droplet stability and retention force factor of pure ethanol (E 100% + W 0%) and binary (E 80% + W 20%) sessile droplets with and without Al\({}_{2}\)O\({}_{3}\) nanoparticles loading at the onset of sliding. The substrate temperature is maintained at 50\({}^{\circ}\)C. The concentration of nanoparticles (in wt.%) in the solution is varied from 0 wt.% to 1 wt.%, and its effect on the retention force factor is studied. A sessile droplet starts to slide when the inclination angle exceeds the critical angle, while the surface tension force allows the droplet to stick to the inclined or vertical substrate. The two forces that act on a sessile droplet placed on an inclined substrate are (i) surface tension force (\(F_{s}\)) and (ii) gravitational force. On a critically inclined surface, \(F_{s}=mg\sin\alpha\) (which is the tangential component of gravitational force). When the droplet is placed on an inclined substrate, the front edge of the drop moves forward, whereas the rear edge remains fixed. As this happens, the advancing angle increases, and the receding angle decreases. Figure 2 shows the different profiles of an (E 80% + W 20%) droplet loaded with 0.6 wt.% nanoparticles with changing inclination angles of the substrate. It can be seen that when the droplet is placed at a lower inclination angle (\(\alpha=20^{\circ}\pm 1^{\circ}\)) than its critical angle, the surface tension force dominates the gravitational force, and the droplet is stable (figure 2a). Figure 2b depicts a droplet placed at its critical angle (\(\alpha=35^{\circ}\pm 1^{\circ}\)). The critical angle corresponds to the situation when the surface tension and gravitation forces balance each other. A further increase in the inclination angle causes the droplet to slide. Figure 2c shows a droplet placed above its critical inclination angle, and the blue dashed lines show profiles of the sliding droplet. In this case, the droplet starts to slide as a parallel component of gravity (\(mg\sin\alpha\)) overcomes the surface tension force. The critically inclined droplets having non-spherical contours create an effective surface tension force upward along the inclination, counteracting the gravitational force component. Because of this, it is simple to calculate the upward component of the surface tension force using the contact angle hysteresis data. The relationship between surface tension force (\(F_{s}\)) holding a droplet on a solid substrate and the contact angles is given by[25] \[\frac{F_{s}}{\gamma R}=k(\cos\theta_{R}-\cos\theta_{A}), \tag{1}\] where \(\theta_{A}\) and \(\theta_{R}\) are advancing and receding contact angles, respectively, \(\gamma\) is liquid-gas surface tension, and \(R\) is the length scale that reflects the droplet contour's size. The advancing (\(\theta_{A}\)) and receding contact angles (\(\theta_{R}\)) of a droplet are depicted in supplementary figure S1. 
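At the critical inclination, the tangential component of gravity, \(mg\sin\alpha\), balances the retention force of Eq. (1), so \(k\) follows directly from measured quantities. A small illustrative calculation is given below; the numerical inputs are hypothetical values chosen only to show the arithmetic, not data from this study.

```python
import math

def retention_force_factor(mass_kg, alpha_deg, gamma_N_per_m, R_m, theta_A_deg, theta_R_deg):
    """Retention force factor k from Eq. (1), evaluated at the critical angle where
    F_s = m*g*sin(alpha) balances k*gamma*R*(cos(theta_R) - cos(theta_A))."""
    g = 9.81
    F_s = mass_kg * g * math.sin(math.radians(alpha_deg))
    hysteresis = math.cos(math.radians(theta_R_deg)) - math.cos(math.radians(theta_A_deg))
    return F_s / (gamma_N_per_m * R_m * hysteresis)

# Hypothetical example: a 3 microlitre ethanol droplet (density ~789 kg/m^3),
# gamma ~ 0.022 N/m, R ~ 1.5 mm, theta_A = 45 deg, theta_R = 20 deg, alpha = 30 deg.
mass = 789.0 * 3e-9  # kg
print(round(retention_force_factor(mass, 30.0, 0.022, 1.5e-3, 45.0, 20.0), 2))  # ~1.5
```

For these hypothetical inputs the resulting value falls within the 1 to 3.14 range reported for pure droplets [25].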
The values of \(\theta_{A}\) and \(\theta_{R}\) for (E 100% + W 0%) and (E 80% + W 20%) droplets placed at their respective critical angles for different values of nanoparticle loading (wt.%) are presented in supplementary table ST1. In the present study, due to the non-spherical contour of the droplet, \(R\) is calculated as \((D_{1}+D_{2})/2\), wherein \(D_{1}\) and \(D_{2}\) are the diameters of the droplet along and across the inclination direction, respectively. In Eq. (1), \(k\) is a constant called the retention force factor. It is to be noted that \(k\) depends on the droplet geometry [26]. For pure fluid droplets, the retention force factor is between 1 and 3.14 [25]. However, no one has yet reported the retention force factor for binary-nanofluid droplets, which is the objective of the present work. Thus, it is possible to directly evaluate the force required to cause any droplet to move with any contact angle hysteresis if the values of \(R\) and \(k\) are known. We have restricted our investigation in the present study to Newtonian fluids only; other classes of fluids may exhibit a different retention force factor. In the present study, we investigate the effect of different parameters, such as the concentration of nanoparticles (in wt.%) and the composition, on the \(k\) values for sessile droplets placed on their respective critically inclined substrates. Supplementary table ST2 presents the critical angle values for different wt.% of nanoparticles for (E 100% + W 0%) and (E 80% + W 20%) droplets. It is observed that the critical angle of the (E 100% + W 0%) droplet is less than that of the (E 80% + W 20%) droplet for the no-loading (0 wt.%) condition, as water has a higher surface tension than ethanol. The critical angle increases with an increase in the nanoparticle wt.% for the (E 100% + W 0%) droplet because the surface tension of the droplet increases with the addition of nanoparticles [47], and the upward component of this surface tension force can balance the downward gravitational component even at higher inclination angles.

Figure 2: Side-view of a (E 80% + W 20%) droplet with 0.6 wt.% nanoparticles loading placed on a substrate with different inclination angle, \(\alpha\). (a) \(\alpha=20^{\circ}\pm 1^{\circ}\) (almost symmetrical droplet), (b) \(\alpha=35^{\circ}\pm 1^{\circ}\) (a droplet at its critical inclination angle) and (c) \(\alpha=37^{\circ}\pm 1^{\circ}\) (a droplet that starts to slide). In panel (c), the dashed lines represent the profiles of the sliding droplet.

However, in the case of the (E 80% + W 20%) droplet, the critical angle reaches a plateau value at about 0.6 wt.% loading and even decreases with a further increase in nanoparticle wt.%. This may be because the further addition of nanoparticles increases the weight of the droplet, which leads to an increase in the downward component of gravity that is not fully compensated by the increase in the surface tension force due to the addition of nanoparticles. This does not apply to the (E 100% + W 0%) droplet, as the (E 80% + W 20%) droplet has a higher weight due to the presence of water (which is heavier than ethanol).

### Droplet profile: side view

This section analyses the initial side view profiles of the (E 100% + W 0%) and (E 80% + W 20%) droplets. The initial side view profiles are shown in Fig. 3.
The initial volume of the droplet was maintained practically constant by performing many repetitions. In figure 3, \(R\) and \(A\) indicate the receding and advancing side, respectively. As more liquid is shifted toward the advancing side due to the body force, the contact angle on the advancing side is greater than that on the receding side for all cases. Due to its lower surface tension, the (E 100% + W 0%) droplet spreads more than the (E 80% + W 20%) droplet and exhibits lower contact angle values. It can be seen that (E 100% + W 0%) droplets exhibit similar profiles for small nanoparticle loadings (wt.% \(\leq\) 0.6). However, a noticeable difference is observed in the profiles of droplets containing 0.8 wt.% and 1.0 wt.%. The spread of the droplet with 0.8 wt.% loading is higher, and it exhibits lower contact angles than the droplet with 1.0 wt.% loading. This may be because a droplet with 1.0 wt.% loading deposits more nanoparticles near the triple contact line than a droplet with 0.8 wt.% loading. Similar behavior is shown by the (E 80% + W 20%) droplet with 1.0 wt.%, which exhibits a smaller spread compensated by a higher height and contact angles.
To illustrate this behaviour, in figure 4(a) and (b), the superimposed contour profiles of the droplets with different wt.% of nanoparticle loading are plotted for (E 100% + W 0%) and (E 80% + W 20%) droplets, respectively. Here, the substrate is maintained at 50\({}^{\circ}\)C.

Figure 4: Comparison of the droplet profile with different values of the nanoparticle loading (wt.%) when it is placed on a substrate inclined at the corresponding critical inclination angle. The panels (a) and (b) are associated with (E 100% + W 0%) and (E 80% + W 20%) droplets, respectively.

The values of the critical angle corresponding to (E 100% + W 0%) and (E 80% + W 20%) droplets for different loadings (wt.%) are presented graphically in figure 5(a) (also see table ST2 of the supplementary material). It can be seen that, for both (E 100% + W 0%) and (E 80% + W 20%) droplets, increasing the nanoparticle loading (wt.%) up to 0.6 wt.% monotonically increases the critical angle (\(\alpha\)). Then the behaviour diverges for the two cases. For the (E 80% + W 20%) droplet, the critical angle is highest for 0.6 wt.% and starts an appreciable decline up to 1 wt.% loading. On the other hand, for the (E 100% + W 0%) droplet, the critical angle nearly stabilises after 0.6 wt.%. The variation of the Bond number, \(Bo\), defined in Eq. (2), which signifies the competition between the effective body force acting on the droplet and the surface tension force, with the nanoparticle loading (wt.%) for (E 100% + W 0%) and (E 80% + W 20%) droplets is presented in figure 5(b). The Bond number, \(Bo\), is given by \[Bo=\frac{4\rho_{\mathrm{eff}}gR^{2}\sin\alpha}{\gamma}, \tag{2}\] where \(g\) is the acceleration due to gravity and \(\rho_{\mathrm{eff}}\) is the effective density of the fluid. For the droplet laden with nanoparticles, \(\rho_{\mathrm{eff}}\) is calculated using the mixture rule as \[\rho_{\mathrm{eff}}(t)=(1-Y_{n}(t))\rho_{f}(t)+Y_{n}(t)\rho_{n}. \tag{3}\] Here, \(\rho_{f}\) and \(\rho_{n}\) are the densities of the base fluid and the nanoparticles, respectively. The surface tension for the ethanol-water binary mixture at the substrate temperature is taken from the literature [48]. Previous studies have shown that the surface tension of liquids with tiny quantities of nanoparticles does not significantly deviate from the value for the pure liquids [49]. Figure 5(b) shows that, for the (E 100% + W 0%) droplet, the Bond number increases with an increase in nanoparticle wt.% up to 0.8 wt.% and then declines. From Eq. (2), it can be observed that the Bond number is directly proportional to \(\sin\alpha\), \(R^{2}\) and \(\rho_{\mathrm{eff}}\). Thus, it can be inferred that the increase in the Bond number up to 0.8 wt.% is caused by the rise in the critical angle and in \(\rho_{\mathrm{eff}}\) of the droplet. For 1.0 wt.%, there is a rise in the critical angle by 1\({}^{\circ}\); however, the influence of \(R^{2}\) on the Bond number prevails, since \(R\) is lower for 1.0 wt.% than for 0.8 wt.% (see figure 3), causing a decrease in the Bond number. For the (E 80% + W 20%) droplet, the Bond number rises up to 0.6 wt.%, decreases for 0.8 wt.%, and then slightly rises for 1.0 wt.%. The value of the Bond number changes when we increase the nanoparticle loading because of the increase in the effective density and the change in the critical angle of the droplet. The rate of increase or decrease of the Bond number does not track the change in the critical angle exactly; when the change in the critical angle is modest, the wt.% also plays a role.
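As an illustration only, the following short Python sketch evaluates Eqs. (2) and (3) for a single droplet. The function names are ours, and the numerical values in the example call are placeholders (typical literature values for ethanol and alumina), not measurements from this study.

```python
import numpy as np

def effective_density(rho_fluid, rho_np, Y_np):
    """Mixture rule of Eq. (3): rho_eff = (1 - Y_n) * rho_f + Y_n * rho_n."""
    return (1.0 - Y_np) * rho_fluid + Y_np * rho_np

def bond_number(rho_eff, R, alpha_deg, gamma, g=9.81):
    """Bond number of Eq. (2): Bo = 4 * rho_eff * g * R^2 * sin(alpha) / gamma."""
    return 4.0 * rho_eff * g * R**2 * np.sin(np.radians(alpha_deg)) / gamma

# Example with placeholder values (not data from this study):
rho_eff = effective_density(rho_fluid=789.0, rho_np=3950.0, Y_np=0.006)  # kg/m^3
Bo = bond_number(rho_eff, R=1.0e-3, alpha_deg=35.0, gamma=0.022)          # R in m, gamma in N/m
print(f"rho_eff = {rho_eff:.1f} kg/m^3, Bo = {Bo:.3f}")
```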
When the nanoparticle loading changes from 0.6 wt.% to 0.8 wt.%, the critical angle decreases by 5\({}^{\circ}\), and the Bond number also decreases. However, the critical angle changes only slightly (by 1\({}^{\circ}\)) from 0.8 wt.% to 1.0 wt.%. As a result, the rise in the nanoparticle loading also comes into play, and we notice a slight divergence in the response of the Bond number.

Figure 5: Variation of the critical inclination angle (\(\alpha\)) and the Bond number (\(Bo\)) with different nanoparticle loadings (wt.%) for (E 100% + W 0%) and (E 80% + W 20%) droplets.

### Contact angle hysteresis

The advancing (\(\theta_{A}\)) and receding (\(\theta_{R}\)) contact angles obtained from our experiments are plotted in figure 6(a) and (b) along with the results from different sources [50, 51, 52, 53, 54, 55, 56, 57]. These sources provide the values of \(\theta_{A}\) and \(\theta_{R}\) for a variety of liquids and surfaces, including various liquid compositions and surface conditions, shown as blue dots. ElSherbini and Jacobi's model [25], based on the two-circle analysis, fits this data with a correlation coefficient of 0.90. A linear fit (\(\theta_{R}=-11.5+0.9\theta_{A}\)) agrees with the data with a correlation coefficient of 0.97. The data thus confirm that there is a linear relationship between the advancing and receding contact angles. It is possible to generalize the relationship between the advancing and receding contact angles for binary and nanofluid droplets, since our experimental results are consistent with the range shown in figure 6. For the generalised relationship, if \(\theta_{A}\) is specified, then \(\theta_{R}\) and the maximum Bond number should remain constant. It can be seen that the advancing and receding angles obtained from our experiments for (E 100% + W 0%) and (E 80% + W 20%) droplets (indicated by the red and green symbols) lie within this range. It is to be noted that our results and earlier experiments show that the data are noisy, but the spread of our results falls within the range of those earlier studies. Although the earlier experimental results involved pure fluids on unheated substrates, the current work involves pure ethanol and an ethanol-water binary mixture on a heated substrate with nanoparticle loadings. We infer that the correlation given by ElSherbini and Jacobi and the linear fit (\(\theta_{R}=-11.5+0.9\theta_{A}\)) are reasonably robust for binary droplets and other types of substrate conditions. Despite our limited experimental data, we anticipate that our results will contribute to an eventual generalisation that can explain all experimental results for various droplet types and substrate conditions.

Figure 6: (a) Comparison of the experimentally obtained advancing (\(\theta_{A}\)) and receding (\(\theta_{R}\)) contact angles with those obtained from the model and the linear fit. The dashed magenta line shows the linear fit, the solid red line is associated with the model developed by ElSherbini and Jacobi [25], and the blue dots show the measurements from different sources [50, 51, 52, 53, 54, 55, 56, 57]. (b) A magnified view of panel (a) showing our results, represented by red and green diamond symbols for (E 100% + W 0%) and (E 80% + W 20%) droplets, respectively.

Figure 7 depicts the variation of \(\theta_{R}/\theta_{A}\) with the Bond number, \(Bo\), for (E 100% + W 0%) and (E 80% + W 20%) droplets with different nanoparticle loadings. A quadratic curve provides a good fit to the data for both (E 100% + W 0%) and (E 80% + W 20%) droplets for all wt.%.
This quadratic fit with a correlation coefficient of 0.84 is given by \[\frac{\theta_{R}}{\theta_{A}}=-0.079Bo^{2}+0.14Bo+0.74. \tag{4}\] ElSherbini and Jacobi [25] correlated \(\theta_{min}\) and \(\theta_{max}\) with the Bond number and mentioned that \(\theta_{max}\approx\theta_{A}\). The results indicate that, regardless of the size of the drop, the minimum contact angle in a drop at the critical condition is equal to the receding contact angle \(\theta_{R}\). Thus, in the calculation of the retention force factor, we use \(\theta_{A}\) and \(\theta_{R}\) instead of \(\theta_{max}\) and \(\theta_{min}\), respectively.

Figure 7: Variation of the Bond number with the ratio of advancing (\(\theta_{A}\)) and receding (\(\theta_{R}\)) contact angles. A quadratic curve fits the data with R\({}^{2}=0.84\).

### Retention force factor

To study the droplet stability on a critically inclined substrate, the retention force factor is calculated from Eq. (1) as \[mg\sin\alpha=k\gamma R(\cos\theta_{R}-\cos\theta_{A}), \tag{5}\] where \(m\) is the mass of the droplet. It is calculated as \[m=V\rho_{\mathrm{eff}}. \tag{6}\] Here, \(V\) is the volume of the droplet. Under the spherical-cap assumption, the volume of an asymmetrical droplet can be calculated using [25] \[V=\frac{\pi R^{3}}{3}\frac{(2-3\cos\theta_{\rm avg}+\cos^{3}\theta_{\rm avg})}{ \sin^{3}\theta_{\rm avg}}, \tag{7}\] where \(\theta_{\rm avg}=(\theta_{A}+\theta_{R})/2\).

Figure 8: Variation of the modified retention force factor (\(K\)) with the normalised inclination angle (\(\alpha/\alpha_{0}\)) for (E 100% + W 0%) and (E 80% + W 20%) nanofluid droplets. The data are fitted by quadratic curves with R\({}^{2}\) values of 0.99 and 0.84 for ethanol and (E 80% + W 20%), respectively. The equations of the quadratic fits for (E 100% + W 0%) and (E 80% + W 20%) droplets are \(\alpha/\alpha_{0}=1+1.2K-0.35K^{2}\) and \(\alpha/\alpha_{0}=1+1.2K-0.61K^{2}\), respectively.

Figure 8 depicts the retention force factor (\(k\)) for binary nanofluid droplets obtained from Eq. (5) in terms of \(K\). The modified retention force factor, \(K\), is given by \(k\times(V_{m}/V)\times C\), where \(V_{m}\) is the mean volume of the droplets considering all the cases and \(C\) is the nanoparticle concentration in wt.%. It is to be noted that the initial volume of the droplets for different conditions varies slightly due to experimental uncertainty; thus, we multiply \(k\) by the volume correction factor \(V_{m}/V\). The modified retention force factor, \(K\), is plotted against the normalized critical inclination angle (i.e., the critical angle normalized with the critical angle of the no-loading case). The results for (E 100% + W 0%) and (E 80% + W 20%) droplets are fitted by quadratic fits with correlation coefficients of 0.99 and 0.83, respectively. The equations of the quadratic fits are, for (E 100% + W 0%) droplets, \[\alpha/\alpha_{0}=1+1.2K-0.35K^{2}, \tag{8}\] and, for (E 80% + W 20%) droplets, \[\alpha/\alpha_{0}=1+1.2K-0.61K^{2}. \tag{9}\] The constants and linear terms in Eqs. (8) and (9) are the same for both (E 100% + W 0%) and (E 80% + W 20%) droplets. For droplets with no nanoparticle loading, \(C\) is zero, and consequently, \(K\) becomes zero from its definition mentioned above. The proposed correlation focuses on the change in a droplet's critical angle driven by the addition of nanoparticles in comparison to the no-loading condition.
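For completeness, the sketch below shows one way to extract \(k\) from Eqs. (5)-(7) and to form the modified factor \(K\); the function names are ours, and all numerical values in the example are illustrative placeholders rather than data reported in this work (the volume correction \(V_{m}/V\) is simply taken as 1 here).

```python
import numpy as np

def droplet_volume(R, theta_A_deg, theta_R_deg):
    """Spherical-cap volume of Eq. (7) with theta_avg = (theta_A + theta_R) / 2."""
    th = np.radians(0.5 * (theta_A_deg + theta_R_deg))
    return (np.pi * R**3 / 3.0) * (2.0 - 3.0 * np.cos(th) + np.cos(th)**3) / np.sin(th)**3

def retention_force_factor(R, theta_A_deg, theta_R_deg, alpha_deg, rho_eff, gamma, g=9.81):
    """Solve Eq. (5), m*g*sin(alpha) = k*gamma*R*(cos(theta_R) - cos(theta_A)), for k."""
    m = rho_eff * droplet_volume(R, theta_A_deg, theta_R_deg)   # Eq. (6)
    lhs = m * g * np.sin(np.radians(alpha_deg))
    rhs_per_k = gamma * R * (np.cos(np.radians(theta_R_deg)) - np.cos(np.radians(theta_A_deg)))
    return lhs / rhs_per_k

# Placeholder example: K = k * (V_m / V) * C, with V_m / V taken as 1 and C = 0.6 wt.%
k = retention_force_factor(R=1.5e-3, theta_A_deg=40.0, theta_R_deg=25.0,
                           alpha_deg=35.0, rho_eff=808.0, gamma=0.022)
K = k * 1.0 * 0.6
print(f"k = {k:.2f}, K = {K:.2f}")
```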
The linear term represents the increase in the critical angle due to an increase in friction and surface tension caused by the addition of the nanoparticles. The expressions show that, for lower loadings (wt.%), the effect of adding nanoparticles is the same for both compositions, since the linear term has the same constant for both droplets. On the other hand, the quadratic term is negative and decreases the critical angle with increasing nanoparticle loading. This term arises because the addition of nanoparticles increases the weight of the nanofluid and hence the gravitational force component; thus, it becomes important only at higher nanoparticle loadings (wt.%). The constant associated with this quadratic term is also different for the pure and binary fluids. This is likely because the balance between the surface tension and gravitational forces differs for critically inclined droplets of different compositions. The coefficients of \(K^{2}\) for the pure fluid (E 100% + W 0%), \(a_{1}\), and the binary fluid (E 80% + W 20%), \(a_{2}\), are related as \(a_{2}=1.74a_{1}\). The \(K\) values are influenced by the inclination, the composition of the mixture, and the concentration of the nanoparticles added, as shown in figure 8. The value of \(K\) for the (E 100% + W 0%) droplet increases with increasing inclination angle and nanoparticle concentration. For the (E 80% + W 20%) droplet, the value of \(K\) increases initially up to 0.6 wt.% as the critical angle increases. After 0.6 wt.%, a slight decrease in the critical angle is observed; despite this, the value of \(K\) increases because other terms in Eq. (5), such as \((\cos\theta_{R}-\cos\theta_{A})\) and \(m\), dominate.

### Evaporation dynamics: thermal profiles

In this section, we discuss the thermal patterns of the evaporating sessile droplets of (E 100% + W 0%) and (E 80% + W 20%) compositions laden with different nanoparticle loadings. The substrate is maintained at \(T_{s}=50^{\circ}\)C and inclined at the respective critical angle of the different droplets. Figure 9 shows the temporal variations of the temperature contours of an (E 100% + W 0%) droplet with no loading, 0.4 wt.%, and 0.8 wt.% loadings. Here, \(t/t_{e}\) is the normalized time, wherein \(t_{e}\) is the lifetime of the droplet and \(t=0\) represents the instant when the droplet is placed on the substrate. It can be observed that the wetting diameter of the (E 100% + W 0%) droplet without nanoparticle loading shrinks (i.e., the wetting diameter decreases with time), whereas the droplets with loading show pinned behavior for the majority of their lifetime. In the loaded condition, we observe more hydrothermal waves and instabilities, since the droplet is pinned and the addition of nanoparticles accelerates the evaporation process. This demonstrates that, in droplets laden with nanoparticles, the thermocapillary forces due to the surface tension gradients produce more vigorous Marangoni convection. The differences in the surface tension brought on by the temperature gradients cause the interfacial waves to move from warmer (lower surface tension) to colder (higher surface tension) zones. As a result of these actions, the droplet tends to spread along the direction of inclination. We observe more elongation of the droplet for the 0.8 wt.% case than for the 0.4 wt.% case due to its higher critical angle of inclination. This results in an earlier breakdown of the droplet at the receding side as more fluid volume is shifted to the advancing side.
Figure 9: Temporal evolution of the temperature contours of an ethanol (E 100% + W 0%) droplet for different nanoparticle loadings (wt.%). The color bar shows the temperature variation.

Figure 10 depicts the temporal evolution of the temperature contours of an (E 80% + W 20%) droplet for different nanoparticle loadings (wt.%). It can be seen that, similar to the pure ethanol droplet shown in figure 9, the (E 80% + W 20%) droplet with nanoparticle loadings also shows a pinned nature, whereas the droplet without loading shrinks as its wetting diameter decreases with time (figure 10). For this composition, due to the small difference in the critical angle values between 0.4 wt.% and 0.8 wt.%, no significant variation is observed in the thermal patterns. The hydrothermal waves and instabilities are less intense in an (E 80% + W 20%) droplet than in an (E 100% + W 0%) droplet, as the surface tension gradient \(\frac{d\sigma}{dT}\) of water is lower than that of ethanol [48], that is, \(\left(\frac{d\sigma}{dT}\right)_{water}<\left(\frac{d\sigma}{dT}\right)_{ethanol}\).

Figure 10: Temporal evolution of the temperature contours of a (E 80% + W 20%) droplet for different nanoparticle loadings (wt.%). The color bar shows the temperature variation.

## 4 Conclusions

The droplet stability, in terms of the retention force factor at the respective critical angles, is studied for pure ethanol (E 100% + W 0%) and binary (E 80% + W 20%) sessile droplets laden with alumina (Al\({}_{2}\)O\({}_{3}\)) nanoparticles. Shadowgraphy and infrared imaging techniques are employed to investigate the dynamics, and the experiments are conducted in a customized goniometer. The images are post-processed using Matlab® and a U-net architecture based on a convolutional neural network (a machine learning technique). It is observed that the critical angle of inclination increases with an increase in nanoparticle loading (wt.%) for (E 100% + W 0%) droplets. However, for (E 80% + W 20%) droplets, the critical angle decreases slightly after 0.6 wt.% as the gravitational force dominates the surface tension force. With the increase in the critical angle of inclination, the Bond number also increases. The values of the advancing and receding angles from different sources show that the relationship between the advancing and receding contact angles can be generalized within a given range, and our experimental advancing and receding contact angle values fall within this range. A quadratic fit with an R\({}^{2}\) value of 0.84 describes the relationship between the ratio of the receding to the advancing contact angle and the Bond number for both (E 100% + W 0%) and (E 80% + W 20%) droplets. The retention force factor is calculated for different nanoparticle loadings for the pure and binary fluid droplets. The droplet composition, the nanoparticle loading, and the critical angle of inclination affect the retention force factor. For (E 100% + W 0%) droplets, the retention force factor increases with an increase in nanoparticle loading and the critical inclination angle. The data are fitted by quadratic fits, and the correlation between the retention force factor and the critical angle is obtained for both (E 100% + W 0%) and (E 80% + W 20%) droplets. Infrared images of the evaporating droplets are presented for different loading conditions. The droplets with nanoparticle loading show richer hydrothermal waves than the droplets without loading, and these waves are more intense in droplets with higher ethanol concentration.
The droplets elongate more towards the advancing side due to the body force, resulting in an earlier breakdown of the droplet at the receding side.

**Acknowledgement:** K.C.S. thanks the Science & Engineering Research Board, India, for financial support through grant number CRG/2020/000507. We also thank Dr Lakshmana Dora Chandrala for his help in employing the machine learning method for post-processing.
2301.09439
Autoencoder-based Joint Communication and Sensing of Multiple Targets
We investigate the potential of autoencoders (AEs) for building a joint communication and sensing (JCAS) system that enables communication with one user while detecting multiple radar targets and estimating their positions. Foremost, we develop a suitable encoding scheme for the training of the AE and for targeting a fixed false alarm rate of the target detection during training. We compare this encoding with the classification approach using one-hot encoding for radar target detection. Furthermore, we propose a new training method that complies with possible ambiguities in the target locations. We consider different options for training the detection of multiple targets. We can show that our proposed approach based on permuting and sorting can enhance the angle estimation performance so that single snapshot estimations with a low standard deviation become possible. We outperform an ESPRIT benchmark for small numbers of measurement samples.
Charlotte Muth, Laurent Schmalen
2023-01-23T13:59:20Z
http://arxiv.org/abs/2301.09439v2
# Autoencoder-based Joint Communication and Sensing of Multiple Targets ###### Abstract We investigate the potential of autoencoders (AEs) for building a joint communication and sensing (JCAS) system that enables communication with one user while detecting multiple radar targets and estimating their positions. Foremost, we develop a suitable encoding scheme for the training of the AE and for targeting a fixed false alarm rate of the target detection during training. We compare this encoding with the classification approach using one-hot encoding for radar target detection. Furthermore, we propose a new training method that complies with possible ambiguities in the target locations. We consider different options for training the detection of multiple targets. We can show that our proposed approach based on permuting and sorting can enhance the angle estimation performance so that single snapshot estimations with a low standard deviation become possible. We outperform an ESPRIT benchmark for small numbers of measurement samples. Joint Communication and Sensing, Neural Networks, Angle estimation, Multiple Radar Target Detection, ESPRIT ## I Introduction Electromagnetic sensing and radio communications remain vital services for society, yet an increase in their sustainability, and consequently in their efficiency, is of rising importance. We can increase spectral and energy efficiency by combining radio communication and sensing into one waveform compared to operating two separate systems. Therefore, this work focuses on the codesign of both functionalities in a joint communication and sensing (JCAS) system. So far, standardized approaches for localization and communication, such as the LTE Positioning Protocol (LPP), or the New Radio Positioning protocol A (NRPPa), need the cooperation of the user equipment to localize it. The future 6G network is envisioned to natively support JCAS by extending sensing capabilities to non-cooperating targets, such as objects without communication capabilities, and performing general sensing of the surroundings [1]. From this approach, we expect to increase spectral efficiency by making spectral resources accessible to communication while maintaining their use for sensing. Simultaneously, we predict an increase in energy efficiency because of the dual-use of a joint waveform. In the radar community, the integration of communication capabilities into sensing signals to enhance a standard radar signal with an information sequence for a possible receiver has already been studied [2]. A well-studied approach to combined communication and sensing is OFDM radar [3, 4]. OFDM radar enables the robust detection of objects while maintaining its communication capabilities through careful signal processing. However, there is a growing interest in data-driven approaches based on machine learning (ML) since they can overcome deficits that model-based techniques as used in OFDM face. Especially at higher frequencies used for sensing applications, which will become more important in 6G, these deficits become more pronounced because of hardware imperfections [5]. ML is expected to be prevalent in 6G since its use has matured in communication as well as in radar processing [1]. Autoencoders (AEs) have been studied for communication systems, e.g., [6, 7], and in the context of radar [8, 9]. 
In [5], an AE for JCAS in a single-carrier system has been proposed and has shown to robustly perform close to a maximum a-posteriori ratio test detector benchmark for single snapshot evaluation and one possible radar target. In this paper, we explore the monostatic sensing capabilities of a wireless single-carrier communication system. We use an AE approach and study the influence of multi-target sensing and multi-snapshot sensing on the overall performance. This work extends the AE model of [5] by adding multiple target capabilities for detection and localization. We describe the detection of multiple targets not as a classification task with the number of targets as classes but instead design it as parallel detection tasks resulting in the novel counting encoding. The permutation invariance of targets during detection brings additional challenges to the training of the neural networks (NNs). To address this issue, we present multiple approaches with low additional complexity. ## II System Model The system block diagram, shown in Fig. 1, is based on [5]. The encoder transforms the data symbols \(m\in\mathcal{M}:=\{1,2,\ldots,M\}\) into complex modulation symbols \(x\in\mathcal{C}\subset\mathbb{C}\), with \(|\mathcal{C}|=M\). The complex symbols are multiplied with a unique \(\nu_{i}=g_{i}\exp(\mathrm{j}\gamma_{i})\) for each antenna \(i\) with beamforming gain \(g_{i}\) and phase shift \(\gamma_{i}\) to steer the signal to our areas of interest. The encoder and beamformer employ power normalization to fulfill power constraints. We consider a maximum of \(T_{\max}\) radar targets and a linear array of \(K\) antennas in the transmitter and the radar receiver. The beamformer inputs are the azimuth angle regions in which communication and sensing should take place. The communication receiver is situated randomly in the interval \([\varphi_{\min},\varphi_{\max}]\) and the radar target positions are uniformly drawn from \([\theta_{\min},\theta_{\max}]\). The transmit signal \(\mathbf{y}\) is fed into a Rayleigh channel before being received by the communication receiver with a single antenna as \[z_{\text{c}}=\beta\mathbf{a}_{\text{TX}}(\varphi)^{\top}\mathbf{y}+n, \tag{1}\] with complex normal distributed \(\beta\sim\mathcal{CN}(0,\sigma_{\text{c}}^{2})\) and \(n\sim\mathcal{CN}(0,\sigma_{\text{n}}^{2})\). We assume that channel estimation has already been performed, therefore the channel state information (CSI) \(\kappa=\beta\mathbf{a}_{\text{TX}}^{\top}(\varphi)\mathbf{\nu}\) is available at the communication receiver. The input of the communication receiver is \(z_{\text{c}}/\kappa\). The outputs of the receiver are estimates of the symbol-wise maximum a posteriori probabilities that are transformed into bitwise log-likelihood ratios (LLRs) that can be used as input to a soft-decision channel decoder. For the simulation of multiple radar targets, we express the sensing signal that is reflected from \(T\) radar targets as \[\mathbf{z}_{\text{r}}=\left(\sum_{k=0}^{T}\alpha_{k}\mathbf{a}_{\text{RX}}(\theta_{k} )\mathbf{a}_{\text{TX}}(\theta_{k})^{\top}\mathbf{y}\right)+\mathbf{n}, \tag{2}\] with the radar targets following independently a Swerling-1 model \(\alpha_{k}\sim\mathcal{CN}(0,\sigma_{\text{r}}^{2})\) and \(\mathbf{n}\sim\mathcal{CN}(0,\sigma_{\text{n}}^{2}\mathbf{I})\). 
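To make the sensing model concrete, the following minimal NumPy sketch (not the authors' code) simulates the reflected signal of Eq. (2) for a uniform linear array, using the steering-vector form given in Eq. (3) below; function names, the crude broadside beamformer, and all parameter values are illustrative assumptions.

```python
import numpy as np

def steering_vector(theta, K, d_over_lambda=0.5):
    """ULA spatial angle vector with entries exp(j*2*pi*(d/lambda)*i*sin(theta)), cf. Eq. (3)."""
    i = np.arange(K)
    return np.exp(1j * 2 * np.pi * d_over_lambda * i * np.sin(theta))

def sensing_return(y, thetas, sigma_r=1.0, sigma_n=0.1, rng=np.random.default_rng(0)):
    """Monostatic return of Eq. (2): sum of Swerling-1 target reflections plus noise."""
    K = y.shape[0]
    z = np.zeros(K, dtype=complex)
    for theta in thetas:                     # one Swerling-1 coefficient per target
        alpha = (rng.normal(scale=sigma_r / np.sqrt(2))
                 + 1j * rng.normal(scale=sigma_r / np.sqrt(2)))
        a = steering_vector(theta, K)
        z += alpha * a * (a @ y)             # a_RX(theta) * (a_TX(theta)^T y)
    noise = (rng.normal(scale=sigma_n / np.sqrt(2), size=K)
             + 1j * rng.normal(scale=sigma_n / np.sqrt(2), size=K))
    return z + noise

# Example: 16-antenna array, a crude beam toward broadside, two targets at -10 and 15 degrees
K = 16
y = steering_vector(np.radians(0.0), K).conj() / np.sqrt(K)
z_r = sensing_return(y, thetas=np.radians([-10.0, 15.0]))
print(z_r.shape)  # (16,)
```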
The signal propagation from \(K\) antennas toward an azimuth angle \(\theta_{k}\) is modeled with the spatial angle vector \(\mathbf{a}_{\text{TX}}(\theta_{k})\in\mathbb{C}^{K}\) whose entries are given by \[[\mathbf{a}_{\text{RX}}(\theta_{k})]_{i}=[\mathbf{a}_{\text{TX}}(\theta_{k})]_{i}= \exp\left(\mathrm{j}2\pi\left(\frac{d_{y}}{\lambda}i\sin\theta_{k}\right) \right). \tag{3}\] The parameter \(d_{y}\) describes the horizontal distance between each antenna element at the transmitter and the radar receiver. Target detection and angle estimation are both performed using \(\mathbf{z}_{\text{r}}\). The output of the target detection NN is a probability vector \(\mathbf{p}_{\text{T}}\in[0,1]^{T_{\max}}\). Each entry of \(\mathbf{p}_{\text{T}}\) denotes the probability that a specific target is present, without a specific order. From \(\mathbf{p}_{\text{T}}\), we determine the number of detected targets. The angle estimation block outputs a vector \(\hat{\mathbf{\theta}}\in[-\frac{\pi}{2},\frac{\pi}{2}]^{T_{\max}}\) denoting the estimated azimuth angle of each target. With a Swerling-1 model, we model scan-to-scan deviations of the radar cross section (RCS). During training of target detection, the values \(\alpha_{k}\) remain equal over all receive antennas, while being independently sampled from the complex normal distribution for different targets or different time instants. Our system is designed to solve three different tasks:

* transmit data over a Rayleigh channel,
* estimate the number of targets in our angle region of interest (detection),
* estimate the position of the targets (angles of arrival).

Considering a possible upsampling with \(u>1\), we combine outputs of the sensing receiver by averaging the detection probabilities along the upsampling axis. Similarly, we average the estimated angles after having applied the corresponding set method discussed in Sec. II-F.

### _Angle Estimation Benchmark_

We use the well-studied ESPRIT algorithm as a benchmark for angle estimation as studied in [10, 11]. The estimation variance of ESPRIT increases when the number of snapshots is small, therefore we also adapt ESPRIT for single snapshot evaluation as described in [12], by constructing a Hankel matrix before auto-correlation to improve the estimation root mean squared error (RMSE). For validation purposes, we only measure the RMSE for all targets that were detected by the target detection block and are also present. We assume that in cases where the target detector fails at recognizing a target, the reflected signal power from the target is very low or there is another target extremely close to it and its reflection is shadowed. Therefore calculating the error only for detected targets can lead to a higher effective signal-to-noise ratio (SNR) by ignoring low power reflections in the evaluated samples.

### _Neural Network Training and Validation_

We realize all blocks in transmitter and receiver highlighted in Fig. 1 by NNs, which are jointly trained in an end-to-end manner. We utilize fully connected NN layers with an exponential linear unit (ELU) activation function.

Fig. 1: JCAS autoencoder as proposed in [5]; light blue blocks are trainable NNs, and red dashed paths are only active while propagating the training data.

The number of neurons and the output functions vary according to the task and are summarized in Tab. I.
The fully connected NNs each of depth 5 have different layer widths; each list item denotes the number of neurons in a layer of the NN. The output layer size of encoder and beamformer requires two neurons to represent each complex output value with two real numbers. Consequently, the number of input neurons for target detection, angle estimation, and the communication receiver also use two real-valued inputs to represent complex input signals. The encoder and beamformer are subject to power normalization representative for the power constraints of a radio transmitter. The decoder output uses a softmax layer to generate probabilities \(\hat{P}(m|z_{\mathrm{e}})\). We set the learning rate to \(0.001\) for all NNs and employ the Adam optimizer. We use \(20\cdot T_{\mathrm{max}}\) mini-batches with \(N_{\text{mb}}=10^{4}\) samples in each epoch and train for \(150\) epochs, resulting in convergence of the NN training. During training, additional knowledge is injected into the NNs as shown in Fig. 1. To decouple both sensing tasks during training, the actual number of radar targets is injected into the angle estimation network by only propagating through the NN if one or more targets are present. During validation, we measure the bit-wise mutual information (BMI) of the communication receiver. Since the JCAS system learns both symbol constellation and bitmapping, this is the most suitable metric [7]. ### _Loss Functions_ We need a combined loss function to jointly optimize our different networks. 1. Communication Loss: As proposed in [7], we use the binary cross entropy (BCE) as a loss function \(L_{\text{comm}}\) to optimize mainly the encoder, decoder, and beamformer. Since this loss function takes the BMI into account, the complex symbol alphabet and the bit mapping are jointly optimized. 2. Detection Loss: We utilize the BCE between estimated and present targets as a loss function \(L_{\text{detect}}\). This optimization mainly affects the target detection and the beamformer. 3. Angle Estimation Loss: We use a mean squared error (MSE) loss between valid and estimated angles as a loss function \(L_{\text{angle}}\), which mainly affects angle estimation and the beamformer. We propose a training schedule consisting of three different training stages to improve the results. Therefore we adapt the loss function after a third and two-thirds of all training epochs. Different loss terms are weighted and added to enable joint training. The loss functions \(L_{i}\) of the different training stages are: \[L_{1} =(1-w_{\text{r}})\cdot L_{\text{comm}}+w_{\text{r}}w_{\text{a}} \cdot L_{\text{angle}}, \tag{4}\] \[L_{2} =(1-w_{\text{r}})\cdot L_{\text{comm}}+w_{\text{r}}\cdot L_{\text {detect}},\] (5) \[L_{3} =(1-w_{\text{r}})\cdot L_{\text{comm}}+w_{\text{r}}\cdot L_{\text {detect}}+w_{\text{r}}w_{\text{a}}\cdot L_{\text{angle}}. \tag{6}\] We choose a weighting factor of \(w_{\text{r}}=0.9\). Since both communication and sensing functionalities profit from a high SNR, the beamformer is trained to radiate most energy toward the possible positions of communication receiver and radar target. Since only limited power is available, \(w_{\text{r}}\) affects the magnitude of the beam in direction of the radar targets and the direction of the communication receiver by being able to change the optimal power trade-off of communication and sensing. 
By increasing \(w_{\text{r}}\), we can increase the importance of the sensing functionality, therefore increasing the radiated power towards \([\theta_{\min},\theta_{\max}]\) but decreasing the radiated power toward the communication receiver in \([\varphi_{\min},\varphi_{\max}]\). The other weighting factor was chosen to \(w_{\text{a}}=20\) to further improve the angle estimation. The training schedule has the effect that initially everything but the target detection is trained. The effect of the angle estimation on the transmit beam is comparably weak; this leads to a good initial performance of the communication part while the angle estimation is trained to extract features from reflections with comparably low power. Afterwards, switching the angle estimation with target detection in \(L_{2}\) results in a beamform radiating mostly toward our angle ranges of interest while \(w_{\text{r}}\) controls the ratio of average radiated power in \([\theta_{\min},\theta_{\max}]\) and \([\varphi_{\min},\varphi_{\max}]\). Lastly, applying the fully joint loss function \(L_{3}\) accelerates the training of the angle estimation as well as target detection, when the communication part has almost converged. ### _One-hot vs. Counting Encoding_ To extend the system from the one target case as proposed in [5], we need to decide how to encode different numbers of detectable targets. To model partially correct detection, e.g., detection of one target when two are present, we propose a novel representation called _counting encoding_ that can be understood as a subcategory of multi-hot encoding. It enables direct measurement of detection probabilities and notably supports choosing a resulting false alarm rate. In essence, the detection of \(T_{n}\) targets gets divided into \(T_{\max}\) tasks to confirm the presence of a maximum of \(T_{\max}\) targets. The detection vector \(\mathbf{c}\) that represents \(T_{i}\) targets is built with \[c_{i}=\begin{cases}1&\text{if }i\leq T_{i},\\ 0&\text{otherwise},\end{cases}\quad\text{for }i=1,\dots,T_{\max}. \tag{7}\] For an example with \(T_{\max}=3\), the encoded vectors \([0,0,0],[1,1,1]\) and \([1,1,0]\) represent the occurrence of zero, three, and two targets. By summation, we can recover the number of targets and by element-wise multiplication with the angle estimates, we can mask the angle estimate vectors \(\hat{\mathbf{\theta}}\) to match the number of targets present. We can train the target detection NN with a sigmoid output layer and transform the logits \(\ell_{n}\) into probabilities \(c_{\text{est},n}=\sigma(\ell_{n})\) with \[c_{\text{est},n}=P(\text{``$n$ or more targets detected''}). \tag{8}\] We introduce a weighted false alarm rate that emphasizes the number of targets falsely detected. Counting encoding implicitly supports this weighting when summing over multiple entries since the event described by \(c_{\text{est},n}\) includes \(c_{\text{est},n+1}\). 
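As a small illustration (not the authors' implementation), the counting encoding of Eq. (7) and the recovery of the target count from an estimated probability vector can be written as follows; the function names are ours.

```python
import numpy as np

def counting_encode(num_targets, t_max):
    """Eq. (7): entry i (1-indexed) is 1 if i <= number of targets, else 0."""
    return (np.arange(1, t_max + 1) <= num_targets).astype(float)

def counting_decode(c_est, threshold=0.5):
    """Recover the estimated number of targets by summing the hard decisions."""
    return int(np.sum(c_est >= threshold))

print(counting_encode(2, t_max=3))                  # [1. 1. 0.]
print(counting_decode(np.array([0.9, 0.7, 0.2])))   # 2
```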
We calculate both the detection rate \(P_{\text{d}}\) and the weighted false alarm rate \(P_{\text{f}}\) from the valid \(T_{n}\) targets in timestep \(0\leq n\leq N-1\) with \(\mathbf{C}\in\{0,1\}^{N\times T_{\text{max}}}\) and the estimated targets \(\mathbf{C}_{\text{est}}\in[0,1]^{N\times T_{\text{max}}}\) as \[P_{\text{d}}=\frac{1}{\sum_{n=0}^{N-1}T_{n}}\sum_{i=1}^{N}\sum_{j=1}^{T_{n}} \lfloor c_{\text{est},i,j}\rceil, \tag{9}\] and \[P_{\text{f}}=\frac{1}{\sum_{n=0}^{N-1}(T_{\text{max}}-T_{n})}\sum_{i=1}^{N} \sum_{j=T_{n}+1}^{T_{\text{max}}}\lfloor c_{\text{est},i,j}\rceil, \tag{10}\] where \(\lfloor\cdot\rceil\) denotes rounding to the next integer. During validation, the target detection probability is sorted in descending order. This ensures \(c_{\text{est},n+1}\leq c_{\text{est},n}\). The detection output remains therefore easily interpretable by preventing impossible states, e.g., no detection of a first target but still detection of a second target. This sorting is arguably necessary to interpret all possible outputs, but it should be already performed by the detection NN since we do not sort during training. Since traditional one-hot encoding is prevalently in use for classification problems as in [6], we adapt the target detection NN for one-hot encoding as a benchmark alternative. We add one neuron to the output layer and replace the sigmoid function with softmax. We denote the valid one-hot matrix as \(\mathbf{O}\in\{0,1\}^{N\times(T_{\text{max}}+1)}\) and the estimated targets as \(\mathbf{O}_{\text{est}}\in[0,1]^{N\times(T_{\text{max}}+1)}\) describing the presence of \(0,1,\dots,T_{\text{max}}\) targets. For the one-hot encoding, detection probability and the weighted false alarm rate are calculated using the hard-decision \(h_{n}=\operatorname*{arg\,max}_{k}(o_{\text{est},n,k})\) as \[P_{\text{d,onehot}}=\frac{\sum_{n=0}^{N-1}\min\{T_{n},h_{n}\}}{\sum_{n=0}^{N-1 }T_{n}}, \tag{11}\] and \[P_{\text{f,onehot}}=\frac{\sum_{n=0}^{N-1}(\max\{T_{n},h_{n}\}-T_{n})}{\sum_{n= 0}^{N-1}(T_{\text{max}}-T_{n})}. \tag{12}\] The probability vectors can be transformed from one-hot encoding to counting encoding by \[c_{\text{est},k}=\sum_{n=k}^{T_{\text{max}}}o_{\text{est},n}, \tag{13}\] and for counting encoding to one-hot encoding using \[o_{\text{est},k}=\begin{cases}c_{\text{est},k}&\text{for $k=T_{\text{max}}$},\\ c_{\text{est},k}-c_{\text{est},k+1}&\text{for $k\in[1,T_{\text{max}}-1]$},\\ 1-c_{\text{est},1}&\text{for $k=0$}.\end{cases} \tag{14}\] ### _Fixed False Alarm Rate_ For many applications, the implications of a false alarm and a missed detection are different. For example in automotive driving or malicious drone detection, the actions associated with detection and non-detection are so vastly different that the probability of false alarm and missed detection should be different. We train for a fixed weighted false alarm rate (meaning the probability that a target is detected even though none are present), but our model can easily be adapted to train for a fixed missed detection rate. 
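Before turning to the thresholding procedure, note that the conversions of Eqs. (13) and (14) above are simple cumulative sums and differences; the following sketch is illustrative only, with names chosen by us.

```python
import numpy as np

def onehot_to_counting(o_est):
    """Eq. (13): c_k = sum_{n >= k} o_n for k = 1..T_max, with o indexed 0..T_max."""
    return np.cumsum(o_est[::-1])[::-1][1:]   # reverse cumulative sum, drop the k = 0 entry

def counting_to_onehot(c_est):
    """Eq. (14): o_0 = 1 - c_1, o_k = c_k - c_{k+1} for k < T_max, o_{T_max} = c_{T_max}."""
    c_ext = np.concatenate(([1.0], c_est))          # prepend c_0 := 1
    return c_ext - np.concatenate((c_est, [0.0]))   # pairwise differences

o = np.array([0.1, 0.2, 0.4, 0.3])    # probabilities of 0, 1, 2, 3 targets
c = onehot_to_counting(o)             # [0.9, 0.7, 0.3]
print(c, counting_to_onehot(c))       # recovers [0.1, 0.2, 0.4, 0.3]
```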
During training, we proceed as follows:

* choose all output logits \(\ell_{n}\) of the target detection with \(c_{n}=0\), \(n\in[0,N-1]\), with \(X=\sum_{n=0}^{N-1}T_{n}\) being the number of chosen logits in the whole training minibatch,
* sort these logits in ascending order,
* choose \(\ell_{i}\) with \(i=\lfloor(1-P_{\text{f}})\cdot X\rfloor\),
* subtract \(\ell_{i}\) from all logits and set \(\ell_{\text{off}}=\ell_{i}\), and
* apply the sigmoid function \(c_{\text{est},n}=\sigma(\ell_{n})\).

During validation, we set \(c_{\text{est},n}=\sigma(\ell_{n}-\ell_{\text{off}})\) without updating \(\ell_{\text{off}}\), ensuring the same system behavior during validation. For multiple target detection, one \(\ell_{\text{off}}\) is used for \(\mathbf{C}_{\text{est}}\). In order to specify a targeted \(P_{\text{f}}\) using one-hot encoding, we offset the output probabilities of the NN with \(P_{\text{off}}=(P_{\text{f}}-P_{\text{f,onehot}})\cdot[1,-\frac{1}{T_{\text{max}}},-\frac{1}{T_{\text{max}}},\dots,-\frac{1}{T_{\text{max}}}]^{\top}\) after calculating the resulting weighted false alarm rate \(P_{\text{f,onehot}}\) (without using hard decisions, to improve training stability). To ensure probability values in \([0,1]\), we clip at these extremes. Using one-hot encoding, we replace the binary cross-entropy loss for target detection with the cross-entropy loss, handling the optimization as a classification problem.

### _Sequence Ambiguity in Multiple Target Detection_

For simulation purposes, we face the fact that real and estimated angles exist as vectors in our system, while we need to compare distances of sets. The order in which our NN estimates the angles of different targets is practically not important, but we need to be able to match estimates to their valid counterparts. We have multiple approaches to handle this extension to sets during training of the NN.

#### II-F1 Sortinput

This simple approach sorts all input angles in our validation set. This corresponds to an additional task for the angle estimation NN: not only estimating the correct angles but also returning them in order. This approach is effective if the angles are estimated correctly.

#### II-F2 Sortall

This extension of the first approach sorts the validation set and the outputs of the NN. If angle estimations are correct, this set behavior represents a translation to vectors. We expect the sortall approach to perform at least as well as sortinput.

#### II-F3 Permute

For this method, the angle permutation that minimizes the MSE is chosen as the correct permutation, and the returned vectors are permuted according to it. This represents the best possible method concerning MSE but also brings significant overhead, since \(T!\) angle permutations need to be considered. We calculate the average complexity for one sample for the different set approaches, shown in Tab. II. For sortinput and sortall, we assume a Quicksort algorithm. If the NN estimation in one of the sorting approaches contains angle estimates far away from the true angle, the overall MSE could be much larger than expected as the whole sorting is faulty. For example, if \(\hat{\theta}_{k}>\hat{\theta}_{k+1}\) but \(\theta_{k}<\theta_{k+1}\), the values are switched for evaluation even if \(\hat{\theta}_{k}\approx\theta_{k}\). During validation, we use the permute method for all trained NNs.
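A minimal NumPy sketch of the permute set method follows: the permutation of the estimated angles that minimizes the MSE against the ground truth is selected. This is an illustration with our own names, not the authors' code.

```python
import numpy as np
from itertools import permutations

def permute_mse(theta_true, theta_est):
    """Minimum MSE over all T! orderings of the estimated angles (permute method)."""
    best = np.inf
    for perm in permutations(range(len(theta_est))):
        mse = np.mean((theta_true - theta_est[list(perm)]) ** 2)
        best = min(best, mse)
    return best

theta_true = np.radians(np.array([-10.0, 15.0]))
theta_est = np.radians(np.array([14.0, -11.0]))   # correct targets, swapped order
print(permute_mse(theta_true, theta_est))         # small error after the best permutation
```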
## III Simulation Results

In our simulations, the communication receiver is situated at an angle of arrival (AoA) of \(\varphi\in[30^{\circ},50^{\circ}]\). The radar targets are found in \(\theta\in[-20^{\circ},20^{\circ}]\). Our monostatic sender and radar receiver are simulated as a linear array with 16 antennas. For the radar receiver, we target a weighted false alarm rate of \(P_{\mathrm{f}}=10^{-2}\) while optimizing the detection rate and the angle estimator.

### _Communication Results_

Previous works [6, 7] have shown that an AE approach to substitute modulation and demodulation is effective. In combination with sensing, constellation diagrams tend to assume a PSK-like form. This behavior can be explained intuitively, since sensing profits greatly from a constant signal amplitude. For \(M=8\) and a communication SNR of \(\sigma_{\mathrm{c}}^{2}/\sigma_{\mathrm{n}}^{2}\) ... \(0.5\). We also reach a higher RMSE of \(0.06\) for the estimated angles. In Fig. 4, we plot the RMSE of the angle estimation for detected and present targets for \(10^{5}\) transmissions versus the upsampling factor \(u\). We compare the different set methods from Sec. II-F and also show the ESPRIT benchmark. For the multiple target case, the different set methods enable training of the NNs. The method labeled "None" denotes NN training without using any set method and shows that the implementation of a set method for multiple target estimation is necessary. The permute method performs the best, which was expected since it considers all possible set permutations while still using the MSE loss. The methods based on sorting perform relatively well and are only slightly outperformed by permuting. These set methods outperform the ESPRIT benchmark for small upsampling factors \(u\leq 3\). The specific single-snapshot ESPRIT implementation as used for \(u=1\) cannot outperform the proposed system. The detection probability is comparable for all set methods. The weighted false alarm rates are also similar for all methods and converge from the targeted \(P_{\text{f}}\) to zero with increasing \(u\). The detection rate saturates to a value of \(0.83\) as \(u\) increases. To increase the detection rate for rising \(u\), the detection threshold would need to be further modified.

## IV Conclusion

In this work, we demonstrate the feasibility of the autoencoder (AE) approach to joint communication and sensing (JCAS) for multiple targets. We evaluated different set methods that enable training of angle estimation for multiple targets. Depending on the permissible system complexity, all three options remain contenders for application in future systems. We outperformed an ESPRIT benchmark for angle estimation for small upsampling factors \(u\).
The novel counting encoding enables setting a design false alarm rate that constraints the detection rate of a neural network (NN) target detector. We see counting encoding as a promising alternative to classification using one-hot encoding for problems that include object recognition connected with counting. The proposed method is particularly suitable for JCAS systems, where the number of available snapshots is typically limited.
2303.10711
Asymptotic typicality degrees of properties over finite structures
In previous work we defined and studied a notion of typicality, originated with B. Russell, for properties and objects in the context of general infinite first-order structures. In this paper we consider this notion in the context of finite structures. In particular we define the typicality degree of a property $\phi(x)$ over finite $L$-structures, for a language $L$, as the limit of the probability of $\phi(x)$ to be typical in an arbitrary $L$-structure ${\cal M}$ of cardinality $n$, when $n$ goes to infinity. This poses the question whether the 0-1 law holds for typicality degrees for certain kinds of languages. One of the results of the paper is that, in contrast to the classical well-known fact that the 0-1 law holds for the sentences of every relational language, the 0-1 law fails for degrees of properties of relational languages containing unary predicates. On the other hand it is shown that the 0-1 law holds for degrees of some basic properties of graphs, and this gives rise to the conjecture that the 0-1 law holds for relational languages without unary predicates. Another theme is the ``neutrality'' degree of a property $\phi(x)$ ( i.e., the fraction of $L$-structures in which neither $\phi$ nor $\neg \phi$ is typical), and in particular the ``regular'' properties (i.e., those with limit neutrality degree $0$). All properties we dealt with, either of a relational or a functional language, are shown to be regular, but the question whether {\em every} such property is regular is open.
Athanassios Tzouvaras
2023-03-19T16:43:16Z
http://arxiv.org/abs/2303.10711v3
# Asymptotic typicality degrees of properties over finite structures ###### Abstract In previous work we defined and studied a notion of typicality, originated with B. Russell, for properties and objects in the context of general infinite first-order structures. In this paper we consider this notion in the context of finite structures. In particular we define the typicality degree of a property \(\phi(x)\) over finite \(L\)-structures, for a language \(L\), as the limit of the probability of \(\phi(x)\) to be typical in an arbitrary \(L\)-structure \(\mathcal{M}\) of cardinality \(n\), when \(n\) goes to infinity. This poses the question whether the 0-1 law holds for typicality degrees for certain kinds of languages. One of the results of the paper is that, in contrast to the classical well-known fact that the 0-1 law holds for the sentences of every relational language, the 0-1 law fails for degrees of properties of relational languages containing unary predicates. On the other hand it is shown that the 0-1 law holds for degrees of some basic properties of graphs, and this gives rise to the conjecture that the 0-1 law holds for relational languages without unary predicates. Another theme is the "neutrality" degree of a property \(\phi(x)\) ( i.e., the fraction of \(L\)-structures in which neither \(\phi\) nor \(\neg\phi\) is typical), and in particular the "regular" properties (i.e., those with limit neutrality degree 0). All properties we dealt with, either of a relational or a functional language, are shown to be regular, but the question whether _every_ such property is regular is open. Department of Mathematics Aristotle University of Thessaloniki 541 24 Thessaloniki, Greece. e-mail: [email protected] _Mathematics Subject Classification (2010)_: 03C98, 03D78 _Keywords:_ Russell's notion of typicality, typical property, regular property, 0-1 law, finite graph. ## 1 Typicality a la Russell In [6] we set out an investigation of a notion of typicality which is originated with B. Russell. Specifically in [5, p. 89], Russell defines a typical Englishman to be one "who possesses all the properties possessed by a majority of Englishmen." The notion seems captivating in its simplicity and naturalness, but in order to be formally defined one has to distinguish between properties of an object language and properties of the metalanguage, else typicality itself would be one of the properties we have to check about an Englishman, and thus circularity arises. Once we make the aforementioned distinction using a first-order language \(L\), given any \(L\)-structure \({\cal M}=\langle M,\ldots\rangle\) we can define typical elements of \(M\) in the spirit of Russell, provided we first define what a typical property is. Given a formula \(\phi(x)\) of \(L\) without parameters, let \(\phi({\cal M})\) denote the extension of \(\phi(x)\) in \({\cal M}\), i.e., \[\phi({\cal M})=\{a\in M:{\cal M}\models\phi(a)\}.\] **Definition 1.1**: Let \({\cal M}=\langle M,\ldots\rangle\) be an \(L\)-structure. A property \(\phi(x)\) of \(L\) is said to be _typical over \({\cal M}\),_ if \(|\phi({\cal M})|>|\neg\phi({\cal M})|=|M\backslash\phi({\cal M})|\). Then an element \(a\in M\) is said to be _typical_ if it satisfies every typical property over \({\cal M}\). In [6], among other things, we established the existence of typical elements in many _infinite_ structures. 
For example the standard structure of reals (or second-order arithmetic) contains \(|{\mathbb{R}}|\)-many typical reals, while only \(<|{\mathbb{R}}|\)-many nontypical ones. (A variant of the same notion of typicality, adjusted to fit to the context of set theory and generating a new inner model of ZF, has appeared in [7].) Instead, in the present paper we are interested only in typical _properties_ (not in typical objects), and only over the _finite_ structures of a (finite) language \(L\). Specifically, we set out to study the _probabilities_ for \(L\)-properties \(\phi(x)\), in one free variable, _to be typical_ over arbitrary \(L\)-structures \({\cal M}\) of cardinality \(n\), and further to compute the limits of these probabilities, as \(n\) tends to infinity. This study parallels the classical results of Finite Model Theory about the probabilities of _sentences_ of \(L\) to be _true_ over finite structures and the fundamental 0-1 law about these truth probabilities. ## 2 Typicality degrees of first-order properties over finite structures ### Asymptotic truth probabilities and 0-1 Laws Typicality degrees of properties over finite structures and their asymptotic behavior are, in some sense, generalizations of the truth degrees (or truth probabilities) of _sentences_. So we need to recall first some definitions and notation about the latter, see for example [1, SS3], or the original paper [2]. The terminology and notation here is mostly that of [2]. Let \(L\) be a first-order language consisting of a finite set of (non-logical) relational symbols. For every \(n\), let \({\bf S}_{n}(L)\) be the set of \(L\)-structures \({\cal M}=\langle M,\ldots\rangle\) with \(|M|=n\), or simply \(M=\{1,2,\ldots,n\}\). For every \(L\)-sentence \(\phi\), let \({\rm Mod}_{n}(\phi)\) be the subset of structures of \({\bf S}_{n}(L)\) which satisfy \(\phi\). Let also \[\mu_{n}(\phi)=\frac{|{\rm Mod}_{n}(\phi)|}{|{\bf S}_{n}(L)|},\mbox{ and }\mu( \phi)=\lim_{n\to\infty}\mu_{n}(\phi),\] if this limit exists. Given a class \(\Phi\) of \(L\)-sentences we say that \(\Phi\)_satisfies the 0-1 law_ if for every \(\phi\in\Phi\), \(\mu(\phi)=0\) or 1. The following is a fundamental result of Finite Model Theory. The following Theorem is independently due to Fagin [2] and Glebskii _et al._[3]. **Theorem 2.1**: (0-1 Law for FOL) _If \(L\) is a first-order language with no function or constant symbols, then the set of sentences of \(L\) satisfies the 0-1 law._ Nevertheless Theorem 2.1 fails when \(L\) contains function symbols. The following is the standard example used to prove this failure (see [2, SS4] and [1, Example 3.1.1]). **Example 2.2**: _Let \(L=\{F\}\), where \(F\) is a unary function symbol. If \(\phi\) is the \(L\)-sentence \(\phi:=\forall x(F(x)\neq x)\), then \(\mu(\phi)=e^{-1}\), thus the 0-1 law fails in general for the sentences of \(L\)._ _Proof._ Observe that for any \(n\geq 1\), the number of structures \({\cal M}=\langle M,f\rangle\in{\bf S}_{n}(L)\) that satisfy \(\phi:=(\forall x)(F(x)\neq x)\) is just the number of functions \(f:M\to M\), \(|M|=n\), such that \(f(x)\neq x\) for every \(x\in M\). This number is \((n-1)^{n}\) (since \(f(x)\) may take independently for each \(x\), \(n-1\) possible values). On the other hand, \(|{\bf S}_{n}(L)|=n^{n}\). Therefore \(\mu_{n}(\phi)=(1-1/n)^{n}\), and hence \(\lim_{n\to\infty}\mu_{n}(\phi)=e^{-1}\). ### Typicality degrees of properties Let us first elaborate a bit on the general definition 1.1. 
Recall that we denote by \(\phi({\cal M})\) the extension of \(\phi\) in \({\cal M}\), i.e., \(\phi({\cal M})=\{a\in M:{\cal M}\models\phi(a)\}\). When we deal with typicality of _elements_ of a structure, we naturally distinguish them into just typical and non-typical, but when we deal with typicality of _properties,_ especially over finite structures, we should distinguish them into three kinds, according to the size of their extension.

**Definition 2.3**: Let \({\cal M}\) be an \(L\)-structure and let \(\phi(x)\) be a property of \(L\) (without parameters). We say that:

\(\bullet\)\(\phi(x)\) is _typical for \({\cal M}\)_, if \(|\phi({\cal M})|>|\neg\phi({\cal M})|\).

\(\bullet\)\(\phi(x)\) is _atypical for \({\cal M}\),_ if \(|\phi({\cal M})|<|\neg\phi({\cal M})|\).

\(\bullet\)\(\phi(x)\) is _neutral for \({\cal M}\),_ if \(|\phi({\cal M})|=|\neg\phi({\cal M})|\) (i.e., if neither \(\phi(x)\) nor \(\neg\phi(x)\) is typical).

The above distinction of properties is valid for all structures, infinite and finite, but is particularly useful when dealing with finite structures. If we apply the preceding definition to a structure \({\cal M}\) of \({\bf S}_{n}(L)\), then \(\phi(x)\) is typical, atypical and neutral for \({\cal M}\), if and only if \(|\phi({\cal M})|>n/2\), \(|\phi({\cal M})|<n/2\) and \(|\phi({\cal M})|=n/2\), respectively, the latter case of course being possible only for even \(n\). Since for every \(\phi\) and \({\cal M}\in{\bf S}_{n}(L)\), \[|\phi({\cal M})|<n/2\Leftrightarrow|\neg\phi({\cal M})|>n/2,\] that is, \[\phi(x)\mbox{ is atypical for }{\cal M}\Leftrightarrow\neg\phi(x)\mbox{ is typical for }{\cal M},\] to simplify terminology henceforth we shall not refer to "atypical \(\phi(x)\)" but to "typical \(\neg\phi(x)\)" instead. Let us also fix for every \(L\) and \(n\) the following subclasses of \({\bf S}_{n}(L)\): \[{\bf S}_{n}(\phi:\mbox{typ})=\{{\cal M}\in{\bf S}_{n}(L):\phi(x)\mbox{ is typical for }{\cal M}\}=\{{\cal M}:|\phi({\cal M})|>n/2\},\] \[{\bf S}_{n}(\phi:\mbox{ntr})=\{{\cal M}\in{\bf S}_{n}(L):\phi(x)\mbox{ is neutral for }{\cal M}\}=\{{\cal M}:|\phi({\cal M})|=n/2\}.\] The second of the above classes exists only for even \(n\), so for each property \(\phi(x)\), \({\bf S}_{n}(L)\) splits as follows for odd and even \(n\): \[{\bf S}_{2n+1}(L)={\bf S}_{2n+1}(\phi:\mbox{typ})\cup{\bf S}_{2n+1}(\neg\phi:\mbox{typ}), \tag{1}\] while \[{\bf S}_{2n}(L)={\bf S}_{2n}(\phi:\mbox{typ})\cup{\bf S}_{2n}(\neg\phi:\mbox{typ})\cup{\bf S}_{2n}(\phi:\mbox{ntr}). \tag{2}\] Then, by analogy with the probabilities \(\mu_{n}(\phi)\) and the asymptotic probability \(\mu(\phi)=\lim_{n\to\infty}\mu_{n}(\phi)\) for the truth of \(L\)-sentences referred to in Section 2.1 above, we can naturally define the corresponding probabilities for a property \(\phi(x)\) to be typical or neutral over an arbitrary structure \({\cal M}\in{\bf S}_{n}(L)\). Specifically, for each \(n\), we set \[d_{n}(\phi:{\rm typ})=\frac{|{\bf S}_{n}(\phi:{\rm typ})|}{|{\bf S}_{n}(L)|},\hskip 14.226378ptd_{2n}(\phi:{\rm ntr})=\frac{|{\bf S}_{2n}(\phi:{\rm ntr})|}{|{\bf S}_{2n}(L)|},\] and, further, \[d(\phi:{\rm typ})=\lim_{n\to\infty}d_{n}(\phi:{\rm typ}),\hskip 14.226378ptd(\phi:{\rm ntr})=\lim_{n\to\infty}d_{2n}(\phi:{\rm ntr}),\] whenever these limits exist. \(d_{n}(\phi:{\rm typ})\) and \(d_{2n}(\phi:{\rm ntr})\) are the \(n\)_-typicality degree_ and \(n\)_-neutrality degree_ of \(\phi(x)\), respectively, while \(d(\phi:{\rm typ})\) and \(d(\phi:{\rm ntr})\) are the corresponding _asymptotic degrees_.
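As a minimal computational illustration of these definitions, consider the simplest language \(L=\{U\}\) with a single unary predicate, identifying a structure of cardinality \(n\) with the interpretation \(W=U^{\cal M}\subseteq\{1,\ldots,n\}\) (all \(2^{n}\) subsets are counted here; excluding trivial interpretations would not change the asymptotic picture). The sketch below is illustrative and the helper names are ours.

```python
from math import comb


def typicality_degree(n):
    """d_n(U:typ) for L={U}: fraction of subsets W of an n-set with |W| > n/2."""
    return sum(comb(n, j) for j in range(n // 2 + 1, n + 1)) / 2 ** n


def neutrality_degree(two_n):
    """d_{2n}(U:ntr): fraction of subsets of a 2n-set with exactly n elements."""
    assert two_n % 2 == 0
    return comb(two_n, two_n // 2) / 2 ** two_n


for n in (5, 10, 20, 40, 80):
    print(n, round(typicality_degree(n), 4), round(neutrality_degree(2 * n), 4))
# The typicality degree equals 1/2 for odd n and tends to 1/2 for even n,
# while the neutrality degree C(2n, n) / 2^(2n) shrinks towards 0 as n grows.
```

These exact values are consistent with what is proved below for unary predicates, namely neutrality degree \(0\) (Theorem 3.7) and typicality degree \(1/2\) (Theorem 4.2 (i)).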
Here are some first consequences of the definitions. **Fact 2.4**: _(i) If \(\vdash\phi(x)\to\psi(x)\), then \(d_{n}(\phi:{\it typ})\leq d_{n}(\psi:{\it typ})\), for all \(n\geq 1\). Therefore, if \(d(\psi:{\it typ})=0\), then \(d(\phi:{\it typ})=0\) too._ _(ii) If \(\vdash\phi(x)\leftrightarrow\psi(x)\), then \(d_{n}(\phi:{\it typ})=d_{n}(\psi:{\it typ})\), and also \(d_{2n}(\phi:{\it ntr})=d_{2n}(\psi:{\it ntr})\) for all \(n\geq 1\)._ _(iii) For all \(\phi(x)\) and \(n\), \(d_{2n}(\phi:{\it ntr})=d_{2n}(\neg\phi:{\it ntr})\)._ _(iv) If \(d(\phi:{\it typ})=a>0\) (resp. \(d(\phi:{\it ntr})=a>0\)), then the set \(\{n:{\bf S}_{n}(\phi:{\it typ})\neq\emptyset\}\) (resp. \(\{n:{\bf S}_{2n}(\phi:{\it ntr})\neq\emptyset\}\)) is cofinite._ _Proof._ For (i) and (ii) just note that if \(\vdash\phi(x)\to\psi(x)\), then for every structure \({\cal M}\), \(\phi({\cal M})\subseteq\psi({\cal M})\), hence \(|\phi({\cal M})|\leq|\psi({\cal M})|\), so \[{\cal M}\in{\bf S}_{n}(\phi:{\rm typ})\Leftrightarrow|\phi({\cal M})|>n/2 \Rightarrow|\psi({\cal M})|>n/2\Leftrightarrow{\cal M}\in{\bf S}_{n}(\psi:{ \rm typ}),\] therefore \({\bf S}_{n}(\phi:{\rm typ})\subseteq{\bf S}_{n}(\psi:{\rm typ})\). Moreover \(\vdash\phi(x)\leftrightarrow\psi(x)\) implies \({\bf S}_{n}(\phi:{\rm typ})={\bf S}_{n}(\psi:{\rm typ})\) and also \({\bf S}_{2n}(\phi:{\rm ntr})={\bf S}_{2n}(\psi:{\rm ntr})\), for every \(n\). (iii) If \(|M|=2n\), then for every \(\phi(x)\), obviously \(|\phi({\cal M})|=n\Leftrightarrow|\neg\phi({\cal M})|=n\), so \({\bf S}_{2n}(\phi:{\rm ntr})={\bf S}_{2n}(\neg\phi:{\rm ntr})\). (iv) Let \(d(\phi:{\rm typ})=a>0\). If we pick some \(0<\varepsilon<a\), then clearly there is \(n_{0}\) such that for all \(n\geq n_{0}\), \(d_{n}(\phi:{\rm typ})>a-\varepsilon\). Since \(d_{n}(\phi:{\rm typ})=\frac{|{\bf S}_{n}(\phi:{\rm typ})|}{|{\bf S}_{n}(L)|}\), it follows that for all \(n\geq n_{0}\)\(|{\bf S}_{n}(\phi:{\rm typ})|\neq 0\), and hence \({\bf S}_{n}(\phi:{\rm typ})\neq\emptyset\). The claim for \(\{n:{\bf S}_{2n}(\phi:{\rm ntr})\neq\emptyset\}\) is shown similarly. \(\dashv\) By the definition of \(\mu(\phi)\) in Section 2.1, it follows immediately that for any language \(L\) and any \(L\)-sentence \(\phi\), if \(\mu(\phi)\) exists, then so does \(\mu(\neg\phi)\) and \(\mu(\neg\phi)=1-\mu(\phi)\). What about typicality degrees? Is it true that \(d(\neg\phi:\mbox{typ})=1-d(\phi:\mbox{typ})\) whenever \(d(\phi:\mbox{typ})\) exists, for any property \(\phi(x)\)? The question is eventually open and the reason is the limit \(\lim_{n\to\infty}d_{2n}(\phi:\mbox{ntr})\). Namely, while by (1) \[d_{2n+1}(\phi:\mbox{typ})+d_{2n+1}(\neg\phi:\mbox{typ})=1,\] (2) implies that \[d_{2n}(\phi:\mbox{typ})+d_{2n}(\neg\phi:\mbox{typ})+d_{2n}(\phi:\mbox{ntr})=1.\] Thus in the second case we have \[\lim_{n\to\infty}d_{2n}(\neg\phi:\mbox{typ})=1-\lim_{n\to\infty}d_{2n}(\phi: \mbox{typ})-\lim_{n\to\infty}d_{2n}(\phi:\mbox{ntr}),\] and in order to infer that \(\lim_{n\to\infty}d_{2n}(\neg\phi:\mbox{typ})=1-\lim_{n\to\infty}d_{2n}(\phi: \mbox{typ})\), we must establish that \(\lim_{n\to\infty}d_{2n}(\phi:\mbox{ntr})=0\). We don't know if this is the case for every property \(\phi(x)\) of every language. So we shall give a name to properties satisfying this interesting and convenient condition. ## 3 Regularity of properties **Definition 3.1**: A property \(\phi(x)\) of \(L\) is said to be _regular_ if \(d(\phi:\mbox{ntr})=0\). 
**Fact 3.2**: _(i) \(\phi(x)\) is regular if and only if \(\neg\phi(x)\) is regular._ _(ii) If \(d(\phi:\mbox{typ})\) exists, then so does \(\lim_{n\to\infty}d_{2n+1}(\neg\phi:\mbox{typ})\) and_ \[\lim_{n\to\infty}d_{2n+1}(\neg\phi:\mbox{typ})=1-d(\phi:\mbox{typ}).\] _(iii) If \(\phi(x)\) is regular, then also_ \[\lim_{n\to\infty}d_{2n}(\neg\phi:\mbox{typ})=1-d(\phi:\mbox{typ}).\] _and therefore_ \[d(\neg\phi:\mbox{typ})=1-d(\phi:\mbox{typ}).\] _Proof._ (i) By Fact 2.4 (iii), for every \(n\), \(d_{2n}(\phi:\mbox{ntr})=d_{2n}(\neg\phi:\mbox{ntr})\), therefore \(d(\phi:\mbox{ntr})=0\) if and only if \(d(\neg\phi:\mbox{ntr})=0\). (ii) If \(d(\phi:\mbox{typ})=a\), then also \(\lim_{n\to\infty}d_{2n+1}(\phi:\mbox{typ})=a\), thus by (1), \(\lim_{n\to\infty}d_{2n+1}(\neg\phi:\mbox{typ})=1-a\). (iii) If \(d(\phi:\text{typ})=a\) and \(\phi(x)\) is regular, then \(\lim_{n\to\infty}d_{2n}(\phi:\text{ntr})=0\), so by (2) \[\lim_{n\to\infty}d_{2n}(\neg\phi:\text{typ})=1-a-\lim_{n\to\infty}d_{2n}(\phi: \text{ntr})=1-a.\] \(\dashv\) All specific properties we treat below are regular. So it is natural to ask whether every property is regular. The question is open for general languages. In the next subsection we show that it is true for a large class of properties of the language \(L=\{U_{1},\ldots,U_{k}\}\) which consists of an arbitrary number of unary predicates. ### Regularity of properties of \(L=\{U_{1},\ldots,U_{k}\}\) Let \(L=\{U_{1},\ldots,U_{k}\}\) be a language with \(k\) unary predicates. For each \(i\in\{1,\ldots,k\}\), let \(U_{i}^{1}(x)=U_{i}(x)\) and \(U_{i}^{0}(x)=\neg U_{i}(x)\). Then given a function \(e\in\{0,1\}^{k}\), we set \(\phi_{e}(x)=U_{1}^{e(1)}(x)\wedge U_{2}^{e(2)}(x)\wedge\cdots\wedge U_{k}^{e(k )}(x)\). The properties \(\phi_{e}(x)\), for \(e\in\{0,1\}^{k}\), form the \(2^{k}\) atoms of the (syntactic) Boolean algebra \(\mathcal{B}_{prop}\) generated by the properties \(U_{i}(x)\), \(i\in\{1,\ldots,k\}\), and any two distinct atoms \(\phi_{e_{1}}(x)\), \(\phi_{e_{2}}(x)\) are mutually inconsistent, i.e., \(\phi_{e_{1}}(x)\wedge\phi_{e_{2}}(x)\vdash\bot\). Besides each of the \(2^{2^{k}}\) elements of \(\mathcal{B}_{prop}\) has the form \(\phi_{E}(x)=\bigvee_{e\in E}\phi_{e}(x)\), for some \(E\subseteq\{0,1\}^{k}\). We shall generalize the class of formulas \(\phi_{e}(x)\) defined above, by relaxing the condition that for every \(i\leq k\), either \(U_{i}(x)\) or \(\neg U_{i}(x)\) must be a conjunct of \(\phi_{e}\). Namely for any \(p\) such that \(1\leq p\leq k\), a \(p\)-subsequence of \(\langle 1,\ldots,k\rangle\) is a \(p\)-tuple \(\langle i_{1},\ldots,i_{p}\rangle\) such that \(1\leq i_{1}<i_{2}<\cdots<i_{p}\leq k\). Given a \(p\)-subsequence \(\bar{s}=\langle i_{1},\ldots,i_{p}\rangle\) and an \(e\in\{0,1\}^{k}\), let \[\phi_{\bar{s},e}(x)=U_{i_{1}}^{e(i_{1})}(x)\wedge\cdots\wedge U_{i_{p}}^{e(i_ {p})}(x).\] We refer to formulas \(\phi_{\bar{s},e}(x)\) as _basic formulas_ of \(L\). The main result of this subsection is that every basic formula of \(L\) is regular. Given an \(L\)-structure \(\mathcal{M}=\langle M,W_{1},\ldots,W_{k}\rangle\), let \(W_{i}^{1}=W_{i}\) and \(W_{i}^{0}=M\backslash W_{i}\). For every \(e\in\{0,1\}^{k}\), let \(X_{e}=W_{1}^{e(1)}\cap W_{2}^{e(2)}\cap\cdots\cap W_{k}^{e(k)}\). Clearly the sets \(X_{e}\), for \(e\in\{0,1\}^{k}\), are pairwise disjoint, but their difference from \(\phi_{e}(x)\) is that not all of them need to be nonempty. 
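For concreteness, here is a small worked illustration of these formulas for \(k=2\): the atoms are \[\phi_{e}(x)\in\{U_{1}(x)\wedge U_{2}(x),\ U_{1}(x)\wedge\neg U_{2}(x),\ \neg U_{1}(x)\wedge U_{2}(x),\ \neg U_{1}(x)\wedge\neg U_{2}(x)\},\] while for the \(1\)-subsequence \(\bar{s}=\langle 2\rangle\) the basic formulas \(\phi_{\bar{s},e}(x)\) are just \(U_{2}(x)\) and \(\neg U_{2}(x)\). On the semantic side, \(X_{e}=W_{1}^{e(1)}\cap W_{2}^{e(2)}\) ranges over the four (possibly empty) cells of the partition of \(M\) generated by \(W_{1}\) and \(W_{2}\).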
Let \(\mathcal{B}_{set}\subseteq\mathcal{P}(M)\) be the Boolean algebra generated by the sets \(W_{i}\), \(i\in\{1,\ldots,k\}\) with at most \(2^{k}\) atoms. As before for each \(E\subseteq\{0,1\}^{k}\), we set \(X_{E}=\bigcup_{e\in E}X_{e}\). Further for any \(p\)-subsequence \(\bar{s}=\langle i_{1},\ldots,i_{p}\rangle\) of \(\langle 1,\ldots,k\rangle\), with \(1\leq p\leq k\), and any \(e\in\{0,1\}^{k}\), we write \(X_{\bar{s},e}=W_{i_{1}}^{e(i_{1})}\cap\cdots\cap W_{i_{p}}^{e(i_{p})}\). Finally each set \(X_{E}\) is defined in \({\cal M}\) by the property \(\phi_{E}(x)\), i.e., \(X_{E}=\phi_{E}({\cal M})\), and each \(X_{\bar{s},e}\) is defined by \(\phi_{\bar{s},e}(x)\), i.e., \(\phi_{\bar{s},e}({\cal M})=X_{\bar{s},e}\). **Lemma 3.3**: _Given an \(L\)-structure \({\cal M}=\langle M,W_{1},\ldots,W_{k}\rangle\) as above, the definable (without parameters) subsets of \(M\) are exactly the sets \(X_{E}\). Therefore every \(L\)-property \(\phi(x)\) is equivalent over \({\cal M}\) to a \(\phi_{e}(x)\), for some \(E\subseteq\{0,1\}^{k}\)._ _Proof._ Towards reaching a contradiction assume that there is an \(L\)-property \(\phi(x)\) such that \(\phi({\cal M})=A\) and \(A\neq X_{E}\) for every \(E\subseteq\{0,1\}^{k}\). Since the sets \(X_{e}\) form a partition of \(M\), in order for a set \(Y\subseteq M\) to have the property \[(\forall e)(\forall a,b\in X_{e})(a\in Y\Leftrightarrow b\in Y),\] it is necessary and sufficient that \(Y\in{\cal B}\), where \({\cal B}\) is the Boolean algebra mentioned above, i.e., \(Y=X_{E}\) for some \(E\subseteq\{0,1\}^{k}\). So since by assumption \(A\notin{\cal B}\) it follows that there exists \(e\in\{0,1\}^{k}\) such that \[(\exists a,b\in X_{e})(a\in A\Leftrightarrow b\notin A), \tag{3}\] Now observe that every bijection \(f:M\to M\) which preserves the sets \(W_{i}\), i.e., such that \(f[W_{i}]=W_{i}\) for every \(i=1,\ldots,k\), is an automorphism of \({\cal M}=\langle M,W_{1},\ldots,W_{k}\rangle\). By (3) we can pick \(X_{e}\) and \(a,b\in X_{e}\) such that \(a\in A\Leftrightarrow b\notin A\) and take the bijection \(f_{1}:X_{e}\to X_{e}\) which interchanges \(a\) and \(b\). Let also \(id_{M\setminus X_{e}}\) be the identity on the complement of \(X_{e}\) and \(f=f_{1}\cup id_{M\setminus X_{e}}\). Then \(f\) is an automorphism of \({\cal M}\) such that \(f(a)=b\). Since we assumed that there is \(\phi(x)\) such that \(A=\phi({\cal M})\), \(f\) must preserve \(A\). But \[a\in A\Leftrightarrow a\in\phi({\cal M})\Leftrightarrow{\cal M}\models\phi(a )\Leftrightarrow{\cal M}\models\phi(f(a))\Leftrightarrow f(a)\in A \Leftrightarrow b\in A,\] a contradiction. This completes the proof. \(\dashv\) Let us notice here a result which is involved in all proofs of regularity of properties. Whenever we try to check the regularity of a property over a structure \(M\) with \(2n\) elements, we shall necessarily deal with the number \({2n\choose n}\) which counts the subsets \(M\) having half of its elements. The numbers \({2n\choose n}\) are known as "central binomial coefficients" and several useful combinatorial facts are known about them (see e.g. [8]). In particular the following upper bound is particularly helpful and will be used below. **Fact 3.4**: ([8]) _For every \(n\geq 1\), \({2n\choose n}\leq{4^{n}\over\sqrt{\pi n}}\)._ For \(k\leq n\), we shall denote by \((n)_{k}\) the number of \(k\)-tuples of distinct elements chosen from a set of \(n\) elements. 
It is well known that \[(n)_{k}=\frac{n!}{(n-k)!}=(n-k+1)(n-k+2)\cdots n.\] Then \((n)_{1}=n\), \((n)_{n}=n!\), \((n)_{k}=0\) for \(k>n\). In particular we set \((n)_{0}=1\). The numbers \((n)_{m}\) are called _falling factorials_. Notations \(n^{\underline{m}}\) and \(P(n,m)\) are often used in the bibliography instead of \((n)_{m}\). Below we shall employ the relation \(f(n)\sim g(n)\) of _asymptotic equality_ between functions \(f,g:\mathbb{N}\rightarrow\mathbb{R}\), as well as that of _asymptotic inequality_\(f(n)\lesssim g(n)\). \(f(n)\sim g(n)\) means, by definition, \(\lim_{n\rightarrow\infty}\frac{f(n)}{g(n)}=1\), or equivalently, \(f(n)=g(n)+o(g(n))\), where \(o(g(n))\) is a function such that \(\lim_{n\rightarrow\infty}\frac{o(g(n))}{g(n)}=0\). \(f(n)\lesssim g(n)\) means \(f(n)\leq g(n)+o(g(n))\). The properties of \(\sim\) and \(\lesssim\) we shall need are the following, and are either well-known or easily verified. **Fact 3.5**: _(i) The relation \(\sim\) is preserved by the usual operations, i.e. if \(f_{1}\sim g_{1}\) and \(f_{2}\sim g_{2}\), then \(f_{1}+f_{2}\sim g_{1}+g_{2}\), \(f_{1}\cdot f_{2}\sim g_{1}\cdot g_{2}\), and \(\frac{f_{1}}{f_{2}}\sim\frac{g_{1}}{g_{2}}\)._ _(ii) If \(f(n)\sim g(n)\) and \(\lim_{n\rightarrow\infty}f(n)=a\in\mathbb{R}\), then \(\lim_{n\rightarrow\infty}g(n)=a\)._ _(iii) If \(f(n)\sim g(n)\) and \(g(n)\leq h(n)\), then \(f(n)\lesssim h(n)\)._ _(iv) If \(0\leq f(n)\lesssim g(n)\) and \(\lim_{n\rightarrow\infty}g(n)=0\), then \(\lim_{n\rightarrow\infty}f(n)=0\)._ [For completeness we sketch the proof of (ii). Let \(f(n)\sim g(n)\) and \(\lim_{n\rightarrow\infty}f(n)=a\in\mathbb{R}\). Then clearly both \(f,g\) are bounded, and let \(b\) be a bound for \(g\), i.e., \(\forall n\ |g(n)|\leq b\). We have \(f(n)=g(n)+o(g(n))\), where \(\lim_{n\rightarrow\infty}\frac{o(g(n))}{g(n)}=0\). So for all \(n\): (1) \(|g(n)-a|\leq|g(n)-f(n)|+|f(n)-a|=|o(g(n))|+|f(n)-a|\). Fix some \(\varepsilon>0\). There is \(n_{1}\) such that \(\forall n\geq n_{1}\ |\frac{o(g(n))}{g(n)}|\leq\frac{\varepsilon}{2b}\), hence: (2) \(\forall n\geq n_{1}\ |o(g(n))|\leq\frac{\varepsilon}{2b}|g(n)|\leq\frac{ \varepsilon}{2}\). Also there is \(n_{2}\) such that: (3) \(\forall n\geq n_{2}\ |f(n)-a|<\frac{\varepsilon}{2}\). If \(n_{0}=\max(n_{1},n_{2})\), (1), (2) and (3) yield \(\forall n\geq n_{0}\ |g(n)-a|<\varepsilon\).] For example, for any fixed \(k\) such that \(1\leq k<n\), \((n)_{k}\) is a polynomial in \(n\) of degree \(k\) with leading coefficient \(1\), so \[(n)_{k}\ \sim\ n^{k}. \tag{4}\] We shall apply relation (4) several times without explicit reference to that. We shall also make use of the following result. **Fact 3.6**: \(\sum_{k<n/2}{n\choose k}=\sum_{k>n/2}{n\choose k}\ \sim 2^{n-1}\)_._ _Proof._ The first equality is well-known. Moreover for every \(n\), \(\sum_{k=0}^{n}{n\choose k}=2^{n}\). If \(n\) is odd, \(\sum_{k=0}^{n}{n\choose k}=\sum_{k<n/2}{n\choose k}+\sum_{k>n/2}{n\choose k}\), so \(\sum_{k<n/2}{n\choose k}=\frac{2^{n}}{2}=2^{n-1}\). If \(n\) is even and \(n=2m\), then \(2(\sum_{k<n/2}{n\choose k})+{2m\choose m}=2^{n}\), that is, \(\sum_{k<n/2}{n\choose k}=\frac{1}{2}(2^{n}-{2m\choose m})=2^{n-1}-\frac{1}{2}{ 2m\choose m}\). Therefore \[\frac{\sum_{k<n/2}{n\choose k}}{2^{n-1}}=1-\frac{1}{2^{2m}}{2m\choose m}.\] So it suffices to see that \(\lim_{m\to\infty}\frac{1}{2^{2m}}{2m\choose m}=0\). But this follows from Fact 3.4, since \(\frac{1}{2^{2m}}{2m\choose m}\leq\frac{1}{2^{2m}}\frac{4^{m}}{\sqrt{\pi m}}= \frac{1}{\sqrt{\pi m}}\longrightarrow_{m}0\). 
\(\dashv\) **Theorem 3.7**: _Every basic property \(\phi_{\bar{s},e}(x)\) of \(L=\{U_{1},\ldots,U_{k}\}\) is regular. In particular every property \(\phi_{e}(x)\), as well as every \(U_{i}(x)\) and \(\neg U_{i}(x)\), for \(i=1,\ldots,k\), is regular._ _Proof._ Let us fix the ground set \(M\) of all \({\cal M}=\langle M,W_{1},\ldots,W_{k}\rangle\in{\bf S}_{2n}(L)\), i.e. \(|M|=2n\), and fix also a basic formula \(\phi_{\bar{s},e}(x)\), for some \(p\)-subsequence \(\bar{s}=\langle i_{1},\ldots,i_{p}\rangle\) of \(\langle 1,\ldots,k\rangle\) and some \(e\in\{0,1\}^{k}\). We have to compute the limit of the probability \[d_{2n}(\phi_{\bar{s},e}:{\rm ntr})=\frac{|{\bf S}_{2n}(\phi_{\bar{s},e}:{\rm ntr})|}{|{\bf S}_{2n}(L)|}.\] Note that each \(L\)-structure \({\cal M}\) is determined by a \(k\)-tuple \(\langle W_{1},\ldots,W_{k}\rangle\) of elements of \({\cal P}(M)\), rather than a \(k\)-element subset \(\{W_{1},\ldots,W_{k}\}\). This is because an interpretation of \(L\) in \({\cal M}\) is a mapping \(I:\{U_{1},\ldots,U_{k}\}\to{\cal P}(M)\), or \(I:\{1,\ldots,k\}\to{\cal P}(M)\), such that \(I(i)=W_{i}=U_{i}^{\cal M}\). Each such \(I\) determines a \(k\)-tuple \(\langle W_{1},\ldots,W_{k}\rangle\). To be precise, each \(W_{i}\) must be different from \(\emptyset\) and \(M\), but this does not affect the asymptotic behavior of the neutrality degree. Namely, by (4), \[|{\bf S}_{2n}(L)|=(2^{2n}-2)_{k}\ \sim\ 2^{2kn}. \tag{5}\] In order to compute \(|{\bf S}_{2n}(\phi_{\bar{s},e}:{\rm ntr})|\) we fix temporarily a set \(A\subseteq M\) such that \(|A|=n\). Since \(\phi_{\bar{s},e}({\cal M})=W_{i_{1}}^{e(i_{1})}\cap\cdots\cap W_{i_{p}}^{e(i_{p})}\), we set \[Z(A)=\{\langle W_{1},\ldots,W_{k}\rangle:W_{i_{1}}^{e(i_{1})}\cap\cdots\cap W_{i_{p}}^{e(i_{p})}=A\}.\] Then clearly \[|{\bf S}_{2n}(\phi_{\bar{s},e}:{\rm ntr})|=|Z(A)|\cdot{2n\choose n}. \tag{6}\] Now \(|Z(A)|=|Z_{1}(A)|\cdot|Z_{2}(A)|\), where \[Z_{1}(A)=\{\langle W^{e(i_{1})}_{i_{1}},\ldots,W^{e(i_{p})}_{i_{p}}\rangle:W^{e(i_{1})}_{i_{1}}\cap\cdots\cap W^{e(i_{p})}_{i_{p}}=A\},\] and \[Z_{2}(A)=\{\langle W^{e(j_{1})}_{j_{1}},\ldots,W^{e(j_{k-p})}_{j_{k-p}}\rangle:\{W^{e(j_{1})}_{j_{1}},\ldots,W^{e(j_{k-p})}_{j_{k-p}}\}\subseteq{\cal P}(M)\backslash\{W^{e(i_{1})}_{i_{1}},\ldots,W^{e(i_{p})}_{i_{p}}\}\}.\] In order to compute (or find upper bounds for) \(|Z_{1}(A)|\) and \(|Z_{2}(A)|\), we must distinguish the cases \(p=1\) and \(2\leq p\leq k\). _Case 1._ \(p=1\). Without loss of generality we may assume that \(\bar{s}\) is the 1-subsequence (1), i.e., \(\phi_{\bar{s},e}(x)\) is either \(U_{1}(x)\) or \(\neg U_{1}(x)\). By Fact 3.2 (i), it suffices to consider only \(U_{1}(x)\). Then \(U_{1}({\cal M})=W_{1}\), so \(Z_{1}(A)=\{W_{1}:W_{1}=A\}\), and hence \(|Z_{1}(A)|=1\). Also \(Z_{2}(A)=\{\langle W_{2},\ldots,W_{k}\rangle:\{W_{2},\ldots,W_{k}\}\subseteq{\cal P}(M)\backslash\{W_{1}\}\}\), so, in view of (4), \[|Z_{2}(A)|=(2^{2n}-1)_{k-1}\ \sim\ 2^{2(k-1)n}.\] Thus also \(|Z(A)|=|Z_{2}(A)|\ \sim\ 2^{2(k-1)n}\), and therefore, letting \(A\) range over all sets of cardinality \(n\), \[|{\bf S}_{2n}(U_{1}(x):{\rm ntr})|\sim{2n\choose n}\cdot 2^{2(k-1)n}.\] From the last equality, (5) and Fact 3.4 we get \[d_{2n}(U_{1}(x):{\rm ntr})\sim{2n\choose n}\cdot{2^{2(k-1)n}\over 2^{2kn}}\sim{2n\choose n}\cdot{1\over 2^{2n}}\leq{4^{n}\over\sqrt{\pi n}}\cdot{1\over 2^{2n}}={1\over\sqrt{\pi n}}\longrightarrow_{n}0.\] Therefore \(U_{1}(x)\) is regular. _Case 2._ \(2\leq p\leq k\).
Fix a \(p\)-subsequence of \(\langle 1,\ldots,k\rangle\) and an \(e\in\{0,1\}^{p}\). Then \(\phi_{\bar{s},e}({\cal M})=W^{e(i_{1})}_{i_{1}}\cap\cdots\cap W^{e(i_{p})}_{i_{p}}\). Fixing temporarily a set \(A\subseteq M\), as before \(Z_{2}(A)\) consists of the \((k-p)\)-tuples of \({\cal P}(M)\backslash\{W^{e(i_{1})}_{i_{1}},\ldots,W^{e(i_{p})}_{i_{p}}\}\), so \[|Z_{2}(A)|=(2^{2n}-p)_{k-p}\sim 2^{2(k-p)n}.\] The main difference of this case from the previous one lies in the computation of \(Z_{1}(A)\). Observe that \(W^{e(i_{1})}_{i_{1}}\cap\cdots\cap W^{e(i_{p})}_{i_{p}}=A\) implies \(A\subseteq W^{e(i_{j})}_{i_{j}}\), for each \(j=1,\ldots,p\). For each \(i_{j}\) we consider the cases \(e(i_{j})=1\) and \(e(i_{j})=0\). (a) If \(e(i_{j})=1\), then \(A\subseteq W_{i_{j}}\), so \(W_{i_{j}}=A\cup Y_{j}\), for some \(Y_{j}\subseteq M\backslash A\). (b) If \(e(i_{j})=0\), then \(A\subseteq M\backslash W_{i_{j}}\), so \(M\backslash W_{i_{j}}=A\cup Y_{j}\) for some \(Y_{j}\subseteq M\backslash A\). Thus in both cases for each \(j=1,\ldots,p\) there are at most as many possible choices for \(W_{i_{j}}\) as there are choices for \(Y_{j}\subseteq M\backslash A\), i.e., \(2^{|M\backslash A|}=2^{n}\). In fact the choices of such \(Y_{j}\)'s are not independent. They must satisfy the condition \(\bigcap_{j=1}^{p}Y_{j}=\emptyset\) (since otherwise \(\bigcap_{j=1}^{p}W^{e(i_{j})}_{i_{j}}\neq A\)). Nevertheless, the number of possible \(p\)-tuples in \(Z_{1}(A)\) is at most the number of \(p\)-tuples of elements of \({\cal P}(M\backslash A)\), i.e., \((2^{n})_{p}\sim 2^{pn}\). Therefore \[|Z_{1}(A)|\leq(2^{n})_{p}\sim 2^{pn}.\] So \[|Z(A)|=|Z_{1}(A)|\cdot|Z_{2}(A)|\lesssim 2^{2(k-p)n}\cdot 2^{pn}\sim 2^{(2k-p)n}.\] Letting \(A\) range over all sets of cardinality \(n\), we have \[|{\bf S}_{2n}(\phi_{\bar{s},e}:{\rm ntr})|\lesssim 2^{(2k-p)n}\cdot{2n\choose n}.\] So, by (5) and Fact 3.4, \[d_{2n}(\phi_{\bar{s},e}:{\rm ntr})\lesssim\frac{2^{(2k-p)n}\cdot{2n\choose n}}{2^{2kn}}\ \sim\ \frac{{2n\choose n}}{2^{pn}}\leq\frac{4^{n}}{2^{pn}\cdot\sqrt{\pi n}}.\] Since \(p\geq 2\), \(\frac{4^{n}}{2^{pn}\cdot\sqrt{\pi n}}\leq\frac{4^{n}}{4^{n}\cdot\sqrt{\pi n}}=\frac{1}{\sqrt{\pi n}}\). So \(d_{2n}(\phi_{\bar{s},e}:{\rm ntr})\longrightarrow_{n}0\), according to Fact 3.5. This completes the proof. \(\dashv\) It is still open, however, whether _every_ property of \(L\), i.e., every \(\phi_{E}(x)=\bigvee_{e\in E}\phi_{e}(x)\), for \(E\subseteq\{0,1\}^{k}\), is regular. **Question 3.8**: _Is every property \(\phi_{E}(x)\) of \(L=\{U_{1},\ldots,U_{k}\}\) regular?_ More generally: **Question 3.9**: _Is every property of a finite relational language regular?_ ### A necessary condition for regularity Next we shall give a necessary condition in order for a property \(\phi(x)\), (a) to have typicality degree \(0\), and (b) to be regular. Let \(L\) be a language not necessarily relational. For any \(L\)-property \(\phi(x)\) and every \(m\geq 1\) let us set \[\phi^{(m)}:=(\exists x_{1}\cdots\exists x_{m})\left((\bigwedge_{i\neq j}x_{i}\neq x_{j})\wedge(\bigwedge_{i=1}^{m}\phi(x_{i}))\right).\] \(\phi^{(m)}\) is a sentence and says that \(\phi(x)\) is satisfied by at least \(m\) objects. So \(m<k\) implies \(\phi^{(k)}\to\phi^{(m)}\), and thus for every \(m<k\leq n\), \[\mbox{Mod}_{n}(\phi^{(k)})\subseteq\mbox{Mod}_{n}(\phi^{(m)}). \tag{7}\] As usual for every \(n\) let \(\lfloor\frac{n}{2}\rfloor\) be the greatest integer \(\leq n/2\).
Then notice that by the definition of \({\bf S}_{n}(\phi:\mbox{typ})\), for every \({\cal M}\in{\bf S}_{n}(L)\), \[{\cal M}\in{\bf S}_{n}(\phi:\mbox{typ})\Leftrightarrow{\cal M}\models\phi^{ (\lfloor\frac{n}{2}\rfloor+1)},\] or \[{\bf S}_{n}(\phi:\mbox{typ})=\mbox{Mod}_{n}(\phi^{(\lfloor\frac{n}{2}\rfloor+ 1)}). \tag{8}\] On the other hand, for every \({\cal M}\) and \(k\leq|M|\), \(|\phi({\cal M})|=k\Rightarrow{\cal M}\models\phi^{(k)}\), so \[{\cal M}\in{\bf S}_{2n}(\phi:\mbox{ntr})\Rightarrow{\cal M}\models\phi^{(n)},\] or \[{\bf S}_{2n}(\phi:\mbox{ntr})\subseteq\mbox{Mod}_{2n}(\phi^{(n)}). \tag{9}\] **Lemma 3.10**: _Let \(L\) be any language and \(\phi(x)\) be an \(L\)-property. If there is \(m\geq 1\) such that \(\mu(\phi^{(m)})=0\), then:_ _(i) \(d(\phi:\mbox{typ})=0\), and_ _(ii) \(\phi(x)\) is regular._ _Proof._ Let \(\mu(\phi^{(m)})=0\) for some fixed \(m\). It means that \[\mu_{n}(\phi^{(m)})=\frac{|\mbox{Mod}_{n}(\phi^{(m)})|}{|{\bf S}_{n}(L)|} \longrightarrow_{n}0. \tag{10}\] (i) For all \(n\geq 2m\) we have \(m<\lfloor\frac{n}{2}\rfloor+1\), so by (7) and (8) \[{\bf S}_{n}(\phi:\mbox{typ})=\mbox{Mod}_{n}(\phi^{(\lfloor\frac{n}{2}\rfloor+ 1)})\subseteq\mbox{Mod}_{n}(\phi^{(m)}).\] Consequently, for every \(n\geq 2m\), \[\frac{|{\bf S}_{n}(\phi:\mbox{typ})|}{|{\bf S}_{n}(L)|}\leq\frac{|\mbox{Mod}_{n} (\phi^{(m)})|}{|{\bf S}_{n}(L)|}. \tag{11}\] Then (11) combined with (10) yields \[\frac{|{\bf S}_{n}(\phi:\mbox{typ})|}{|{\bf S}_{n}(L)|}\longrightarrow_{n}0,\] i.e. \(d(\phi:\mbox{typ})=0\). (ii) For every \(n\geq m\), by (7), (9) and (10) we have \[d_{2n}(\phi:\mbox{ntr})=\frac{|{\bf S}_{2n}(\phi:\mbox{ntr})|}{|{\bf S}_{2n}(L )|}\leq\frac{|\mbox{Mod}_{2n}(\phi^{(n)})|}{|{\bf S}_{2n}(L)|}\leq\frac{|\mbox {Mod}_{2n}(\phi^{(m)})|}{|{\bf S}_{2n}(L)|}\longrightarrow_{n}0.\] Thus \(\phi(x)\) is regular. \(\dashv\) **Corollary 3.11**: _If \(d(\phi:\mbox{typ})=1\), then \((\forall m\geq 1)(\mu(\phi^{(m)})=1)\)._ _Proof._ Assume \(d(\phi:\mbox{typ})=1\). First note that if \(L\) is relational, then the claim follows immediately from 3.10 by the help of Theorem 2.1 about the 0-1 law for sentences of a relational \(L\). For \(d(\phi:\mbox{typ})=1\) implies \(d(\phi:\mbox{typ})\neq 0\), so by 3.10 \((\forall m\geq 1)(\mu(\phi^{(m)})\neq 0)\) is true, and hence by 2.1 \((\forall m\geq 1)(\mu(\phi^{(m)})=1)\). However the claim can be shown without appeal to Theorem 2.1, by a direct argument similar to that of 3.10. Namely, assuming \(d(\phi:\mbox{typ})=1\) we have \[\frac{|{\bf S}_{n}(\phi:\mbox{typ})|}{|{\bf S}_{n}(L)|}\longrightarrow_{n}1, \tag{12}\] and since by (11) above, we have that for every \(n\geq 2m\), \[\frac{|{\bf S}_{n}(\phi:\mbox{typ})|}{|{\bf S}_{n}(L)|}\leq\frac{|\mbox{Mod}_ {n}(\phi^{(m)})|}{|{\bf S}_{n}(L)|},\] it follows that for all \(m\geq 1\), \[\frac{|\mbox{Mod}_{n}(\phi^{(m)})|}{|{\bf S}_{n}(L)|}\longrightarrow_{n}1,\] i.e., \((\forall m\geq 1)(\mu(\phi^{(m)})=1)\). \(\dashv\) The 0-1 law for typicality degrees and its failure for languages with unary properties Given a language \(L\), the 0-1 law for typicality degrees of properties of \(L\) can be defined by complete analogy with the corresponding law for sentences described in Section 2.1, as follows. **Definition 4.1**: Let \(L\) be a finite language. We say that _the 0-1 law holds for typicality degrees of properties of \(L\),_ if for every \(L\)-property \(\phi(x)\), either \(d(\phi:\mbox{\rm typ})=0\) or \(d(\phi:\mbox{\rm typ})=1\). 
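A quick way to get a feel for the hypothesis of Lemma 3.10 is to compute \(\mu_{n}(\phi^{(m)})\) exactly in the toy case \(L=\{U\}\), \(\phi(x)=U(x)\), where a structure of cardinality \(n\) is again identified with the set \(W=U^{\cal M}\) and \(\phi^{(m)}\) holds iff \(|W|\geq m\). The sketch below is ours and only illustrates that this particular \(\mu(\phi^{(m)})\) is never \(0\), so the lemma gives no information about \(d(U:{\rm typ})\); Theorem 4.2 below shows that this degree is in fact \(1/2\).

```python
from math import comb


def mu_n_U_at_least_m(n, m):
    """mu_n(U^{(m)}) for L={U}: fraction of subsets W of an n-set with |W| >= m."""
    return sum(comb(n, i) for i in range(m, n + 1)) / 2 ** n


for m in (1, 3, 10):
    print(m, [round(mu_n_U_at_least_m(n, m), 4) for n in (20, 40, 80, 160)])
# For every fixed m the values approach 1, so mu(U^{(m)}) = 1 rather than 0,
# and the criterion of Lemma 3.10 cannot be applied to U(x).
```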
We show in this Section that the 0-1 law fails for the language \(L=\{U_{1},\ldots,U_{k}\}\) with \(k\) unary predicates. Recall from the previous section the definition of a basic property \(\phi_{\bar{s},e}(x)\) of \(L\), for a \(p\)-subsequence \(\bar{s}\) of \(\langle 1,\ldots,k\rangle\) and an \(e\in\{0,1\}^{k}\). **Theorem 4.2**: _Let \(L=\{U_{1},\ldots,U_{k}\}\) with \(k\geq 1\), and \(\phi_{\bar{s},e}(x)\) be a basic property of \(L\)._ _(i) If \(\bar{s}\) is a \(1\)-sequence, then \(d(\phi_{\bar{s},e}:\mbox{\rm typ})=1/2\)._ _(ii) If \(\bar{s}\) is a \(p\)-sequence for \(p\geq 2\), then \(d(\phi_{\bar{s},e}:\mbox{\rm typ})=0\)._ The subcase for \(p=2\) of this Theorem requires special treatment, and it is more convenient to consider it separately. So we shall split Theorem 4.2 into Lemmas 4.3 and 4.4 below. The proof of the Theorem is an immediate consequence of these Lemmas. **Lemma 4.3**: _(i) If \(\bar{s}\) is a \(1\)-sequence, then \(d(\phi_{\bar{s},e}:\mbox{\rm typ})=1/2\)._ _(ii) If \(\bar{s}\) is a \(p\)-sequence for \(p\geq 3\), then \(d(\phi_{\bar{s},e}:\mbox{\rm typ})=0\)._ _Proof._ Let \({\cal M}=\langle M,W_{1},\ldots,W_{k}\rangle\) be an \(L\)-structure with \(|M|=n\), and fix a property \(\phi_{\bar{s},e}(x)\) as above. Recall that given \(\phi_{\bar{s},e}({\cal M})=W_{i_{1}}^{e(i_{1})}\cap\cdots\cap W_{i_{p}}^{e(i_ {p})}\). The proof has many similarities with the proof of Theorem 3.7. Given any set \(A\subseteq M\) let us define \(Z(A)\), \(Z_{1}(A)\), \(Z_{2}(A)\) exactly as in the aforementioned proof. Then, as before, \(|Z(A)|=|Z_{1}(A)|\cdot|Z_{2}(A)|\). Setting \(Z(m)=\bigcup\{Z(A):|A|=m\}\), we have \(|Z(m)|={n\choose m}\cdot|Z(A)|\), for any \(A\) with \(|A|=m\), and also \[|{\bf S}_{n}(\phi_{\bar{s},e}:\mbox{\rm typ})|=\sum_{m>n/2}|Z(m)|. \tag{13}\] (i) Let \(\bar{s}\) be a \(1\)-sequence. Without loss of generality we assume that \(\phi_{\bar{s},e}(x)=U_{1}(x)\), so \(\phi_{\bar{s},e}({\cal M})=W_{1}\). Arguing exactly as in the proof of 3.7, we see that \(|Z_{1}(A)|=1\), and \[|Z_{2}(A)|=(2^{n}-1)_{k-1}\sim 2^{(k-1)n}.\] Thus also \(|Z(A)|=|Z_{2}(A)|\sim 2^{(k-1)n}\), and therefore \[{\bf S}_{n}(\phi_{\bar{s},e}(x):\mbox{typ})\sim\sum_{m>n/2}{n\choose m}\cdot 2 ^{(k-1)n}\sim 2^{n-1}\cdot 2^{(k-1)n},\] hence by (5), \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\sim\frac{2^{n-1}\cdot 2^{(k-1)n}}{2^{kn}} \sim\frac{2^{n-1}}{2^{n}}=\frac{1}{2}.\] Thus \(d(\phi_{\bar{s},e}:\mbox{typ})=\frac{1}{2}\), according to Fact 3.5. (ii) Let \(\bar{s}\) be a \(p\)-sequence with \(3\leq p\leq k\). Arguing as in the corresponding part of the proof of 3.7, for each \(A\) with \(|A|=m\), the \(p\)-tuples of \(Z_{1}(A)\) are as many as the \(p\)-tuples \(\langle Y_{j_{1}},\ldots,Y_{j_{p}}\rangle\) of elements of \({\cal P}(M\backslash A)\) whose members are pairwise disjoint. 
The number of all such sequences is \((2^{n-m})_{p}\) and constitutes an upper bound of \(|Z_{1}(A)|\), that is \[|Z_{1}(A)|\leq(2^{n-m})_{p}.\] So \[|Z_{1}(A)|\leq(2^{n-m})_{p},\mbox{ while }|Z_{2}(A)|=(2^{n}-p)_{k-p}.\] Therefore \[|Z(A)|=|Z_{1}(A)|\cdot|Z_{2}(A)|\leq(2^{n}-p)_{k-p}\cdot(2^{n-m})_{p},\] and \[|Z(m)|\leq{n\choose m}\cdot(2^{n}-p)_{k-p}\cdot(2^{n-m})_{p}.\] So \[|{\bf S}_{n}(\phi_{\bar{s},e}:\mbox{typ})|\leq\sum_{m>n/2}{n\choose m}[(2^{n}-p)_{k-p}\cdot(2^{n-m})_{p}]=(2^{n}-p)_{k-p}\cdot\sum_{m>n/2}{n\choose m}(2^{n-m})_{p}\ \sim\ 2^{(k-p)n}\cdot\sum_{m>n/2}{n\choose m}(2^{n-m})_{p}.\] Since \(|{\bf S}_{n}(L)|=(2^{n})_{k}\ \sim\ 2^{kn}\), it follows from the last relation that \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\lesssim\frac{1}{2^{pn}}\cdot\sum_{m>n/2}{n\choose m}(2^{n-m})_{p}.\] Setting \(n-m=i\), this is written \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\lesssim\frac{1}{2^{pn}}\cdot\sum_{i<n/2}{n\choose i}(2^{i})_{p}. \tag{14}\] [Note that for \(p>2^{i}\), i.e., for \(i<\log_{2}p\), \((2^{i})_{p}=0\), so the above sum is equal to \(\sum_{\log_{2}p\leq i<n/2}{n\choose i}(2^{i})_{p}\), however for notational simplicity we let \(i\) range over all \(i<n/2\).] Now for every \(n\), \(k\), besides the relation \((n)_{k}\sim n^{k}\), the relation \((n)_{k}\leq n^{k}\) holds too. In particular \((2^{i})_{p}\leq 2^{pi}\), and also for \(i<n/2\), \(2^{pi}<2^{\frac{pn}{2}}\), so (14) implies \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\lesssim\frac{2^{\frac{pn}{2}}}{2^{pn}}\cdot\sum_{i<n/2}{n\choose i}=\frac{1}{2^{\frac{pn}{2}}}\sum_{i<n/2}{n\choose i}.\] Now by Fact 3.6, \(\sum_{i<n/2}{n\choose i}\ \sim\ 2^{n-1}\), therefore \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\lesssim\frac{2^{n-1}}{2^{\frac{pn}{2}}}. \tag{15}\] Since \(p\geq 3\), we have \[\frac{2^{n-1}}{2^{\frac{pn}{2}}}\leq\frac{2^{n-1}}{2^{\frac{3n}{2}}}=\frac{1}{2^{\frac{n}{2}+1}}\longrightarrow_{n}0,\] so, by Fact 3.5, (15) yields \(d_{n}(\phi_{\bar{s},e}:\mbox{typ})\longrightarrow_{n}0\). This proves clause (ii) and completes the proof of the Lemma. \(\dashv\) In the proof of the next Lemma we shall make use of Stirling numbers of the second kind. Recall that they are denoted \(\left\{\genfrac{}{}{0.0pt}{}{n}{k}\right\}\), or \(S(n,k)\), where, for \(1\leq k\leq n\), \(\left\{\genfrac{}{}{0.0pt}{}{n}{k}\right\}\) counts the number of partitions of \(\{1,\ldots,n\}\) into \(k\) nonempty parts. The explicit formula for \(\left\{\genfrac{}{}{0.0pt}{}{n}{k}\right\}\) is (see [4, p. 231] or [9]): \[\left\{\genfrac{}{}{0.0pt}{}{n}{k}\right\}=\frac{1}{k!}\sum_{i=0}^{k}(-1)^{k-i}{k\choose i}i^{n}. \tag{16}\] **Lemma 4.4**: _If \(\bar{s}\) is a \(2\)-sequence then \(d(\phi_{\bar{s},e}:\mbox{typ})=0\)._ _Proof._ Fix a 2-subsequence \(\bar{s}=\langle i_{1},i_{2}\rangle\) of \(\langle 1,\ldots,k\rangle\) and an \(e\in\{0,1\}^{k}\). For any set \(A\subseteq M\), \[Z_{1}(A)=\{\langle W^{e(i_{1})}_{i_{1}},W^{e(i_{2})}_{i_{2}}\rangle:W^{e(i_{1})}_{i_{1}}\cap W^{e(i_{2})}_{i_{2}}=A\},\] so \(W^{e(i_{1})}_{i_{1}}=A\cup Y_{1}\) and \(W^{e(i_{2})}_{i_{2}}=A\cup Y_{2}\), where \(Y_{1},Y_{2}\subseteq M\backslash A\) and \(Y_{1}\cap Y_{2}=\emptyset\). It follows that \(|Z_{1}(A)|=|P(A)|\), where \[P(A)=\{\langle Y_{1},Y_{2}\rangle:Y_{1},Y_{2}\subseteq M\backslash A\ \wedge\ Y_{1}\cap Y_{2}=\emptyset\}.\] The pairs \(\langle Y_{1},Y_{2}\rangle\) are of two kinds: those for which \(Y_{1},Y_{2}\) form a partition of \(M\backslash A\), i.e., \(Y_{1}\cup Y_{2}=M\backslash A\), and those for which \(Y_{1}\cup Y_{2}\neq M\backslash A\).
That is, \(P(A)=P_{1}(A)\cup P_{2}(A)\), where \[P_{1}(A)=\{\langle Y_{1},Y_{2}\rangle:Y_{1}\cap Y_{2}=\emptyset\ \wedge\ Y_{1}\cup Y_{2}=M \backslash A\},\] \[P_{2}(A)=\{\langle Y_{1},Y_{2}\rangle:Y_{1}\cap Y_{2}=\emptyset\ \wedge\ Y_{1}\cup Y_{2}\neq M \backslash A\},\] and \[|Z_{1}(A)|=|P(A)|=|P_{1}(A)|+|P_{2}(A)|. \tag{17}\] Now \(|P_{1}(A)|\) and \(|P_{2}(A)|\) can be easily calculated in terms of the 2-partitions and 3-partitions of \(M\backslash A\), respectively. Let \(\Pi(M\backslash A,2)\), \(\Pi(M\backslash A,3)\) denote the sets of partitions of \(M\backslash A\) into 2 and 3 nonempty parts, respectively. Then \(|\Pi(M\backslash A,2)|=\left\{\genfrac{}{}{0.0pt}{}{n-m}{2}\right\}\) and \(|\Pi(M\backslash A,3)|=\left\{\genfrac{}{}{0.0pt}{}{n-m}{3}\right\}\). Now a member of \(\Pi(M\backslash A,2)\) is a 2-element subset of \(M\backslash A\), while \(P_{1}(A)\) consists of _ordered pairs_ of such subsets, therefore \[|P_{1}(A)|=2\cdot|\Pi(M\backslash A,2)|=2\cdot\left\{\genfrac{}{}{0.0pt}{}{n-m }{2}\right\}\!.\] Analogously every member of \(\Pi(M\backslash A,3)\) is a 3-element subset of \(M\backslash A\), and each such subset provides \((3)_{2}\) pairs that belong to \(P_{2}(A)\) so \[|P_{2}(A)|=(3)_{2}\cdot|\Pi(M\backslash A,3)|=6\cdot\left\{\genfrac{}{}{0.0pt }{}{n-m}{3}\right\}\!.\] For every \(n\geq 2\), it is easy to see (without appealing to (16)) that \(\left\{\genfrac{}{}{0.0pt}{}{n}{2}\right\}=2^{n-1}-1\), so for every \(A\) with \(|A|=m\leq n-2\), \[|P_{1}(A)|=2\cdot(2^{n-m-1}-1)=2^{n-m}-2.\] If \(P_{1}(m)=\bigcup\{P_{1}(A):|A|=m\}\), then for \(m\leq n-2\), \[|P_{1}(m)|=\binom{n}{m}\cdot(2^{n-m}-2). \tag{18}\] On the other hand, for every \(n\geq 3\), \(\genfrac{\{}{\}}{0.0pt}{}{n}{3}\) is calculated by the help of formula (16) which yields: \[\genfrac{\{}{\}}{0.0pt}{}{n}{3}=\frac{1}{6}[3-3\cdot 2^{n}+3^{n}].\] Therefore, for \(|A|=m\leq n-3\), \[|P_{2}(A)|=3-3\cdot 2^{n-m}+3^{n-m}.\] So setting as before \(P_{2}(m)=\bigcup\{P_{2}(A):|A|=m\}\), we have for \(m\leq n-3\), \[|P_{2}(m)|=\binom{n}{m}\cdot(3-3\cdot 2^{n-m}+3^{n-m}). \tag{19}\] Finally by (17), (18) and (19) above we obtain, for \(|A|=m\leq n-3\), \[|Z_{1}(m)|=|P_{1}(m)|+|P_{2}(m)|=\binom{n}{m}(3^{n-m}-2^{n-m+1}+1).\] Recall also from above that for \(p=2\), \[|Z_{2}(m)|\sim\binom{n}{m}2^{(k-2)n},\] so for \(m\leq n-3\), \[|Z(m)|=|Z_{1}(m)|\cdot|Z_{2}(m)|\sim\binom{n}{m}\cdot[2^{(k-2)n}\cdot(3^{n-m}- 2^{n-m+1}+1)]. 
\tag{20}\] Now it is easy to see that \[|\mathbf{S}_{n}(\phi_{\bar{s},e}:\text{typ})|=\sum_{n\geq m>n/2}|Z(m)|\sim\sum_{n-3\geq m>n/2}|Z(m)|,\] so by (20), \[|\mathbf{S}_{n}(\phi_{\bar{s},e}:\text{typ})|\sim 2^{(k-2)n}\cdot\sum_{n-3\geq m>n/2}\binom{n}{m}\cdot(3^{n-m}-2^{n-m+1}+1),\] and, given that \(|\mathbf{S}_{n}(L)|\sim 2^{kn}\), \[d_{n}(\phi_{\bar{s},e}:\text{typ})\sim\frac{1}{2^{2n}}\cdot\sum_{n-3\geq m>n/2}\binom{n}{m}\cdot(3^{n-m}-2^{n-m+1}+1).\] Setting \(n-m=i\), this is written \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\sim\frac{1}{2^{2n}}\cdot\sum_{3\leq i<n/2}{n\choose i}\cdot(3^{i}-2^{i+1}+1)\leq\frac{1}{2^{2n}}\cdot\sum_{3\leq i<n/2}{n\choose i}\cdot(3^{\frac{n}{2}}-2^{\frac{n}{2}+1}+1)=\] \[=\frac{3^{\frac{n}{2}}-2^{\frac{n}{2}+1}+1}{2^{2n}}\cdot\sum_{3\leq i<n/2}{n\choose i}\leq\frac{3^{\frac{n}{2}}-2^{\frac{n}{2}+1}+1}{2^{2n}}\cdot\sum_{0\leq i<n/2}{n\choose i}.\] By Fact 3.6, the last quantity is \[\sim\frac{3^{\frac{n}{2}}-2^{\frac{n}{2}+1}+1}{2^{2n}}\cdot 2^{n-1}\sim\frac{3^{\frac{n}{2}}-2^{\frac{n}{2}+1}+1}{2^{n+1}}\sim\frac{1}{2}\left(\frac{3^{\frac{n}{2}}}{4^{\frac{n}{2}}}-\frac{1}{2^{\frac{n}{2}-1}}+\frac{1}{2^{n}}\right).\] So finally, \[d_{n}(\phi_{\bar{s},e}:\mbox{typ})\lesssim\frac{1}{2}\left(\frac{3^{\frac{n}{2}}}{4^{\frac{n}{2}}}-\frac{1}{2^{\frac{n}{2}-1}}+\frac{1}{2^{n}}\right)\longrightarrow_{n}0.\] This completes the proof. \(\dashv\) An immediate consequence of clause (i) of Theorem 4.2 is the following. **Corollary 4.5**: _The 0-1 law for typicality degrees of properties of a relational language fails in general._ Let us consider at this point another question related to the property \(U(x)\), namely the question about the probability of the sentences \(U(x)^{(m)}\), for \(m\geq 1\). We can see that, in contrast to the typicality degree \(1/2\) of \(U(x)\), the truth probability of the sentences \(U(x)^{(m)}\) is 1. **Proposition 4.6**: _For every \(m\geq 1\), \(\mu(U(x)^{(m)})=1\)._ _Proof._ The sentence \(U(x)^{(m)}\) says that "\(U(x)\) is satisfied by at least \(m\) elements". Therefore \(\mbox{Mod}_{n}(U(x)^{(m)})=\{A\subseteq\{1,\ldots,n\}:|A|\geq m\}\), or \(|\mbox{Mod}_{n}(U(x)^{(m)})|=2^{n}-\sum_{0\leq i<m}{n\choose i}\), and therefore \[\mu_{n}(U(x)^{(m)})=\frac{2^{n}-\sum_{0\leq i<m}{n\choose i}}{2^{n}}=1-\frac{\sum_{0\leq i<m}{n\choose i}}{2^{n}}.\] The numerator \(\sum_{0\leq i<m}{n\choose i}\) of the fraction on the right-hand side is a polynomial in \(n\) of degree \(m-1\), so its quotient by the exponential \(2^{n}\) goes to 0 as \(n\) grows. Thus \(\mu(U(x)^{(m)})=\lim_{n\to\infty}\mu_{n}(U(x)^{(m)})=1\). \(\dashv\) **Question 4.7**: _Does the 0-1 law about typicality degrees hold for every property of a relational language without unary predicates?_ We close this section with a remark about Question 4.7. Some people believe that the answer to this question must be positive on the basis that the rather general method of "extension axioms" that was used by R. Fagin in [2] to prove the 0-1 law for truth degrees of _sentences_ of a relational language, could also be applied somehow to the case of typicality degrees of properties.1 The problem however is that this specific method works for sentences of _every_ relational language, including those that contain unary predicates, while, as we saw above, in the case of typicality degrees the method should not work when unary predicates are included. I don't know how this gap could be bridged and if Fagin's method is actually applicable to the present case.
Footnote 1: In contrast, the method of Glebskii _et al._ in [3] seems to be rather ad hoc.

## 5 Some results about regularity and degrees of properties of the language \(L=\{F\}\)

The failure of the 0-1 law for typicality degrees of properties of languages with unary predicates is a divergence from the behavior of truth probabilities of sentences. In this Section we consider some properties of the language \(L=\{F\}\) where \(F\) is a unary function symbol. We saw in Example 2.2 that \(\mu(\forall x(F(x)\neq x))=e^{-1}\), which means that the 0-1 law does not hold for sentences of \(L\). The question is whether the 0-1 law fails also for typicality degrees of properties of this language. In this section we consider two properties: 1) \(\phi(x):=(F(x)\neq x)\) and 2) \(\psi(x):=\exists y(F(y)=x)\). It is shown that both are regular, the degree of \(\phi(x)\) is 1, while the degree of \(\psi(x)\) is not known, although we give evidence that it is 1 too.

### The property \(\phi(x):=(F(x)\neq x)\) and its negation

**Proposition 5.1**: _Let \(L=\{F\}\) and let \(\phi(x):=(F(x)\neq x)\). Then_ \[d(\phi:\mathit{typ})=1.\] _Proof._ Let \(\mathcal{M}=\langle M,f\rangle\) with \(|M|=n\) and \(A\subseteq M\) such that \(A=\phi(\mathcal{M})\). Then \(a\in A\Leftrightarrow f(a)\neq a\) and \(a\notin A\Leftrightarrow f(a)=a\). Thus if \(G(A)=\{f\in M^{M}:\phi(\mathcal{M})=A\}\), every \(f\in G(A)\) can be identified with the pair \((f\!\upharpoonright\!A,id_{M\setminus A})\), or, since \(id_{M\setminus A}\) is unique, with \(f\!\upharpoonright\!A\). As we argued in Example 2.2, if \(|A|=m\) then \(|G(A)|=(n-1)^{m}\). Let us set \(G(m)=\bigcup\{G(A):A\subseteq M\ \&\ |A|=m\}\). Then, since there are \({n\choose m}\) sets of cardinality \(m\), \(|G(m)|={n\choose m}(n-1)^{m}\). Also by definition, \[{\bf S}_{n}(\phi:\mbox{typ})=\bigcup\{G(m):m>n/2\},\] therefore \[|{\bf S}_{n}(\phi:\mbox{typ})|=\sum_{m>n/2}|G(m)|=\sum_{m>n/2}{n\choose m}(n-1)^{m}. \tag{21}\] It is more convenient to set \(m=n-k\) and write this sum in the form: \[|{\bf S}_{n}(\phi:\mbox{typ})|=\sum_{k<n/2}{n\choose n-k}(n-1)^{n-k}=\sum_{k<n/2}{n\choose k}(n-1)^{n-k},\] or \[|{\bf S}_{n}(\phi:\mbox{typ})|=\sum_{k<n/2}a_{k}(n), \tag{22}\] where \(a_{k}(n)={n\choose k}(n-1)^{n-k}\). Then \[d_{n}(\phi:\mbox{typ})=\frac{|{\bf S}_{n}(\phi:\mbox{typ})|}{n^{n}}=\sum_{k<n/2}\frac{a_{k}(n)}{n^{n}}.\] Setting, for simplicity, \(a_{n}=\sum_{k<n/2}\frac{a_{k}(n)}{n^{n}}\), we have to compute the limit \[d(\phi:\mbox{typ})=\lim_{n\to\infty}a_{n}. \tag{23}\] Now \(\frac{a_{k}(n)}{n^{n}}=\frac{1}{n^{n}}\cdot{n\choose k}(n-1)^{n-k}\), or, multiplying and dividing the right-hand side by \((n-1)^{k}\), this is written \[\frac{a_{k}(n)}{n^{n}}=\frac{(n-1)^{n}}{n^{n}}\cdot{n\choose k}\frac{1}{(n-1)^{k}}.\] Therefore \[a_{n}=\sum_{k<n/2}\left(\frac{n-1}{n}\right)^{n}\cdot{n\choose k}\frac{1}{(n-1)^{k}}=\left(\frac{n-1}{n}\right)^{n}\cdot\sum_{k<n/2}{n\choose k}\frac{1}{(n-1)^{k}}=b_{n}\cdot c_{n},\] where \(b_{n}=\left(\frac{n-1}{n}\right)^{n}\) and \(c_{n}=\sum_{k<n/2}{n\choose k}\frac{1}{(n-1)^{k}}\), and \[\lim_{n\to\infty}a_{n}=(\lim_{n\to\infty}b_{n})\cdot(\lim_{n\to\infty}c_{n}). \tag{24}\] Now \(\lim_{n\to\infty}b_{n}=e^{-1}\). So it remains to compute \[\lim_{n\to\infty}c_{n}=\lim_{n\to\infty}\sum_{k<n/2}{n\choose k}\frac{1}{(n-1)^{k}}. \tag{25}\] We shall show that \(\lim_{n\to\infty}c_{n}=e\).
It suffices to show that \[\lim_{n\to\infty}c_{2n}=\lim_{n\to\infty}c_{2n+1}=e.\] (a) Proof of \(\lim_{n\to\infty}c_{2n}=e.\) We have \[c_{2n}=\sum_{k<n}{2n\choose k}\frac{1}{(2n-1)^{k}}=\sum_{k=0}^{n-1}{2n\choose k }\frac{1}{(2n-1)^{k}}.\] However it is easy to see that \(\lim_{n\to\infty}c_{2n}=\lim_{n\to\infty}c_{2n}^{\prime}\), where \[c_{2n}^{\prime}=\sum_{k=0}^{n}{2n\choose k}\frac{1}{(2n-1)^{k}}.\] This is because \(c_{2n}^{\prime}-c_{2n}={2n\choose n}\frac{1}{(2n-1)^{n}}\), which goes to \(0\) when \(n\) goes to infinity, as follows easily from Fact 3.4. Therefore it suffices to show that \[\lim_{n\to\infty}c_{2n}^{\prime}=e.\] To show that, we compare each term \(c_{2n}^{\prime}\) with the term \[g_{n}=\sum_{k=0}^{n}{n\choose k}\frac{1}{(n-1)^{k}}=\left(1+\frac{1}{n-1} \right)^{n}=\left(\frac{n}{n-1}\right)^{n},\] for which it is well-known that \(\lim_{n\to\infty}g_{n}=e\). Specifically we compare each summand \(A_{k}(n)={2n\choose k}\frac{1}{(2n-1)^{k}}\) of \(c_{2n}^{\prime}\), for \(k\leq n\), with the corresponding summand \(B_{k}(n)={n\choose k}\frac{1}{(n-1)^{k}}\) of \(g_{n}\). Then we have \[A_{k}(n)=\frac{1}{k!}\frac{(2n-k+1)(2n-k+2)\cdots 2n}{(2n-1)^{k}}=\frac{1}{k!} \frac{P(2n)}{Q(2n)},\] while \[B_{k}(n)=\frac{1}{k!}\frac{(n-k+1)(n-k+2)\cdots n}{(n-1)^{k}}=\frac{1}{k!} \frac{P(n)}{Q(n)},\] where \(P(x)\) and \(Q(x)\) are polynomials of degree \(k\). If \(\alpha_{k}\) and \(\beta_{k}\) are the leading coefficients of \(P(x)\) and \(Q(x)\), respectively, then \[\lim_{n\to\infty}\frac{P(2n)}{Q(2n)}=\lim_{n\to\infty}\frac{P(n)}{Q(n)}=\frac{ \alpha_{k}}{\beta_{k}},\] therefore \(\lim_{n\to\infty}(A_{k}(n)-B_{k}(n))=0\). Besides, \[c_{2n}^{\prime}-g_{n}=\sum_{k=0}^{n}A_{k}(n)-\sum_{k=0}^{n}B_{k}(n)=\sum_{k=0} ^{n}(A_{k}(n)-B_{k}(n)),\] so \[\lim_{n\to\infty}(c_{2n}^{\prime}-g_{n})=\sum_{k=0}^{n}\lim_{n\to\infty}(A_{k} (n)-B_{k}(n))=0.\] Therefore \(\lim_{n\to\infty}c_{2n}^{\prime}=\lim_{n\to\infty}g_{n}=e\), as required. (b) Proof of \(\lim_{n\to\infty}c_{2n+1}=e.\) This is almost the same as in (a). First notice that \(k<(2n+1)/2\Leftrightarrow k\leq n\), so \[c_{2n+1}=\sum_{k<(2n+1)/2}{2n+1\choose k}\frac{1}{(2n)^{k}}=\sum_{k=0}^{n}{2n +1\choose k}\frac{1}{(2n)^{k}}.\] Then we set for every \(k\leq n\), \[A_{k}^{\prime}(n)={2n+1\choose k}\frac{1}{(2n)^{k}}=\frac{1}{k!}\frac{P(2n+1) }{Q(2n+1)}\] and compare it again with \[B_{k}(n)=\frac{1}{k!}\frac{P(n)}{Q(n)}\] of the preceding case. As before we have \[\lim_{n\to\infty}\frac{P(2n+1)}{Q(2n+1)}=\lim_{n\to\infty}\frac{P(n)}{Q(n)}= \frac{\alpha_{k}}{\beta_{k}},\] therefore \(\lim_{n\to\infty}(A_{k}^{\prime}(n)-B_{k}(n))=0\), and hence \[\lim_{n\to\infty}(c_{2n+1}-g_{n})=\sum_{k=0}^{n}\lim_{n\to\infty}(A_{k}^{ \prime}(n)-B_{k}(n))=0.\] So \(\lim_{n\to\infty}c_{2n+1}=\lim_{n\to\infty}g_{n}=e\). This completes the proof. **Proposition 5.2**: _The property \(\phi(x):=(F(x)\neq x)\) is regular._ _Proof._ Let \({\cal M}=\langle M,f\rangle\) be an \(L\)-structure with \(|M|=2n\). Then \({\cal M}\in{\bf S}_{2n}(\phi:{\rm ntr})\) if and only if \(|\phi({\cal M})|=n\). Let \(A=\phi({\cal M})\). Then \(A=\{a\in M:f(a)\neq a\}\) and hence \(a\notin A\Leftrightarrow f(a)=a\), that is, \(f\!\upharpoonright\!(M\backslash A)=id_{M\backslash A}\). 
Since \(id_{M\backslash A}\) is unique, it follows that if \(G(A)\) is the set of \(f\in M^{M}\) such that \(\phi({\cal M})=A\), then \[|G(A)|=|\{f\!\upharpoonright\!A:\phi({\cal M})=A\}|=|\{g\in M^{A}:\forall x(g(x)\neq x)\}|.\] Since \(|M|=2n\) and \(|A|=n\) we have, as argued in Example 2.2, \(|G(A)|=(2n-1)^{n}\). Since there are \({2n\choose n}\) such sets \(A\), it follows that \[|{\bf S}_{2n}(\phi:{\rm ntr})|=|\bigcup\{G(A):|A|=n\}|={2n\choose n}\cdot(2n-1)^{n}.\] On the other hand \(|{\bf S}_{2n}(L)|=(2n)^{2n}\), hence \[{|{\bf S}_{2n}(\phi:{\rm ntr})|\over|{\bf S}_{2n}(L)|}={{2n\choose n}\cdot(2n-1)^{n}\over(2n)^{2n}}\leq{{2n\choose n}\cdot(2n)^{n}\over(2n)^{2n}}={{2n\choose n}\over(2n)^{n}}.\] By Fact 3.4, \({2n\choose n}\leq{4^{n}\over\sqrt{\pi n}}\), so the preceding inequality implies \[{|{\bf S}_{2n}(\phi:{\rm ntr})|\over|{\bf S}_{2n}(L)|}\leq{4^{n}\over(2n)^{n}\cdot\sqrt{\pi n}}={2^{n}\over n^{n}\cdot\sqrt{\pi n}}\longrightarrow_{n}0.\] \(\dashv\) **Corollary 5.3**: \(d(F(x)=x:{\rm typ})=0\). _Proof._ By Proposition 5.2, \(F(x)=x\) is regular, while by Proposition 5.1, \(d(F(x)\neq x:{\rm typ})=1\). Therefore in view of Fact 3.2 (iii), \(d(F(x)=x:{\rm typ})=1-d(F(x)\neq x:{\rm typ})=0\). \(\dashv\) The next Proposition shows that the criterion of Lemma 3.10 (ii), which allows one to deduce the regularity of a property \(\phi(x)\) whenever for some \(m\geq 1\), \(\mu(\phi^{(m)})=0\), cannot be used for the properties \(F(x)\neq x\) and \(F(x)=x\). **Proposition 5.4**: _For every \(m\geq 1\),_ _(i) \(\mu((F(x)\neq x)^{(m)})=1\)._ _(ii) \({e^{-1}\over m!}\leq\mu((F(x)=x)^{(m)})\leq{1\over m!}\). In particular, \(\mu((F(x)=x)^{(1)})=1-e^{-1}\)._ _Proof._ (i) By definition \((F(x)\neq x)^{(m)}\) holds in \(\langle M,f\rangle\) if and only if there are distinct \(x_{1},\ldots,x_{m}\in M\) such that \(f(x_{i})\neq x_{i}\) for every \(i=1,\ldots,m\). Equivalently, if and only if for every \(A\subseteq M\) such that \(f\!\upharpoonright\!A=id\), \(|A|\leq|M|-m\). Fixing \(M\) with \(|M|=n\), for a given \(A\subseteq M\) the totality of \(f\) such that \(f\!\upharpoonright\!A=id\) is \(n^{n-|A|}\), while when \(A\) ranges over all subsets with \(|A|\leq n-m\), the totality of \(f\) for which \(\langle M,f\rangle\) satisfies \((F(x)\neq x)^{(m)}\) has cardinality \[|{\rm Mod}_{n}((F(x)\neq x)^{(m)})|=\sum_{|A|\leq n-m}n^{n-|A|}=\sum_{m\leq i\leq n}n^{i}.\] Therefore \(\mu_{n}((F(x)\neq x)^{(m)})=\frac{\sum_{m\leq i\leq n}n^{i}}{n^{n}}\longrightarrow_{n}1\). (ii) Let us show first the second claim, that \(\mu((F(x)=x)^{(1)})=1-e^{-1}\). This follows from Example 2.2, and the fact that for every sentence \(\phi\), \(\mu(\neg\phi)=1-\mu(\phi)\). So \[\mu((F(x)=x)^{(1)})=\mu(\exists x(F(x)=x))=\mu(\neg(\forall x)(F(x)\neq x))=1-\mu(\forall x(F(x)\neq x))=1-\frac{1}{e}=\frac{e-1}{e}.\] Now consider the sentence \((F(x)=x)^{(m)}\) for \(m\geq 1\). \((F(x)=x)^{(m)}\) holds in \(\langle M,f\rangle\) if and only if there are distinct \(x_{1},\ldots,x_{m}\in M\) such that \(f(x_{i})=x_{i}\) for every \(i=1,\ldots,m\), i.e., if there is \(A\subseteq M\) with \(|A|=m\) such that \(f\!\upharpoonright\!A=id\). If \(|M|=n\), for each \(A\) with \(|A|=m\) there are \(n^{n-m}\) functions such that \(f\!\upharpoonright\!A=id\). Since there are \({n\choose m}\) such sets \(A\) with \(|A|=m\), the totality of functions of this kind is at most \({n\choose m}n^{n-m}\) (because this totality possibly contains repetitions).
So \[|{\rm Mod}_{n}((F(x)=x)^{(m)})|\leq{n\choose m}n^{n-m}.\] On the other hand, for a fixed \(A\) with \(|A|=m\), let \(X_{A}\) be the collection of functions \(f\) such that \(f\!\upharpoonright\!A=id\) and \(f(x)\neq x\) for every \(x\in M\backslash A\). Since for every \(f\in X_{A}\), \(f(x)\) may take independently \(n-1\) possible values on \(M\backslash A\), it follows that \(|X_{A}|=(n-1)^{n-m}\) and, moreover, if \(A\neq A^{\prime}\) then \(X_{A}\cap X_{A^{\prime}}=\emptyset\). So \[|{\rm Mod}_{n}((F(x)=x)^{(m)})|\geq{n\choose m}(n-1)^{n-m}.\] Therefore \[\frac{{n\choose m}(n-1)^{n-m}}{n^{n}}\leq\mu_{n}((F(x)=x)^{(m)})\leq\frac{{n \choose m}n^{n-m}}{n^{n}}. \tag{26}\] Denoting by \(l_{n}\) the preceding lower bound in (26) we have \[l_{n}=\frac{{n\choose m}(n-1)^{n-m}}{n^{n}}=\frac{{n\choose m}}{(n-1)^{m}}\cdot \frac{(n-1)^{n}}{n^{n}}=\] \[=\frac{1}{m!}\frac{(n-m+1)(n-m+2)\cdots(n-1)n}{(n-1)^{m}}\cdot\left(\frac{n-1}{ n}\right)^{n}.\] So \[\lim_{n\to\infty}l_{n}=\frac{1}{m!}\lim_{n\to\infty}\frac{(n-m+1)(n-m+2)\cdots (n-1)n}{(n-1)^{m}}\cdot\lim_{n\to\infty}\left(\frac{n-1}{n}\right)^{n}.\] Now \(\lim_{n\to\infty}\frac{(n-m+1)(n-m+2)\cdots(n-1)n}{(n-1)^{m}}=1\), because both the nominator and the denominator are polynomials of degree \(m\) with leading coefficients \(1\), while \(\lim_{n\to\infty}\left(\frac{n-1}{n}\right)^{n}=e^{-1}\). Therefore \(\lim_{n\to\infty}l_{n}=\frac{e^{-1}}{m!}\). Similarly, if \(u_{n}=\frac{{n\choose m}n^{n-m}}{n^{n}}\) is the upper bound in (26) above, then \[u_{n}=\frac{{n\choose m}}{n^{m}}=\frac{1}{m!}\cdot\frac{(n-m+1)(n-m+2)\cdots(n -1)n}{n^{m}},\] and comparing as before the polynomials of the fraction we find \(\lim_{n\to\infty}u_{n}=\frac{1}{m!}\). So finally \[\frac{e^{-1}}{m!}\leq\mu((F(x)=x)^{(m)})=\lim_{n\to\infty}\mu_{n}((F(x)=x)^{( m)})\leq\frac{1}{m!}.\] Notice that the value \(1-e^{-1}=\frac{e-1}{e}\) of \(\mu((F(x)=x)^{(1)})\) conforms with the general bounds given above, since \(e^{-1}\leq\frac{e-1}{e}\leq 1\). \(\dashv\) **Corollary 5.5**: _The converse of Lemma 3.10 (i) is false for the language \(L=\{F\}\). Namely, there is \(\phi(x)\) of \(L\) such that \(d(\phi:\mbox{typ})=0\) while \(\forall m\geq 1\ \mu(\phi^{(m)})>0\)._ _Proof._ Take \(\phi(x):(F(x)=x)\). By Corollary 5.3, \(d(F(x)=x:\mbox{typ})=0\), while by Proposition 5.4, for every \(m\geq 1\ \mu((F(x)=x)^{(m)})\geq\frac{e^{-1}}{m!}\). \(\dashv\) ## 6 Degrees of some properties of graphs In this Section we examine the typicality degree of some natural first-order properties of finite undirected graphs. The language of graphs is \(L=\{E\}\), where \(E\) is the symbol of a binary symmetric and irreflexive relation. Thus an \(L\)-structure is a pair \(G=\langle A,E\rangle\), where \(A\) is the set of nodes of \(G\) and \(E\) is its _adjacency_ relation. \(xEy\) means that "the nodes \(x,y\) are adjacent", i.e., connected with an edge. By assumption \(xEy\Leftrightarrow yEx\) and for all \(x\in A\)\(\neg(xEx)\). So \(E\) is a set of \(2\)-element subsets of \(A\), that is \(E\subseteq[A]^{2}\), and \(G\models xEy\Leftrightarrow\{x,y\}\in E\). So if \(|A|=n\), then \(|[A]^{2}|={n\choose 2}\), therefore \(|{\bf S}_{n}(L)|=2^{{n\choose 2}}\). Let us consider the following examples of first-order properties of \(L\): (1) \(\phi_{\rm none}(x)\): "\(x\) is an isolated node" [\((\forall y)\neg(xEy)\))]. (2) \(\phi_{\rm all}(x)\): "\(x\) is adjacent to every node" [\((\forall y)(xEy)\)]. (3) \(\phi_{\rm one}(x)\): "\(x\) is adjacent to exactly one node" [\((\exists!y)(xEy)\)]. 
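Before turning to their typicality degrees, it is instructive to look at the corresponding densities for very small \(n\). The following Python sketch is only a brute-force illustration (it is not used in any proof): it enumerates all graphs on \(n\) labelled nodes and computes the fraction of them containing a node witnessing each of the three properties, i.e., \(\mu_{n}(\phi_{\rm none}^{(1)})\), \(\mu_{n}(\phi_{\rm all}^{(1)})\) and \(\mu_{n}(\phi_{\rm one}^{(1)})\).

```python
from itertools import combinations

def graphs(n):
    """All simple graphs on the nodes 0, ..., n-1, each given as a set of 2-element tuples."""
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        yield {pairs[i] for i in range(len(pairs)) if mask >> i & 1}

def degree(E, x, n):
    return sum(1 for y in range(n) if y != x and ((x, y) in E or (y, x) in E))

def density(n, good_degree):
    """Fraction of graphs on n nodes having at least one node whose degree satisfies good_degree."""
    sat = total = 0
    for E in graphs(n):
        total += 1
        if any(good_degree(degree(E, x, n)) for x in range(n)):
            sat += 1
    return sat / total

for n in range(2, 7):
    print(n,
          round(density(n, lambda d: d == 0), 4),           # mu_n(phi_none^(1)): an isolated node
          round(density(n, lambda d, n=n: d == n - 1), 4),  # mu_n(phi_all^(1)): a node adjacent to all others
          round(density(n, lambda d: d == 1), 4))           # mu_n(phi_one^(1)): a node of degree exactly 1
```

The printed fractions for \(\phi_{\rm none}\) and \(\phi_{\rm all}\) already drop quickly, in line with the bound \(n/2^{n-1}\) obtained below, while the decay for \(\phi_{\rm one}\) is visibly slower.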
**Proposition 6.1**: _Properties \(\phi_{\rm none}(x)\), \(\phi_{\rm all}(x)\) and \(\phi_{\rm one}(x)\) have typicality degree \(0\). That is, \(d(\phi_{\rm none}:\mbox{typ})=d(\phi_{\rm all}:\mbox{typ})=d(\phi_{\rm one}:\mbox{typ})=0\)._ _Proof._ We shall prove the claim using Lemma 3.10 (i), namely it suffices to prove that \(\mu(\phi_{\rm none}^{(1)})=\mu(\phi_{\rm all}^{(1)})=\mu(\phi_{\rm one}^{(1)})=0\). (i) \(\mu(\phi_{\rm none}^{(1)})=0\): We have \(\phi_{\rm none}^{(1)}:=(\exists x)(\forall y)\neg(xEy)\). We must show that \[\frac{|{\rm Mod}_{n}(\phi_{\rm none}^{(1)})|}{|{\bf S}_{n}(L)|}\longrightarrow_{n}0. \tag{27}\] We saw above that \(|{\bf S}_{n}(L)|=2^{{n\choose 2}}\), so we have to estimate also \(|{\rm Mod}_{n}(\phi_{\rm none}^{(1)})|\). Let \(G=\langle A,E\rangle\) be a graph with \(n\) nodes satisfying \(\phi_{\rm none}^{(1)}\). Then \(G\) contains at least one isolated node \(a\) (see Figure 1). If \(A^{\prime}=A\backslash\{a\}\) and \(G^{\prime}=\langle A^{\prime},E^{\prime}\rangle\) is the subgraph of \(G\) induced on \(A^{\prime}\), then \(G^{\prime}\) can be of any possible form, so there are at most \(2^{{n-1\choose 2}}\) graphs on \(A\) in which \(a\) is an isolated node.
Letting \(a\) range over every node of \(A\), it follows that the total number of graphs in \({\bf S}_{n}(L)\) that satisfy \(\phi_{\rm none}^{(1)}\), i.e., \(|{\rm Mod}_{n}(\phi_{\rm none}^{(1)})|\), is at most \(n2^{{n-1\choose 2}}\). Thus \[\frac{|{\rm Mod}_{n}(\phi_{\rm none}^{(1)})|}{|{\bf S}_{n}(L)|}\leq\frac{n2^{{n-1\choose 2}}}{2^{{n\choose 2}}}=\frac{n2^{{(n-2)(n-1)\over 2}}}{2^{{n(n-1)\over 2}}}=\frac{n}{2^{n-1}}\longrightarrow_{n}0,\] so (27) is true. (ii) \(\mu(\phi_{\rm all}^{(1)})=0\): We have \(\phi_{\rm all}^{(1)}:=(\exists x)(\forall y)(xEy)\) and we must show that \[\frac{|{\rm Mod}_{n}(\phi_{\rm all}^{(1)})|}{|{\bf S}_{n}(L)|}\longrightarrow_{n}0. \tag{28}\] The argument is quite similar to that of the previous case. If \(G=\langle A,E\rangle\) is a graph with \(n\) nodes satisfying \(\phi_{\rm all}^{(1)}\), there is \(a\in A\) which is adjacent to all other nodes of \(G\) (see Figure 2). It means that if \(A^{\prime}=A\backslash\{a\}\) and \(G^{\prime}=\langle A^{\prime},E^{\prime}\rangle\) is as before, then \(G^{\prime}\) can be again of any possible form, so as before \[\frac{|{\rm Mod}_{n}(\phi_{\rm all}^{(1)})|}{|{\bf S}_{n}(L)|}\leq\frac{n}{2^{n-1}}\longrightarrow_{n}0.\] (iii) \(\mu(\phi_{\rm one}^{(1)})=0\): We have \(\phi_{\rm one}^{(1)}:=(\exists x)(\exists!y)(xEy)\) and we have to show that \[\frac{|{\rm Mod}_{n}(\phi_{\rm one}^{(1)})|}{|{\bf S}_{n}(L)|}\longrightarrow_{n}0. \tag{29}\] Let \(G=\langle A,E\rangle\) be a graph, with \(|A|=n\), satisfying \(\phi_{\rm one}^{(1)}\). Pick an \(a\in A\) witnessing \(\phi_{\rm one}^{(1)}\). If again \(A^{\prime}=A\backslash\{a\}\), \(a\) is adjacent to a unique element \(b\in A^{\prime}\) (see Figure 3), so there are \(n-1\) choices of the node \(b\) to which the given \(a\) can be connected. The rest of the graph, i.e., the graph \(G^{\prime}\) induced on \(A^{\prime}\), can again be of any possible form, so there are at most \((n-1)2^{{n-1\choose 2}}\) graphs on \(A\) in which \(a\) witnesses \(\phi_{\rm one}^{(1)}\). Letting \(a\) range over every node of \(A\), we get \(|{\rm Mod}_{n}(\phi_{\rm one}^{(1)})|\leq n(n-1)2^{{n-1\choose 2}}\), hence \[\frac{|{\rm Mod}_{n}(\phi_{\rm one}^{(1)})|}{|{\bf S}_{n}(L)|}\leq\frac{n(n-1)}{2^{n-1}}\longrightarrow_{n}0,\] so (29) is true. \(\dashv\) The preceding argument generalizes from \(\phi_{\rm one}(x)\) to the property \(\phi_{k}(x)\): "\(x\) is adjacent to exactly \(k\) nodes", for any fixed \(k\geq 1\). **Proposition 6.2**: _For every \(k\geq 1\), \(d(\phi_{k}:\mbox{typ})=0\)._ _Proof._ As before, by Lemma 3.10 (i) it suffices to show that \(\mu(\phi_{k}^{(1)})=0\). Given a graph \(G=\langle A,E\rangle\) with \(|A|=n\) which satisfies \(\phi_{k}^{(1)}\), let \(a\in A\) witness \(\phi_{k}^{(1)}\). Then \(a\) is adjacent exactly to \(k\) elements of \(A\backslash\{a\}\). Since there are \({n-1\choose k}\) \(k\)-element subsets of \(A\backslash\{a\}\), there exist at most \({n-1\choose k}2^{{n-1\choose 2}}\) graphs on \(A\) in which \(a\) witnesses \(\phi_{k}^{(1)}\). Therefore \(|\mathrm{Mod}_{n}(\phi_{k}^{(1)})|\leq n{n-1\choose k}2^{{n-1\choose 2}}\), and hence \[\mu_{n}(\phi_{k}^{(1)})=\frac{|\mathrm{Mod}_{n}(\phi_{k}^{(1)})|}{|\mathbf{S}_{n}(L)|}\leq\frac{n{n-1\choose k}2^{{n-1\choose 2}}}{2^{{n\choose 2}}}=\frac{n{n-1\choose k}}{2^{n-1}}.\] In \(\frac{n{n-1\choose k}}{2^{n-1}}\) the numerator is a polynomial in \(n\) of degree \(k+1\), so by l'Hospital's rule \(\frac{n{n-1\choose k}}{2^{n-1}}\longrightarrow_{n}0\). \(\dashv\) **Corollary 6.3**: _All properties \(\phi_{\rm none}(x)\), \(\phi_{\rm all}(x)\) and \(\phi_{k}(x)\), for \(k\geq 1\), are regular._ _Proof._ We showed in the proofs of Propositions 6.1 and 6.2 that for each of the aforementioned properties \(\phi(x)\), \(\mu(\phi^{(1)})=0\). This condition implies, according to Lemma 3.10 (ii), that \(\phi(x)\) is regular. \(\dashv\)
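As a closing remark, the limit \(\lim_{n\to\infty}c_{n}=e\) established above is easy to observe numerically. The following short Python sketch is only an illustration (it plays no role in the proofs); it evaluates \(c_{n}=\sum_{k<n/2}{n\choose k}\frac{1}{(n-1)^{k}}\) for a few values of \(n\), updating the summand incrementally to avoid huge intermediate numbers.

```python
from math import e

def c(n):
    """c_n = sum over k < n/2 of binom(n, k) / (n - 1)^k."""
    total, term = 0.0, 1.0              # term = binom(n, k) / (n - 1)^k, starting at k = 0
    for k in range((n + 1) // 2):
        total += term
        term *= (n - k) / ((k + 1) * (n - 1))   # pass from the k-th summand to the (k+1)-st
    return total

for n in (10, 100, 1000, 10000, 100000):
    print(n, c(n))                      # the values approach e = 2.71828...
print(e)
```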
2310.18275
The Pak--Postnikov and Naruse skew hook length formulas: a new proof
The classical hook length formula of enumerative combinatorics expresses the number of standard Young tableaux of a given partition shape as a single fraction. In recent years, two generalizations of this formula have emerged: one by Pak and Postnikov, replacing the number by a (rational) generating function, and one by Naruse, which generalizes the setting from a partition to a skew partition. Both generalizations appear to lie significantly deeper, with no simple proofs known. We combine them into a generating-function identity for skew partitions, and prove it in a fairly elementary way using recursion, determinants and simple combinatorics.
Darij Grinberg, Nazar Korniichuk, Kostiantyn Molokanov, Severyn Khomych
2023-10-27T17:02:29Z
http://arxiv.org/abs/2310.18275v1
# The Pak-Postnikov and Naruse skew hook length formulas: a new proof # The Pak-Postnikov and Naruse skew hook length formulas: a new proof [Yulia's Dream research paper, 2023] Darij Grinberg1, Nazar Korniichuk2, Kostiantyn Molokanov3, and Severyn Khomych4 Footnote 1: Department of Mathematics, Drexel University, Philadelphia, U.S.A. ([email protected]) Footnote 2: Kyiv Natural-Scientific Lyceum No. 145, Kyiv, Ukraine ([email protected]) Footnote 3: Kyiv Natural-Scientific Lyceum No. 145, Kyiv, Ukraine ([email protected]) Footnote 4: Brucknergymnasium Wels, Austria ([email protected]) version 1.0, 27 October 2023 **Abstract. The classical hook length formula of enumerative combinatorics expresses the number of standard Young tableaux of a given partition shape as a single fraction. In recent years, two generalizations of this formula have emerged: one by Pak and Postnikov, replacing the number by a (rational) generating function, and one by Naruse, which generalizes the setting from a partition to a skew partition. Both generalizations appear to lie significantly deeper, with no simple proofs known. We combine them into a generating-function identity for skew partitions, and prove it in a fairly elementary way using recursion, determinants and simple combinatorics.** **Keywords: Young tableaux, hook length formulas, partitions, enumerative combinatorics, excited diagrams, Jacobi-Trudi formulas, Schur polynomials, determinants.** **Mathematics Subject Classification 2020: 05A17, 05E05, 15A15, 05A19.** ###### Contents * 1 Introduction 2. **Notations and terminology** 13.Proof of the Konvalinka recursion13.1 More on partitions, Delta-sets and transposes13.2 A bijection13.3 Proof of the Konvalinka recursion14.Hints15.Proofs15.1 To Section 215.2 To Section 415.3 To Section 515.4 To Section 615.5 To Section 715.6 To Section 815.7 To Section 915.8 To Section 1015.9 To Section 115.10To Section 12 and Theorem 3.115.11To Section 13 and Theorem 5.816.Appendix: Odds and ends16.1 To Section 816.2 To Section 916.3 To Section 1317.Appendix: Deriving Proposition 8.2 from the literature17.1 Deriving Proposition 8.2 as a consequence of Gessel-Viennot17.2 Deriving Proposition 8.2 from Chen-Li-Louck ## 1 Introduction The hook length formula is one of the most prominent results in algebraic combinatorics. Since its discovery by Frame, Robinson and Thrall in 1953, it has seen several proofs and various applications. It expresses the number of standard tableaux of shape \(\lambda\), where \(\lambda\) is a given integer partition, as the fraction \[\frac{n!}{\prod\limits_{c\in Y\left(\lambda\right)}h_{\lambda}\left(c\right)}, \tag{1}\] where \(n\) is the size (i.e., number of boxes) of \(\lambda\), where \(Y\left(\lambda\right)\) is the Young diagram of \(\lambda\) (as a set of boxes), and where \(h_{\lambda}\left(c\right)\) denotes the so-called hook length of a given box \(c\) in \(\lambda\). Viewed in isolation, this is a surprising enough result, yet it has also found applications in probability theory [10], representation theory [11, SS9.5.2], [12], and enumerative algebraic geometry [17, SS4.6]. A natural generalization of a partition \(\lambda\) is a skew partition \(\lambda/\mu\). For a long time, no analogue of the hook length formula was known for skew partitions, until Hiroshi Naruse announced one in 2014. 
He expressed the number of standard tableaux of shape \(\lambda/\mu\) not as a simple fraction (such an expression is easily seen to be impossible), but as a sum \[\sum\limits_{E\in\mathcal{E}\left(\lambda/\mu\right)}\frac{n!}{\prod\limits_{c \in Y\left(\lambda\right)\backslash E}h_{\lambda}\left(c\right)} \tag{2}\] over all _excitations_\(E\) of \(Y\left(\mu\right)\) that fit inside \(Y\left(\lambda\right)\) (see Theorem 2.11 for a precise statement). The latter excitations (also known as _excited diagrams_) are a (finite) set of diagrams obtained from \(Y\left(\mu\right)\) through a simple combinatorial transition rule. When \(\mu=\varnothing\), there is only one such excitation, and the sum (2) boils down to the single fraction (1). Thus, Naruse's formula generalizes the classical hook length formula. Naruse's formula has garnered a reputation of being hard to prove. Naruse never published his proof, which is believed to have used Schubert calculus. Several proofs have since been published by Morales, Pak and Panova in [14] and [14] and by Konvalinka in [15] and [16], all relying on intricate combinatorics or fairly deep symmetric function theory. Compared with the many elementary proofs of the original hook length formula, these paint a rather desolate picture. Another generalization of the hook length formula has surfaced in a rather different place. In his 2001 work [17], Igor Pak has found a new approach to the original hook length formula using discrete geometry - specifically, using volume-preserving maps on certain polytopes associated to any poset. Alexander Postnikov observed that, as a side effect of his approach, the formula could be refined to an equality between two rational functions (one being defined as a sum over all standard tableaux, and the other being a multivariate generalization of (1)). This generalization, too, has withstood many attempts at a simple proof; the only proofs known so far are Pak's original geometric proof and Sam Hopkins's proof [15] using P-partitions.1 Footnote 1: The two proofs are more akin than they might seem: The P-partitions can be viewed as the integer points in the polytopes considered by Pak, and Hopkins’s toggle-based map \(\mathcal{RSK}\) is essentially Pak’s geometric bijection \(\xi_{\lambda}\). The present work aims to ameliorate the situation by proving both generalizations (the Naruse one and the Pak-Postnikov one) in an elementary way that requires no combinatorial virtuosity from the reader. In fact, we combine them into a single result, which generalizes both: a generating-function hook length formula for skew diagrams. This result has been discovered by the first author and proved by Konvalinka, who outlined his proof in [16, Section 5], but nothing more has been published on it so far. The ingredients of our proof are a bijection between excitations and flagged semi-standard tableaux (essentially the same bijection that appears in all existing proofs of the formula); a refined version of the Jacobi-Trudi formula (already known to Gessel and Viennot); a few elementary determinantal identities; and a handful of recursions and elementary properties of partitions. To keep this paper self-contained, we prove all of them, trusting the expert reader to skip anything that is known or clear. We hope that the level of detail makes this paper accessible to the undergraduate or the non-combinatorialist. 
This paper is formatted as a sequence of intermediate results (lemmas, propositions and corollaries), each of which is not too hard to prove using what has been shown before. Some hints to these proofs can be found in 14, while detailed proofs are given in Section 15. We believe the latter are most useful as a backup, as the reader will attain a better understanding of the material by constructing these proofs on his own. (Thus, our intermediate results serve a similar role as the "Examples" in Macdonald's text [14].) ### Related questions Naruse's formula is not the only formula for the number of standard tableaux of a given shape \(\lambda/\mu\). A different formula was found by Okounkov and Olshanski in 1996 [10, Theorem 8.1], and remarkably can also be reformulated in a similar vein as Naruse's formula, using _reverse excitations_ instead of excitations. We refer to [11] for the details of this formula and various ways to state it. We are not aware of any Pak-Postnikov-type generalizations of this formula so far, but it appears natural to try extending it in this direction. Yet another Naruse-like formula, true however only for a narrower class of skew partitions ("slim diagrams"), can be found in [13, SS9.1]. ### Acknowledgments We thank the Yulia's Dream program and, in particular, its organizers Pavel Etingof, Slava Gerovich, Vasily Dolgushev and Dmytro Matvieievskyi. The first author further appreciates conversations with Alexander Postnikov (who made him aware of the Pak-Postnikov formula), Matjaz Konvalinka (who first confirmed his conjecture that has become Theorem 3.1) and Igor Pak (for several relevant pointers to the literature). ## 2 Notations and terminology The hook length formula concerns _standard tableaux_. What follows is a brief introduction to the subject, sufficient for the purposes of the present work. More comprehensive treatments of the theory of standard tableaux can be found in [12], [15, Chapter 7] and [16, Chapter 7]. ### Partitions and skew partitions We let \(\mathbb{N}:=\{0,1,2,\ldots\}\). The size of a set \(S\) is denoted by \(|S|\). A _partition_ means an infinite sequence \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3},\ldots)\) of nonnegative integers such that \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\cdots\) and \(\lambda_{i}=0\) for all sufficiently large \(i\). For instance, \((5,2,2,1,0,0,0,\ldots)\) is a partition. We usually omit the zeroes when we write down a partition; e.g., the partition we just mentioned can be rewritten as \((5,2,2,1)\). In particular, the sequence \((0,0,0,\ldots)\) is a partition, denoted by \(\varnothing\). We write \(\lambda_{i}\) for the \(i\)-th entry of any partition \(\lambda\). Thus, \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3},\ldots)\) for any partition \(\lambda\). Two partitions \(\lambda\) and \(\mu\) are said to satisfy \(\lambda\supseteq\mu\) if \(\lambda_{i}\geq\mu_{i}\) for all \(i\). We also write \(\mu\subseteq\lambda\) for \(\lambda\supseteq\mu\). A pair \((\lambda,\mu)\) of two partitions satisfying \(\lambda\supseteq\mu\) is called a _skew partition_, and is denoted by \(\lambda/\mu\). **Convention 2.1**.: For Sections 2 and 3, we fix two partitions \(\lambda\) and \(\mu\). (We shall sometimes - but not always - require that \(\lambda\supseteq\mu\).) ### Diagrams and Young diagrams A _box_ (or _cell_) will mean a pair \((i,j)\in\mathbb{Z}^{2}\) of two integers. A _diagram_ will mean a finite set of boxes, i.e., a finite subset of \(\mathbb{Z}^{2}\). 
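Since partitions are infinite sequences with only finitely many nonzero entries, they are conveniently stored by their nonzero parts alone. The following minimal Python sketch (an illustration only; the function names are ours) encodes a partition in this way and tests the relation \(\lambda\supseteq\mu\) just defined.

```python
def part(lam, i):
    """The i-th entry (1-indexed) of a partition given by its nonzero parts, with trailing zeros."""
    return lam[i - 1] if i <= len(lam) else 0

def contains(lam, mu):
    """Test whether lam_i >= mu_i for all i, i.e. whether lam 'contains' mu."""
    return all(part(lam, i) >= part(mu, i) for i in range(1, max(len(lam), len(mu)) + 1))

print(contains((5, 2, 2, 1), (2, 1, 1)))  # True
print(contains((2, 1), (3,)))             # False
```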
_Young diagrams_ are a particularly crucial type of diagrams, defined as follows: If \(\lambda\supseteq\mu\), then the _(skew) Young diagram_\(Y\left(\lambda/\mu\right)\) means the set of all pairs \((i,j)\) of positive integers satisfying \(\lambda_{i}\geq j>\mu_{i}\). For instance, \[Y\left(\left(4,2,1,1\right)/\left(2,1,1\right)\right)=\left\{\left(1,3\right),\ \left(1,4\right),\ \left(2,2\right),\ \left(4,1\right)\right\}.\] The Young diagram \(Y\left(\lambda/\mu\right)\) is clearly a diagram. But there are also diagrams that are not Young diagrams (for instance, \(\left\{\left(1,1\right),\ \left(2,3\right)\right\}\)). If \(\lambda\supseteq\mu\), then the elements \((i,j)\) of the skew Young diagram \(Y\left(\lambda/\mu\right)\) will be called the _boxes_ (or _cells_) of \(Y\left(\lambda/\mu\right)\). Boxes and diagrams will be visually represented in a specific way. Namely, we visualize each box \((i,j)\) as a square box of sidelength 1, placed in the Cartesian plane in such a way that its center has Cartesian coordinates \((i,j)\). However, we let the x-axis go north-to-south and the y-axis go west-to-east2 (so that the eastern neighbor of the box \((i,j)\) is \((i,j+1)\), whereas its southern neighbor is \((i+1,j)\)). Thus, for instance, the skew Young diagram \(Y\left(\left(5,4,3,3,1\right)/\left(2,1,1\right)\right)\) is the follow ing conglomeration of boxes: (where, e.g., the three boxes in the topmost row correspond to the pairs \(\left(1,3\right)\), \(\left(1,4\right)\) and \(\left(1,5\right)\) from left to right). This way of representing boxes and diagrams is known as the _English notation_ or the _matrix notation_ (as it imitates the way that the entries of a matrix are commonly indexed). It makes each diagram look like a table (although usually not of rectangular shape), and thus allows us to speak of "rows" and "columns" of a diagram (e.g., the set of all boxes \(\left(i,j\right)\) with a given \(i\) is called the _\(i\)-th row_), and also to fill numbers into the boxes (which we shall do later). Any partition \(\lambda\) satisfies \(\lambda\supseteq\varnothing\), and thus the skew partition \(\lambda/\varnothing\) is well-defined. We will write \(Y\left(\lambda\right)\) for the skew Young diagram \(Y\left(\lambda/\varnothing\right)\). We observe that it consists of all pairs \(\left(i,j\right)\) of positive integers satisfying \(\lambda_{i}\geq j\). Young diagrams of the form \(Y\left(\lambda\right)\) are called _straight Young diagrams_. In a straight Young diagram, the rows are "left-aligned" (i.e., each row has its westernmost box in the 1-st column). We note that any skew partition \(\lambda/\mu\) satisfies \(Y\left(\lambda/\mu\right)=Y\left(\lambda\right)\setminus Y\left(\mu\right)\). Furthermore, two partitions \(\lambda\) and \(\mu\) satisfy \(\mu\subseteq\lambda\) if and only if \(Y\left(\mu\right)\subseteq Y\left(\lambda\right)\). ### Hooks and their lengths If \(c=\left(i,j\right)\) is a box of a Young diagram \(Y\left(\lambda\right)\), then its _hook_\(H_{\lambda}\left(c\right)\) is defined to be the set of all boxes of \(Y\left(\lambda\right)\) that lie due east or due south of \(c\), including \(c\) itself. 
Formally speaking, \(H_{\lambda}\left(c\right)\) is defined by \[H_{\lambda}\left(c\right):=\left\{\left(i,k\right)\in Y\left(\lambda\right)\ \mid\ k\geq j\right\}\cup\left\{\left(k,j\right)\in Y\left(\lambda\right)\ \mid\ k\geq i\right\}.\] For example, if \(\lambda=\left(5,4,3,3,1\right)\), then the hook \(H_{\lambda}\left(3,2\right)\) of the box \(\left(3,2\right)\) consists of three boxes, which are marked with asterisks in the picture below: (3) Formally speaking, this hook is the set \(\left\{\left(3,2\right),\left(3,3\right),\left(4,2\right)\right\}\). The _hook length_\(h_{\lambda}\left(c\right)\) of a box \(c\in Y\left(\lambda\right)\) is defined to be \(\left|H_{\lambda}\left(c\right)\right|\), that is, the number of all boxes in the hook of \(c\). Thus, \(h_{\lambda}\left(3,2\right)=3\) in our above example (3). Likewise, in the same example, \(h_{\lambda}\left(2,2\right)=5\) and \(h_{\lambda}\left(1,3\right)=6\). ### Standard tableaux If \(\lambda\supseteq\mu\), then a _standard tableau_ of shape \(\lambda/\mu\) is defined to be a way to put a positive integer into each box of \(Y\left(\lambda/\mu\right)\) such that * the integers are \(1,2,\ldots,n\) (where \(n\) is the number of boxes of \(Y\left(\lambda/\mu\right)\)), and each appears exactly once; * the integers increase left-to-right in each row; * the integers increase top-to-bottom in each column. Formally speaking, this means that a standard tableau of shape \(\lambda/\mu\) is a bijection \(T:Y\left(\lambda/\mu\right)\rightarrow\left\{1,2,\ldots,n\right\}\) (where \(n=\left|Y\left(\lambda/\mu\right)\right|\)) such that \[T\left(i,j\right)<T\left(i,j+1\right)\qquad\quad\text{for all}\;\left(i,j \right)\in Y\left(\lambda/\mu\right)\;\text{satisfying}\;\left(i,j+1\right)\in Y \left(\lambda/\mu\right)\] and \[T\left(i,j\right)<T\left(i+1,j\right)\qquad\quad\text{for all}\;\left(i,j \right)\in Y\left(\lambda/\mu\right)\;\text{satisfying}\;\left(i+1,j\right)\in Y \left(\lambda/\mu\right).\] We imagine each value \(T\left(i,j\right)\) of this bijection \(T\) to be written into the box \(\left(i,j\right)\), so that \(T\) is visualized as a filling of the diagram \(Y\left(\lambda/\mu\right)\) with the numbers \(1,2,\ldots,n\). 
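The hook lengths computed in the example above are easy to check mechanically. The following short Python sketch (illustrative only) computes \(h_{\lambda}\left(c\right)\) directly from the definition and reproduces the values \(h_{\lambda}\left(3,2\right)=3\), \(h_{\lambda}\left(2,2\right)=5\) and \(h_{\lambda}\left(1,3\right)=6\) for \(\lambda=\left(5,4,3,3,1\right)\).

```python
def hook_length(lam, i, j):
    """Hook length h_lam((i, j)) of the box (i, j) (1-indexed) in the partition lam."""
    arm = lam[i - 1] - j                                        # boxes due east of (i, j)
    leg = sum(1 for r in range(i, len(lam)) if lam[r] >= j)     # boxes due south of (i, j)
    return arm + leg + 1                                        # plus the box (i, j) itself

lam = (5, 4, 3, 3, 1)
print(hook_length(lam, 3, 2), hook_length(lam, 2, 2), hook_length(lam, 1, 3))  # 3 5 6
```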
For example, here is a standard tableau of shape \(\left(5,4,3,3,1\right)/\left(2,1,1\right)\):
\[\begin{array}{ccccc}&&1&3&7\\ &2&5&8&\\ &4&9&&\\ 6&10&11&&\\ 12&&&&\end{array} \tag{4}\]
(the blank positions are the boxes of \(Y\left(\left(2,1,1\right)\right)\), which do not belong to the skew diagram). We let \(\operatorname{SYT}\left(\lambda/\mu\right)\) denote the set of all standard tableaux of shape \(\lambda/\mu\).

### The hook length formula

We can now state the classical hook length formula of Frame, Robinson and Thrall:

**Theorem 2.2** (hook length formula).: Let \(\lambda\) be a partition such that \(Y\left(\lambda\right)\) has \(n\) boxes. Then,
\[\left|\operatorname{SYT}\left(\lambda/\varnothing\right)\right|=\frac{n!}{\prod\limits_{c\in Y\left(\lambda\right)}h_{\lambda}\left(c\right)}.\]
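As a quick illustration of Theorem 2.2 (and nothing more), the following Python sketch counts the standard tableaux of a small shape by brute force and compares the count with the hook length formula; for \(\lambda=\left(3,2\right)\) both computations give \(5\).

```python
from itertools import permutations
from math import factorial

def boxes_of(lam):
    """Boxes (i, j), 1-indexed, of the Young diagram Y(lam)."""
    return [(i + 1, j + 1) for i, li in enumerate(lam) for j in range(li)]

def hook_length(lam, i, j):
    return (lam[i - 1] - j) + sum(1 for r in range(i, len(lam)) if lam[r] >= j) + 1

def count_syt_brute_force(lam):
    """Count standard tableaux of shape lam by trying all fillings with 1, ..., n."""
    boxes = boxes_of(lam)
    count = 0
    for perm in permutations(range(1, len(boxes) + 1)):
        T = dict(zip(boxes, perm))
        if all(T[(i, j)] < T[(i, j + 1)] for (i, j) in boxes if (i, j + 1) in T) and \
           all(T[(i, j)] < T[(i + 1, j)] for (i, j) in boxes if (i + 1, j) in T):
            count += 1
    return count

lam = (3, 2)
n = sum(lam)
hook_product = 1
for (i, j) in boxes_of(lam):
    hook_product *= hook_length(lam, i, j)
print(count_syt_brute_force(lam), factorial(n) // hook_product)   # both print 5
```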
Much about this theorem is surprising, starting with the fact that the right hand side is not obviously an integer. Various proofs are known; the easiest are probably the ones in [Glass-Ng] (see [Zhang10] for a simplification) and in [Sagan20, SS7.3]. A zoology of different proofs of Theorem 2.2 can be found in Pak's recent work [Pak22, SS11.2]. In the following, we agree to abbreviate \(\operatorname{SYT}\left(\lambda/\varnothing\right)\) as \(\operatorname{SYT}\left(\lambda\right)\). Likewise, a "standard tableau of shape \(\lambda\)" will mean a standard tableau of shape \(\lambda/\varnothing\).

### The Pak-Postnikov refined hook length formula

A recurring motif in algebraic combinatorics is to replace counting by summing. In particular, one might hope to replace the \(\left|\operatorname{SYT}\left(\lambda\right)\right|\) on the left hand side of Theorem 2.2 by a sum of some functions, one for each standard tableau in \(\operatorname{SYT}\left(\lambda\right)\), such that the sum of these functions simplifies to a sufficiently neat expression, and such that substituting \(1\) for all variables in the resulting equality would recover the hook length formula. Such a generalization has been found by Igor Pak and Alexander Postnikov. To state it, we need one more piece of notation: If \(T\) is a standard tableau (of any shape), and if \(k\) is a positive integer, then \(c_{T}\left(k\right)\) shall denote the difference \(j-i\), where \(\left(i,j\right)\) is the box of \(T\) that contains the entry \(k\) (assuming that such a box exists3). For example, if \(T\) is the tableau given in (4), then \(c_{T}\left(1\right)=3-1=2\) and \(c_{T}\left(2\right)=2-2=0\) and \(c_{T}\left(3\right)=4-1=3\) and so on. Now we can state the formula: Footnote 3: Such a box is certainly unique, since \(T\) is a standard tableau.

**Theorem 2.3** (Pak-Postnikov hook length formula).: Let \(\lambda\) be a partition such that \(Y\left(\lambda\right)\) has \(n\) boxes. Let \(\ldots,z_{-2},z_{-1},z_{0},z_{1},z_{2},\ldots\) be an infinite family of commuting indeterminates. For any standard tableau \(T\) of shape \(\lambda/\varnothing\), we define the fraction \[\mathbf{z}_{T}:=\frac{1}{\prod\limits_{k=1}^{n}\left(z_{c_{T}\left(k\right)}+z_{c_{T}\left(k+1\right)}+\cdots+z_{c_{T}\left(n\right)}\right)}\] (a rational function in our indeterminates). Furthermore, for any box \(c=\left(i,j\right)\) in \(Y\left(\lambda\right)\), we define the _algebraic hook length_ \(h_{\lambda}\left(c;z\right)\) by \[h_{\lambda}\left(c;z\right):=\sum\limits_{\left(i,j\right)\in H_{\lambda}\left(c\right)}z_{j-i}.\] Then, \[\sum\limits_{T\in\operatorname{SYT}\left(\lambda\right)}\mathbf{z}_{T}=\prod\limits_{c\in Y\left(\lambda\right)}\frac{1}{h_{\lambda}\left(c;z\right)}.
\tag{5}\] For example, if \(\lambda=(2,2)\), then (5) says that \[\frac{1}{\left(z_{0}+z_{1}+z_{-1}+z_{0}\right)\left(z_{1}+z_{-1}+z_ {0}\right)\left(z_{-1}+z_{0}\right)z_{0}}\] \[\qquad\qquad+\frac{1}{\left(z_{0}+z_{-1}+z_{1}+z_{0}\right)\left( z_{-1}+z_{1}+z_{0}\right)\left(z_{1}+z_{0}\right)z_{0}}\] \[=\frac{1}{\left(z_{0}+z_{1}+z_{-1}\right)\left(z_{-1}+z_{0}\right) \left(z_{1}+z_{0}\right)z_{0}},\] since there are only two standard tableaux of shape \(\lambda\) (namely, \(\begin{array}{ ### Excited moves and excitations Another recent development in hook length formulas is the discovery of a _skew hook length formula_ by Naruse, later proved by Morales, Pak and Panova in two ways ([13, Theorem 1.2], [13, Theorem 1.2]). It gives an expression for \(\left|\operatorname{SYT}\left(\lambda/\mu\right)\right|\) rather than merely for \(\left|\operatorname{SYT}\left(\lambda/\varnothing\right)\right|\). The expression is more intricate, relying on a notion of _excitations_ (also known as _excited diagrams_). Let us start by defining this notion. For any box \(c=(i,j)\in\mathbb{Z}^{2}\), we define * its _southern neighbor_\(c_{\downarrow}:=(i+1,j)\); * its _eastern neighbor_\(c_{\rightarrow}:=(i,j+1)\); * its _southeastern neighbor_\(c_{\searrow}:=(i+1,j+1)\). These neighbors are arranged as follows: \[\begin{array}{|c|c|}\hline c&c_{\rightarrow}\\ \hline c_{\downarrow}&c_{\searrow}\\ \hline\end{array}.\] If \(D\) is a diagram that contains some box \(c\) but contains none of its three neighbors \(c_{\downarrow},c_{\rightarrow},c_{\searrow}\), then we can replace the box \(c\) by its southeastern neighbor \(c_{\searrow}\) in \(D\). The resulting diagram \((D\setminus\{c\})\cup\left\{c_{\searrow}\right\}\) is denoted by \(\operatorname{exc}_{c}D\), and we say that \(\operatorname{exc}_{c}D\) is obtained from \(D\) by an _excited move_. Thus, \[\operatorname{exc}_{c}D=(D\setminus\{c\})\cup\left\{c_{\searrow}\right\}.\] **Example 2.4**.: Let \(D\) be the diagram \(\{(1,1)\,,\,(1,3)\,,\,(2,1)\}\). Then, we can obtain the diagram \[\operatorname{exc}_{(1,3)}D =(D\setminus\{(1,3)\})\cup\{(2,4)\}\] \[=\{(1,1)\,,\,(2,4)\,,\,(2,1)\}\] by an excited move from \(D\). However, the diagram \((D\setminus\{(1,1)\})\cup\{(2,2)\}\) cannot be obtained by an excited move from \(D\), since the southern neighbor \((2,1)\) of \((1,1)\) is in \(D\) (so that \(\operatorname{exc}_{(1,1)}D\) is not defined). Another way to apply an excited move to \(D\) results in \[\operatorname{exc}_{(2,1)}D =(D\setminus\{(2,1)\})\cup\{(3,2)\}\] \[=\{(1,1)\,,\,(1,3)\,,\,(3,2)\}\,.\] We can visualize an excited move as a diagonal (bishop) move4 by one unit to the east and one unit to the south simultaneously; it is only allowed if the two "intermediate" boxes (i.e., the southern and eastern neighbors of the original box) as well as the target box are not in the diagram. If a diagram \(E\) is obtained from a diagram \(D\) by a sequence of excited moves, then we say that \(E\) is an _excitation_ of \(D\). (This sequence can be empty, so that \(D\) itself is an excitation of \(D\).) **Example 2.5**.: Let \(D\) be the diagram \(\{\left(1,1\right),\ \left(1,3\right),\ \left(2,1\right)\}\), and let \(E\) be the diagram \(\{\left(1,3\right),\ \left(2,2\right),\ \left(4,3\right)\}\). Then, \(E\) is an excitation of \(D\). In fact, it is easy to check that \(E=\operatorname{exc}_{\left(1,1\right)}\left(\operatorname{exc}_{\left(3,2 \right)}\left(\operatorname{exc}_{\left(2,1\right)}D\right)\right)\). 
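Excited moves are straightforward to implement. The following minimal Python sketch (an illustration only) applies the three moves from Example 2.5 and confirms the resulting diagram; the helper `exc` simply enforces the condition that none of the three neighbours of the moved box lies in the diagram.

```python
def exc(D, c):
    """Apply the excited move at box c to the diagram D (a set of boxes); raise if not allowed."""
    i, j = c
    neighbors = {(i + 1, j), (i, j + 1), (i + 1, j + 1)}
    if c not in D or neighbors & D:
        raise ValueError(f"excited move at {c} is not allowed")
    return (D - {c}) | {(i + 1, j + 1)}

D = {(1, 1), (1, 3), (2, 1)}
E = exc(exc(exc(D, (2, 1)), (3, 2)), (1, 1))
print(E == {(1, 3), (2, 2), (4, 3)})  # True, as claimed in Example 2.5
```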
Now, recall that \(\lambda\) and \(\mu\) are two partitions. **Definition 2.6**.: We define \(\mathcal{E}\left(\lambda/\mu\right)\) to be the set of all excitations \(E\) of \(Y\left(\mu\right)\) that satisfy \(E\subseteq Y\left(\lambda\right)\). **Example 2.7**.: Let \(\lambda=\left(4,4,3\right)\) and \(\mu=\left(3,1\right)\). Then, the skew diagram \(Y\left(\lambda/\mu\right)\) looks as follows: The set \(\mathcal{E}\left(\lambda/\mu\right)\) consists of the following diagrams: \[Y\left(\mu\right) =\left\{\left(1,1\right),\ \left(1,2\right),\ \left(1,3\right),\ \left(2,1\right)\right\},\] \[\operatorname{exc}_{\left(2,1\right)}\left(Y\left(\mu\right)\right) =\left\{\left(1,1\right),\ \left(1,2\right),\ \left(1,3\right),\ \left(3,2\right)\right\},\] \[\operatorname{exc}_{\left(1,3\right)}\left(Y\left(\mu\right)\right) =\left\{\left(1,1\right),\ \left(1,2\right),\ \left(2,4\right),\ \left(2,1\right)\right\},\] \[\operatorname{exc}_{\left(1,3\right)}\left(\operatorname{exc}_{ \left(2,1\right)}\left(Y\left(\mu\right)\right)\right) =\left\{\left(1,1\right),\ \left(1,2\right),\ \left(2,4\right),\ \left(3,2\right)\right\},\] \[\operatorname{exc}_{\left(1,2\right)}\left(\operatorname{exc}_{ \left(1,3\right)}\left(Y\left(\mu\right)\right)\right) =\left\{\left(1,1\right),\ \left(2,3\right),\ \left(2,4\right),\ \left(2,1\right)\right\},\] \[\operatorname{exc}_{\left(1,2\right)}\left(\operatorname{exc}_{ \left(1,3\right)}\left(\operatorname{exc}_{\left(2,1\right)}\left(Y\left(\mu \right)\right)\right)\right) =\left\{\left(1,1\right),\ \left(2,3\right),\ \left(2,4\right),\ \left(3,2\right)\right\},\] \[\operatorname{exc}_{\left(1,1\right)}\left(\operatorname{exc}_{ \left(1,2\right)}\left(\operatorname{exc}_{\left(1,3\right)}\left(\operatorname{ exc}_{\left(2,1\right)}\left(Y\left(\mu\right)\right)\right)\right)\right) =\left\{\left(2,2\right),\ \left(2,3\right),\ \left(2,4\right),\ \left(3,2\right)\right\}.\] (The order in which the excited moves are made can sometimes be altered, but this does not affect the resulting diagrams.) Here are these diagrams, drawn inside the shape \(Y\left(\lambda\right)\) (we mark the boxes in each diagram with asterisks): \[\begin{array}{ It is easy to see that no other excitations are possible (without outgrowing the diagram \(Y\left(\lambda\right)\)). We will soon (in Section 6) learn of an equivalent model for the excitations in \(\mathcal{E}\left(\lambda/\mu\right)\) which is easier to handle. Note that, despite the notation, the set \(\mathcal{E}\left(\lambda/\mu\right)\) is not directly related to the skew diagram \(Y\left(\lambda/\mu\right)\); it could just as well be reasonably called \(\mathcal{E}\left(\mu,\lambda\right)\). Nevertheless, we call it \(\mathcal{E}\left(\lambda/\mu\right)\) in order to follow the existing literature. At this point, let us state three easy observations, which will be useful later. Proofs for these observations (just as for all others in this paper) are given in Section 15 below. **Lemma 2.8**.: Let \(\lambda\) be any partition. Then, \(\mathcal{E}\left(\lambda/\varnothing\right)=\{\varnothing\}\). **Lemma 2.9**.: Let \(\lambda\) and \(\mu\) be two partitions that don't satisfy \(\lambda\supseteq\mu\). Then, \(\mathcal{E}\left(\lambda/\mu\right)=\varnothing\). **Lemma 2.10**.: Let \(\lambda\) be any partition. Then, \(\mathcal{E}\left(\lambda/\lambda\right)=\{Y\left(\lambda\right)\}\). 
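The set \(\mathcal{E}\left(\lambda/\mu\right)\) can be generated mechanically by a breadth-first search through excited moves. The following Python sketch (illustrative only) does this; it only ever performs moves whose new box lies in \(Y\left(\lambda\right)\), which loses nothing here, since a box that has left \(Y\left(\lambda\right)\) can never re-enter it under further excited moves. For \(\lambda=\left(4,4,3\right)\) and \(\mu=\left(3,1\right)\) it finds the \(7\) diagrams of Example 2.7.

```python
def young_diagram(lam):
    """Boxes (i, j), 1-indexed, of the Young diagram of the partition lam."""
    return {(i + 1, j + 1) for i, li in enumerate(lam) for j in range(li)}

def excitations_inside(mu, lam):
    """The set E(lam/mu): excitations of Y(mu) that are contained in Y(lam)."""
    bound = young_diagram(lam)
    start = frozenset(young_diagram(mu))
    seen, todo = {start}, [start]
    while todo:
        D = todo.pop()
        for (i, j) in D:
            if not ({(i + 1, j), (i, j + 1), (i + 1, j + 1)} & D) and (i + 1, j + 1) in bound:
                E = frozenset(D - {(i, j)}) | {(i + 1, j + 1)}
                if E not in seen:
                    seen.add(E)
                    todo.append(E)
    return seen

print(len(excitations_inside((3, 1), (4, 4, 3))))  # 7, matching Example 2.7
```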
### The Naruse skew hook length formula In a conference talk in 2014, Hiroshi Naruse stated the following generalization of the hook length formula to skew diagrams (see [15], [11, Theorem 1.2], [12, Theorem 1.2], [13, (2)]): **Theorem 2.11** (Naruse skew hook length formula).: For any skew partition \(\lambda/\mu\), we have \[\left|\mathrm{SYT}\left(\lambda/\mu\right)\right|=n!\sum_{E\in\mathcal{E}\left( \lambda/\mu\right)}\ \prod_{c\in Y\left(\lambda\right)\setminus E}\frac{1}{h_{\lambda}\left(c \right)},\] where \(n\) is the number of boxes in \(Y\left(\lambda/\mu\right)\). For \(\mu=\varnothing\), this theorem simplifies to the classical hook length formula (Theorem 2.2), since the set \(\mathcal{E}\left(\lambda/\varnothing\right)\) consists of the single diagram \(\varnothing=\{\}\). Once again, the generality comes at a cost: All known proofs of Theorem 2.11 are complex, using either deep combinatorics or (for Naruse's original proof) algebraic geometry. We hope that the proof we give in the present paper will be somewhat more accessible. Note that the expression (2) is just a rewritten form of the right hand side of Theorem 2.11. We remark that Matjaz Konvalinka has generalized Theorem 2.11 further to shifted skew partitions of types B and D in [14]. We shall not follow this direction of generalization here; the applicability of our method to these settings is an interesting question that deserves further work. ## 3 The main theorem The main protagonist of this paper is a common generalization of Theorem 2.11 and Theorem 2.3: **Theorem 3.1**.: Let \(\lambda/\mu\) be a skew partition. Let \(\ldots,z_{-2},z_{-1},z_{0},z_{1},z_{2},\ldots\) be an infinite family of commuting indeterminates. For any standard tableau \(T\) of shape \(\lambda/\mu\), we define the fraction \[\mathbf{z}_{T}:=\frac{1}{\prod\limits_{k=1}^{n}\left(z_{c_{T}(k)}+z_{c_{T}(k+1 )}+\cdots+z_{c_{T}(n)}\right)}\] (a rational function in our indeterminates), where \(n=\left|Y\left(\lambda/\mu\right)\right|\) is the number of boxes of \(Y\left(\lambda/\mu\right)\). Furthermore, for any box \(c=(i,j)\) in \(Y\left(\lambda\right)\), we define the _algebraic hook length_\(h_{\lambda}\left(c;z\right)\) by \[h_{\lambda}\left(c;z\right):=\sum\limits_{(i,j)\in H_{\lambda}(c)}z_{j-i}. \tag{6}\] (Note that this does not depend on \(\mu\).) Then, \[\sum\limits_{T\in\mathrm{SYT}(\lambda/\mu)}\mathbf{z}_{T}=\sum\limits_{E\in \mathcal{E}(\lambda/\mu)}\ \prod\limits_{c\in Y(\lambda)\setminus E}\frac{1}{h_{\lambda}\left(c;z \right)}. \tag{7}\] **Example 3.2**.: Let \(\lambda=(3,2)\) and \(\mu=(1)\). Thus, \(Y\left(\lambda/\mu\right)\) looks as follows: (8) Then, there are five standard tableaux of shape \(\lambda/\mu\), namely \[A=\begin{array}{c|c}\hline 1&2\\ \hline 3&4\\ \hline\end{array}\,\hskip 28.452756ptB=\begin{array}{c|c}\hline 1&3\\ \hline 2&4\\ \hline\end{array}\,\hskip 28.452756ptC=\begin{array}{c|c}\hline 2&3\\ \hline 1&4\\ \hline\end{array}\,\] \[D=\begin{array}{c|c}\hline 1&4\\ \hline 2&3\\ \hline\end{array}\,\hskip 28.452756ptE=\begin{array}{c|c}\hline 2&4\\ \hline 1&3\\ \hline\end{array}\.\] Thus, \(\text{SYT}\left(\lambda/\mu\right)=\left\{A,B,C,D,E\right\}\). 
Furthermore, \[\mathbf{z}_{A} =\frac{1}{\left(z_{1}+z_{2}+z_{-1}+z_{0}\right)\left(z_{2}+z_{-1}+ z_{0}\right)\left(z_{-1}+z_{0}\right)z_{0}},\] \[\mathbf{z}_{B} =\frac{1}{\left(z_{1}+z_{-1}+z_{2}+z_{0}\right)\left(z_{-1}+z_{2} +z_{0}\right)\left(z_{2}+z_{0}\right)z_{0}},\] \[\mathbf{z}_{C} =\frac{1}{\left(z_{-1}+z_{1}+z_{2}+z_{0}\right)\left(z_{1}+z_{2} +z_{0}\right)\left(z_{2}+z_{0}\right)z_{0}},\] \[\mathbf{z}_{D} =\frac{1}{\left(z_{1}+z_{-1}+z_{0}+z_{2}\right)\left(z_{-1}+z_{0} +z_{2}\right)\left(z_{0}+z_{2}\right)z_{2}},\] \[\mathbf{z}_{E} =\frac{1}{\left(z_{-1}+z_{1}+z_{0}+z_{2}\right)\left(z_{1}+z_{0} +z_{2}\right)\left(z_{0}+z_{2}\right)z_{2}}.\] Meanwhile, \[\mathcal{E}\left(\lambda/\mu\right)=\left\{\left\{\left(1,1\right)\right\}, \,\left\{\left(2,2\right)\right\}\right\},\] since the only excitations of \(Y\left(\mu\right)=\left\{\left\{1,1\right\}\right\}\) that are subsets of \(Y\left(\lambda\right)\) are \(Y\left(\mu\right)=\left\{\left(1,1\right)\right\}\) itself and \(\text{exc}_{\left(1,1\right)}\left(Y\left(\mu\right)\right)=\left\{\left(2,2 \right)\right\}\). Furthermore, the algebraic hook lengths of the boxes of \(\lambda=\left(3,2\right)\) are \[h_{\lambda}\left(\left(1,1\right);z\right) =z_{0}+z_{1}+z_{2}+z_{-1},\] \[h_{\lambda}\left(\left(2,1\right);z\right) =z_{-1}+z_{0},\] \[h_{\lambda}\left(\left(1,2\right);z\right) =z_{1}+z_{2}+z_{0},\] \[h_{\lambda}\left(\left(2,2\right);z\right) =z_{0},\] \[h_{\lambda}\left(\left(1,3\right);z\right) =z_{2}.\] Now, in view of \(\text{SYT}\left(\lambda/\mu\right)=\left\{A,B,C,D,E\right\}\) and \(\mathcal{E}\left(\lambda/\mu\right)=\left\{\left\{\left(1,1\right)\right\}, \,\left\{\left(2,2\right)\right\}\right\}\), the claim of Theorem 3.1 says that \[\mathbf{z}_{A}+\mathbf{z}_{B}+\mathbf{z}_{C}+\mathbf{z}_{D}+ \mathbf{z}_{E}\] \[=\prod_{c\in Y\left(\lambda\right)\backslash\left\{\left\{\left(1,1\right)\right\}\right\}}\frac{1}{h_{\lambda}\left(c;z\right)}+\prod_{c\in Y \left(\lambda\right)\backslash\left\{\left(2,2\right)\right\}}\frac{1}{h_{ \lambda}\left(c;z\right)}\] \[=\frac{1}{h_{\lambda}\left(\left(2,1\right);z\right)}\cdot\frac{ 1}{h_{\lambda}\left(\left(1,2\right);z\right)}\cdot\frac{1}{h_{\lambda}\left( \left(2,2\right);z\right)}\cdot\frac{1}{h_{\lambda}\left(\left(1,3\right);z \right)}\] \[\qquad+\frac{1}{h_{\lambda}\left(\left(1,1\right);z\right)}\cdot \frac{1}{h_{\lambda}\left(\left(2,1\right);z\right)}\cdot\frac{1}{h_{\lambda} \left(\left(1,2\right);z\right)}\cdot\frac{1}{h_{\lambda}\left(\left(1,3 \right);z\right)}\] \[=\frac{1}{\left(z_{-1}+z_{0}\right)\left(z_{1}+z_{2}+z_{0}\right) z_{0}z_{2}}\] \[\qquad+\frac{1}{\left(z_{0}+z_{1}+z_{2}+z_{-1}\right)\left(z_{-1} +z_{0}\right)\left(z_{1}+z_{2}+z_{0}\right)z_{2}}.\] This is indeed easily checked using our above expressions for \(\mathbf{z}_{A},\mathbf{z}_{B},\mathbf{z}_{C},\mathbf{z}_{D},\mathbf{z}_{E}\). We note how Theorem 3 generalizes the previous hook length formulas: * If we set \(\mu=\varnothing\) in Theorem 3, then the sum on the right hand side of (7) simplifies to \(\prod\limits_{c\in Y(\lambda)}\frac{1}{h_{\lambda}\left(c;z\right)}\) (by Lemma 2), whereas the set \(\mathrm{SYT}\left(\lambda/\mu\right)\) on the left hand side becomes \(\mathrm{SYT}\left(\lambda\right)\). Thus, the equality (7) turns into (5), and we recover Theorem 2. * Let \(n=\left|Y\left(\lambda/\mu\right)\right|\). 
If we set \(z_{i}=1\) for all \(i\in\mathbb{Z}\) in Theorem 3.1, then each \(\mathbf{z}_{T}\) becomes \(\frac{1}{n!}\) (since the denominator \(\prod\limits_{k=1}^{n}\left(z_{c_{T}(k)}+z_{c_{T}(k+1)}+\cdots+z_{c_{T}(n)}\right)\) becomes \(\prod\limits_{k=1}^{n}\left(n-k+1\right)=n\cdot\left(n-1\right)\cdot\left(n-2\right)\cdot\cdots\cdot 1=n!\)). Hence, the sum on the left hand side of (7) becomes \(\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\frac{1}{n!}=\left|\mathrm{SYT}\left(\lambda/\mu\right)\right|\cdot\frac{1}{n!}\), whereas each algebraic hook length \(h_{\lambda}\left(c;z\right)\) on the right hand side becomes \(h_{\lambda}\left(c\right)\). Thus, the equality (7) becomes \[\left|\mathrm{SYT}\left(\lambda/\mu\right)\right|\cdot\frac{1}{n!}=\sum\limits_{E\in\mathcal{E}\left(\lambda/\mu\right)}\ \prod\limits_{c\in Y\left(\lambda\right)\setminus E}\frac{1}{h_{\lambda}\left(c\right)},\] and we recover Theorem 2.11 (after multiplying both sides by \(n!\)). * Likewise, if we set both \(\mu=\varnothing\) and \(z_{i}=1\) for all \(i\in\mathbb{Z}\) in Theorem 3.1, then Theorem 2.2 results. Thus, a new proof of Theorem 3.1 will yield new proofs of all three previously known hook length formulas. Theorem 3.1 was originally conjectured by one of the authors in 2018, and was proved by Matjaz Konvalinka; his proof (in a type-B variant) has been sketched in [14, Section 5]. Our proof of Theorem 3.1 below will have some commonalities with his; in particular, we shall derive it from the same recursion as he did, although the recursion will be proved quite differently.

## 4 A recursion for the \(\mathbf{z}_{T}\)

**Definition 4.1**.: Let \(\mu\) and \(\nu\) be two partitions. Then, we write \(\mu\lessdot\nu\) if \(\mu\subseteq\nu\) and \(\left|Y\left(\nu/\mu\right)\right|=1\). Equivalently, \(\mu\lessdot\nu\) holds if and only if \(\nu\) can be obtained from \(\mu\) by increasing one entry by \(1\). For example, \(\left(4,2,2,1\right)\lessdot\left(4,3,2,1\right)\) and \(\left(5,2\right)\lessdot\left(5,2,1\right)\). (In the latter example, the "invisible" third entry is being increased from \(0\) to \(1\).) But neither \(\left(2,1\right)\lessdot\left(3,2\right)\) nor \(\left(2,1\right)\lessdot\left(4\right)\) holds. For the rest of this section, we shall use the following notations: **Convention 4.2**.: Fix an infinite family of commuting indeterminates \(\ldots,z_{-2},z_{-1},z_{0},z_{1},z_{2},\ldots\). We shall be working in the field of rational functions in these indeterminates (with rational coefficients). We shall furthermore use the fractions \(\mathbf{z}_{T}\) defined in Theorem 3.1. **Lemma 4.3**.: Let \(\lambda/\mu\) be a skew partition with \(\lambda\neq\mu\) (so that \(Y\left(\lambda/\mu\right)\) has at least one box). Let \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\). If we remove the box with the entry \(1\) from \(T\), and subtract \(1\) from all remaining entries, then we obtain a new standard tableau \(T^{\prime}\), which has shape \(\lambda/\nu\) for some partition \(\nu\) satisfying \(\mu\lessdot\nu\subseteq\lambda\). It satisfies \[\mathbf{z}_{T}=\frac{1}{\sum\limits_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}}\cdot\mathbf{z}_{T^{\prime}}. \tag{9}\] **Example 4.4**.: Let \(\lambda=(3,3,2)\) and \(\mu=(2,1)\).
Let \(T\) be the standard tableau
\[T=\begin{array}{ccc}&&2\\ &1&3\\ 4&5&\end{array}\]
of shape \(\lambda/\mu\), so that \(T\left(1,3\right)=2\), \(T\left(2,2\right)=1\), \(T\left(2,3\right)=3\), \(T\left(3,1\right)=4\) and \(T\left(3,2\right)=5\). The box containing the entry \(1\) is \(\left(2,2\right)\); removing it and subtracting \(1\) from the remaining entries yields the standard tableau
\[T^{\prime}=\begin{array}{ccc}&&1\\ &&2\\ 3&4&\end{array}\]
of shape \(\lambda/\nu\) with \(\nu=\left(2,2\right)\), and indeed \(\mu\lessdot\nu\subseteq\lambda\). The boxes of \(Y\left(\lambda/\mu\right)\) are \(\left(1,3\right),\left(2,2\right),\left(2,3\right),\left(3,1\right)\) and \(\left(3,2\right)\), so that \(\sum_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}=z_{2}+z_{0}+z_{1}+z_{-2}+z_{-1}\), and (9) states that \(\mathbf{z}_{T}=\frac{1}{z_{2}+z_{0}+z_{1}+z_{-2}+z_{-1}}\cdot\mathbf{z}_{T^{\prime}}\).
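Both sides of (7) are explicit rational functions, so the identity can be spot-checked numerically for small shapes. The following Python sketch (an illustration only, used nowhere in the proofs) does this for the shape \(\lambda=\left(3,2\right)\), \(\mu=\left(1\right)\) of Example 3.2: it enumerates \(\mathrm{SYT}\left(\lambda/\mu\right)\) and \(\mathcal{E}\left(\lambda/\mu\right)\) by brute force and substitutes random positive reals for the indeterminates \(z_{d}\).

```python
import random
from itertools import permutations

def Y(lam, mu=()):
    """Boxes (i, j), 1-indexed, of the skew Young diagram Y(lam/mu)."""
    mu = tuple(mu) + (0,) * (len(lam) - len(mu))
    return [(i + 1, j + 1) for i, li in enumerate(lam) for j in range(mu[i], li)]

def standard_tableaux(lam, mu):
    """Brute-force enumeration of the standard tableaux of shape lam/mu, as dicts box -> entry."""
    boxes = Y(lam, mu)
    for perm in permutations(range(1, len(boxes) + 1)):
        T = dict(zip(boxes, perm))
        if all(T[(i, j)] < T[(i, j + 1)] for (i, j) in boxes if (i, j + 1) in T) and \
           all(T[(i, j)] < T[(i + 1, j)] for (i, j) in boxes if (i + 1, j) in T):
            yield T

def excitations(lam, mu):
    """The set E(lam/mu) of Definition 2.6, generated by excited moves staying inside Y(lam)."""
    bound = set(Y(lam))
    seen = {frozenset(Y(mu))}
    todo = list(seen)
    while todo:
        D = todo.pop()
        for (i, j) in D:
            if not ({(i + 1, j), (i, j + 1), (i + 1, j + 1)} & D) and (i + 1, j + 1) in bound:
                E = frozenset(D - {(i, j)}) | {(i + 1, j + 1)}
                if E not in seen:
                    seen.add(E)
                    todo.append(E)
    return seen

def hook(lam, i, j):
    """The boxes of the hook H_lam((i, j))."""
    return [(i, jj) for jj in range(j, lam[i - 1] + 1)] + \
           [(ii, j) for ii in range(i + 1, len(lam) + 1) if lam[ii - 1] >= j]

lam, mu = (3, 2), (1,)
random.seed(0)
z = {d: random.uniform(0.5, 2.0) for d in range(-5, 6)}   # numeric stand-ins for the z_d

lhs = 0.0
for T in standard_tableaux(lam, mu):
    contents = [j - i for (i, j), _ in sorted(T.items(), key=lambda bv: bv[1])]
    denom = 1.0
    for k in range(len(contents)):
        denom *= sum(z[c] for c in contents[k:])
    lhs += 1.0 / denom                                     # this is z_T evaluated at the chosen z

rhs = 0.0
for E in excitations(lam, mu):
    denom = 1.0
    for (i, j) in Y(lam):
        if (i, j) not in E:
            denom *= sum(z[jj - ii] for (ii, jj) in hook(lam, i, j))
    rhs += 1.0 / denom

print(abs(lhs - rhs) < 1e-9)                               # True: both sides of (7) agree
```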
**Lemma 4.5**.: Let \(\lambda/\mu\) be a skew partition. **(a)** If \(\lambda=\mu\), then \[\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=1.\] **(b)** If \(\lambda\neq\mu\), then \[\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=\frac{1}{\sum\limits_{\left(i,j\right)\in Y\left(\lambda/\mu\right)}z_{j-i}}\cdot\sum\limits_{\mu\lessdot\nu\subseteq\lambda}\ \ \sum\limits_{T\in\mathrm{SYT}\left(\lambda/\nu\right)}\mathbf{z}_{T}.\] Here, the second sum on the right hand side is a sum over all partitions \(\nu\) satisfying \(\mu\lessdot\nu\subseteq\lambda\). Note that the skew partitions \(\lambda/\nu\) on the right hand side of Lemma 4.5 **(b)** have one fewer box than the skew partition \(\lambda/\mu\) (in the sense that \(\left|Y\left(\lambda/\nu\right)\right|=\left|Y\left(\lambda/\mu\right)\right|-1\)). Thus, Lemma 4.5 yields a recursive algorithm for computing the left hand side \(\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}\) of (7) for all skew partitions \(\lambda/\mu\). We shall now show that the right hand side \(\sum\limits_{E\in\mathcal{E}\left(\lambda/\mu\right)}\ \prod\limits_{c\in Y\left(\lambda\right)\setminus E}\frac{1}{h_{\lambda}\left(c;z\right)}\) of (7) satisfies an analogous recursion (Lemma 12.5). The proof of this recursion is much subtler, and the preparations for it will occupy most of this paper. Once it is proved, (7) will easily follow.

## 5 Conjugates, Delta-sets and the Konvalinka recursion

We shall next explore some fundamental features of partitions, which will become basic ingredients in our proof.

### The conjugate partition

We recall one of the most fundamental concepts in the theory of partitions: **Definition 5.1**.: Let \(\lambda\) be a partition. Then, \(\lambda^{t}\) is the partition whose Young diagram is the reflection of the diagram of \(\lambda\) across the main diagonal - i.e., it is given by \[Y\left(\lambda^{t}\right)=\left\{\left(j,i\right)\mid\left(i,j\right)\in Y\left(\lambda\right)\right\}. \tag{10}\] Explicitly, it is given by \[\lambda_{k}^{t}=\left|\left\{i\geq 1\mid\lambda_{i}\geq k\right\}\right|\qquad\text{for all }k\geq 1. \tag{11}\] Equivalently, \[\lambda_{k}^{t}=\left(\text{number of boxes in the $k$-th column of }Y\left(\lambda\right)\right) \tag{12}\] for all \(k\geq 1\). Equivalently, \[\lambda_{k}^{t}=\max\left\{i\geq 1\mid\lambda_{i}\geq k\right\}\qquad\text{for all $k\geq 1$,} \tag{13}\] with the understanding that the maximum of an empty set is \(0\) here. The partition \(\lambda^{t}\) is called the _conjugate_ (or _transpose_) of \(\lambda\). **Example 5.2**.: Let \(\lambda\) be the partition \(\left(5,2,2,1\right)\). Then, \(\lambda^{t}=\left(4,3,1,1,1\right)\). The Young diagrams \(Y\left(\lambda\right)\) and \(Y\left(\lambda^{t}\right)\) are \[Y\left(\lambda\right)\colon\ \begin{array}{ccccc}*&*&*&*&*\\ *&*&&&\\ *&*&&&\\ *&&&&\end{array}\qquad\qquad Y\left(\lambda^{t}\right)\colon\ \begin{array}{cccc}*&*&*&*\\ *&*&*&\\ *&&&\\ *&&&\\ *&&&\end{array}\] (one asterisk per box). The following lemma follows easily from this definition: **Lemma 5.3**.: Let \(\lambda\) be a partition.
Let \(i\) and \(j\) be two positive integers. Then, we have the logical equivalence \[\left(\lambda_{i}^{t}\geq j\right)\iff\left(\lambda_{j}\geq i\right).\] Note that the conjugate partition \(\lambda^{t}\) is denoted \(\lambda^{\prime}\) in [10] and denoted \(\widetilde{\lambda}\) in [12]. ### Delta-sets The following definition is less standard but no less important to us: **Definition 5.4**.: Let \(\lambda\) be any partition. Then, we define a set \(\Delta\left(\lambda\right)\) by \[\Delta\left(\lambda\right):=\left\{\lambda_{i}-i\mid i\geq 1\right\}.\] **Example 5.5**.: Let \(\lambda\) be the partition \(\left(5,2,2,1\right)\). Then, \[\Delta\left(\lambda\right)=\left\{4,0,-1,-3,-5,-6,-7,\ldots\right\}\] (where the "\(\ldots\)" are just the negative integers from \(-8\) on downwards). ### The \(\mathbf{s}_{\lambda}\left[\nu\right]\) polynomial **Definition 5.6**.: Let \(\lambda\) be a partition. Let \(x_{1},x_{2},x_{3},\ldots\) and \(y_{1},y_{2},y_{3},\ldots\) be two infinite families of commuting indeterminates. For any partition \(\nu\), we set \[\mathbf{s}_{\lambda}\left[\nu\right]:=\sum_{D\in\mathcal{E}\left(\lambda/\nu \right)}\ \prod_{(i,j)\in D}\left(x_{i}+y_{j}\right).\] Also, if \(\nu\) is not a partition, then we set \[\mathbf{s}_{\lambda}\left[\nu\right]:=0.\] **Example 5.7**.: Let \(\lambda=(3,3,1)\) and \(\mu=(1)\). Then, \[\mathcal{E}\left(\lambda/\mu\right)=\left\{\left\{\left(1,1\right),\,\left(1, 2\right)\right\},\ \left\{\left(1,1\right),\,\left(2,3\right)\right\},\ \left\{\left(2,2\right),\,\left(2,3\right)\right\}\right\}.\] Drawn using asterisks as in Example 2.7, these three excitations look as follows: Definition 5.6 yields \[\begin{array}{l}\mathbf{s}_{\lambda}\left[\mu\right]\\ =\sum_{D\in\mathcal{E}\left(\lambda/\mu\right)}\ \ \prod_{(i,j)\in D} \left(x_{i}+y_{j}\right)\\ =\prod_{(i,j)\in\left\{\left\{\left(1,1\right),\,\left(1,2\right)\right\} \right\}}\left(x_{i}+y_{j}\right)+\prod_{(i,j)\in\left\{\left\{\left(1,1 \right),\,\left(2,3\right)\right\}\right\}}\left(x_{i}+y_{j}\right)+\prod_{(i,j)\in\left\{\left\{\left(2,2\right),\,\left(2,3\right)\right\}\right\}} \left(x_{i}+y_{j}\right)\\ =\left(x_{1}+y_{1}\right)\left(x_{1}+y_{2}\right)+\left(x_{1}+y_{1}\right) \left(x_{2}+y_{3}\right)+\left(x_{2}+y_{2}\right)\left(x_{2}+y_{3}\right)\\ =x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}+x_{1}y_{1}+x_{1}y_{2}+x_{2}y_{1}+x_{1}y_{3}+x_ {2}y_{2}+x_{2}y_{3}\\ \qquad\qquad+y_{1}y_{2}+y_{1}y_{3}+y_{2}y_{3}.\end{array}\] In the lingo of symmetric functions, the polynomials \(\mathbf{s}_{\lambda}\left[\nu\right]\) can be called _flagged factorial Schur polynomials_. They turn into the usual flagged Schur polynomials if we set all the \(y_{i}\) to \(0\), and into the factorial Schur polynomials if we let \(\lambda_{i}\rightarrow+\infty\). (This follows from Corollary 6.22 below.) ### The Konvalinka recursion Now, we can state the _Konvalinka recursion_ ([18, Theorem 5]), which was used by Konvalinka in his proof of Naruse's formula: **Theorem 5.8** (Konvalinka recursion).: Let \(\lambda/\mu\) be any skew partition, and let \(x_{1},x_{2},x_{3},\ldots\) and \(y_{1},y_{2},y_{3},\ldots\) be two infinite families of commuting indeterminates. 
Set \[\ell_{i}:=\lambda_{i}-i\qquad\text{ and }\qquad\ell_{i}^{t}:=\lambda_{i}^{t}-i \qquad\text{ for all }i\geq 1.\] Then, \[\left(\sum_{\begin{subarray}{c}k\geq 1;\\ \ell_{k}\not\in\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}k \geq 1;\\ \ell_{k}^{t}\not\in\Delta(\mu^{t})\end{subarray}}y_{k}\right)\mathbf{s}_{ \lambda}\left[\mu\right]=\sum_{\mu<v\subseteq\lambda}\mathbf{s}_{\lambda}\left[ \nu\right].\] Here, the sum on the right hand side ranges over all partitions \(v\) that satisfy \(\mu<v\subseteq\lambda\). Our proof of Theorem 3.1 will not make direct use of the Konvalinka recursion as we just stated it, but instead use a simpler (if less aesthetically pleasant) variant (Lemma 11.11, which we will prove below). We will then (in Section 13) derive the Konvalinka recursion from this variant (with some extra work). ## 6 Flagged semistandard tableaux We next define the notion of _flagged semistandard tableaux_, which will (in a specific case) be a more manageable model for excitations. ### Semistandard tableaux We begin with the concept of semistandard tableaux (a relative of that of standard tableaux): **Definition 6.1**.: Let \(\mu\) be a partition. A _semistandard tableau_ of shape \(\mu\) means a way to put a positive integer into each box of \(Y\left(\mu\right)\) (that is, formally speaking, a map \(T:Y\left(\mu\right)\rightarrow\left\{1,2,3,\ldots\right\}\)) such that * the integers weakly increase left-to-right in each row (i.e., we have \(T\left(i,j\right)\leq T\left(i,j+1\right)\) whenever \(\left(i,j\right)\) and \(\left(i,j+1\right)\) belong to \(Y\left(\mu\right)\)); * the integers strictly increase top-to-bottom in each column (i.e., we have \(T\left(i,j\right)<T\left(i+1,j\right)\) whenever \(\left(i,j\right)\) and \(\left(i+1,j\right)\) belong to \(Y\left(\mu\right)\)). (Just as before, the value \(T\left(i,j\right)\) is regarded as the entry of \(T\) in the box \(\left(i,j\right)\).) We let \(\text{SSYT}\left(\mu\right)\) denote the set of all semistandard tableaux of shape \(\mu\). **Example 6.2**.: If \(\mu=\left(4,3,3\right)\), then \(T\in\text{SSYT}\left(\mu\right)\) can be: \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 2 & 3 \\ \hline 3 & 4 & 7 \\ \hline 8 & 8 & 8 \\ \hline \end{tabular} but cannot be \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 2 & 3 \\ \hline 1 & 4 & 7 \\ \hline 8 & 8 & 8 \\ \hline \end{tabular} because its property \(T\left(1,1\right)=T\left(2,1\right)\) would violate the second condition in Definition 6.1. The following lemma exposes two basic properties of semistandard tableaux, which will be used later on and also make for good warmup exercises: **Lemma 6.3**.: Let \(\mu\) be a partition. Let \(T\in\text{SSYT}\left(\mu\right)\) be a semistandard tableau. Then: * We have \(T\left(i,j\right)\geq i\) for each \(\left(i,j\right)\in Y\left(\mu\right)\). * Let \(\left(i,j\right)\) and \(\left(u,v\right)\) be two boxes in \(Y\left(\mu\right)\) such that \(i\leq u\) and \(j\leq v\). Then, \[T\left(u,v\right)-u\geq T\left(i,j\right)-i.\] ### Flagged semistandard tableaux in general Flagged semistandard tableaux are semistandard tableaux in which, for each \(i\), the entries in the \(i\)-th row are bounded from above by a given integer \(b_{i}\): **Definition 6.4**.: * A _flagging_ means a sequence \(\left(b_{1},b_{2},b_{3},\ldots\right)\), where each \(b_{i}\) is a positive integer. * A flagging \(\left(b_{1},b_{2},b_{3},\ldots\right)\) is said to be _weakly increasing_ if \(b_{1}\leq b_{2}\leq b_{3}\leq\cdots\). 
* Let \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be a flagging, and let \(\mu\) be a partition. A semistandard tableau \(T\) of shape \(\mu\) is said to be \(\mathbf{b}\)_-flagged_ if and only if it satisfies \[T\left(i,j\right)\leq b_{i}\qquad\text{for all }\left(i,j\right)\in Y\left(\mu\right)\] (that is, all entries in row \(i\) are \(\leq b_{i}\)). We let \(\text{FSSYT}\left(\mu,\mathbf{b}\right)\) be the set of all \(\mathbf{b}\)-flagged semistandard tableaux of shape \(\mu\).

**Example 6.5**.: Let \(\mu=(3,2,1)\) and \(\mathbf{b}=(2,3,3,3,3,\ldots)\). Then, \(\text{FSSYT}\left(\mu,\mathbf{b}\right)\) consists of the five semistandard tableaux \[\begin{array}{|c|c|c|}\hline 1&1&1\\ \hline 2&2\\ \hline 3\\ \hline\end{array}\qquad\begin{array}{|c|c|c|}\hline 1&1&1\\ \hline 2&3\\ \hline 3\\ \hline\end{array}\qquad\begin{array}{|c|c|c|}\hline 1&1&2\\ \hline 2&2\\ \hline 3\\ \hline\end{array}\qquad\begin{array}{|c|c|c|}\hline 1&1&2\\ \hline 2&3\\ \hline 3\\ \hline\end{array}\qquad\begin{array}{|c|c|c|}\hline 1&2&2\\ \hline 2&3\\ \hline 3\\ \hline\end{array}.\]

**Definition 6.6**.: In the following, for any \(k\in\mathbb{Z}\), we define the \(k\)_-th diagonal_ to be the set of all boxes \(\left(i,j\right)\in\mathbb{Z}^{2}\) such that \(j-i=k\). For instance, the 1-st diagonal consists of the boxes \(\ldots,\ \left(-2,-1\right),\ \left(-1,0\right),\ \left(0,1\right),\ \left(1,2\right),\ \ldots\). We note that if \(\lambda\) is a partition and \(i\geq 1\), then the \(i\)-th row of \(Y\left(\lambda\right)\) has boxes in the \(\left(1-i\right)\)-th, \(\left(2-i\right)\)-th, ..., \(\left(\lambda_{i}-i\right)\)-th diagonals.

### The flagging induced by \(\lambda/\mu\)

**Convention 6.7**.: **For the rest of this section**, we fix two partitions \(\lambda\) and \(\mu\). (We do not require that \(\lambda\supseteq\mu\).)

**Definition 6.8**.: Let \(\lambda\) and \(\mu\) be two partitions (not necessarily satisfying \(\mu\subseteq\lambda\)). For each \(i\geq 1\), we set \[b_{i}:=\max\left\{k\geq 0\ |\ \lambda_{k}-k\geq\mu_{i}-i\right\},\] where we understand \(\lambda_{0}\) to be \(+\infty\) (so that the set on the right hand side always includes 0). The maximum here is well-defined because of Lemma 6.10 below. We define the flagging \(\mathbf{b}\) to be \(\left(b_{1},b_{2},b_{3},\ldots\right)\), and we call it the _flagging induced by \(\lambda/\mu\)_.

Note that if \(\mu\subseteq\lambda\) and \(\mu_{i}>0\), then \(b_{i}\) is \(i\) plus the maximum number of diagonal moves that the box \(\left(i,\mu_{i}\right)\) could make along its diagonal without leaving the Young diagram \(Y\left(\lambda\right)\).
(A _diagonal move_ is a move that takes a box to its southeastern neighbor on the same diagonal, without regard for any other boxes in the diagram.) This description applies to the general case as well, if we extend \(Y\left(\lambda\right)\) by all boxes \(\left(i,j\right)\) with \(i<0\) or \(j<0\), and allow "negative" diagonal moves (which are just reverse diagonal moves). We define \(\mathcal{F}\left(\lambda/\mu\right)\) to be the set FSSYT\(\left(\mu,\mathbf{b}\right)\), with \(\mathbf{b}\) defined as above. **Example 6.9**.: Let \(\mu=(3,2,1,1)\) and \(\lambda=(7,6,6,5,5,3,1)\). Then, the flagging induced by \(\lambda/\mu\) is \(\mathbf{b}=(3,5,5,6,6,7,8,9,\ldots)\). (Note that \(b_{i}=i\) for all sufficiently large \(i\), by Lemma 6.13.) Visually, \(b_{2}=5\) can be seen by applying diagonal moves to the box \(g=(2,\mu_{2})\), and \(b_{3}=5\) can be seen by applying diagonal moves to the box \(h=(3,\mu_{3})\) in the following picture: The flagging induced by \(\lambda/\mu\) has a few basic properties: **Lemma 6.10**.: The maximum \(\max\left\{k\geq 0\mid\lambda_{k}-k\geq\mu_{i}-i\right\}\) in Definition 6.8 is well-defined. **Lemma 6.11**.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). Let \(i\) and \(j\) be two positive integers. Then, we have the logical equivalence \[(j\leq b_{i})\iff\left(\lambda_{j}-j\geq\mu_{i}-i\right).\] **Lemma 6.12**.: The flagging \(\mathbf{b}\) induced by \(\lambda/\mu\) is weakly increasing. **Lemma 6.13**.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). Let \(i\geq 1\) satisfy \(\mu_{i}=0\) and \(\lambda_{i+1}=0\). Then, \(b_{i}=i\). ### Flagged semistandard tableaux vs. excitations Now we get to the core of this section. There is a connection between flagged semistandard tableaux and excitations. This connection was first noticed by Kreiman [10, SS6], who stated its main properties but left them unproved5. Essentially the same properties (in a slightly modified form6) later appeared in [14, Proposition 3.6] with a sketched proof and in [13, Section 4] with an assurance of the proof being "easy". We shall give complete proofs of these properties, although they are indeed highly intuitive and easy to the combinatorially trained reader (even as their simplicity gets lost in writing). Footnote 5: Kreiman refers to the excited moves as “ladder moves”. We suspect that his \(\mathrm{SSYT}_{\mu,\lambda}\) is our \(\mathcal{F}\left(\lambda/\mu\right)\). **Definition 6.14**.: Let \(T\in\mathrm{SSYT}\left(\mu\right)\) be a semistandard tableau. Then: * If \(c=\left(i,j\right)\) is any box in \(Y\left(\mu\right)\), then we define a new box \[c_{+T}:=\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)\in\mathbb{Z}^{2}.\] (Recall that \(T\left(i,j\right)\) denotes the entry of \(T\) in the box \(\left(i,j\right)\).) The box \(c_{+T}\) can be equivalently characterized as the unique box in the \(T\left(c\right)\)-th row that lies on the same diagonal as \(c\). 
* The diagram \(\mathbf{D}\left(T\right)\) is defined by \[\mathbf{D}\left(T\right):=\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\right\}. \tag{14}\]

**Example 6.15**.: Let \(\mu=\left(4,2,1\right)\), and let \(T\in\mathrm{SSYT}\left(\mu\right)\) be the following semistandard tableau: \[T=\begin{array}{|c|c|c|c|}\hline 1&1&2&3\\ \hline 2&3\\ \hline 4\\ \hline\end{array}.\] Then, the corresponding boxes \(c_{+T}\) are \[\left(1,1\right)_{+T}=\left(1,1\right),\quad\left(1,2\right)_{+T}=\left(1,2\right),\quad\left(1,3\right)_{+T}=\left(2,4\right),\quad\left(1,4\right)_{+T}=\left(3,6\right),\] \[\left(2,1\right)_{+T}=\left(2,1\right),\quad\left(2,2\right)_{+T}=\left(3,3\right),\quad\left(3,1\right)_{+T}=\left(4,2\right).\] Thus, the diagram \(\mathbf{D}\left(T\right)\) is the collection of boxes \[\mathbf{D}\left(T\right)=\left\{\left(1,1\right),\ \left(1,2\right),\ \left(2,1\right),\ \left(2,4\right),\ \left(3,3\right),\ \left(3,6\right),\ \left(4,2\right)\right\}\] (the northwesternmost of which is \(\left(1,1\right)\)).

**Example 6.16**.: Let \(\mu=\left(3,2,1\right)\) and \(\lambda=\left(4,4,3\right)\). Then, the flagging \(\mathbf{b}\) induced by \(\lambda/\mu\) is \(\left(2,3,3,4,5,6,\ldots\right)\). Since only \(b_{1},b_{2},b_{3}\) matter for tableaux of shape \(\mu=(3,2,1)\), the flagged semistandard tableaux \(T\in\mathcal{F}\left(\lambda/\mu\right)\) are exactly the five tableaux listed in Example 6.5, and the corresponding diagrams \(\mathbf{D}\left(T\right)\) are the five excitations in \(\mathcal{E}\left(\lambda/\mu\right)\) (in accordance with Lemma 6.21 below).

The map \(T\mapsto\mathbf{D}\left(T\right)\) has the following two properties, valid for any semistandard tableau \(T\in\mathrm{SSYT}\left(\mu\right)\):

* We have \[\prod_{(i,j)\in\mathbf{D}(T)}\left(x_{i}+y_{j}\right)=\prod_{(i,j)\in Y(\mu)}\left(x_{T(i,j)}+y_{T(i,j)+j-i}\right).\]
* We have \(\mathbf{D}\left(T\right)\in\mathcal{E}\left(\lambda/\mu\right)\) if and only if \(T\in\mathcal{F}\left(\lambda/\mu\right)\).
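Before moving on, here is a small computational illustration (an informal aside; the code and all helper names in it are ours and not part of the formal development). Assuming the data of Example 6.16, the sketch below enumerates the \(\mathbf{b}\)-flagged semistandard tableaux of shape \(\mu\) for the flagging induced by \(\lambda/\mu\), forms each diagram \(\mathbf{D}\left(T\right)\), and checks the weaker consequence of the second property above that every \(\mathbf{D}\left(T\right)\) is contained in \(Y\left(\lambda\right)\).

```python
# Informal sketch (not part of the formal development; helper names are ours).
# Data of Example 6.16: mu = (3,2,1), lambda = (4,4,3).
from itertools import product

lam = (4, 4, 3)
mu = (3, 2, 1)

def lam_part(k):                      # lambda_k, with trailing zeros
    return lam[k - 1] if 1 <= k <= len(lam) else 0

def induced_b(i):                     # b_i from Definition 6.8 (with lambda_0 = +infinity)
    m = (mu[i - 1] if i <= len(mu) else 0) - i
    k = 0
    while lam_part(k + 1) - (k + 1) >= m:
        k += 1
    return k

Y_mu = [(i + 1, j + 1) for i, r in enumerate(mu) for j in range(r)]
Y_lam = {(i + 1, j + 1) for i, r in enumerate(lam) for j in range(r)}

def is_semistandard(T):               # weak increase in rows, strict increase in columns
    return (all(T[i, j] <= T[i, j + 1] for (i, j) in T if (i, j + 1) in T)
            and all(T[i, j] < T[i + 1, j] for (i, j) in T if (i + 1, j) in T))

def diagram_D(T):                     # D(T) = { c_{+T} : c in Y(mu) }  (Definition 6.14)
    return {(T[i, j], T[i, j] + j - i) for (i, j) in T}

flagged = []
for vals in product(*[range(1, induced_b(i) + 1) for (i, j) in Y_mu]):
    T = dict(zip(Y_mu, vals))         # entries in row i are <= b_i by construction
    if is_semistandard(T):
        flagged.append(T)

print(len(flagged))                   # prints 5, matching Examples 6.5 and 6.16
assert all(diagram_D(T) <= Y_lam for T in flagged)
```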
**Lemma 6.20**.: The map \[\mathrm{SSYT}\left(\mu\right)\to\left\{\text{all excitations of }Y\left(\mu\right)\right\},\] \[T\mapsto\mathbf{D}\left(T\right)\] is well-defined and is a bijection.

**Lemma 6.21**.: The map \[\mathcal{F}(\lambda/\mu)\to\mathcal{E}(\lambda/\mu),\] \[T\mapsto\mathbf{D}\left(T\right)\] is well-defined and is a bijection.

As a consequence of the above, we can rewrite the polynomial \(\mathbf{s}_{\lambda}\left[\mu\right]\) from Definition 5.6 in terms of flagged semistandard tableaux:

**Corollary 6.22**.: We have \[\mathbf{s}_{\lambda}\left[\mu\right]=\sum_{T\in\mathcal{F}(\lambda/\mu)}\ \prod_{(i,j)\in Y(\mu)}\left(x_{T(i,j)}+y_{T(i,j)+j-i}\right).\]

## 7 The \(h\left(a,b,c\right)\) polynomials

Now we introduce three further pieces of notation.

**Definition 7.1**.: The _Iverson bracket_ is the function that assigns to each statement its truth value (i.e., the number \(1\) if the statement is true and \(0\) otherwise). More formally: If \(X\) is a statement, then \[\left[X\right]=\begin{cases}1,&\text{if }X\text{ is true;}\\ 0,&\text{if }X\text{ is false.}\end{cases}\] For example, \(\left[2=3\right]=0\) and \(\left[5\neq 1\right]=1\).

**Definition 7.2**.: If \(N\) is any integer, then \([N]\) will denote the set \(\{1,2,\ldots,N\}\). This set is empty when \(N\leq 0\).

**Definition 7.3**.:

1. Let \(R\) be the polynomial ring over \(\mathbb{Z}\) in countably many commuting indeterminates \[\begin{array}{c}x_{1},x_{2},x_{3},\ldots,\\ y_{1},y_{2},y_{3},\ldots.\end{array}\]
2. We furthermore set \(x_{i}=0\) and \(y_{i}=0\) for any integer \(i\leq 0\). Thus, \(x_{i}\) and \(y_{i}\) are defined for any integer \(i\).
3. For all \(a,c\in\mathbb{Z}\) and \(b\in\mathbb{N}\), we set \[h\left(a,b,c\right):=\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a}\end{subarray}}\ \prod_{j=1}^{a}\left(x_{i_{j}}+y_{i_{j}+(j-1)+c}\right).\] This is a polynomial in \(R\), and will be called an _\(h\)-polynomial_.

It is clear that \(h\left(0,b,c\right)=1\) for any \(b\) and \(c\) (since the set \([b]^{0}\) has only one element). We furthermore understand \(h\left(a,b,c\right)\) to be \(0\) when \(a<0\). Thus, we have \[h(a,b,c)=[a=0]\qquad\text{whenever $a\leq 0$}. \tag{16}\] But we also have \[h(a,b,c)=0\qquad\text{whenever $a>0$ and $b=0$}\] (since \([b]^{a}=[0]^{a}=\varnothing^{a}=\varnothing\) in this case). Combining these two equalities, we obtain \[h(a,b,c)=[a=0]\qquad\text{whenever $b=0$}. \tag{17}\]

Another simple example of an \(h\)-polynomial is \(h\left(1,b,c\right)\):

**Lemma 7.4**.: Let \(c\in\mathbb{Z}\) and \(b\in\mathbb{N}\). Then, \[h\left(1,b,c\right)=\sum_{i=1}^{b}x_{i}+\sum_{j=c+1}^{c+b}y_{j}.\]

Let us now derive three recursive formulas for \(h\)-polynomials.

**Lemma 7.5**.: For all integers \(a\) and \(c\) and all positive integers \(b\), we have \[h\left(a,b,c\right)=\left(x_{b}+y_{a+b+c-1}\right)\cdot h\left(a-1,b,c\right)+h\left(a,b-1,c\right). \tag{18}\]

**Lemma 7.6**.: For all integers \(a\) and \(c\) and all nonnegative integers \(b\), we have \[h\left(a,b,c\right)-h\left(a,b,c-1\right)=\left(y_{a+b+c-1}-y_{c}\right)\cdot h\left(a-1,b,c\right). \tag{19}\]

In what follows, we will use two corollaries that follow easily from the above lemmas:

**Corollary 7.7**.: For all integers \(a\) and \(c\) and all positive integers \(b\), we have \[h\left(a,b-1,c\right)=h\left(a,b,c-1\right)-\left(x_{b}+y_{c}\right)\cdot h\left(a-1,b,c\right).
\tag{20}\] **Corollary 7.8**.: For all integers \(a\) and \(c\) and all nonnegative integers \(b\), we have \[h\left(a+1,b,c+1\right)=h\left(a+1,b,c\right)+\left(y_{a+b+c+1}-y_{c+1}\right) \cdot h\left(a,b,c+1\right).\] ## 8 The flagged Jacobi\(-\)Trudi identity ### A flagged Jacobi\(-\)Trudi identity for \(\operatorname{FSSYT}(\mu,\mathbf{b})\) **Definition 8.1**.: Let \(n\in\mathbb{N}\). Then, the notation \(\left(a_{i,j}\right)_{i,j\in[n]}\) shall mean the \(n\times n\)-matrix whose \((i,j)\)-th entry is \(a_{i,j}\) for all \(i,j\in[n]\). We now state a crucial formula that helps us rewrite certain sums over flagged semistandard tableaux as determinants: **Proposition 8.2**.: Let \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) be a partition. Let \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be a weakly increasing flagging. Then, \[\sum_{T\in\operatorname{FSSYT}(\mu,\mathbf{b})}\ \ \prod_{\left(i,j\right)\in Y (\mu)}\left(x_{T(i,j)}+y_{T(i,j)+j-i}\right)=\det\left(h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)\right)_{i,j\in[n]}\] (where \(x_{1},x_{2},x_{3},\ldots\) and \(y_{1},y_{2},y_{3},\ldots\) are indeterminates). This proposition is a variant of the flagged Jacobi-Trudi identity [21, Theorem 1.3], and in fact is a particular case of Gessel's and Viennot's generalization of the latter ([11, Theorem 3]). (See Subsection 17.1 for how Proposition 8.2 can be derived from [10, Theorem 3].) It can also be derived from [11, Theorem 4.2] (see Subsection 17.2 for a hint). However, the sake of completeness, we shall give a standalone proof (which essentially comes from [10, proof of Theorem 11]). Our proof is a close relative of the well-known proof of the Jacobi-Trudi identities using the Lindstrom-Gessel-Viennot (LGV) lemma (see, e.g., [2, First proof of Theorem 7.16.1]), and indeed a reader familiar with the latter lemma will find it easy to adapt the latter proof to Proposition 8.2. In order to keep the present work self-contained, we are abstaining from the use of the LGV lemma in favor of a direct combinatorial argument using a sign-reversing involution. ### Proof of Proposition 8.2 We will actually prove a slight generalization of Proposition 8.2: **Theorem 8.3**.: Let \(R\) be a commutative ring. Let \(u_{i,j}\) be an element of \(R\) for each pair \((i,j)\in\mathbb{Z}\times\mathbb{Z}\). For each \(b\in\mathbb{N}\) and \(q,d\in\mathbb{Z}\), we define an element \(h_{b;\ q}\left[d\right]\in R\) by \[h_{b;\ q}\left[d\right]:=\sum_{\begin{subarray}{c}\left(i_{1},i_{2},\ldots,i_ {q}\right)\in\left[b\right]^{q};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{q}\end{subarray}}\ \prod_{j=1}^{q}u_{i_{j},\ j-d}.\] This sum is understood to be \(0\) if \(q<0\), and to be \(1\) if \(q=0\). Let \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n})\) be a partition. Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be a weakly increasing flagging. Then, \[\sum_{T\in\mathrm{FSSYT}(\mu,\mathbf{b})}\ \ \prod_{(i,j)\in Y(\mu)}u_{T(i,j),\ j-i}= \det\left(h_{b_{i};\ \mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}.\] Our proof of this theorem will follow [10, proof of Theorem 11]. We begin with some notations:7 Footnote 7: We note that Theorem 8.3 can be generalized even further (see Theorem 16.2 below), but we shall not need this generality. **Convention 8.4**.: For the rest of Subsection 8.2, we fix a number \(n\in\mathbb{N}\), a partition \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n})\) and a weakly increasing flagging \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\). 
We also fix a commutative ring \(R\) and an element \(u_{i,j}\) of \(R\) for each pair \((i,j)\in\mathbb{Z}\times\mathbb{Z}\). We let \(S_{n}\) denote the \(n\)-th symmetric group, i.e., the group of permutations of \(\left\{1,2,\ldots,n\right\}\). For any permutation \(\sigma\in S_{n}\), we let \(\left(-1\right)^{\sigma}\) denote the sign of \(\sigma\). Thus, the Leibniz formula for a determinant says that \[\det\left(a_{i,j}\right)_{i,j\in\left[n\right]}=\sum_{\sigma\in S_{n}}\left(- 1\right)^{\sigma}\prod_{i=1}^{n}a_{i,\sigma\left(i\right)} \tag{21}\] for any \(n\times n\)-matrix \(\left(a_{i,j}\right)_{i,j\in\left[n\right]}\in R^{n\times n}\). **Definition 8.5**.: Let \(\sigma\in S_{n}\) be any permutation. Then: 1. We say that \(\sigma\) is _legitimate_ if each \(i\in[n]\) satisfies \(\mu_{\sigma(i)}-\sigma\left(i\right)+i\geq 0\). 2. If \(\sigma\) is legitimate, then we define \(P\left(\sigma\right)\) to be the set of all pairs \(\left(i,j\right)\) of positive integers satisfying \(i\in[n]\) and \(j\leq\mu_{\sigma(i)}-\sigma\left(i\right)+i\). This set \(P\left(\sigma\right)\) is a diagram. (It has \(\mu_{\sigma(i)}-\sigma\left(i\right)+i\) boxes in the \(i\)-th row for each \(i\in[n]\); all rows are left-aligned.) (If \(\sigma\) is not legitimate, then we don't define the diagram \(P\left(\sigma\right)\), since it would have "negative-length rows".) 3. If \(\sigma\) is legitimate, then a \(\sigma\)_-array_ will mean a filling \(T\) of the diagram \(P\left(\sigma\right)\) with positive integers (i.e., a map \(T:P\left(\sigma\right)\rightarrow\left\{1,2,3,\ldots\right\}\)) that weakly increase left-to-right along each row (i.e., that satisfy \(T\left(i,j\right)\leq T\left(i,j+1\right)\) whenever \(\left(i,j\right)\) and \(\left(i,j+1\right)\) are two elements of \(P\left(\sigma\right)\)). Note that we do not require the entries of \(T\) to strictly increase down the columns. If \(\sigma\) is not legitimate, then we agree that there are no \(\sigma\)-arrays. 4. If \(T\) is a \(\sigma\)-array, then the _weight_ of \(T\) is defined to be the product \(\prod\limits_{\left(i,j\right)\in P\left(\sigma\right)}u_{T\left(i,j\right), \ j-i}\). We denote it by \(w\left(T\right)\). 5. We say that a \(\sigma\)-array \(T\) is _\(\mathbf{b}\)-flagged_ if it has the property that for each \(i\in[n]\), every entry of \(T\) in the \(i\)-th row is \(\leq b_{\sigma(i)}\) (that is, if we have \(T\left(i,j\right)\leq b_{\sigma(i)}\) for every \(\left(i,j\right)\in P\left(\sigma\right)\)). **Example 8.6**.: Assume that \(n=3\) and \(\mu=(4,2,1)\). We write each permutation \(\sigma\in S_{3}\) as the triple \(\left[\sigma\left(1\right),\;\sigma\left(2\right),\;\sigma\left(3\right)\right]\) (delimited by square brackets instead of parentheses in order to avoid confusion with a partition). The symmetric group \(S_{3}\) is generated by the two permutations \(s_{1}:=[2,1,3]\) and \(s_{2}:=[1,3,2]\). The permutation \(s_{1}s_{2}s_{1}=[3,2,1]\in S_{3}\) is not legitimate, because \(i=1\) does not satisfy \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\geq 0\) for \(\sigma=s_{1}s_{2}s_{1}\) (indeed, for this \(i\) and this \(\sigma\), we have \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i=\mu_{3}-3+1=1-3+1=-1<0\)). For a similar reason, the permutation \(s_{2}s_{1}=[3,1,2]\) is not legitimate either. All four remaining permutations in \(S_{3}\) are legitimate. 
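This can also be double-checked mechanically. The following short Python sketch (an informal aside; all names in it are ours) runs through all six permutations of \(S_{3}\), computes the row lengths \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\) of \(P\left(\sigma\right)\), and reports which permutations are legitimate in the sense of Definition 8.5:

```python
# Informal check of Example 8.6 (mu = (4,2,1), n = 3); helper names are ours.
from itertools import permutations

mu = (4, 2, 1)
n = 3

for sigma in permutations(range(1, n + 1)):        # sigma encoded as (sigma(1), ..., sigma(n))
    row_lengths = [mu[sigma[i - 1] - 1] - sigma[i - 1] + i for i in range(1, n + 1)]
    if all(r >= 0 for r in row_lengths):
        print(list(sigma), "is legitimate; P(sigma) has row lengths", row_lengths)
    else:
        print(list(sigma), "is not legitimate")
```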
Here are these four permutations \(\sigma\) along with the row lengths of the corresponding diagrams \(P\left(\sigma\right)\): \[\begin{array}{|c|c|}\hline\sigma&\text{row lengths of }P\left(\sigma\right)\\ \hline\hline\text{id}=\left[1,2,3\right]&4,\ 2,\ 1\\ \hline s_{1}=\left[2,1,3\right]&1,\ 5,\ 1\\ \hline s_{2}=\left[1,3,2\right]&4,\ 0,\ 3\\ \hline s_{1}s_{2}=\left[2,3,1\right]&1,\ 0,\ 6\\ \hline\end{array}\]

Thus, for instance, an \(s_{1}\)-array is a filling of \(P\left(s_{1}\right)\) whose entries weakly increase along each row, i.e., a filling \(T\) of the form \[\begin{array}{|c|c|c|c|c|}\hline a\\ \hline b&c&d&e&f\\ \hline g\\ \hline\end{array}\] with \(b\leq c\leq d\leq e\leq f\). (No conditions on the columns are made!) The weight of this \(s_{1}\)-array \(T\) is \[w\left(T\right)=u_{a,0}u_{b,-1}u_{c,0}u_{d,1}u_{e,2}u_{f,3}u_{g,-2}.\] This \(s_{1}\)-array \(T\) is \(\mathbf{b}\)-flagged if and only if \(a\leq b_{2}\) and \(f\leq b_{1}\) and \(g\leq b_{3}\). (Of course, in order to check that every entry in the \(i\)-th row is \(\leq b_{\sigma(i)}\), we only need to check this for the last entry.)

We can use \(\mathbf{b}\)-flagged \(\sigma\)-arrays to interpret the right hand side of Theorem 8.3:

**Lemma 8.7**.:

**(a)** For each \(\sigma\in S_{n}\), we have \[\prod_{i=1}^{n}h_{b_{\sigma(i)};\;\mu_{\sigma(i)}-\sigma(i)+i}\left[i\right]=\sum_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right). \tag{22}\]

**(b)** We have \[\det\left(h_{b_{i};\;\mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}=\sum_{\sigma\in S_{n}}\left(-1\right)^{\sigma}\sum_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right).\]

We bring the claim of Lemma 8.7**(b)** into a more convenient form using the following definition:

**Definition 8.8**.: A _twisted array_ will mean a pair \(\left(\sigma,T\right)\), where \(\sigma\in S_{n}\) and where \(T\) is a \(\sigma\)-array. (Of course, \(\sigma\) is necessarily legitimate if \(\left(\sigma,T\right)\) is a twisted array, since otherwise there are no \(\sigma\)-arrays.) We say that a twisted array \(\left(\sigma,T\right)\) is \(\mathbf{b}\)-_flagged_ if the \(\sigma\)-array \(T\) is \(\mathbf{b}\)-flagged.

Thus, Lemma 8.7**(b)** becomes the following:

**Lemma 8.9**.: We have \[\det\left(h_{b_{i};\;\mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}=\sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right).\]

It remains to connect the left hand side of Theorem 8.3 with twisted arrays. In a sense, the connection is obvious: A \(\mathbf{b}\)-flagged semistandard tableau \(T\in\operatorname{FSSYT}\left(\mu,\mathbf{b}\right)\) is a specific kind of \(\operatorname{id}\)-array, so that \(\left(\operatorname{id},T\right)\) is a twisted array. It just remains to somehow get rid of all the other twisted arrays \(\left(\sigma,T\right)\) (i.e., those for which \(\sigma\neq\operatorname{id}\), but also those for which \(\sigma=\operatorname{id}\) but \(T\) is not semistandard). This is what we shall do next. Indeed, we will pair up these unwanted twisted arrays with each other in such a way that their contributions to the sum \[\sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right)\] cancel out in each pair (i.e., each unwanted twisted array is cancelled out by its partner in our pairing).
This will reduce this sum to only the wanted part \[\sum_{T\in\operatorname{FSSYT}\left(\mu,\mathbf{b}\right)}w\left(T\right),\] which is easily seen to be the left hand side of Theorem 8.3. In combination with Lemma 8.9, this will prove Theorem 8.3.

In order to construct our pairing, we introduce the concept of _failure_ of a twisted array. Such a failure will exist exactly when the twisted array is unwanted.

**Definition 8.10**.: Let \(\left(\sigma,T\right)\) be a twisted array (i.e., let \(\sigma\in S_{n}\), and let \(T\) be a \(\sigma\)-array).

* A box \(\left(i,j\right)\in P\left(\sigma\right)\) is said to be an _outer failure_ of \(\left(\sigma,T\right)\) if \(i>1\) and \(\left(i-1,j\right)\notin P\left(\sigma\right)\). (In other words, a box \(c\) of \(P\left(\sigma\right)\) is an outer failure of \(\left(\sigma,T\right)\) if it does not lie in the first row, but its northern neighbor fails to belong to \(P\left(\sigma\right)\).) (This notion does not depend on \(T\).)
* A box \(\left(i,j\right)\in P\left(\sigma\right)\) is said to be an _inner failure_ of \(\left(\sigma,T\right)\) if \(\left(i-1,j\right)\in P\left(\sigma\right)\) and \(T\left(i-1,j\right)\geq T\left(i,j\right)\). (In other words, a box \(c\) of \(P\left(\sigma\right)\) is an inner failure of \(\left(\sigma,T\right)\) if its northern neighbor is another box \(d\in P\left(\sigma\right)\) but satisfies \(T\left(d\right)\geq T\left(c\right)\).)
* A box \(\left(i,j\right)\in P\left(\sigma\right)\) is said to be a _failure_ of \(\left(\sigma,T\right)\) if it is an outer failure or an inner failure of \(\left(\sigma,T\right)\).
* A _leftmost failure_ of \(\left(\sigma,T\right)\) means a failure \(\left(i,j\right)\) of \(\left(\sigma,T\right)\) for which \(j\) is minimum (i.e., which lies as far west as a failure of \(\left(\sigma,T\right)\) can lie).
* A _bottommost leftmost failure_ of \(\left(\sigma,T\right)\) means a leftmost failure \(\left(i,j\right)\) of \(\left(\sigma,T\right)\) for which \(i\) is maximum (i.e., which lies as far south as a leftmost failure of \(\left(\sigma,T\right)\) can lie). Note that this requirement uniquely determines the failure (since all leftmost failures of \(\left(\sigma,T\right)\) lie in the same column).
* We say that the twisted array \(\left(\sigma,T\right)\) is _failing_ if it has a failure. Otherwise, we say that it is _unfailing_.

**Example 8.11**.: Let \(n=5\) and \(\mu=\left(4,4,3,3,3\right)\). Let \(\sigma\in S_{5}\) be the permutation that swaps \(3\) with \(5\) while keeping the remaining elements unchanged. Let \(T\) be the following \(\sigma\)-array: \[\begin{array}{|c|c|c|c|}\hline 1&2&3&5\\ \hline 2&2&4&4\\ \hline 5&&\\ \hline 6&8&9&\\ \hline 7&9&9&9\\ \hline\end{array}.\] Then, the twisted array \(\left(\sigma,T\right)\) is failing. It has both inner and outer failures. Its outer failures are \(\left(4,2\right)\), \(\left(4,3\right)\), \(\left(5,4\right)\) and \(\left(5,5\right)\). Its inner failures are \(\left(2,2\right)\) (since \(T\left(1,2\right)\geq T\left(2,2\right)\)) as well as \(\left(2,4\right)\) and \(\left(5,3\right)\). Its leftmost failures are therefore \(\left(2,2\right)\) and \(\left(4,2\right)\). Hence, its bottommost leftmost failure is \(\left(4,2\right)\). (Note that the failure \(\left(5,3\right)\) lies further south, but does not count as leftmost since it also lies further east. Thus, the bottommost leftmost failure is not the leftmost bottommost failure!)
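As a sanity check, the failures in Example 8.11 can be recomputed mechanically. The following Python sketch (an informal aside; all names are ours, and the fifth entry of the last row of \(T\), which is not visible in the display above, is assumed to be \(9\); its value does not affect which boxes are failures) lists the outer failures, the inner failures, and the bottommost leftmost failure:

```python
# Informal check of Example 8.11; helper names are ours.  The fifth entry of the
# last row is taken to be 9 (an assumption; it does not affect the failures).
mu = (4, 4, 3, 3, 3)
sigma = [1, 2, 5, 4, 3]                    # sigma(i) = sigma[i-1]; swaps 3 and 5

rows = [[1, 2, 3, 5],
        [2, 2, 4, 4],
        [5],
        [6, 8, 9],
        [7, 9, 9, 9, 9]]

# P(sigma) has mu_{sigma(i)} - sigma(i) + i boxes in row i (Definition 8.5).
assert all(len(rows[i - 1]) == mu[sigma[i - 1] - 1] - sigma[i - 1] + i for i in range(1, 6))

T = {(i, j): rows[i - 1][j - 1]
     for i in range(1, 6) for j in range(1, len(rows[i - 1]) + 1)}

outer = [c for c in T if c[0] > 1 and (c[0] - 1, c[1]) not in T]
inner = [c for c in T if (c[0] - 1, c[1]) in T and T[c[0] - 1, c[1]] >= T[c]]

failures = outer + inner
leftmost_col = min(j for (i, j) in failures)
blf = max(c for c in failures if c[1] == leftmost_col)   # bottommost leftmost failure

print(sorted(outer))   # [(4, 2), (4, 3), (5, 4), (5, 5)]
print(sorted(inner))   # [(2, 2), (2, 4), (5, 3)]
print(blf)             # (4, 2)
```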
**Remark 8.12**.: If \(\left(i,j\right)\) is any failure of a twisted array \(\left(\sigma,T\right)\), then \(i>1\). (This is clear for outer failures, and follows for inner failures from \(\left(i-1,j\right)\in P\left(\sigma\right)\).) **Lemma 8.13**.: Let \(\left(\sigma,T\right)\) be a twisted array. Then: 1. If \(\sigma\neq\mathrm{id}\), then \(\left(\sigma,T\right)\) has an outer failure. 2. If \(\sigma=\mathrm{id}\) and \(T\notin\mathrm{SSYT}\left(\mu\right)\), then \(\left(\sigma,T\right)\) has an inner failure. 3. If \(\left(\sigma,T\right)\) is unfailing, then \(\sigma=\mathrm{id}\) and \(T\in\mathrm{SSYT}\left(\mu\right)\). As a consequence of Lemma 8.13, we easily obtain: **Lemma 8.14**.: We have \[\sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is an}\\ \text{unfailing }\mathbf{b}\text{-flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right)= \sum_{T\in\mathrm{FSSYT}\left(\mu,\mathbf{b}\right)}\ \prod_{\left(i,j\right)\in Y\left(\mu\right)}u_{T\left(i,j\right),\ j-i}.\] Now, we promised to pair up the unwanted (i.e., failing) twisted arrays \(\left(\sigma,T\right)\). This is achieved by the following transformation: **Definition 8.15**.: Let \(\left(\sigma,T\right)\) be a failing twisted array. Let \(c=\left(i,j\right)\in P\left(\sigma\right)\) be the bottommost leftmost failure of \(\left(\sigma,T\right)\). To _flip_\(\left(\sigma,T\right)\) means to perform the following transformations on \(\sigma\) and on \(T\): 1. We exchange the values of \(\sigma\) on \(i-1\) and on \(i\). (In other words, we replace \(\sigma\) by \(\sigma\circ s_{i-1}\), where \(s_{i-1}\in S_{n}\) is the transposition that swaps \(i-1\) with \(i\).) The resulting permutation will be called \(\sigma^{\prime}\). (Note that \(s_{i-1}\) is well-defined, since Remark 8.12 yields \(i>1\).) 2. We define the _top floor_ of the failure \(c\) to be the part of the \(\left(i-1\right)\)-st row of \(T\) that consists of the entries \[T\left(i-1,j\right),\ \ T\left(i-1,j+1\right),\ \ T\left(i-1,j+2\right),\ \ \ldots\] (that is, the entries from \(T\left(i-1,j\right)\) on eastwards). We define the _bottom floor_ of the failure \(c\) to be the part of the \(i\)-th row of \(T\) that consists of the entries \[T\left(i,j+1\right),\ \ T\left(i,j+2\right),\ \ T\left(i,j+3\right),\ \ \ldots\] (that is, the entries from \(T\left(i,j+1\right)\) on eastwards). Now, we swap the top floor of the failure \(c\) with the bottom floor (i.e., we move the entries at the positions \(\left(i-1,j\right),\ \ \left(i-1,j+1\right),\ \left(i-1,j+2\right),\ \ldots\) to the positions \(\left(i,j+1\right),\ \ \left(i,j+2\right),\ \ \left(i,j+3\right),\ \ \ldots\) (respectively), and vice versa). In other words, we replace the filling \(T\) by the filling \(T^{\prime}\) of \(P\left(\sigma^{\prime}\right)\) which is given by \[T^{\prime}\left(p,q\right)=\begin{cases}T\left(i,q+1\right),&\text{if $p=i-1$ and $q\geq j$;}\\ T\left(i-1,q-1\right),&\text{if $p=i$ and $q>j$;}\\ T\left(p,q\right),&\text{otherwise}\end{cases}\] The resulting filling \(T^{\prime}\) can be shown to be a \(\sigma^{\prime}\)-array (see Lemma 8.17**(a)** for a proof). The resulting pair \(\left(\sigma^{\prime},T^{\prime}\right)\) will be denoted by flip \(\left(\sigma,T\right)\), and is said to arise from \(\left(\sigma,T\right)\) by _flipping_. **Example 8.16**.: Let \(n\), \(\mu\) and \(\sigma\) be as in Example 8.11. We shall flip three different failing arrays. 1. Let \(T\) be the \(\sigma\)-array shown in Figure 1**(a)**. 
Its bottommost leftmost failure is \(\left(5,1\right)\). The top floor of this failure consists of the entries \(6,8,9\) in the \(4\)-th row, whereas the bottom floor consists of the entries \(9,9,9,9\) in the \(5\)-th row. Thus, flipping \(\left(\sigma,T\right)\) results in the pair \(\left(\sigma^{\prime},T^{\prime}\right)\), where \(\sigma^{\prime}\) is the permutation \(\sigma\circ s_{4}\) (which sends \(1,2,3,4,5\) to \(1,2,5,3,4\)) and where \(T^{\prime}\) is the \(\sigma^{\prime}\)-array shown in Figure 1**(b)** (obtained from \(T\) by swapping the top floor with the bottom floor). Note that \(\left(\sigma^{\prime},T^{\prime}\right)\) is still a failing twisted array, and \(\left(5,1\right)\) is still its bottommost leftmost failure.

Figure 1: Arrays for Example 8.16**(a)**. The boxes constituting the top floor of \(c\) in \(T\) are marked in yellow, whereas the boxes constituting the bottom floor are marked in green. In \(T^{\prime}\), the yellow entries and the green entries trade places.

2. Let \(T\) instead be the \(\sigma\)-array shown in Figure 2**(a)**. Its bottommost leftmost failure is \(\left(4,2\right)\). The top floor of this failure is empty (i.e., consists of no entries at all), whereas the bottom floor consists of the single entry 9 in the 4-th row. Thus, flipping \(\left(\sigma,T\right)\) results in the pair \(\left(\sigma^{\prime},T^{\prime}\right)\), where \(\sigma^{\prime}\) is the permutation \(\sigma\circ s_{3}\) (which sends \(1,2,3,4,5\) to \(1,2,4,5,3\)) and where \(T^{\prime}\) is the \(\sigma^{\prime}\)-array shown in Figure 2**(b)** (obtained from \(T\) by swapping the top floor with the bottom floor). Note that \(\left(\sigma^{\prime},T^{\prime}\right)\) is still a failing twisted array, and \(\left(4,2\right)\) is still its bottommost leftmost failure (but is now an inner failure).

Figure 2: Arrays for Example 8.16**(b)**. The colors are as in Figure 1.

3. Let \(T\) instead be the \(\sigma\)-array shown in Figure 3**(a)**. Its bottommost leftmost failure is \(\left(3,1\right)\). The top floor of this failure consists of the entire 2-nd row, whereas the bottom floor is empty. Thus, flipping \(\left(\sigma,T\right)\) results in the pair \(\left(\sigma^{\prime},T^{\prime}\right)\), where \(\sigma^{\prime}\) is the permutation \(\sigma\circ s_{2}\) (which sends \(1,2,3,4,5\) to \(1,5,2,4,3\)) and where \(T^{\prime}\) is the \(\sigma^{\prime}\)-array shown in Figure 3**(b)** (obtained from \(T\) by swapping the top floor with the bottom floor). Note that \(\left(\sigma^{\prime},T^{\prime}\right)\) is still a failing twisted array, and \(\left(3,1\right)\) is still its bottommost leftmost failure (but is now an outer failure).

Figure 3: Arrays for Example 8.16**(c)**. The colors are as in Figure 1.

Whatever the name might suggest, flipping a failing twisted array does not remove the failure; quite to the contrary, the failure is preserved:

**Lemma 8.17**.: Let \(\left(\sigma,T\right)\) be a failing twisted array. Let \(\left(\sigma^{\prime},T^{\prime}\right)\) be the pair flip \(\left(\sigma,T\right)\). Then:

**(a)** This pair \(\left(\sigma^{\prime},T^{\prime}\right)\) is again a failing twisted array.

**(b)** If \(c\) is the bottommost leftmost failure of \(\left(\sigma,T\right)\), then the same box \(c\) is again the bottommost leftmost failure of \(\left(\sigma^{\prime},T^{\prime}\right)\).

**(c)** We have flip \(\left(\sigma^{\prime},T^{\prime}\right)=\left(\sigma,T\right)\). In other words, if we flip \(\left(\sigma,T\right)\), and then flip the result, then we return back to \(\left(\sigma,T\right)\).

**(d)** We have \(\left(-1\right)^{\sigma^{\prime}}=-\left(-1\right)^{\sigma}\).

**(e)** We have \(w\left(T^{\prime}\right)=w\left(T\right)\).

**(f)**
If \(\left(\sigma,T\right)\) is \(\mathbf{b}\)-flagged, then so is \(\left(\sigma^{\prime},T^{\prime}\right)\). Lemma 8 allows us to pair up the failing twisted arrays with each other, leading to the following: We have \[\sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{- flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right)= \sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is an}\\ \text{unfailing }\mathbf{b}\text{-flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right).\] Theorem 8 now follows by combining the results of Lemma 8, Lemma 8 and Lemma 8. In turn, deriving Proposition 8 from Theorem 8 is now an exercise in substitution. ### A formula for \(\mathbf{s}_{\lambda}\left[\nu\right]\) Having proved Proposition 8, we can apply it to the polynomials \(\mathbf{s}_{\lambda}\left[\nu\right]\) introduced in Definition 5: Let \(n\in\mathbb{N}\). Let \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) be any partition with at most \(n\) nonzero entries. Let \(\lambda\) be a further partition. Let \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be the flagging induced by \(\lambda/\mu\). Then, \[\mathbf{s}_{\lambda}\left[\mu\right]=\det\left(h(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j)\right)_{i,j\in\left[n\right]}. \tag{23}\] ## 9 Determinantal identities In this section, we shall prove some general properties of determinants, which we will later use in reducing the determinants that arise from Corollary 8.19. The proofs will require nothing but the Laplace expansion of a determinant. **Convention 9.1**.: In all proofs in this section, we shall use the following notation: Let \(n\in\mathbb{N}\). For any \(k,\ell\in[n]\), and for any \(n\times n\)-matrix \(A\), we let \(A_{\sim k,\sim\ell}\) denote the matrix obtained from \(A\) by removing the \(k\)-th row and the \(\ell\)-th column. We also let \(A_{k,\ell}\) denote the \((k,\ell)\)-th entry of the matrix \(A\). For example, if \(A=\begin{pmatrix}a&b&c\\ d&e&f\\ g&h&i\end{pmatrix}\), then \(A_{\sim 2,\sim 3}=\begin{pmatrix}a&b\\ g&h\end{pmatrix}\) and \(A_{2,3}=f\). Using these notations, we can state Laplace expansion as follows: * For any \(n\times n\)-matrix \(A\) and any \(k\in[n]\), we can compute \(\det A\) using Laplace expansion along the \(k\)-th row: \[\det A=\sum_{\ell=1}^{n}\,(-1)^{k+\ell}\,A_{k,\ell}\det\left(A_{\sim k,\sim \ell}\right).\] (24) * For any \(n\times n\)-matrix \(A\) and any \(\ell\in[n]\), we can compute \(\det A\) using Laplace expansion along the \(\ell\)-th column: \[\det A=\sum_{k=1}^{n}\,(-1)^{k+\ell}\,A_{k,\ell}\det\left(A_{\sim k,\sim\ell} \right).\] (25) We now move on to less well-trodden ground. We begin with the following general lemma about determinants (the \(r=1\) case of [16, SS319]): **Lemma 9.2**.: Let \(P\) and \(Q\) be two \(n\times n\)-matrices over some commutative ring. For each \(k\in[n]\), we let \(P\underset{\mathrm{row}}{\overset{k}{\underset{\mathrm{row}}{\rightleftarrow}}}Q\) denote the \(n\times n\)-matrix that is obtained from \(P\) by replacing the \(k\)-th row by the \(k\)-th row of \(Q\). (That is, the \((i,j)\)-th entry of this matrix is \(\begin{cases}P_{i,j},&\text{ if }i\neq k;\\ Q_{i,j},&\text{ if }i=k\end{cases}\) for every \(i,j\in[n]\).) For each \(k\in[n]\), we let \(P\underset{\mathrm{col}}{\overset{k}{\underset{\mathrm{col}}{\rightleftarrow}}}Q\) denote the matrix that is obtained from \(P\) by replacing the \(k\)-th column by the \(k\)-th column of \(Q\). 
(That is, the \((i,j)\)-th entry of this matrix is \(\begin{cases}P_{i,j},&\text{ if }j\neq k;\\ Q_{i,j},&\text{ if }j=k\end{cases}\) for every \(i,j\in[n]\).) Then, \[\sum_{k=1}^{n}\det\left(P\underset{\mathrm{row}}{\overset{k}{\underset{ \mathrm{row}}{\rightleftarrow}}}Q\right)=\sum_{k=1}^{n}\det\left(P\underset{ \mathrm{col}}{\overset{k}{\underset{\mathrm{col}}{\rightleftarrow}}}Q\right).\] **Example 9.3**.: For \(n=3\), this is saying that \[\det\left(\begin{matrix}Q_{1,1}&Q_{1,2}&Q_{1,3}\\ P_{2,1}&P_{2,2}&P_{2,3}\\ P_{3,1}&P_{3,2}&P_{3,3}\end{matrix}\right)+\det\left(\begin{matrix}P_{1,1}&P_{1,2 }&P_{1,3}\\ Q_{2,1}&Q_{2,2}&Q_{2,3}\\ P_{3,1}&P_{3,2}&P_{3,3}\end{matrix}\right)+\det\left(\begin{matrix}P_{1,1}&P_{1, 2}&P_{1,3}\\ P_{2,1}&P_{2,2}&P_{2,3}\\ Q_{3,1}&Q_{3,2}&Q_{3,3}\end{matrix}\right)\] \[=\det\left(\begin{matrix}Q_{1,1}&P_{1,2}&P_{1,3}\\ Q_{2,1}&P_{2,2}&P_{2,3}\\ Q_{3,1}&P_{3,2}&P_{3,3}\end{matrix}\right)+\det\left(\begin{matrix}P_{1,1}&Q_{1,2}&P_{1,3}\\ P_{2,1}&Q_{2,2}&P_{2,3}\\ P_{3,1}&Q_{3,2}&P_{3,3}\end{matrix}\right)+\det\left(\begin{matrix}P_{1,1}&P_{1,2}&Q_{1,3}\\ P_{2,1}&P_{2,2}&Q_{2,3}\\ P_{3,1}&P_{3,2}&Q_{3,3}\end{matrix}\right).\] The interested reader can find the general case of [11, SS319] in Section 16 below (as Lemma 16.3); but Lemma 9.2 will suffice for us. We recall the Iverson bracket notation. The following lemma is an easy consequence of Lemma 9.2. **Lemma 9.4**.: Let \(n\) be a positive integer. Let \(R\) be a commutative ring. Let \(u_{i,j}\) be an element of \(R\) for each \(i\in[n]\) and each \(j\in[n+1]\). Then, \[\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}\right)_{i,j\in[n]}=\det\left(u_{i,j+[n=j] }\right)_{i,j\in[n]}.\] **Example 9.5**.: For \(n=3\), the claim of Lemma 9.4 is \[\det\left(\begin{array}{ccc}u_{1,2}&u_{1,3}&u_{1,4}\\ u_{2,1}&u_{2,2}&u_{2,3}\\ u_{3,1}&u_{3,2}&u_{3,3}\end{array}\right)+\det\left(\begin{array}{ccc}u_{1, 1}&u_{1,2}&u_{1,3}\\ u_{2,2}&u_{2,3}&u_{2,4}\\ u_{3,1}&u_{3,2}&u_{3,3}\end{array}\right)+\det\left(\begin{array}{ccc}u_{1, 1}&u_{1,2}&u_{1,3}\\ u_{2,1}&u_{2,2}&u_{2,3}\\ u_{3,2}&u_{3,3}&u_{3,4}\end{array}\right)\] \[=\det\left(\begin{array}{ccc}u_{1,1}&u_{1,2}&u_{1,4}\\ u_{2,1}&u_{2,2}&u_{2,4}\\ u_{3,1}&u_{3,2}&u_{3,4}\end{array}\right).\] **Lemma 9.6**.: Let \(n\) be a positive integer. Let \(R\) be a commutative ring. Let \(u_{i,j}\) be an element of \(R\) for each \(i\in[n]\) and each \(j\in[n+1]\). Let \(p_{1},p_{2},\ldots,p_{n}\) be \(n\) further elements of \(R\). 
Then, \[\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}-p_{i}u_{i,j}\left[k=i\right] \right)_{i,j\in[n]}\] \[=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}-\left(\sum_{k=1}^{n}p _{k}\right)\det\left(u_{i,j}\right)_{i,j\in[n]}.\] **Example 9.7**.: For \(n=3\), the claim of Lemma 9.6 is \[\det\left(\begin{array}{ccc}u_{1,2}-p_{1}u_{1,1}&u_{1,3}-p_{1}u_{1,2}&u_{1,A}-p_ {1}u_{1,3}\\ u_{2,1}&u_{2,2}&u_{2,3}\\ u_{3,1}&u_{3,2}&u_{3,3}\end{array}\right)\] \[\qquad+\det\left(\begin{array}{ccc}u_{1,1}&u_{1,2}&u_{1,3}\\ u_{2,2}-p_{2}u_{2,1}&u_{2,3}-p_{2}u_{2,2}&u_{2,4}-p_{2}u_{2,3}\\ u_{3,1}&u_{3,2}&u_{3,3}\end{array}\right)\] \[\qquad+\det\left(\begin{array}{ccc}u_{1,1}&u_{1,2}&u_{1,3}\\ u_{2,1}&u_{2,2}&u_{2,3}\\ u_{3,2}-p_{3}u_{3,1}&u_{3,3}-p_{3}u_{3,2}&u_{3,4}-p_{3}u_{3,3}\end{array}\right)\] \[=\det\left(\begin{array}{ccc}u_{1,1}&u_{1,2}&u_{1,4}\\ u_{2,1}&u_{2,2}&u_{2,4}\\ u_{3,1}&u_{3,2}&u_{3,4}\end{array}\right)-(p_{1}+p_{2}+p_{3})\det\left( \begin{array}{ccc}u_{1,1}&u_{1,2}&u_{1,3}\\ u_{2,1}&u_{2,2}&u_{2,3}\\ u_{3,1}&u_{3,2}&u_{3,3}\end{array}\right).\] As a curiosity, we observe that Lemma 9.6 would also hold if the \(p_{i}\) on the left hand side were replaced by \(p_{j}\); see Lemma 16.4 below. Finally, we state a particularly simple and well-known property of determinants: **Lemma 9.8**.: Let \(n\) be a positive integer. Let \(R\) be a commutative ring. Let \(a_{i,j}\) be an element of \(R\) for each \(i,j\in[n]\). Assume that \[a_{n,\ell}=0\qquad\text{for each }\ell\in[n-1]\,. \tag{26}\] Then, \[\det\left(a_{i,j}\right)_{i,j\in[n]}=a_{n,n}\cdot\det\left(a_{i,j}\right)_{i, j\in[n-1]}.\] ## 10 Combinatorial lemmas We shall next prove some combinatorial lemmas. ### The numbers \(\ell_{i}\), \(m_{i}\), \(\ell_{i}^{t}\), \(m_{i}^{t}\) **Convention 10.1**.: From now on **for the rest of this paper**, we fix two partitions \(\lambda\) and \(\mu\). (We do not require \(\lambda\supseteq\mu\).) We furthermore set \[\ell_{i} :=\lambda_{i}-i, m_{i}:=\mu_{i}-i,\] \[\ell_{i}^{t} :=\lambda_{i}^{t}-i\qquad\text{and} m_{i}^{t}:=\mu_{i}^{t}-i\qquad\text{for all }i\geq 1.\] Thus, Definition 5.4 yields \[\Delta\left(\lambda\right) =\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}, \Delta\left(\mu\right) =\left\{m_{1},m_{2},m_{3},\ldots\right\},\] \[\Delta\left(\lambda^{t}\right) =\left\{\ell_{1}^{t},\ell_{2}^{t},\ell_{3}^{t},\ldots\right\}, \Delta\left(\mu^{t}\right) =\left\{m_{1}^{t},m_{2}^{t},m_{3}^{t},\ldots\right\}.\] **Lemma 10.2**.: We have \[\ell_{1}>\ell_{2}>\ell_{3}>\cdots\qquad\text{and}\] \[m_{1}>m_{2}>m_{3}>\cdots\qquad\text{and}\] \[\ell_{1}^{t}>\ell_{2}^{t}>\ell_{3}^{t}>\cdots\qquad\text{and}\] \[m_{1}^{t}>m_{2}^{t}>m_{3}^{t}>\cdots.\] ### \(\operatorname{ER}\left(\mu\right)\) and \(\mu^{+k}\) Next, we shall get a better understanding of the partitions \(\nu\) that satisfy \(\mu\lessdot\nu\). **Definition 10.3**.: * If \(k\) is a positive integer, then \(\mu^{+k}\) shall denote the sequence \[\left(\mu_{1},\ \mu_{2},\ \ldots,\ \mu_{k-1},\ \mu_{k}+1,\ \mu_{k+1},\ \mu_{k+2},\ \ldots \right),\] which is obtained from \(\mu\) by incrementing the \(k\)-th entry by \(1\). Note that this sequence \(\mu^{+k}\) is a partition if and only if \(k=1\) or \(\mu_{k}\neq\mu_{k-1}\). * We let \(\operatorname{ER}\left(\mu\right)\) be the set of all positive integers \(k\) that satisfy \(k=1\) or \(\mu_{k}\neq\mu_{k-1}\). Thus, \(\operatorname{ER}\left(\mu\right)\) is the set of all positive integers \(k\) for which \(\mu^{+k}\) is a partition. 
(The notation \(\operatorname{ER}\) is short for "extensible rows", since \(\operatorname{ER}\left(\mu\right)\) consists exactly of those \(k\geq 1\) such that the \(k\)-th row of \(Y\left(\mu\right)\) can be extended by a new box at its end and the result will still be the diagram of a partition.) **Example 10.4**.: If \(\mu=\left(5,2,2,1\right)\), then \(\operatorname{ER}\left(\mu\right)=\left\{1,2,4,5\right\}\) and \[\mu^{+1} =\left(6,2,2,1\right), \mu^{+2} =\left(5,3,2,1\right),\] \[\mu^{+4} =\left(5,2,2,2\right), \mu^{+5} =\left(5,2,2,1,1\right).\] (Do not forget that \(\mu_{5}=0\) can be incremented by \(1\), so that \(5\in\operatorname{ER}\left(\mu\right)\).) The following two lemmas are very easy: **Lemma 10.5**.: The partitions \(\nu\) that satisfy \(\mu\lessdot\nu\) are precisely the partitions \(\mu^{+k}\) for the elements \(k\in\operatorname{ER}\left(\mu\right)\). **Lemma 10.6**.: Let \(k\geq 1\) and \(i\geq 1\). Then, the \(i\)-th entry of the sequence \(\mu^{+k}\) is \(\left(\mu^{+k}\right)_{i}=\mu_{i}+[k=i]\). ### Some easy properties of \(\Delta\left(\lambda\right)\) and \(\Delta\left(\mu\right)\) We will now state some further easy lemmas, which will help us simplify some sums later on. **Lemma 10.7**.: Let \(n\in\mathbb{N}\) be such that \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). Let \(j\in[n]\), and let \(k\) be a positive integer that satisfies \(\ell_{j}=m_{k}\). Then, \(k\in[n]\). **Lemma 10.8**.: Let \(n\in\mathbb{N}\) be such that \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). Let \(j\in[n]\) be such that \(\ell_{j}\in\Delta\left(\mu\right)\). Then, \[\sum_{\begin{subarray}{c}i\in[n];\\ \ell_{j}=m_{i}\end{subarray}}1=1.\] **Lemma 10.9**.: Let \(n\in\mathbb{N}\) be such that \(\lambda=\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)\) (that is, \(\lambda_{n+1}=0\)) and \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). Let \(k\) be a positive integer that satisfies \(\ell_{k}\notin\Delta\left(\mu\right)\). Then, \(k\in[n]\). **Lemma 10.10**.: Let \(n\in\mathbb{N}\) be such that \(\lambda=\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)\) (that is, \(\lambda_{n+1}=0\)) and \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). Let \(R\) be an additive group. Let \(f,g:[n]\to R\) be two maps such that if \(i,j\in[n]\) satisfy \(m_{i}=\ell_{j}\), then \[f\left(i\right)=g\left(j\right). \tag{27}\] Then, \[\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta\left(\lambda\right)\end{subarray}}f\left(i\right)=\sum_{ \begin{subarray}{c}j\in[n];\\ \ell_{j}\in\Delta\left(\mu\right)\end{subarray}}g\left(j\right).\] ### Some properties of the flagging induced by \(\lambda/\mu\) The next few lemmas involve the flagging induced by \(\lambda/\mu\). (As we recall, it was defined in Definition 6.8 to be the flagging \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\), where \(b_{i}\) is the largest \(k\geq 0\) satisfying \(\lambda_{k}-k\geq\mu_{i}-i\). Here, \(\lambda_{0}\) is understood to be \(+\infty\).) **Lemma 10.11**.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). Let \(i\) and \(j\) be positive integers such that \(m_{i}=\ell_{j}\). Then, \(b_{i}=j\). **Lemma 10.12**.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). 
Let \(n\in\mathbb{N}\) be such that \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) (that is, \(\lambda_{n+1}=0\)) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n})\) (that is, \(\mu_{n+1}=0\)). Then, \[\sum_{i=1}^{n}x_{i}-\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}x_{b_{i}}=\sum_{\begin{subarray}{c}k\in [n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}.\] **Lemma 10.13**.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). Let \(i\) be a positive integer that satisfies \(m_{i}\notin\Delta\left(\lambda\right)\). Then, \[m_{i}+1+b_{i}\geq 1\qquad\text{and}\qquad\ell_{m_{i}+1+b_{i}}^{t}=-1-m_{i}.\] The next lemma specifically concerns the case when the partition \(\mu\) has two equal entries (i.e., when \(\mu_{j-1}=\mu_{j}\)). **Lemma 10.14**.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). Let \(j>1\) be an integer such that \(\mu_{j-1}=\mu_{j}\). Then: * If \(m_{j}\notin\Delta\left(\lambda\right)\), then \(b_{j}=b_{j-1}\). * If \(m_{j}\in\Delta\left(\lambda\right)\), then \(b_{j}=b_{j-1}+1\). ### The flagging induced by \(\lambda/\mu^{+k}\) How does the flagging induced by \(\lambda/\mu\) change when we replace \(\mu\) by a partition of the form \(\mu^{+k}\) (that is, when we add \(1\) to the \(k\)-th entry of \(\mu\))? The following lemma gives a simple answer: **Lemma 10.15**.: Let \(k\in\operatorname{ER}\left(\mu\right)\). Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). Let \(\mathbf{b}^{*}=(b_{1}^{*},b_{2}^{*},b_{3}^{*},\ldots)\) be the flagging induced by \(\lambda/\mu^{+k}\). Then: * If \(m_{k}\notin\Delta\left(\lambda\right)\), then \(b_{i}^{*}=b_{i}\) for each \(i\geq 1\). * If \(m_{k}\in\Delta\left(\lambda\right)\), then \(b_{i}^{*}=b_{i}-[k=i]\) for each \(i\geq 1\). * If \(i\) is a positive integer such that \(i\neq k\), then \(b_{i}^{*}=b_{i}\). ## 11 A variant of the Konvalinka recursion We shall now approach a variant of the Konvalinka recursion, which follows fairly easily from the results of the previous sections. **Convention 11.1**.: In addition to Convention 10.1, we agree on the following: * We let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). * We fix a positive integer \(n\) that is so large that \(\lambda_{n}=0\) and \(\mu_{n}=0\). Thus, \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n})=(\mu_{1},\mu_{2},\ldots,\mu_{n-1})\). From Lemma 6.13, we obtain \(b_{n}=n\). We observe that the outer sum on the right hand side of Theorem 5.8 can be rewritten by dropping the \(\nu\subseteq\lambda\) condition under the summation sign: **Lemma 11.2**.: We have \[\sum_{\mu<\nu\subseteq\lambda}\mathbf{s}_{\lambda}\left[\nu\right]=\sum_{\mu <\nu}\mathbf{s}_{\lambda}\left[\nu\right]\] (where \(\nu\) is the summation index in both outer sums, and is supposed to be a partition). Using Lemma 10.5 and Lemma 11.2, we can easily see the following: **Lemma 11.3**.: We have \[\sum_{\mu<\nu\subseteq\lambda}\mathbf{s}_{\lambda}\left[\nu\right]=\sum_{k=1 }^{n}\mathbf{s}_{\lambda}\left[\mu^{+k}\right].\] We continue with some lemmas that help us understand the right hand side. **Lemma 11.4**.: Let \(k\in\left[n\right]\). 
Then:

* If \(m_{k}\not\in\Delta\left(\lambda\right)\), then
\[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(h(\mu_{i}-i+j+\left[k=i\right],\;\;b_{i},\;\;1-j)\right)_{i,j\in\left[n\right]}.\]
* If \(m_{k}\in\Delta\left(\lambda\right)\), then
\[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(h(\mu_{i}-i+j+\left[k=i\right],\;\;b_{i}-\left[k=i\right],\;\;1-j)\right)_{i,j\in\left[n\right]}.\]

**Convention 11.5**.: Set
\[u_{i,j}:=h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)\qquad\text{for all $i\in[n]$ and $j\in[n+1]$.}\]
Furthermore, set
\[p_{i}:=\begin{cases}x_{b_{i}},&\text{if $m_{i}\in\Delta\left(\lambda\right)$;}\\ -y_{m_{i}+1+b_{i}},&\text{if $m_{i}\notin\Delta\left(\lambda\right)$}\end{cases}\qquad\text{for any $i\in[n]$.}\]

**Lemma 11.6**.: Define \(u_{i,j}\) as in Convention 11.5. Then,
\[\mathbf{s}_{\lambda}\left[\mu\right]=\det\left(u_{i,j}\right)_{i,j\in[n]} \tag{28}\]
\[=\det\left(u_{i,j}\right)_{i,j\in[n-1]}. \tag{29}\]

**Lemma 11.7**.: Define \(u_{i,j}\) and \(p_{i}\) as in Convention 11.5. Let \(k\in[n]\). Then,
\[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(u_{i,j+[k=i]}-p_{i}u_{i,j}\left[k=i\right]\right)_{i,j\in[n]}.\]

We can now reap some rewards:

**Lemma 11.8**.: Define \(u_{i,j}\) and \(p_{i}\) as in Convention 11.5. Then,
\[\sum_{k=1}^{n}\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}-\left(\sum_{k=1}^{n}p_{k}\right)\mathbf{s}_{\lambda}\left[\mu\right].\]

Now, let us simplify the \(\det\) and \(\sum\) terms on the right hand side of Lemma 11.8.

**Lemma 11.9**.: Define \(u_{i,j}\) as in Convention 11.5. Then,
\[\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}=\left(\sum_{i=1}^{n}x_{i}\right)\cdot\mathbf{s}_{\lambda}\left[\mu\right].\]

**Lemma 11.10**.: Define \(p_{i}\) as in Convention 11.5. Then,
\[\sum_{i=1}^{n}x_{i}-\sum_{k=1}^{n}p_{k}=\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}.\]

We are now ready to prove a simplified version of the Konvalinka recursion (which, as we recall, will be the version that we need for our proof of Theorem 3.1):

**Lemma 11.11**.: We have
\[\left(\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\not\in\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\not\in\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\right)\mathbf{s}_{\lambda}\left[\mu\right]=\sum_{\mu<\nu\subseteq\lambda}\mathbf{s}_{\lambda}\left[\nu\right].\]
Here, the sum on the right hand side ranges over all partitions \(\nu\) that satisfy \(\mu<\nu\subseteq\lambda\).

## 12 Proof of the Naruse-Pak-Postnikov formula

**Convention 12.1**.: For the rest of this section, we introduce the following notations:

1. We fix a skew partition \(\lambda/\mu\) (that is, two partitions \(\lambda\) and \(\mu\) satisfying \(\lambda\supseteq\mu\)).
2. We remind the reader that
\[\ell_{i}:=\lambda_{i}-i,\qquad m_{i}:=\mu_{i}-i,\]
\[\ell_{i}^{t}:=\lambda_{i}^{t}-i\qquad\text{and}\qquad m_{i}^{t}:=\mu_{i}^{t}-i\qquad\text{for all }i\geq 1.\]
3. We fix a positive integer \(n\) that is large enough to satisfy \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n-1})\) (that is, \(\lambda_{n}=0\)) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n-1})\) (that is, \(\mu_{n}=0\)).
4. For each integer \(i\), we set
\[w_{i}:=\sum_{k=-n}^{i}z_{k}=z_{-n}+z_{-n+1}+\cdots+z_{i}.\]
5.
We shall use all the notations from Convention 4.2 as well as the algebraic hook lengths \(h_{\lambda}\left(c;z\right)\) defined in Theorem 2.3. **Lemma 12.2**.: Let \(\left(i,j\right)\in Y\left(\lambda\right)\). Then \[w_{\ell_{i}}-w_{-\ell_{j}^{t}-1}=h_{\lambda}\left(\left(i,j\right);z\right).\] **Lemma 12.3**.: Let \(i\geq 1\) be a positive integer. Then \[w_{\ell_{i}}-w_{m_{i}}=\sum_{\begin{subarray}{c}j\geq 1;\\ (i,j)\in Y(\lambda/\mu)\end{subarray}}z_{j-i}.\] **Lemma 12.4**.: We have \[\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}w_{\ell_{k}}-\sum_{\begin{subarray}{c }i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}=\sum_{(i,j)\in Y(\lambda/\mu )}z_{j-i}.\] **Lemma 12.5**.: Assume that \(\mu\neq\lambda\). Then \[\sum_{E\in\mathcal{E}(\lambda/\mu)}\ \ \prod_{c\in Y(\lambda)\setminus E}\frac{1}{ h_{\lambda}\left(c;z\right)}=\frac{1}{\sum\limits_{(i,j)\in Y(\lambda/\mu)}z_{j-i}} \ \ \sum_{\mu<v\subseteq\lambda}\ \ \sum_{E\in\mathcal{E}(\lambda/v)}\ \ \prod_{c\in Y( \lambda)\setminus E}\frac{1}{h_{\lambda}\left(c;z\right)}.\] Theorem 3.1 can now be proved by a simple induction on \(\left|Y\left(\lambda/\mu\right)\right|\), using Lemma 4.5**(b)** and Lemma 12.5 in the induction step. (As with all the other proofs, details can be found in Section 15.) ## 13 Proof of the Konvalinka recursion Now that the primary goal of this work (the proof of Theorem 3.1) has been achieved, we turn to the proof of the Konvalinka recursion (Theorem 5.8). Having established a closely related result (Lemma 11.11) already, we only need to show that the sums of \(x\)'s and of \(y\)'s that appear in the former agree with those that appear in the latter. This requires some combinatorial lemmas, some of which are interesting in their own right. ### More on partitions, Delta-sets and transposes We begin with some general properties of partitions. **Lemma 13.1**.: Let \(\lambda\) be a partition. Let \(i\) and \(j\) be two positive integers. Then, \(\lambda_{j}+\lambda_{i}^{t}-i-j\neq-1\). **Lemma 13.2**.: Let \(\lambda\) be a partition. Then, \(\left(\lambda^{t}\right)^{t}=\lambda\). **Lemma 13.3**.: Let \(\lambda\) be any partition. Let \(p\in\Delta\left(\lambda\right)\). Then, \(-1-p\notin\Delta\left(\lambda^{t}\right)\). We note in passing that the converse of Lemma 13.3 also holds (see Proposition 16.5 below). Next, we state an analogue of Lemma 10.9: **Lemma 13.4**.: Let \(n\in\mathbb{N}\) be such that \(\lambda=\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)\) (that is, \(\lambda_{n+1}=0\)) and \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). Let \(i\) be a positive integer that satisfies \(m_{i}\notin\Delta\left(\lambda\right)\). Then, \(i\in[n]\). ### A bijection The next two lemmas develop the claim of Lemma 10.13 further. They will help us transform a sum that appears in Lemma 11.11 into a sum that appears in Theorem 5.8. **Lemma 13.5**.: We follow Convention 10.1. Let \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be the flagging induced by \(\lambda/\mu\). Then, the map \[\left\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\right\} \to\left\{p\geq 1\ \mid\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\right\},\] \[i \mapsto m_{i}+1+b_{i}\] is well-defined and is a bijection. **Lemma 13.6**.: We follow Convention 10.1. Let \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be the flagging induced by \(\lambda/\mu\). 
Let \(n\in\mathbb{N}\) be such that \(\lambda=\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)\) (that is, \(\lambda_{n+1}=0\)) and \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). Then, \[\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}y_{m_{i}+1+b_{i}}=\sum_{ \begin{subarray}{c}k\geq 1;\\ \ell_{k}^{t}\notin\Delta\left(\mu^{t}\right)\end{subarray}}y_{k}.\] ### Proof of the Konvalinka recursion It is now easy to derive the original Konvalinka recursion (Theorem 5.8) from Lemma 11.11. ## 14 Hints The following hints are given for the less straightforward proofs. _Hint to Lemma 6.17._ Observe that if \(e\in Y\left(\mu\right)\) is any box, then the box \(e_{+T}\) lies on the same diagonal as \(e\). However, the condition of each of the parts **(a)-(d)** of the lemma entails that the boxes \(c_{+T}\) and \(d_{+T}\) lie either on the same or on adjacent diagonals. Hence, the same holds for the boxes \(c\) and \(d\). Consequently, the box \(d\) lies either weakly southeast or weakly northwest of \(c\) (where "weakly" means that the boxes can share a row or column). If the boxes \(c\) and \(d\) were more than one unit apart, or the entries \(T\left(c\right)\) and \(T\left(d\right)\) differed by more than \(1\), then Lemma 6.3 would thus place the boxes \(c_{+T}\) and \(d_{+T}\) too far apart from each other. A sufficiently precise version of this reasoning leads to the claims of parts **(a)-(d)**. _Hint to Lemma 6.19._ **(a)** It is clear that the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(Y\left(\mu\right)\) by moving each box \(\left(i,j\right)\) down into the \(T\left(i,j\right)\)-th row by a sequence of \(T\left(i,j\right)-i\) southeastern moves (i.e., moves of the form \(c\mapsto c_{\searrow}\)). What needs to be proved is that these moves can be arranged in an order in which they are excited moves (i.e., are not blocked by the presence of \(c_{\to}\) or \(c_{\downarrow}\) or \(c_{\searrow}\) in the diagram). One way to do this is by first moving the southeasternmost boxes of \(Y\left(\mu\right)\) to their intended positions first, then proceed recursively. An easier way is to induct on the sum of all entries of \(T\), using Lemma 6.18. _Hint to Lemma 6.20._ The well-definedness follows from Lemma 6.19**(a)**. The hard part is surjectivity, i.e., to show that each excitation \(E\) of \(Y\left(\mu\right)\) has the form \(\mathbf{D}\left(T\right)\) for some \(T\in\mathrm{SSYT}\left(\mu\right)\). One way is to induct on the number of excited moves needed to construct \(E\). By the induction hypothesis, the diagram obtained just before the last step has the form \(\mathbf{D}\left(S\right)\) for some \(S\in\mathrm{SSYT}\left(\mu\right)\). Argue that incrementing one specific entry of \(S\) will cause \(\mathbf{D}\left(S\right)\) to become \(E\) instead (while \(S\) still remains semistandard). Injectivity follows from a similar argument. _Hint to Lemma 7.5._ Break up the sum in the definition of \(h\left(a,b,c\right)\) into the part with \(i_{a}=b\) and the part with \(i_{a}<b\). _Hint to Lemma 7.6._ Induct on \(a+b\). In the induction step, use Lemma 7.5 three times (once for \(a,b,c\); once for \(a-1,b,c\); and once for \(a,b,c-1\)) and use the induction hypothesis twice (once for \(a-1,b,c\), and once for \(a,b-1,c\)). The five linear equations in the various \(h\left(i,j,k\right)\)'s obtained can be combined to obtain the claim. _Hint to Corollary 7.7._ Combine Lemma 7.5 with Lemma 7.6. 
_Hint to Lemma 8.13._ **(a)** Find an \(i\in\left\{1,2,\ldots,n-1\right\}\) such that \(\sigma\left(i\right)>\sigma\left(i+1\right)\), and show that the last box in the \(\left(i+1\right)\)-th row of \(P\left(\sigma\right)\) is an outer failure. _Hint to Lemma 8.17._ Parts **(b)**, **(c)**, **(d)** and **(e)** are easy, but **(a)** and **(f)** are not to be underestimated. Let \(c=\left(i,j\right)\) be as in Definition 8.15. For part **(a)**, it needs to first be checked that the swapping of the top and bottom floors results in an actual filling of \(Y\left(\sigma^{\prime}\right)\) (as opposed to a filling with gaps, or a filling of the wrong diagram). Start by observing that the \(i\)-th row of \(T\) has at least \(j\) entries (obviously since \(\left(i,j\right)\in Y\left(\sigma\right)\)) and that the \(\left(i-1\right)\)-st row of \(T\) has at least \(j-1\) entries (here you need to use the fact that \(c\) is a **leftmost** failure). This shows that \(T^{\prime}\) has no gaps between entries in a row. Then show that the rows of \(T^{\prime}\) have the right lengths. Finally show that the entries in each row of \(T^{\prime}\) increase weakly from left to right (this requires slightly different arguments when \(c\) is an inner failure and when \(c\) is an outer failure). This handles part **(a)**. Part **(f)** requires showing that the last entry of the \(k\)-th row of \(T^{\prime}\) is \(\leq b_{\sigma^{\prime}(k)}\) (beware: not \(\leq b_{\sigma(k)}\) any more!) for each \(k\in[n]\). This is clear for \(k\notin\left\{i,i-1\right\}\), as in this case the \(k\)-th row does not change from \(T\) to \(T^{\prime}\) and the value \(\sigma\left(k\right)\) does not change from \(\sigma\) to \(\sigma^{\prime}\). It remains to check the inequality for \(k=i\) and for \(k=i-1\). This is easy when the top floor and the bottom floor both are nonempty (i.e., contain at least one entry), because in this case the last entries of the \(\left(i-1\right)\)-st and \(i\)-th row are swapped along with the values \(\sigma\left(i\right)\) and \(\sigma\left(i-1\right)\). The tricky cases are when one of the two floors is empty. If the bottom floor is empty, then we need to show that \(T\left(i-1,j-1\right)\leq b_{\sigma(i)}\), but this follows from \(T\left(i-1,j-1\right)<T\left(i,j-1\right)\leq b_{\sigma(i)}\). If the top floor is empty, then \(c\) is an outer failure, and in this case we need to show that \(b_{\sigma(i)}\leq b_{\sigma(i-1)}\), but this follows from \(\sigma\left(i\right)>\sigma\left(i-1\right)\), which in turn follows (after a bit of work) from the fact that the \(t\)-th row of \(Y\left(\sigma\right)\) is longer than the \(\left(i-1\right)\)-st row. In either case, the claim follows. _Hint to Lemma 9.2._ Laplace expansion helps. _Hint to Lemma 9.4._ Apply Lemma 9.2 to \(P=\left(u_{i,j}\right)_{i,j\in[n]}\) and \(Q=\left(u_{i,j+1}\right)_{i,j\in[n]}\). _Hint to Lemma 9.6._ Expand the determinant on the left hand side along the \(k\)-th row and thus rewrite it as \(\det\left(u_{i,j+\left[k=i\right]}\right)_{i,j\in[n]}-p_{i}\det\left(u_{i,j} \right)_{i,j\in[n]}\). Now recall Lemma 9.4. _Hint to Lemma 10.12._ Apply Lemma 10.10 to \(f\left(i\right)=x_{b_{i}}\) and \(g\left(i\right)=x_{j}\). _Hint to Lemma 10.13._ Set \(k=m_{i}+1+b_{i}\). Note that \(m_{i}\neq\ell_{b_{i}}\) (since \(m_{i}\notin\Delta\left(\lambda\right)\)). Use Lemma 6.11 and this fact to prove that \(\lambda_{b_{i}}\geq k\) but \(\lambda_{b_{i}+1}<k\). From this, conclude that \(\lambda_{k}^{t}=b_{i}\). The rest is easy. 
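For a small worked check of this argument (the partitions \(\lambda=(3,1)\) and \(\mu=(1)\) are chosen here purely for illustration), take \(i=1\). Then \(\Delta\left(\lambda\right)=\left\{2,-1,-3,-4,\ldots\right\}\), so \(m_{1}=\mu_{1}-1=0\notin\Delta\left(\lambda\right)\), and the flagging induced by \(\lambda/\mu\) has \(b_{1}=1\) (since \(\lambda_{1}-1=2\geq 0=m_{1}\) but \(\lambda_{2}-2=-1<m_{1}\)). Hence \(k=m_{1}+1+b_{1}=2\), and indeed \(\lambda_{b_{1}}=3\geq 2=k\) while \(\lambda_{b_{1}+1}=1<2=k\); consequently \(\lambda_{k}^{t}=\lambda_{2}^{t}=1=b_{1}\) (as \(\lambda^{t}=(2,1,1)\)), so that \(\ell_{k}^{t}=\lambda_{2}^{t}-2=-1=-1-m_{1}\), exactly as Lemma 10.13 claims.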
_Hint to Lemma 10.14._ Set \(\ell_{0}=\infty\). Observe that \(b_{j}=\max\left\{k\geq 0\mid\ell_{k}\geq m_{j}\right\}\) and \(b_{j-1}=\max\left\{k\geq 0\mid\ell_{k}\geq m_{j-1}\right\}\). Draw consequences. _Hint to Lemma 10.15._ Rather similar to the proof of Lemma 10.14. _Hint to Lemma 11.4._ If \(k\in\mathrm{ER}\left(\mu\right)\), then both parts of this lemma follow easily from (23) and Lemma 10.15. In the remaining case, both right hand sides are \(0\), since the matrices have two equal rows. _Hint to Lemma 11.7._ Use Lemma 11.4 as well as Corollary 7.7 (in the case when \(m_{k}\in\Delta\left(\lambda\right)\)) and Corollary 7.8 (in the case when \(m_{k}\notin\Delta\left(\lambda\right)\)). Don't forget that \(x_{i}=y_{i}=0\) for \(i<0\). _Hint to Lemma 11.8._ Apply Lemma 11.7, then simplify using Lemma 9.6. _Hint to Lemma 11.9._ Use Lemma 9.8, then simplify using Lemma 7.4. _Hint to Lemma 11.10._ The only nontrivial ingredient is Lemma 10.12. _Hint to Lemma 11.11._ Combine Lemmas 11.3, 11.8, 11.9 and 11.10. _Hint to Lemma 12.4._ Sum Lemma 12.3 over all \(i\in[n]\) and recall Lemma 10.10. _Hint to Lemma 12.5._ Set \(x_{i}:=w_{\ell_{i}}\) and \(y_{i}:=-w_{-\ell_{i}^{t}-1}\) for all \(i\geq 1\), and apply Lemma 11.11. Use Lemmas 10.13 and 12.4 to simplify the first factor on the left hand side. Observe that Lemma 12.2 yields \(x_{i}+y_{j}=h_{\lambda}\left(\left(i,j\right);z\right)\) for each \(\left(i,j\right)\in Y\left(\lambda\right)\); use this to simplify the \(\mathbf{s}_{\lambda}\left[\nu\right]\) and \(\mathbf{s}_{\lambda}\left[\mu\right]\) expressions. _Hint to Lemma 13.3._ Use Lemma 13.1. _Hint to Lemma 13.5._ Let us denote this map by \(\Phi\). First, show that this map is well-defined by using Lemmas 10.13 and 13.3. Then, show that \(\Phi\) is injective (argue that \(\Phi\left(u\right)=\Phi\left(v\right)\) entails \(m_{u}=m_{v}\) using Lemma 10.13, and then conclude \(u=v\)). It remains to show that \(\Phi\) is surjective. For that, it suffices to argue that its domain and its target have the same size (since they are easily seen to be finite sets). The injectivity of \(\Phi\) yields that its domain is at most as large as its target; but the same argument, with \(\lambda\) and \(\mu\) replaced by \(\mu^{t}\) and \(\lambda^{t}\), yields that its target is at most as large as its domain. ## 15 Proofs ### To Section 2 Proof of Lemma 2.8.: The diagram \(Y\left(\varnothing\right)\) is the empty set \(\varnothing\). Hence, no excited moves can be applied to it. In other words, its only excitation is \(\varnothing\) itself. This excitation \(\varnothing\) of course satisfies \(\varnothing\subseteq Y\left(\lambda\right)\). Therefore, \(\mathcal{E}\left(\lambda/\varnothing\right)=\left\{\varnothing\right\}\) (by Definition 2.6). Proof of Lemma 2.9.: We don't have \(\lambda\supseteq\mu\). In other words, we don't have \(Y\left(\lambda\right)\supseteq Y\left(\mu\right)\). In other words, there exists some box \(c\in Y\left(\mu\right)\) that does not belong to \(Y\left(\lambda\right)\). This box must clearly retain this property (of not belonging to \(Y\left(\lambda\right)\)) under any excited move (since any excited move can only move this box further southeast). Thus, any excitation of \(Y\left(\mu\right)\) contains a box that does not belong to \(Y\left(\lambda\right)\). Hence, there exists no excitation \(E\) of \(Y\left(\mu\right)\) that satisfies \(E\subseteq Y\left(\lambda\right)\). In other words, the set of all such excitations is empty. 
In other words, \(\mathcal{E}\left(\lambda/\mu\right)\) is empty (since \(\mathcal{E}\left(\lambda/\mu\right)\) was defined to be this set). Proof of Lemma 2.10.: The only excitation \(E\) of the diagram \(Y\left(\lambda\right)\) that satisfies \(E\subseteq Y\left(\lambda\right)\) is \(Y\left(\lambda\right)\) itself (since any excited move would cause a box to move out of \(Y\left(\lambda\right)\), and thus break the \(E\subseteq Y\left(\lambda\right)\) condition). This proves the lemma. ### To Section 4 Proof of Lemma 4.3.: Recall that \(Y\left(\lambda/\mu\right)=Y\left(\lambda\right)\setminus Y\left(\mu\right)\). Let \(c=\left(p,q\right)\) be the box in \(Y\left(\lambda/\mu\right)\) that contains the entry \(1\) in \(T\) (that is, that satisfies \(T\left(c\right)=1\)). If the western neighbor \(\left(p,q-1\right)\) of this box \(c\) lied in \(Y\left(\lambda/\mu\right)\), then the entry of \(T\) in this western neighbor would be smaller than \(1\) (since the entries of \(T\) increase left-to-right in each row), which is impossible (since the entries of \(T\) are positive integers). Thus, the western neighbor \(\left(p,q-1\right)\) of \(c\) does not lie in \(Y\left(\lambda/\mu\right)\). Hence, this western neighbor \(\left(p,q-1\right)\) lies in \(Y\left(\mu\right)\) unless \(q=1\) (since it definitely lies in \(Y\left(\lambda\right)\) unless \(q=1\)). Similarly, we can see that the northern neighbor \(\left(p-1,q\right)\) of \(c\) lies in \(Y\left(\mu\right)\) unless \(p=1\). In other words, \(\mu_{p-1}\geq q\) unless \(p=1\). Recall that the box \(\left(p,q-1\right)\) lies in \(Y\left(\mu\right)\) unless \(q=1\). This box must be the easternmost box in the \(p\)-th row of \(Y\left(\mu\right)\) (since its eastern neighbor \(\left(p,q\right)=c\) lies in \(Y\left(\lambda/\mu\right)=Y\left(\lambda\right)\setminus Y\left(\mu\right)\) and thus does not lie in \(Y\left(\mu\right)\)). Thus, the \(p\)-th row of \(Y\left(\mu\right)\) has \(q-1\) boxes in total. In other words, \(\mu_{p}=q-1\), so that \(\mu_{p}+1=q\). If we increment the \(p\)-th entry of the partition \(\mu\) by \(1\), then we obtain a new sequence \(\nu=\left(\nu_{1},\nu_{2},\nu_{3},\ldots\right)\). Consider this \(\nu\). Explicitly, it is given by \[\nu_{i} =\mu_{i}\qquad\text{ for all }i\neq p,\qquad\text{ and }\] \[\nu_{p} =\mu_{p}+1.\] In particular, if \(p\neq 1\), then \(\nu_{p-1}=\mu_{p-1}\geq q\) (since we know that \(\mu_{p-1}\geq q\) unless \(p=1\)), and thus \(\nu_{p-1}\geq q=\mu_{p}+1=\nu_{p}\). Thus, it is easy to see that the sequence \(\nu\) is a partition. [_Proof:_ The sequence \(\mu\) is weakly decreasing (since \(\mu\) is a partition). But the sequence \(\nu\) is obtained from \(\mu\) by incrementing the \(p\)-th entry by \(1\). Hence, \(\nu\) is also weakly decreasing, unless its (incremented) \(p\)-th entry has become larger than the previous (i.e., the \(\left(p-1\right)\)-th) entry. In other words, \(\nu\) is also weakly decreasing, unless \(p\neq 1\) and \(\nu_{p}>\nu_{p-1}\). But the latter case cannot happen (because if \(p\neq 1\), then \(\nu_{p-1}\geq\nu_{p}\)). Thus, we conclude that \(\nu\) is weakly decreasing, i.e., is a partition (since it is clear that \(\nu_{i}=\mu_{i}=0\) for all sufficiently large \(i\)).] Recall that the partition \(\nu\) is obtained from \(\mu\) by incrementing the \(p\)-th entry by \(1\). 
Hence,
\[Y\left(\nu\right)=Y\left(\mu\right)\cup\left\{\left(p,\mu_{p}+1\right)\right\} \tag{30}\]
(because by incrementing \(\mu_{p}\), we add a new box \(\left(p,\mu_{p}+1\right)\) to the Young diagram \(Y\left(\mu\right)\)) and \(\mu\lessdot\nu\).

Let us now recall that \(\mu_{p}+1=q\). Hence, \(\left(p,\mu_{p}+1\right)=\left(p,q\right)=c\). Thus, we can rewrite (30) as
\[Y\left(\nu\right)=Y\left(\mu\right)\cup\left\{c\right\}. \tag{31}\]

Note furthermore that \(Y\left(\mu\right)\subseteq Y\left(\lambda\right)\) (since \(\lambda/\mu\) is a skew partition) and \(\left\{c\right\}\subseteq Y\left(\lambda\right)\) (since \(c\in Y\left(\lambda/\mu\right)\subseteq Y\left(\lambda\right)\)). Hence, \(Y\left(\mu\right)\cup\left\{c\right\}\subseteq Y\left(\lambda\right)\) (since a union of two subsets of \(Y\left(\lambda\right)\) must again be a subset of \(Y\left(\lambda\right)\)). In view of (31), this rewrites as \(Y\left(\nu\right)\subseteq Y\left(\lambda\right)\). In other words, \(\nu\subseteq\lambda\). Hence, \(\mu\lessdot\nu\subseteq\lambda\). Furthermore,
\[Y\left(\lambda/\nu\right)=Y\left(\lambda\right)\setminus Y\left(\nu\right)=Y\left(\lambda\right)\setminus\left(Y\left(\mu\right)\cup\left\{c\right\}\right)\qquad\text{(by (31))}\]
\[=\left(Y\left(\lambda\right)\setminus Y\left(\mu\right)\right)\setminus\left\{c\right\}=Y\left(\lambda/\mu\right)\setminus\left\{c\right\}.\]

Now, remove the box \(c\) with the entry \(1\) from \(T\), and subtract \(1\) from all remaining entries. Let \(T^{\prime}\) be the filling that remains. Then, \(T^{\prime}\) is a filling of \(Y\left(\lambda/\mu\right)\setminus\{c\}=Y\left(\lambda/\nu\right)\). The entries of this filling \(T^{\prime}\) are \(1,2,\ldots,n-1\) (since they are obtained by subtracting \(1\) from each of \(2,3,\ldots,n\)). Furthermore, these entries increase left-to-right in each row (since this is true for \(T\), and clearly remains true as we subtract \(1\) from each entry), and increase top-to-bottom in each column (similarly). Hence, \(T^{\prime}\) is a standard tableau of shape \(\lambda/\nu\).

It remains to prove (9). To that purpose, we observe that the construction of \(T^{\prime}\) yields \(c_{T^{\prime}}(k)=c_{T}(k+1)\) for each \(k\in\{1,2,\ldots,n-1\}\) (since each entry \(k\) of \(T^{\prime}\) originated as an entry \(k+1\) in \(T\)). Thus,
\[\prod_{k=1}^{n-1}\left(z_{c_{T^{\prime}}(k)}+z_{c_{T^{\prime}}(k+1)}+\cdots+z_{c_{T^{\prime}}(n-1)}\right)=\prod_{k=1}^{n-1}\left(z_{c_{T}(k+1)}+z_{c_{T}(k+2)}+\cdots+z_{c_{T}(n)}\right)=\prod_{k=2}^{n}\left(z_{c_{T}(k)}+z_{c_{T}(k+1)}+\cdots+z_{c_{T}(n)}\right)\]
(here, we have substituted \(k\) for \(k+1\) in the product). On the other hand,
\[\sum_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}=z_{c_{T}(1)}+z_{c_{T}(2)}+\cdots+z_{c_{T}(n)},\]
since the numbers \(c_{T}\left(1\right),\ c_{T}\left(2\right),\ \ldots,\ c_{T}\left(n\right)\) are precisely the numbers \(j-i\) for all \((i,j)\in Y\left(\lambda/\mu\right)\) (because each box of \(Y\left(\lambda/\mu\right)\) is occupied by exactly one of the numbers \(1,2,\ldots,n\) in \(T\)).
Multiplying this equality by the preceding one, we find \[\left(\sum_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}\right) \cdot\prod_{k=1}^{n-1}\left(z_{c_{T^{\prime}}(k)}+z_{c_{T^{\prime}}(k+1)}+ \cdots+z_{c_{T^{\prime}}(n-1)}\right)\] \[= \left(z_{c_{T}(1)}+z_{c_{T}(2)}+\cdots+z_{c_{T}(n)}\right)\cdot \prod_{k=2}^{n}\left(z_{c_{T}(k)}+z_{c_{T}(k+1)}+\cdots+z_{c_{T}(n)}\right)\] \[= \prod_{k=1}^{n}\left(z_{c_{T}(k)}+z_{c_{T}(k+1)}+\cdots+z_{c_{T} (n)}\right).\] In other words, \[\frac{1}{\prod\limits_{k=1}^{n}\left(z_{c_{T}(k)}+z_{c_{T}(k+1)}+ \cdots+z_{c_{T}(n)}\right)}\] \[= \frac{1}{\sum\limits_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}} \cdot\frac{1}{\prod\limits_{k=1}^{n-1}\left(z_{c_{T^{\prime}}(k)}+z_{c_{T^{ \prime}}(k+1)}+\cdots+z_{c_{T^{\prime}}(n-1)}\right)}.\] But the fraction on the left hand side of this equality is \(\mathbf{z}_{T}\), while the second fraction on the right hand side is \(\mathbf{z}_{T^{\prime}}\). Thus, this equality is precisely (9). Proof of Lemma 4.5.: **(a)** Assume that \(\lambda=\mu\). Then, \(Y\left(\lambda/\mu\right)=\varnothing\). Hence, there is only one \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\), namely the empty tableau \(T_{\varnothing}\) (with no entries). Therefore, \(\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=\mathbf{ z}_{T_{\varnothing}}=1\) (since an empty product is \(1\) by definition). **(b)** Assume that \(\lambda\neq\mu\). Thus, the diagram \(Y\left(\lambda/\mu\right)\) has at least one box. Hence, any standard tableau \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) must contain the entry \(1\) somewhere. We shall use the following notations: * If \(\nu\) is a partition satisfying \(\mu\lessdot\nu\), then \(\mathrm{box}\left(\nu/\mu\right)\) shall denote the unique box in \(Y\left(\nu\right)\setminus Y\left(\mu\right)=Y\left(\nu/\mu\right)\). (This box is unique, since \(\mu\lessdot\nu\).) * If \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) is a standard tableau, then \(B_{1}\left(T\right)\) shall denote the unique box of \(T\) that contains the entry \(1\). (This box exists, since any standard tableau \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) must contain the entry \(1\) somewhere. It is unique, since a standard tableau cannot have repeated entries.) Let \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) be a standard tableau. Lemma 4.3 shows that if we remove the box \(B_{1}\left(T\right)\) (that is, the box that contains the entry \(1\)) from \(T\), and if we subtract \(1\) from all remaining entries, then we obtain a new standard tableau \(T^{\prime}\), which has shape \(\lambda/\nu\) for some partition \(\nu\) satisfying \(\mu\lessdot\nu\subseteq\lambda\). This latter partition \(\nu\) has the property that the unique box of \(Y\left(\nu/\mu\right)\) is \(B_{1}\left(T\right)\) (since this was the box removed from \(T\) to obtain \(T^{\prime}\)). In other words, it satisfies \(B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\). Furthermore, this property (in combination with \(\mu\lessdot\nu\subseteq\lambda\)) uniquely determines \(\nu\) (since \(B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\) means \(Y\left(\nu/\mu\right)=\left\{B_{1}\left(T\right)\right\}\) and thus \(Y\left(\nu\right)=Y\left(\mu\right)\cup\underbrace{Y\left(\nu/\mu\right)}_{= \left\{B_{1}\left(T\right)\right\}}=Y\left(\mu\right)\cup\left\{B_{1}\left(T \right)\right\}\), but this uniquely determines \(\nu\)). 
Thus, there is exactly one partition \(\nu\) satisfying \(\mu\lessdot\nu\subseteq\lambda\) and \(B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\). Hence, the sum \(\sum\limits_{\begin{subarray}{c}\mu\lessdot\nu\subseteq\lambda;\\ B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\end{subarray}}\mathbf{z}_{T}\) has exactly one addend, and thus equals \(\mathbf{z}_{T}\). In other words,
\[\mathbf{z}_{T}=\sum_{\begin{subarray}{c}\mu\lessdot\nu\subseteq\lambda;\\ B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\end{subarray}}\mathbf{z}_{T}.\]

Forget that we fixed \(T\). We thus have shown that
\[\mathbf{z}_{T}=\sum_{\begin{subarray}{c}\mu\lessdot\nu\subseteq\lambda;\\ B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\end{subarray}}\mathbf{z}_{T} \tag{32}\]
for any standard tableau \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\).

Now, summing the equality (32) over all \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\), we obtain
\[\sum_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=\sum_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\ \sum_{\begin{subarray}{c}\mu\lessdot\nu\subseteq\lambda;\\ B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\end{subarray}}\mathbf{z}_{T}\qquad\left(\text{by (32)}\right)\]
\[=\sum_{\mu\lessdot\nu\subseteq\lambda}\ \sum_{\begin{subarray}{c}T\in\mathrm{SYT}\left(\lambda/\mu\right);\\ B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\end{subarray}}\mathbf{z}_{T}. \tag{33}\]

Now, fix a partition \(\nu\) satisfying \(\mu\lessdot\nu\subseteq\lambda\). Thus, \(Y\left(\mu\right)\subseteq Y\left(\nu\right)\subseteq Y\left(\lambda\right)\). Let \(b=\mathrm{box}\left(\nu/\mu\right)\) (this is well-defined, since \(\mu\lessdot\nu\)). Then, \(Y\left(\nu/\mu\right)=\left\{b\right\}\). Hence,
\[\underbrace{Y\left(\lambda/\mu\right)}_{=Y\left(\lambda\right)\setminus Y\left(\mu\right)}\setminus\underbrace{\left\{b\right\}}_{=Y\left(\nu/\mu\right)=Y\left(\nu\right)\setminus Y\left(\mu\right)}=\left(Y\left(\lambda\right)\setminus Y\left(\mu\right)\right)\setminus\left(Y\left(\nu\right)\setminus Y\left(\mu\right)\right)\]
\[=Y\left(\lambda\right)\setminus Y\left(\nu\right)\qquad\left(\text{since }Y\left(\mu\right)\subseteq Y\left(\nu\right)\subseteq Y\left(\lambda\right)\right)\]
\[=Y\left(\lambda/\nu\right).\]

Consider a standard tableau \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) satisfying \(B_{1}\left(T\right)=b\). Then, the standard tableau \(T\) has its entry \(1\) in this box \(b\) (by the definition of \(B_{1}\left(T\right)\)). If we remove this box \(b\) from \(T\), and subtract \(1\) from all remaining entries, then we obtain a new standard tableau \(T^{\prime}\) of shape \(\lambda/\nu\) (indeed, we clearly obtain a filling of the diagram \(Y\left(\lambda/\mu\right)\setminus\left\{b\right\}=Y\left(\lambda/\nu\right)\), and furthermore this filling is a standard tableau because the subtraction of \(1\) from each entry of \(T\) did not overturn the inequalities between the entries of \(T\)).

Thus, for any standard tableau \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) satisfying \(B_{1}\left(T\right)=b\), we have defined a standard tableau \(T^{\prime}\) of shape \(\lambda/\nu\). In other words, we have defined a map
\[\left\{T\in\mathrm{SYT}\left(\lambda/\mu\right)\ \mid\ B_{1}\left(T\right)=b\right\}\rightarrow\mathrm{SYT}\left(\lambda/\nu\right),\]
\[T\mapsto T^{\prime}. \tag{34}\]
This map is easily seen to be injective (since we can recover \(T\) from \(T^{\prime}\) by incrementing all entries of \(T^{\prime}\) by \(1\) and placing a \(1\) into the extra box \(b\)) and surjective (because given any standard tableau \(S\) of shape \(\lambda/\nu\), we can increment all entries of \(S\) by \(1\) and fill the additional box \(b\) with the entry \(1\), thus obtaining a standard tableau \(T\in\mathrm{SYT}\left(\lambda/\mu\right)\) that satisfies \(B_{1}\left(T\right)=b\) and \(T^{\prime}=S\)). Hence, it is bijective.

Now, recall that \(\mathrm{box}\left(\nu/\mu\right)=b\). Thus,
\[\sum_{\begin{subarray}{c}T\in\mathrm{SYT}\left(\lambda/\mu\right);\\ B_{1}\left(T\right)=\mathrm{box}\left(\nu/\mu\right)\end{subarray}}\mathbf{z}_{T}=\sum_{\begin{subarray}{c}T\in\mathrm{SYT}\left(\lambda/\mu\right);\\ B_{1}\left(T\right)=b\end{subarray}}\mathbf{z}_{T}=\sum_{\begin{subarray}{c}T\in\mathrm{SYT}\left(\lambda/\mu\right);\\ B_{1}\left(T\right)=b\end{subarray}}\frac{1}{\sum\limits_{\left(i,j\right)\in Y\left(\lambda/\mu\right)}z_{j-i}}\cdot\mathbf{z}_{T^{\prime}}\qquad\left(\text{by (9)}\right)\]
\[=\frac{1}{\sum\limits_{\left(i,j\right)\in Y\left(\lambda/\mu\right)}z_{j-i}}\cdot\sum_{S\in\mathrm{SYT}\left(\lambda/\nu\right)}\mathbf{z}_{S}\]
(here, we have used the bijection (34) to reindex the inner sum). Substituting this back into (33), where \(\nu\) ranges over all partitions satisfying \(\mu\lessdot\nu\subseteq\lambda\), completes the proof of Lemma 4.5 **(b)**.

### To Section 6

Proof of Lemma 6.3.: **(b)** The entries of \(T\) weakly increase left-to-right in each row (since \(T\) is a semistandard tableau). Hence, from \(j\leq v\), we obtain \(T\left(u,j\right)\leq T\left(u,v\right)\).

Next, recall that \(\mu\) is a partition, thus weakly decreasing. Hence, for each \(k\in\left\{1,2,\ldots,u\right\}\), we have \(\mu_{u}\leq\mu_{k}\) (since \(u\geq k\)) and thus \(j\leq\mu_{u}\leq\mu_{k}\), so that \(\left(k,j\right)\in Y\left(\mu\right)\). Hence, \(T\left(k,j\right)\) is well-defined for each \(k\in\left\{1,2,\ldots,u\right\}\). We set
\[p_{k}:=T\left(k,j\right)-k\qquad\text{for each }k\in\left\{1,2,\ldots,u\right\}.\]
We shall now show that the sequence \(\left(p_{i},p_{i+1},\ldots,p_{u}\right)\) is weakly increasing.
[_Proof_: It suffices to show that \(p_{s}\leq p_{s+1}\) for each \(s\in\left\{i,i+1,\ldots,u-1\right\}\). So let us fix \(s\in\left\{i,i+1,\ldots,u-1\right\}\) and show this. The entries of \(T\) strictly increase top-to-bottom in each column (since \(T\) is a semistandard tableau). Hence, \(T\left(s,j\right)<T\left(s+1,j\right)\). Thus, \(T\left(s,j\right)\leq T\left(s+1,j\right)-1\) (since all entries of \(T\) are integers). However, the definition of \(p_{k}\) yields \(p_{s}=T\left(s,j\right)-s\) and \(p_{s+1}=T\left(s+1,j\right)-\left(s+1\right)\). Thus, \[p_{s}=\underset{\leq T\left(s+1,j\right)-1}{T\left(s+1,j\right)}-s\leq T \left(s+1,j\right)-1-s=T\left(s+1,j\right)-\left(s+1\right)=p_{s+1},\] as we desired to prove.] Thus, we have shown that the sequence \(\left(p_{i},p_{i+1},\ldots,p_{u}\right)\) is weakly increasing. Hence, \[p_{i} \leq p_{u}=T\left(u,j\right)-u\qquad\quad\text{(by the definition of }p_{u}\text{)}\] \[\leq T\left(u,v\right)-u\qquad\quad\text{(since }T\left(u,j\right)\leq T\left(u,v\right)\text{)}\,.\] Therefore, \[T\left(u,v\right)-u\geq p_{i}=T\left(i,j\right)-i\qquad\quad\text{(by the definition of }p_{i}\text{)}\,.\] This proves Lemma 6.3**(b)**. **(a)** Let \(\left(i,j\right)\in Y\left(\mu\right)\). Thus, \(j\leq\mu_{i}\). But \(\mu\) is a partition, thus weakly decreasing. Hence, \(\mu_{i}\leq\mu_{1}\) (since \(i\geq 1\)). Thus, \(j\leq\mu_{i}\leq\mu_{1}\), so that \(\left(1,j\right)\in Y\left(\mu\right)\). Since we also have \(1\leq i\) and \(j\leq j\), we can thus apply Lemma 6.3**(b)** to \(\left(1,j\right)\) and \(\left(i,j\right)\) instead of \(\left(i,j\right)\) and \(\left(u,v\right)\). We thus obtain \[T\left(i,j\right)-i\geq T\left(1,j\right)-1\geq 0\qquad\quad\quad\text{(since }T \left(1,j\right)\geq 1\text{)}\,.\] In other words, \(T\left(i,j\right)\geq i\). This proves Lemma 6.3**(a)**. Proof of Lemma 6.10.: Let \(i\geq 1\) be an integer. Every sufficiently large \(k\) satisfies both \(k>i-\mu_{i}\) and \(\lambda_{k}=0\), so that it satisfies \(\underset{=0}{\underbrace{\lambda_{k}}}-\underset{>i-\mu_{i}}{\underbrace{k}}< 0-\left(i-\mu_{i}\right)=\mu_{i}-i\). Hence, any sufficiently large integer \(k\geq 0\) will fail to satisfy \(\lambda_{k}-k\geq\mu_{i}-i\). Hence, the set of all integers \(k\geq 0\) that satisfy \(\lambda_{k}-k\geq\mu_{i}-i\) is finite. Since this set is furthermore nonempty (because \(\lambda_{0}-0=\lambda_{0}=\infty\geq\mu_{i}-i\) shows that \(0\) belongs to this set), this set must thus have a maximum (because any nonempty finite set of integers has a maximum). In other words, \(\max\left\{k\geq 0\mid\lambda_{k}-k\geq\mu_{i}-i\right\}\) is well-defined. Proof of Lemma 6.11.: The "\(\Longleftarrow\)" direction is easy: Since \(b_{i}\) is defined as the largest \(k\geq 0\) that satisfies \(\lambda_{k}-k\geq\mu_{i}-i\), it is clear that every integer \(k\geq 0\) that satisfies \(\lambda_{k}-k\geq\mu_{i}-i\) must be \(\leq b_{i}\). Hence, if \(\lambda_{j}-j\geq\mu_{i}-i\), then \(j\leq b_{i}\). This proves the "\(\Longleftarrow\)" direction of the lemma. It remains to prove the "\(\Longrightarrow\)" direction. Thus, we assume that \(j\leq b_{i}\), and set out to prove that \(\lambda_{j}-j\geq\mu_{i}-i\). From \(j\leq b_{i}\), we obtain \(\lambda_{j}\geq\lambda_{b_{i}}\) (since \(\lambda_{0}\geq\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\cdots\)) and thus \(\lambda_{j}-j\geq\lambda_{b_{i}}-j\geq\lambda_{b_{i}}-b_{i}\) (since \(j\leq b_{i}\)). 
However, \(b_{i}\) is defined as the largest \(k\geq 0\) that satisfies \(\lambda_{k}-k\geq\mu_{i}-i\). Thus, in particular, \(b_{i}\) itself is such a \(k\). In other words, \(\lambda_{b_{i}}-b_{i}\geq\mu_{i}-i\). Therefore, \(\lambda_{j}-j\geq\lambda_{b_{i}}-b_{i}\geq\mu_{i}-i\). This completes the proof of the "\(\Longrightarrow\)" direction. Hence, Lemma 6.11 is proved. Proof of Lemma 6.12.: We must show that \(b_{i-1}\leq b_{i}\) for each \(i\in\{2,3,4,\ldots\}\) (where the notations are those of Definition 6.8). Let us do this. Let \(i\in\{2,3,4,\ldots\}\). Then, \(b_{i-1}\) is defined as the largest \(k\geq 0\) that satisfies \(\lambda_{k}-k\geq\mu_{i-1}-(i-1)\). Thus, in particular, \(b_{i-1}\) itself is such a \(k\). In other words, \(\lambda_{b_{i-1}}-b_{i-1}\geq\mu_{i-1}-(i-1)\). However, from \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\cdots\), we obtain \(\mu_{i-1}-(i-1)\geq\mu_{i}-(i-1)\geq\mu_{i}-i\) (since \(i-1\leq i\)). Therefore, \[\lambda_{b_{i-1}}-b_{i-1}\geq\mu_{i-1}-(i-1)\geq\mu_{i}-i.\] Now, Lemma 6.11 (applied to \(j=b_{i-1}\)) yields the equivalence \[(b_{i-1}\leq b_{i})\ \Longleftrightarrow\ \left(\lambda_{b_{i-1}}-b_{i-1}\geq \mu_{i}-i\right).\] Hence, we have \(b_{i-1}\leq b_{i}\) (since we have \(\lambda_{b_{i-1}}-b_{i-1}\geq\mu_{i}-i\)). This completes the proof of the lemma. Proof of Lemma 6.13.: From \(\mu_{i}=0\), we obtain \(\mu_{i}-i=-i\). Let \(j>i\) be an integer. Then, \(j\geq i+1\) (since \(j>i\)). Since \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\cdots\), we thus have \(\lambda_{j}\leq\lambda_{i+1}=0\) and therefore \(\lambda_{j}=0\). Hence, \(\lambda_{j}-j=-j<-i\) (since \(j>i\)). In other words, \(\lambda_{j}-j<\mu_{i}-i\) (since \(\mu_{i}-i=-i\)). Forget that we fixed \(j\). We thus have shown that for every integer \(j>i\), we have \(\lambda_{j}-j<\mu_{i}-i\). We defined \(b_{i}\) to be the maximum of the set \(\{k\geq 0\mid\lambda_{k}-k\geq\mu_{i}-i\}\). But this set clearly contains \(i\) (since \(\lambda_{i}-i\geq-i=\mu_{i}-i\)), but does not contain any integer \(j>i\) (because for every integer \(j>i\), we have \(\lambda_{j}-j<\mu_{i}-i\)). Thus, its maximum is \(i\). In other words, \(b_{i}=i\). This proves Lemma 6.13. Proof of Lemma 6.17.: Let us write the two boxes \(c\) and \(d\) as \(c=(i,j)\) and \(d=(u,v)\) for some positive integers \(i\), \(j\), \(u\) and \(v\). From \(c=(i,j)\), we obtain \[c_{+T}=\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right). \tag{36}\] Similarly, \[d_{+T}=\left(T\left(u,v\right),\ T\left(u,v\right)+v-u\right). \tag{37}\] **(a)** Assume that \(d_{+T}=c_{+T}\). In view of (36) and (37), we can rewrite this as \[\left(T\left(u,v\right),\ T\left(u,v\right)+v-u\right)=\left(T\left(i,j\right), \ T\left(i,j\right)+j-i\right).\] In other words, the two equalities \(T\left(u,v\right)=T\left(i,j\right)\) and \(T\left(u,v\right)+v-u=T\left(i,j\right)+j-i\) hold. Subtracting the former equality from the latter, we obtain \(v-u=j-i\). In other words, \(v+i=u+j\), so that \(v-j=u-i\). We are in one of the following two cases: _Case 1:_ We have \(u\geq i\). _Case 2:_ We have \(u<i\). Let us first consider Case 1. In this case, we have \(u\geq i\). Hence, \(u-i\geq 0\), so that \(v\geq j\) (since \(v-j=u-i\geq 0\)). In other words, \(j\leq v\). Also, \(i\leq u\) (since \(u\geq i\)). Thus, Lemma 6.3**(b)** yields \(T\left(u,v\right)-u\geq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)\) from this inequality, we find \(-u\geq-i\), so that \(i\geq u\). 
Combining this with \(i\leq u\), we obtain \(i=u\). Hence, \(u=i\), so that \(u-i=0\). Thus, \(v=j\) (since \(v-j=u-i=0\)). Combining \(u=i\) with \(v=j\), we obtain \(\left(u,v\right)=\left(i,j\right)\). In other words, \(d=c\) (since \(d=\left(u,v\right)\) and \(c=\left(i,j\right)\)). Thus, Lemma 6.17**(a)** is proved in Case 1. Let us now consider Case 2. In this case, we have \(u<i\). Hence, \(u-i<0\), so that \(v<j\) (since \(v-j=u-i<0\)). Now, Lemma 6.3**(b)** (applied to \(\left(u,v\right)\) and \(\left(i,j\right)\) instead of \(\left(i,j\right)\) and \(\left(u,v\right)\)) yields \(T\left(i,j\right)-i\geq T\left(u,v\right)-u\) (since \(u<i\) and \(v<j\)). In other words, \(T\left(u,v\right)-u\leq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)\) from this inequality, we find \(-u\leq-i\), so that \(u\geq i\). But this contradicts \(u<i\). Thus, we have found a contradiction in Case 2, which shows that Case 2 is impossible. Consequently, the only possible case is Case 1. Since we have proved Lemma 6.17**(a)** in this Case 1, we thus conclude that Lemma 6.17**(a)** is proved. **(b)** Assume that \(d_{+T}=\left(c_{+T}\right)_{\rightarrow}\). In view of (36) and (37), we can rewrite this as \[\left(T\left(u,v\right),\ T\left(u,v\right)+v-u\right) =\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)_{\rightarrow}\] \[=\left(T\left(i,j\right),\ T\left(i,j\right)+j-i+1\right)\] (by the definition of \(\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)_{\rightarrow}\)). In other words, the two equalities \(T\left(u,v\right)=T\left(i,j\right)\) and \(T\left(u,v\right)+v-u=T\left(i,j\right)+j-i+1\) hold. Subtracting the former equality from the latter, we obtain \(v-u=j-i+1\). In other words, \(v+i=u+j+1\), so that \(v-j=u-i+1\). We are in one of the following two cases: _Case 1:_ We have \(u\geq i\). _Case 2:_ We have \(u<i\). Let us first consider Case 1. In this case, we have \(u\geq i\). Hence, \(u-i\geq 0\), and therefore \(v>j\) (since \(v-j=u-i+1>u-i\geq 0\)). In other words, \(j<v\). Also, \(i\leq u\) (since \(u\geq i\)). Thus, Lemma 6.3**(b)** yields \(T\left(u,v\right)-u\geq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)\) from this inequality, we find \(-u\geq-i\) so that \(i\geq u\). Combining this with \(i\leq u\), we obtain \(i=u\). Hence, \(u=i\), so that \(u-i+1=i-i+1=1\). Thus, \(v=j+1\) (since \(v-j=u-i+1=1\)). Combining \(u=i\) with \(v=j+1\), we obtain \((u,v)=(i,j+1)=(i,j)_{\rightarrow}\). In other words, \(d=c_{\rightarrow}\) (since \(d=(u,v)\) and \(c=(i,j)\)). Moreover, we have \(T\left(u,v\right)=T\left(i,j\right)\). In other words, \(T\left(d\right)=T\left(c\right)\) (since \(d=(u,v)\) and \(c=(i,j)\)). Thus, Lemma 6.17**(b)** is proved in Case 1. Let us now consider Case 2. In this case, we have \(u<i\). Hence, \(u-i<0\), so that \(u-i\leq-1\) (since \(u-i\) is an integer). In other words, \(u-i+1\leq 0\). Therefore, \(v\leq j\) (since \(v-j=u-i+1\leq 0\)). Now, Lemma 6.3**(b)** (applied to \((u,v)\) and \((i,j)\) instead of \((i,j)\) and \((u,v)\)) yields \(T\left(i,j\right)-i\geq T\left(u,v\right)-u\) (since \(u<i\) and \(v\leq j\)). In other words, \(T\left(u,v\right)-u\leq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)\) from this inequality, we find \(-u\leq-i\), so that \(u\geq i\). But this contradicts \(u<i\). Thus, we have found a contradiction in Case 2, which shows that Case 2 is impossible. 
Consequently, the only possible case is Case 1. Since we have proved Lemma 6.17**(b)** in this Case 1, we thus conclude that Lemma 6.17**(b)** is proved. **(c)** Assume that \(d_{+T}=\left(c_{+T}\right)_{\downarrow}\). In view of (36) and (37), we can rewrite this as \[\left(T\left(u,v\right),\;T\left(u,v\right)+v-u\right) =\left(T\left(i,j\right),\;T\left(i,j\right)+j-i\right)_{\downarrow}\] \[=\left(T\left(i,j\right)+1,\;T\left(i,j\right)+j-i\right)\] (by the definition of \(\left(T\left(i,j\right),\;T\left(i,j\right)+j-i\right)_{\downarrow}\)). In other words, the two equalities \(T\left(u,v\right)=T\left(i,j\right)+1\) and \(T\left(u,v\right)+v-u=T\left(i,j\right)+j-i\) hold. Subtracting the former equality from the latter, we obtain \(v-u=j-i-1\). In other words, \(v+i=u+j-1\), so that \(v-j=u-i-1\). We are in one of the following two cases: _Case 1:_ We have \(u>i\). _Case 2:_ We have \(u\leq i\). Let us first consider Case 1. In this case, we have \(u>i\). Hence, \(u\geq i+1\) (since \(u\) and \(i\) are integers), so that \(u-i\geq 1\). In other words, \(u-i-1\geq 0\). Therefore, \(v\geq j\) (since \(v-j=u-i-1\geq 0\)). In other words, \(j\leq v\). Also, \(i\leq u\) (since \(u>i\)). Thus, Lemma 6.3**(b)** yields \(T\left(u,v\right)-u\geq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)+1\) from this inequality, we find \(-u\geq-i-1\), so that \(i+1\geq u\). Combining this with \(u\geq i+1\), we obtain \(u=i+1\). In other words, \(u-i-1=0\). Therefore, \(v=j\) (since \(v-j=u-i-1=0\)). Combining \(u=i+1\) with \(v=j\), we obtain \((u,v)=(i+1,j)=(i,j)_{\downarrow}\). In other words, \(d=c_{\downarrow}\) (since \(d=(u,v)\) and \(c=(i,j)\)). Moreover, we have \(T\left(u,v\right)=T\left(i,j\right)+1\). In other words, \(T\left(d\right)=T\left(c\right)+1\) (since \(d=(u,v)\) and \(c=(i,j)\)). Thus, Lemma 6.17**(c)** is proved in Case 1. Let us now consider Case 2. In this case, we have \(u\leq i\). Hence, \(u-i\leq 0\), so that \(u-i-1<u-i\leq 0\). Therefore, \(v<j\) (since \(v-j=u-i-1<0\)). Now, Lemma 6.3**(b)** (applied to \((u,v)\) and \((i,j)\) instead of \((i,j)\) and \((u,v)\)) yields \(T\left(i,j\right)-i\geq T\left(u,v\right)-u\) (since \(u\leq i\) and \(v<j\)). In other words, \(T\left(u,v\right)-u\leq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)+1\) from this inequality, we find \(-u\leq-i-1<-i\). In other words, \(i<u\). But this contradicts \(u\leq i\). Thus, we have found a contradiction in Case 2, which shows that Case 2 is impossible. Consequently, the only possible case is Case 1. Since we have proved Lemma 6.17**(c)** in this Case 1, we thus conclude that Lemma 6.17**(c)** is proved. **(d)** Assume that \(d_{+T}=\left(c_{+T}\right)_{\searrow}\). In view of (36) and (37), we can rewrite this as \[\left(T\left(u,v\right),\ T\left(u,v\right)+v-u\right) =\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)_{\searrow}\] \[=\left(T\left(i,j\right)+1,\ T\left(i,j\right)+j-i+1\right)\] (by the definition of \(\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)_{\searrow}\)). In other words, the two equalities \(T\left(u,v\right)=T\left(i,j\right)+1\) and \(T\left(u,v\right)+v-u=T\left(i,j\right)+j-i+1\) hold. Subtracting the former equality from the latter, we obtain \(v-u=j-i\). In other words, \(v+i=u+j\), so that \(v-j=u-i\). We are in one of the following two cases: _Case 1:_ We have \(u>i\). _Case 2:_ We have \(u\leq i\). Let us first consider Case 1. In this case, we have \(u>i\). 
Therefore, \(u\geq i+1\) (since \(u\) and \(i\) are integers), so that \(u-i\geq 1\). Therefore, \(v>j\) (since \(v-j=u-i\geq 1>0\)). In other words, \(j<v\). Also, \(i<u\) (since \(u>i\)). Thus, Lemma 6.3**(b)** yields \(T\left(u,v\right)-u\geq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)+1\) from this inequality, we find \(-u\geq-i-1\), so that \(i+1\geq u\). Combining this with \(u\geq i+1\), we obtain \(u=i+1\). In other words, \(u-i=1\). Hence, \(v=j+1\) (since \(v-j=u-i=1\)). Combining \(u=i+1\) with \(v=j+1\), we obtain \(\left(u,v\right)=\left(i+1,j+1\right)=\left(i,j\right)_{\searrow}\). In other words, \(d=c_{\searrow}\) (since \(d=\left(u,v\right)\) and \(c=\left(i,j\right)\)). Moreover, we have \(T\left(u,v\right)=T\left(i,j\right)+1\). In other words, \(T\left(d\right)=T\left(c\right)+1\) (since \(d=\left(u,v\right)\) and \(c=\left(i,j\right)\)). Thus, Lemma 6.17**(d)** is proved in Case 1. Let us now consider Case 2. In this case, we have \(u\leq i\). Hence, \(u-i\leq 0\), so that \(v\leq j\) (since \(v-j=u-i\leq 0\)). Now, Lemma 6.3**(b)** (applied to \(\left(u,v\right)\) and \(\left(i,j\right)\) instead of \(\left(i,j\right)\) and \(\left(u,v\right)\)) yields \(T\left(i,j\right)-i\geq T\left(u,v\right)-u\) (since \(u\leq i\) and \(v\leq j\)). In other words, \(T\left(u,v\right)-u\leq T\left(i,j\right)-i\). Subtracting the equality \(T\left(u,v\right)=T\left(i,j\right)+1\) from this inequality, we find \(-u\leq-i-1<-i\). In other words, \(i<u\). But this contradicts \(u\leq i\). Thus, we have found a contradiction in Case 2, which shows that Case 2 is impossible. Consequently, the only possible case is Case 1. Since we have proved Lemma 6.17**(d)** in this Case 1, we thus conclude that Lemma 6.17**(d)** is proved. Proof of Lemma 6.18.: If \(c\in Y\left(\mu\right)\) is any box, then the box \(c_{+T}\) depends only on the position of the box \(c\) and on the entry \(T\left(c\right)\) in this box. Thus, if a box \(c\) has the same entry in both tableaux \(T\) and \(S\) (that is, satisfies \(T\left(c\right)=S\left(c\right)\)), then the boxes \(c_{+T}\) and \(c_{+S}\) will also be identical. Therefore, from (15), we obtain \[c_{+T}=c_{+S}\qquad\text{ for all }c\in Y\left(\mu\right)\text{ distinct from }\left(i,j\right). \tag{38}\] Now, define the set \(A:=\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\text{ such that }c\neq\left(i,j\right)\right\}\). Then, \[A =\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\text{ such that }c\neq \left(i,j\right)\right\}\] \[=\left\{c_{+S}\ \mid\ c\in Y\left(\mu\right)\text{ such that }c\neq \left(i,j\right)\right\}\] (since (38) allows us to rewrite \(c_{+T}\) as \(c_{+S}\) here). Lemma 6.17**(a)** shows that if two boxes \(d,c\in Y\left(\mu\right)\) satisfy \(d_{+T}=c_{+T}\), then \(d=c\). In other words, the boxes \(c_{+T}\) for all \(c\in Y\left(\mu\right)\) are distinct. Hence, in particular, for any box \(c\in Y\left(\mu\right)\) that satisfies \(c\neq\left(i,j\right)\), we have \(c_{T}\neq\left(i,j\right)_{+T}\). In other words, any element of \(A\) is distinct from \(\left(i,j\right)_{+T}\) (since the elements of \(A\) are exactly the boxes of the form \(c_{+T}\), where \(c\in Y\left(\mu\right)\) is a box that satisfies \(c\neq\left(i,j\right)\)). In other words, \(\left(i,j\right)_{+T}\notin A\). 
However, the definition of \(\mathbf{D}\left(T\right)\) yields \[\mathbf{D}\left(T\right) =\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\right\}\] \[=\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\text{ such that }c\neq\left(i,j\right)\right\}\cup\left\{\left(i,j\right)_{+T}\right\}\] (here, we have split off the element for \(c=\left(i,j\right)\) from the set, since \(\left(i,j\right)\in Y\left(\mu\right)\)). In other words, \[\mathbf{D}\left(T\right)=A\cup\left\{\left(i,j\right)_{+T}\right\} \tag{39}\] (since \(A=\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\text{ such that }c\neq\left(i,j\right)\right\}\)). Hence, \[\mathbf{D}\left(T\right)\setminus\left\{\left(i,j\right)_{+T}\right\}=\left(A\cup\left\{\left(i,j\right)_{+T}\right\}\right)\setminus\left\{\left(i,j\right)_{+T}\right\}=A\] (since \(\left(i,j\right)_{+T}\notin A\)). The same argument (applied to \(S\) instead of \(T\)) yields \[\mathbf{D}\left(S\right)\setminus\left\{\left(i,j\right)_{+S}\right\}=A\] (since \(A=\left\{c_{+S}\ \mid\ c\in Y\left(\mu\right)\text{ such that }c\neq\left(i,j\right)\right\}\)). Now, if we replace the box \(\left(i,j\right)_{+S}\) in the set \(\mathbf{D}\left(S\right)\) by the box \(\left(i,j\right)_{+T}\), then we obtain the set \[\underbrace{\left(\mathbf{D}\left(S\right)\setminus\left\{\left(i,j\right)_{+S}\right\}\right)}_{=A}\cup\left\{\left(i,j\right)_{+T}\right\}=A\cup\left\{\left(i,j\right)_{+T}\right\}=\mathbf{D}\left(T\right)\] (by (39)). In other words, this replacement transforms \(\mathbf{D}\left(S\right)\) into \(\mathbf{D}\left(T\right)\). This proves Lemma 6.18.

Proof of Lemma 6.19.: Fix a tableau \(T\in\text{SSYT}\left(\mu\right)\). **(b)** The map \[Y\left(\mu\right)\to\mathbf{D}\left(T\right),\qquad c\mapsto c_{+T}\] is well-defined (by (14)) and surjective (again by (14)). Furthermore, this map is injective (since Lemma 6.17**(a)** shows that the boxes \(c_{+T}\) for all \(c\in Y\left(\mu\right)\) are distinct). Hence, this map is bijective. In other words, the map \[Y\left(\mu\right) \to\mathbf{D}\left(T\right),\] \[\left(i,j\right) \mapsto\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)\] is bijective (indeed, this is the same map as the one discussed in the preceding sentence, since \(c_{+T}\) is defined to be \(\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)\) for every box \(c=\left(i,j\right)\)). Hence, we can substitute \(\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)\) for \(\left(i,j\right)\) in the product \(\prod\limits_{\left(i,j\right)\in\mathbf{D}\left(T\right)}\left(x_{i}+y_{j}\right)\). We thus obtain \[\prod\limits_{\left(i,j\right)\in\mathbf{D}\left(T\right)}\left(x_{i}+y_{j}\right)=\prod\limits_{\left(i,j\right)\in Y\left(\mu\right)}\left(x_{T\left(i,j\right)}+y_{T\left(i,j\right)+j-i}\right).\] This proves Lemma 6.19**(b)**. **(a)** Forget that we fixed \(T\). We define the _load_ of a semistandard tableau \(T\in\text{SSYT}\left(\mu\right)\) to be the nonnegative integer \(\sum\limits_{c\in Y\left(\mu\right)}T\left(c\right)\) (that is, the sum of all entries of \(T\)). Note that this load is positive whenever \(\mu\neq\varnothing\) (since \(\mu\neq\varnothing\) means that \(Y\left(\mu\right)\) contains at least one box \(c\), and of course the entry \(T\left(c\right)\) in this box \(c\) must be positive). We shall prove Lemma 6.19**(a)** by induction on the load of \(T\). _Base case:_ The load of \(T\) can be \(0\) only if \(\mu=\varnothing\) (since the load of \(T\) is positive whenever \(\mu\neq\varnothing\)). Of course, \(\mathbf{D}\left(T\right)=\varnothing\) in this case, and this renders the claim of Lemma 6.19**(a)** trivial (since the empty diagram \(\varnothing\) is clearly an excitation of \(Y\left(\mu\right)=\varnothing\)). Thus, Lemma 6.19**(a)** is proved in the case when the load of \(T\) is \(0\).
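Before moving on to the induction step, here is a small computational illustration (not part of the original argument) of the map \(T\mapsto\mathbf{D}\left(T\right)\). It assumes, purely for illustration, that a tableau is stored as a Python dictionary sending each box \(\left(i,j\right)\) of \(Y\left(\mu\right)\) (with \(1\)-based indices) to its entry; all function names are ad hoc.

```python
# Illustrative sketch of the diagram D(T) = { (T(i,j), T(i,j) + j - i) : (i,j) in Y(mu) },
# as in (14).  Boxes are 1-based pairs (row, column); all names are ad hoc.

def young_diagram(mu):
    """The set of boxes (i, j) of the Young diagram of the partition mu."""
    return {(i, j) for i, part in enumerate(mu, start=1) for j in range(1, part + 1)}

def diagram_D(T):
    """D(T) for a tableau T given as a dict {(i, j): entry}."""
    return {(T[i, j], T[i, j] + j - i) for (i, j) in T}

mu = (3, 2)
# The tableau with T(i, j) = i in every box satisfies D(T) = Y(mu):
T_min = {box: box[0] for box in young_diagram(mu)}
assert diagram_D(T_min) == young_diagram(mu)

# A tableau of larger load, with rows (1, 2, 2) and (2, 3):
T = {(1, 1): 1, (1, 2): 2, (1, 3): 2, (2, 1): 2, (2, 2): 3}
print(sorted(diagram_D(T)))   # [(1, 1), (2, 1), (2, 3), (2, 4), (3, 3)]
```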
_Induction step:_ Fix a positive integer \(n\). Assume (as the induction hypothesis) that Lemma 6.19**(a)** is proved for all tableaux \(T\in\text{SSYT}\left(\mu\right)\) of load \(n-1\). We now fix a semistandard tableau \(T\in\text{SSYT}\left(\mu\right)\) of load \(n\). Our goal is to prove Lemma 6.19**(a)** for this \(T\). Lemma 6.3**(a)** shows that the inequality \(T\left(i,j\right)\geq i\) holds for each \(\left(i,j\right)\in Y\left(\mu\right)\). If this inequality is always an equality, then we have \(\mathbf{D}\left(T\right)=Y\left(\mu\right)\) (this is easy to see8), and thus the claim of Lemma 6.19**(a)** holds (since \(Y\left(\mu\right)\) itself is clearly an excitation of \(Y\left(\mu\right)\), obtained by making a sequence of \(0\) excited moves). Thus, for the rest of this induction step, we WLOG assume that the inequality \(T\left(i,j\right)\geq i\) is **not** always an equality. Hence, there exists a box \(\left(i,j\right)\in Y\left(\mu\right)\) such that \(T\left(i,j\right)\neq i\). We call such a box _interesting_. Now, let \(\left(i,j\right)\) be an interesting box with smallest possible \(i+j\). Then, a box \(\left(u,v\right)\) with \(u+v<i+j\) cannot be interesting. In particular: * The box \(\left(i-1,j\right)\) cannot be interesting (since \(\left(i-1\right)+j<i+j\)). Hence, \[\text{if }i>1\text{, then }T\left(i-1,j\right)=i-1\] (40) (because otherwise, the box \(\left(i-1,j\right)\) would be interesting). * The box \(\left(i,j-1\right)\) cannot be interesting (since \(i+\left(j-1\right)<i+j\)). Hence, \[\text{if }j>1\text{, then }T\left(i,j-1\right)=i\] (41) (because otherwise, the box \(\left(i,j-1\right)\) would be interesting). Note that \(T\left(i,j\right)\neq i\) (since the box \(\left(i,j\right)\) is interesting) and therefore \(T\left(i,j\right)>i\) (since Lemma 6.3 **(a)** yields \(T\left(i,j\right)\geq i\)). Hence, \(T\left(i,j\right)\geq i+1\) (since \(T\left(i,j\right)\) and \(i\) are integers), so that \[T\left(i,j\right)-1\geq i\geq 1.\] Now, let us decrease the entry \(T\left(i,j\right)\) of the tableau \(T\) by \(1\), while leaving all other entries unchanged. The resulting filling of \(Y\left(\mu\right)\) will be called \(\overline{T}\). Formally speaking, \(\overline{T}\) is thus the map from \(Y\left(\mu\right)\) to \(\left\{1,2,3,\ldots\right\}\) given by \[\overline{T}\left(i,j\right) =T\left(i,j\right)-1\text{and} \tag{42}\] \[\overline{T}\left(c\right) =T\left(c\right)\text{for all }c\in Y\left(\mu\right)\text{ distinct from }\left(i,j\right). \tag{43}\] It is easy to see (using (40) and (41)) that \(\overline{T}\) is again a semistandard tableau9 Footnote 9: Proof.: First, we note that the entries of \(\overline{T}\) are positive integers (since \(\overline{T}\left(i,j\right)=T\left(i,j\right)-1\geq 1\) shows that \(\overline{T}\left(i,j\right)\) is a positive integer, and since all the other entries of \(\overline{T}\) are copied from \(T\)). Next, we claim that the entries of \(\overline{T}\) weakly increase left-to-right in each row. Indeed, by the construction of \(\overline{T}\), this will follow from the analogous property of \(T\), as long as we can show that the decreased entry \(\overline{T}\left(i,j\right)\) is still greater or equal to its neighboring entry \(\overline{T}\left(i,j-1\right)\) (assuming that \(j>1\)). 
But we can easily show this: If \(j>1\), then \[\overline{T}\left(i,j-1\right)=T\left(i,j-1\right)=i\leq T\left(i,j\right)-1=\overline{T}\left(i,j\right)\] (here, the first equality follows from (43), the second follows from (41), the inequality follows from \(T\left(i,j\right)-1\geq i\), and the last equality follows from (42)). Thus, the entries of \(\overline{T}\) weakly increase left-to-right in each row. Finally, we claim that the entries of \(\overline{T}\) strictly increase top-to-bottom in each column. Again, by the construction of \(\overline{T}\), this will follow from the analogous property of \(T\), as long as we can show that the decreased entry \(\overline{T}\left(i,j\right)\) is still larger than the entry \(\overline{T}\left(i-1,j\right)\) above it (assuming that \(i>1\)). But this is easy as well: If \(i>1\), then \[\overline{T}\left(i-1,j\right)=T\left(i-1,j\right)=i-1<i\leq T\left(i,j\right)-1=\overline{T}\left(i,j\right)\] (here, the first equality follows from (43), the second follows from (40), and the last follows from (42)). Hence, \(\overline{T}\) is a semistandard tableau.

This tableau \(\overline{T}\) is obtained from \(T\) by decreasing the \((i,j)\)-th entry by \(1\). Thus, \(\overline{T}\) has load \(n-1\) (since \(T\) has load \(n\), but the load of a tableau is just the sum of its entries). Hence, by our induction hypothesis, Lemma 6.19**(a)** holds for \(\overline{T}\) instead of \(T\). In other words, \(\mathbf{D}\left(\overline{T}\right)\) is an excitation of the diagram \(Y\left(\mu\right)\). In other words, \(\mathbf{D}\left(\overline{T}\right)\) can be obtained from \(Y\left(\mu\right)\) by a sequence of excited moves. From (42), we easily obtain10 Footnote 10: Recall that \(\left(u,v\right)_{\searrow}\) denotes the southeastern neighbor \(\left(u+1,v+1\right)\) of a box \(\left(u,v\right)\). \[\left(i,j\right)_{+\overline{T}}=\left(\overline{T}\left(i,j\right),\ \overline{T}\left(i,j\right)+j-i\right)=\left(T\left(i,j\right)-1,\ T\left(i,j\right)+j-i-1\right),\] whereas \(\left(i,j\right)_{+T}=\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)\). Hence, \[\left(i,j\right)_{+T}=\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}.\] Moreover, (43) shows that the tableaux \(T\) and \(\overline{T}\) agree in every box distinct from \(\left(i,j\right)\). Hence, Lemma 6.18 (applied to \(\overline{T}\) instead of \(S\)) shows that the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(\mathbf{D}\left(\overline{T}\right)\) by replacing the box \(\left(i,j\right)_{+\overline{T}}\) by the box \(\left(i,j\right)_{+T}=\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\), that is, by its southeastern neighbor. We claim that this replacement is an excited move; in other words, we claim that the diagram \(\mathbf{D}\left(\overline{T}\right)\) contains none of the three boxes \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\downarrow}\), \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\rightarrow}\) and \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\). Indeed, assume the contrary. Then, \(\mathbf{D}\left(\overline{T}\right)\) contains at least one of these three boxes. Hence, we are in one of the following three cases: _Case 1:_ The diagram \(\mathbf{D}\left(\overline{T}\right)\) contains \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\downarrow}\). _Case 2:_ The diagram \(\mathbf{D}\left(\overline{T}\right)\) contains \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\rightarrow}\). _Case 3:_ The diagram \(\mathbf{D}\left(\overline{T}\right)\) contains \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\). 1. Let us first consider Case 1. In this case, the diagram \(\mathbf{D}\left(\overline{T}\right)\) contains \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\downarrow}\). In other words, \[\left(\left(i,j\right)_{+\overline{T}}\right)_{\downarrow}\in\mathbf{D}\left(\overline{T}\right)=\left\{c_{+\overline{T}}\ \mid\ c\in Y\left(\mu\right)\right\}\] (by the definition of \(\mathbf{D}\left(\overline{T}\right)\)). In other words, there exists some \(c\in Y\left(\mu\right)\) such that \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\downarrow}=c_{+\overline{T}}\). Consider this \(c\). The two boxes \(c\) and \(\left(i,j\right)\) in \(Y\left(\mu\right)\) satisfy \(c_{+\overline{T}}=\left(\left(i,j\right)_{+\overline{T}}\right)_{\downarrow}\). Hence, Lemma 6.17**(c)** (applied to \(\overline{T}\), \(\left(i,j\right)\) and \(c\) instead of \(T\), \(c\) and \(d\)) yields that \(c=\left(i,j\right)_{\downarrow}\) and \(\overline{T}\left(c\right)=\overline{T}\left(i,j\right)+1\). However, (43) yields \(\overline{T}\left(c\right)=T\left(c\right)\) (since \(c=\left(i,j\right)_{\downarrow}\neq\left(i,j\right)\)). Thus, \[T\left(c\right)=\overline{T}\left(c\right)=\overline{T}\left(i,j\right)+1=T\left(i,j\right)\qquad\quad\left(\text{by (42)}\right).\] However, \(c=\left(i,j\right)_{\downarrow}=\left(i+1,j\right)\), so that \(T\left(c\right)=T\left(i+1,j\right)>T\left(i,j\right)\) (since \(T\) is a semistandard tableau, and thus its entries strictly increase top-to-bottom in each column). This contradicts \(T\left(c\right)=T\left(i,j\right)\). Thus, we have found a contradiction in Case 1. 2. Let us next consider Case 2. In this case, the diagram \(\mathbf{D}\left(\overline{T}\right)\) contains \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\rightarrow}\). In other words, \[\left(\left(i,j\right)_{+\overline{T}}\right)_{\rightarrow}\in\mathbf{D}\left(\overline{T}\right)=\left\{c_{+\overline{T}}\ \mid\ c\in Y\left(\mu\right)\right\}\] (by the definition of \(\mathbf{D}\left(\overline{T}\right)\)). In other words, there exists some \(c\in Y\left(\mu\right)\) such that \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\rightarrow}=c_{+\overline{T}}\). Consider this \(c\).
The two boxes \(c\) and \(\left(i,j\right)\) in \(Y\left(\mu\right)\) satisfy \(c_{+\overline{T}}=\left(\left(i,j\right)_{+\overline{T}}\right)_{\rightarrow}\). Hence, Lemma 6.17**(b)** (applied to \(\overline{T}\), \(\left(i,j\right)\) and \(c\) instead of \(T\), \(c\) and \(d\)) yields that \(c=\left(i,j\right)_{\rightarrow}\) and \(\overline{T}\left(c\right)=\overline{T}\left(i,j\right)\). However, (43) yields \(\overline{T}\left(c\right)=T\left(c\right)\) (since \(c=\left(i,j\right)_{\rightarrow}\neq\left(i,j\right)\)). Thus, \[T\left(c\right) =\overline{T}\left(c\right)=\overline{T}\left(i,j\right)=T\left(i,j \right)-1\qquad\quad\left(\text{by (\ref{eq:1})}\right)\] However, \(c=\left(i,j\right)_{\rightarrow}=\left(i,j+1\right)\), so that \(T\left(c\right)=T\left(i,j+1\right)\geq T\left(i,j\right)\) (since \(T\) is a semistandard tableau, and thus its entries weakly increase left-to-right in each row). This contradicts \(T\left(c\right)<T\left(i,j\right)\). Thus, we have found a contradiction in Case 2. 3. Let us finally consider Case 3. In this case, the diagram \(\mathbf{D}\left(\overline{T}\right)\) contains \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\). In other words, \[\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\in\mathbf{D}\left( \overline{T}\right)=\left\{c_{+\overline{T}}\ \mid\ c\in Y\left(\mu\right)\right\}\] (by the definition of \(\mathbf{D}\left(\overline{T}\right)\)). In other words, there exists some \(c\in Y\left(\mu\right)\) such that \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}=c_{+\overline{T}}\). Consider this \(c\). The two boxes \(c\) and \(\left(i,j\right)\) in \(Y\left(\mu\right)\) satisfy \(c_{+\overline{T}}=\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\). Hence, Lemma 6.17 **(d)** (applied to \(\overline{T}\), \(\left(i,j\right)\) and \(c\) instead of \(T\), \(c\) and \(d\)) yields that \(c=\left(i,j\right)_{\searrow}\) and \(\overline{T}\left(c\right)=\overline{T}\left(i,j\right)+1\). However, (43) yields \(\overline{T}\left(c\right)=T\left(c\right)\) (since \(c=\left(i,j\right)_{\searrow}\neq\left(i,j\right)\)). Thus, \[T\left(c\right)=\overline{T}\left(c\right)=\overline{T}\left(i,j\right)+1=T \left(i,j\right)\qquad\quad\left(\text{by (\ref{eq:T})}\right).\] However, \(c=\left(i,j\right)_{\searrow}=\left(i+1,j+1\right)\), and we have \(i\leq i+1\) and \(j\leq j+1\). Hence, Lemma 6.3 **(b)** (applied to \(\left(u,v\right)=\left(i+1,j+1\right)\)) yields \(T\left(i+1,j+1\right)-\left(j+1\right)\geq T\left(i,j\right)-j\). Hence, \[T\left(i+1,j+1\right)\geq\left(T\left(i,j\right)-j\right)+\left(j+1\right)=T \left(i,j\right)+1>T\left(i,j\right).\] In other words, \(T\left(c\right)>T\left(i,j\right)\) (since \(c=\left(i+1,j+1\right)\)). This contradicts \(T\left(c\right)=T\left(i,j\right)\). Thus, we have found a contradiction in Case 3. We have now found a contradiction in each of our three cases. Thus, we always obtain a contradiction. This shows that our assumption was false. Thus, we have shown that our replacement of \(\left(i,j\right)_{+\overline{T}}\) by \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\) in \(\mathbf{D}\left(\overline{T}\right)\) is an excited move. Let \(\mathbf{e}\) denote this excited move. 
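As a brief computational aside (not part of the proof), the excited move \(\mathbf{e}\) just introduced is easy to express in code: a box \(d\) of a diagram may be replaced by its southeastern neighbor \(d_{\searrow}\) only if none of \(d_{\downarrow}\), \(d_{\rightarrow}\), \(d_{\searrow}\) already lies in the diagram. The following sketch uses the same ad hoc \(1\)-based box convention as the earlier snippet.

```python
# Illustrative sketch of a single excited move on a diagram (a set of boxes).
# A box d = (i, j) may be moved to its southeastern neighbor (i + 1, j + 1)
# only if the diagram contains none of (i + 1, j), (i, j + 1), (i + 1, j + 1).

def can_excite(diagram, d):
    i, j = d
    neighbors = {(i + 1, j), (i, j + 1), (i + 1, j + 1)}
    return d in diagram and not (neighbors & diagram)

def excited_move(diagram, d):
    """Return the diagram obtained by the excited move at d (assumes can_excite)."""
    i, j = d
    return (diagram - {d}) | {(i + 1, j + 1)}

# Example: Y((2, 1)) = {(1,1), (1,2), (2,1)}.  The box (2,1) can be excited,
# while the box (1,1) cannot (two of its three neighbors are occupied).
Y = {(1, 1), (1, 2), (2, 1)}
assert can_excite(Y, (2, 1)) and not can_excite(Y, (1, 1))
print(sorted(excited_move(Y, (2, 1))))   # [(1, 1), (1, 2), (3, 2)]
```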
Then, the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(\mathbf{D}\left(\overline{T}\right)\) by this excited move \(\mathbf{e}\) (since we know that the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(\mathbf{D}\left(\overline{T}\right)\) by replacing the box \(\left(i,j\right)_{+\overline{T}}\) by its southeastern neighbor \(\left(\left(i,j\right)_{+\overline{T}}\right)_{\searrow}\)). Since \(\mathbf{D}\left(\overline{T}\right)\) can, in turn, be obtained from \(Y\left(\mu\right)\) by a sequence of excited moves (as we know), we thus conclude that \(\mathbf{D}\left(T\right)\) can be obtained from \(Y\left(\mu\right)\) by a sequence of excited moves (just apply the sequence of excited moves that gives \(\mathbf{D}\left(\overline{T}\right)\) first, and then perform the excited move \(\mathbf{e}\) to transform it further into \(\mathbf{D}\left(T\right)\)). In other words, \(\mathbf{D}\left(T\right)\) is an excitation of \(Y\left(\mu\right)\). This proves Lemma 6.19 **(a)** for our tableau \(T\). This completes the induction step, and thus Lemma 6.19 **(a)** is proved. **(c)** Let \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be the flagging induced by \(\lambda/\mu\). Thus, \(\mathcal{F}\left(\lambda/\mu\right)=\) FSSYT \(\left(\mu,\mathbf{b}\right)\) (by the definition of \(\mathcal{F}\left(\lambda/\mu\right)\)). We must prove the equivalence \(\left(\mathbf{D}\left(T\right)\in\mathcal{E}\left(\lambda/\mu\right)\right) \Longleftrightarrow\left(T\in\mathcal{F}\left(\lambda/\mu\right)\right)\). Let us prove the "\(\Longrightarrow\)" and "\(\Longleftarrow\)" directions of this equivalence separately: \(\Longrightarrow\): Assume that \(\mathbf{D}\left(T\right)\in\mathcal{E}\left(\lambda/\mu\right)\). We must prove that \(T\in\mathcal{F}\left(\lambda/\mu\right)\). Let \(\left(i,j\right)\in Y\left(\mu\right)\). We shall prove that \(T\left(i,j\right)\leq b_{i}\). Indeed, consider the box \(\left(i,\mu_{i}\right)\). Then, \(\left(i,\mu_{i}\right)\in Y\left(\mu\right)\) (since \(\mu_{i}\leq\mu_{i}\)), so that \(\left(i,\mu_{i}\right)_{+T}\in\mathbf{D}\left(T\right)\) (by (14)). Set \(k:=T\left(i,\mu_{i}\right)\) (this is well-defined since \(\left(i,\mu_{i}\right)\in Y\left(\mu\right)\)). Then, the definition of \(\left(i,\mu_{i}\right)_{+T}\) yields \[\left(i,\mu_{i}\right)_{+T} =\left(T\left(i,\mu_{i}\right),\ T\left(i,\mu_{i}\right)+\mu_{i}-i\right)\] \[=\left(k,\ k+\mu_{i}-i\right)\qquad\quad\left(\text{since }T\left(i,\mu_{i}\right)=k\right).\] Hence, \(\left(k,\ k+\mu_{i}-i\right)=\left(i,\mu_{i}\right)_{+T}\in\mathbf{D}\left(T \right)\subseteq Y\left(\lambda\right)\) (since \(\mathbf{D}\left(T\right)\in\mathcal{E}\left(\lambda/\mu\right)\)). In other words, \(k+\mu_{i}-i\leq\lambda_{k}\). In other words, \(\lambda_{k}-k\geq\mu_{i}-i\). By Lemma 6.11 (applied to \(j=k\)), this is equivalent to \(k\leq b_{i}\). Thus, we have \(k\leq b_{i}\). However, from \(\left(i,j\right)\in Y\left(\mu\right)\), we obtain \(j\leq\mu_{i}\). Since the entries of \(T\) weakly increase left-to-right in each row, this entails \(T\left(i,j\right)\leq T\left(i,\mu_{i}\right)=k\leq b_{i}\). Forget that we fixed \(\left(i,j\right)\). We thus have shown that \(T\left(i,j\right)\leq b_{i}\) for all \(\left(i,j\right)\in Y\left(\mu\right)\). In other words, the tableau \(T\) is \(\mathbf{b}\)-flagged. In other words, \(T\in\) FSSYT \(\left(\mu,\mathbf{b}\right)=\mathcal{F}\left(\lambda/\mu\right)\). Thus, the "\(\Longrightarrow\)" direction of Lemma 6.19**(c)** is proved. 
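The equivalence of Lemma 6.19 **(c)** can be spot-checked mechanically on small examples. The sketch below is illustrative only: it computes the flagging \(\mathbf{b}\) induced by \(\lambda/\mu\) via the characterization from Lemma 6.11 used above (namely, \(k\leq b_{i}\) if and only if \(\lambda_{k}-k\geq\mu_{i}-i\)), and then compares the condition "\(T\) is \(\mathbf{b}\)-flagged" with the condition "\(\mathbf{D}\left(T\right)\subseteq Y\left(\lambda\right)\)" for concrete tableaux; all names and data conventions are ad hoc.

```python
# Illustrative check of Lemma 6.19 (c) on concrete examples (ad hoc conventions):
# "T is b-flagged" should agree with "D(T) is contained in Y(lambda)", where
# b_i = #{ k >= 1 : lambda_k - k >= mu_i - i }  (the characterization of Lemma 6.11,
# using that lambda_k - k is strictly decreasing in k).

def young_diagram(p):
    return {(i, j) for i, part in enumerate(p, start=1) for j in range(1, part + 1)}

def diagram_D(T):
    return {(T[i, j], T[i, j] + j - i) for (i, j) in T}

def nth_part(p, k):
    return p[k - 1] if k <= len(p) else 0

def induced_flagging(lam, mu):
    return [sum(1 for k in range(1, len(lam) + i + 1)
                if nth_part(lam, k) - k >= nth_part(mu, i) - i)
            for i in range(1, len(mu) + 1)]

lam, mu = (4, 3, 2), (2, 1)
b = induced_flagging(lam, mu)               # [2, 3] for this lam, mu
Ylam = young_diagram(lam)

T1 = {(1, 1): 1, (1, 2): 2, (2, 1): 3}      # semistandard of shape mu, b-flagged
print(all(T1[i, j] <= b[i - 1] for (i, j) in T1), diagram_D(T1) <= Ylam)   # True True

T2 = {(1, 1): 1, (1, 2): 3, (2, 1): 3}      # violates the flagging in row 1
print(all(T2[i, j] <= b[i - 1] for (i, j) in T2), diagram_D(T2) <= Ylam)   # False False
```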
\(\Longleftarrow\): Assume that \(T\in\mathcal{F}\left(\lambda/\mu\right)\). We must prove that \(\mathbf{D}\left(T\right)\in\mathcal{E}\left(\lambda/\mu\right)\). In other words, we must prove that \(\mathbf{D}\left(T\right)\subseteq Y\left(\lambda\right)\) (since Lemma 6.19**(a)** shows that \(\mathbf{D}\left(T\right)\) is an excitation of \(Y\left(\mu\right)\)). In other words, we must prove that \(c_{+T}\in Y\left(\lambda\right)\) for each \(c\in Y\left(\mu\right)\) (by (14)). So let us do this. Consider any \(c\in Y\left(\mu\right)\). We must prove that \(c_{+T}\in Y\left(\lambda\right)\). Write the box \(c\) as \(c=\left(i,j\right)\). Hence, \(\left(i,j\right)=c\in Y\left(\mu\right)\), so that \(\mu_{i}\geq j\). Set \(k:=T\left(i,j\right)\). Then, \(\left(i,j\right)_{+T}=\left(k,\ k+j-i\right)\) by the definition of \(\left(i,j\right)_{+T}\). However, \(T\in\mathcal{F}\left(\lambda/\mu\right)=\) FSSYT \(\left(\mu,\mathbf{b}\right)\), which shows that \(T\) is \(\mathbf{b}\)-flagged. Therefore, \(T\left(i,j\right)\leq b_{i}\). In other words, \(k\leq b_{i}\) (since \(k=T\left(i,j\right)\)). By Lemma 6.11 (applied to \(j=k\)), this is equivalent to \(\lambda_{k}-k\geq\mu_{i}-i\). Hence, we have \(\lambda_{k}-k\geq\underbrace{\mu_{i}}_{\geq j}-i\geq j-i\). In other words, \(k+j-i\leq\lambda_{k}\). In other words, \(\left(i,j\right)_{+T}\in Y\left(\lambda\right)\) (since we know that \(\left(i,j\right)_{+T}=\left(k,\ k+j-i\right)\)). In other words, \(c_{+T}\in Y\left(\lambda\right)\) (since \(c=\left(i,j\right)\)). As explained above, this completes the proof of the "\(\Longleftarrow\)" direction of Lemma 6.19**(c)**. Thus, Lemma 6.19**(c)** is proved. Proof of Lemma 6.20.: Lemma 6.19**(a)** yields that the map \[\text{SSYT}\left(\mu\right) \rightarrow\left\{\text{all excitations of }Y\left(\mu\right)\right\},\] \[T \mapsto\mathbf{D}\left(T\right)\] is well-defined. It remains to show that this map is a bijection. Towards this aim, we will prove the following two claims: _Claim 1:_ Let \(T\) and \(S\) be two distinct tableaux in \(\text{SSYT}\left(\mu\right)\). Then, \(\mathbf{D}\left(T\right)\neq\mathbf{D}\left(S\right)\). _Claim 2:_ Let \(n\in\mathbb{N}\). Let \(E\) be a diagram obtained from \(Y\left(\mu\right)\) by a sequence of \(n\) excited moves. Then, there is a tableau \(T\in\text{SSYT}\left(\mu\right)\) such that \(E=\mathbf{D}\left(T\right)\). _Proof of Claim 1._ Assume the contrary. Thus, \(\mathbf{D}\left(T\right)=\mathbf{D}\left(S\right)\). Since \(T\) and \(S\) are distinct, there exists a box \(\left(i,j\right)\in Y\left(\mu\right)\) satisfying \(T\left(i,j\right)\neq S\left(i,j\right)\). Pick such a box \(\left(i,j\right)\) with smallest \(i\). Thus, \[T\left(u,v\right)=S\left(u,v\right) \tag{45}\] for every box \(\left(u,v\right)\in Y\left(\mu\right)\) satisfying \(u<i\) (since \(\left(i,j\right)\) was chosen to have smallest \(i\)). From \(T\left(i,j\right)\neq S\left(i,j\right)\), we see that either \(T\left(i,j\right)<S\left(i,j\right)\) or \(S\left(i,j\right)<T\left(i,j\right)\). We WLOG assume that \(S\left(i,j\right)<T\left(i,j\right)\) is the case (since otherwise, we can simply swap \(T\) with \(S\)). The definition of \(\mathbf{D}\left(S\right)\) yields \(\mathbf{D}\left(S\right)=\left\{c_{+S}\ \mid\ c\in Y\left(\mu\right)\right\}\). 
Hence, \[\left(i,j\right)_{+S} \in\mathbf{D}\left(S\right) \text{(since }\left(i,j\right)\in Y\left(\mu\right)\text{)}\] \[=\mathbf{D}\left(T\right) \text{(since }\mathbf{D}\left(T\right)=\mathbf{D}\left(S\right)\text{)}\] \[=\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\right\} \text{(by the definition of }\mathbf{D}\left(T\right)\text{)}\,.\] In other words, \(\left(i,j\right)_{+S}=c_{+T}\) for some \(c\in Y\left(\mu\right)\). Consider this \(c\), and denote it by \(\left(u,v\right)\). Thus, \[\left(i,j\right)_{+S}=\left(u,v\right)_{+T}=\left(T\left(u,v\right),\ T\left(u,v \right)+v-u\right)\] (by the definition of \(\left(u,v\right)_{+T}\)). Hence, \[\left(T\left(u,v\right),\ T\left(u,v\right)+v-u\right)=\left(i,j\right)_{+S}= \left(S\left(i,j\right),\ S\left(i,j\right)+j-i\right)\] (by the definition of \(\left(i,j\right)_{+S}\)). In other words, the two equalities \(T\left(u,v\right)=S\left(i,j\right)\) and \(T\left(u,v\right)+v-u=S\left(i,j\right)+j-i\) hold. Subtracting the former equality from the latter, we obtain \(v-u=j-i\). In other words, \(v+i=j+u\). Thus, \(v-j=u-i\). We are in one of the following two cases: _Case 1:_ We have \(u<i\). _Case 2:_ We have \(u\geq i\). Let us first consider Case 1. In this case, we have \(u<i\). Hence, \(u-i<0\) and \(i>u\). Moreover, \(v<j\) (since \(v-j=u-i<0\)). Hence, Lemma 6.3**(b)** (applied to \(S\), \(\left(u,v\right)\) and \(\left(i,j\right)\) instead of \(T\), \(\left(i,j\right)\) and \(\left(u,v\right)\)) yields \(S\left(i,j\right)-i\geq S\left(u,v\right)-u\) (since \(u<i\) and \(v<j\)). Adding the inequality \(i>u\) to this inequality, we obtain \(S\left(i,j\right)>S\left(u,v\right)\). However, from (45), we obtain \(T\left(u,v\right)=S\left(u,v\right)\) (since \(u<i\)). Thus, \(S\left(i,j\right)>S\left(u,v\right)=T\left(u,v\right)=S\left(i,j\right)\). This is absurd. Thus, we have obtained a contradiction in Case 1. Let us now consider Case 2. In this case, we have \(u\geq i\). In other words, \(i\leq u\). Also, \(u\geq i\) entails \(u-i\geq 0\) and thus \(v\geq j\) (since \(v-j=u-i\geq 0\)). Hence, \(j\leq v\). Thus, Lemma 6.3**(b)** yields \(T\left(u,v\right)-u\geq T\left(i,j\right)-i\). Adding the inequality \(u\geq i\) to this inequality, we obtain \(T\left(u,v\right)\geq T\left(i,j\right)\). This contradicts \(T\left(u,v\right)=S\left(i,j\right)<T\left(i,j\right)\). Thus, we have obtained a contradiction in Case 2. We have now obtained a contradiction in each of the two cases. Thus, we always have a contradiction. Hence, our assumption was false. This proves Claim 1. Proof of Claim 2.: We proceed by induction on \(n\): _Base case:_ Let us prove Claim 2 for \(n=0\). Indeed, let \(E\) be a diagram obtained from \(Y\left(\mu\right)\) by a sequence of \(0\) excited moves. Thus, \(E\) must be \(Y\left(\mu\right)\) itself. Now, let \(T\) be the filling of the diagram \(Y\left(\mu\right)\) (that is, the map \(Y\left(\mu\right)\rightarrow\left\{1,2,3,\ldots\right\}\)) that is given by \[T\left(i,j\right)=i\qquad\text{ for each }i\in Y\left(\mu\right).\] (For instance, if \(\mu=\left(5,2,2\right)\), then \(T=\begin{array}{c|c|c|c|c}\hline 1&1&1&1\\ \hline 2&2&\text{}&\text{.}\end{array}\)) Note that \(T\) is a semi-standard tableau of shape \(\mu\) (indeed, the entries of \(T\) are weakly increasing left-to-right in each row12 and strictly increasing top-to-bottom in each column13). In other words, \(T\in\text{SSYT}\left(\mu\right)\). 
Moreover, if \(c=\left(i,j\right)\in Y\left(\mu\right)\) is any box, then Footnote 12: since they are constant in each row Footnote 13: since the entries of \(T\) in any given column are \(1,2,\ldots,k\) from top to bottom (where \(k\) is the length of said column) \[c_{+T} =\left(T\left(i,j\right),\ T\left(i,j\right)+j-i\right)\qquad \text{ (by the definition of }c_{+T}\text{, since }c=\left(i,j\right)\text{)}\] \[=\left(i,\ i+j-i\right)\qquad\text{ (since }T\left(i,j\right)=i\text{)}\] \[=\left(i,j\right)=c.\] Thus, \(\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\right\}=\left\{c\ \mid\ c\in Y\left(\mu\right)\right\}=Y\left(\mu\right)\). Therefore, the definition of \(\mathbf{D}\left(T\right)\) yields \(\mathbf{D}\left(T\right)=\left\{c_{+T}\ \mid\ c\in Y\left(\mu\right)\right\}=Y\left(\mu\right)=E\) (since \(E\) is \(Y\left(\mu\right)\) itself). Hence, we have found a tableau \(T\in\text{SSYT}\left(\mu\right)\) such that \(E=\mathbf{D}\left(T\right)\). Thus, Claim 2 is proved for our \(E\). This completes the base case. _Induction step:_ Let \(n\) be a positive integer. Assume (as the induction hypothesis) that Claim 2 is proved for \(n-1\) instead of \(n\). We must now prove Claim 2 for \(n\). So let \(E\) be a diagram obtained from \(Y\left(\mu\right)\) by a sequence of \(n\) excited moves. Then, we must prove that there is a tableau \(T\in\text{SSYT}\left(\mu\right)\) such that \(E=\mathbf{D}\left(T\right)\). We assumed that \(E\) is obtained from \(Y\left(\mu\right)\) by a sequence of \(n\) excited moves. Let \(\mathbf{s}\) be this sequence of \(n\) excited moves. Let \(\mathbf{e}\) be the last move in this sequence, and let \(F\) be the diagram obtained from \(Y\left(\mu\right)\) after the first \(n-1\) moves of this sequence \(\mathbf{s}\) (stopping short of the last move \(\mathbf{e}\)). Then, \(E\) is obtained from \(F\) by the excited move \(\mathbf{e}\). Moreover, the diagram \(F\) is obtained from \(Y\left(\mu\right)\) by a sequence of \(n-1\) excited moves (namely, by the first \(n-1\) moves of the sequence \(\mathbf{s}\)). Hence, our induction hypothesis (applied to \(F\) instead of \(E\)) shows that there is a tableau \(T\in\text{SSYT}\left(\mu\right)\) such that \(F=\mathbf{D}\left(T\right)\). Consider this tableau \(T\), and denote it by \(S\). Thus, \(S\in\mathrm{SSYT}\left(\mu\right)\) and \(F=\mathbf{D}\left(S\right)\). Recall that \(E\) is obtained from \(F\) by the excited move \(\mathbf{e}\). Hence, \(\mathbf{e}\) is an excited move for \(F\). In other words, \(\mathbf{e}\) replaces some box \(d\in F\) by its southeastern neighbor \(d_{\searrow}\), where the box \(d\) is chosen in such a way that \(F\) contains none of its three neighbors \(d_{\downarrow},d_{\rightarrow},d_{\searrow}\) (by the definition of an "excited move"). Consider this box \(d\). We have \[d \in F=\mathbf{D}\left(S\right)=\left\{c_{+S}\ \mid\ c\in Y \left(\mu\right)\right\}\qquad\quad\left(\text{by the definition of }\mathbf{D}\left(S\right)\right)\] \[=\left\{\left(i,j\right)_{+S}\ \mid\ \left(i,j\right)\in Y\left(\mu \right)\right\}\qquad\quad\left(\text{here, we have renamed the index }c\text{ as }\left(i,j\right)\right).\] In other words, \(d=\left(i,j\right)_{+S}\) for some box \(\left(i,j\right)\in Y\left(\mu\right)\). Consider this \(\left(i,j\right)\). Thus, \[d=\left(i,j\right)_{+S}=\left(S\left(i,j\right),\ S\left(i,j\right)+j-i\right)\] (by the definition of \(\left(i,j\right)_{+S}\)). 
Therefore, \[d_{\rightarrow}=\left(S\left(i,j\right),\ S\left(i,j\right)+j-i\right)_{ \rightarrow}=\left(S\left(i,j\right),\ S\left(i,j\right)+j-i+1\right)\] and \[d_{\downarrow}=\left(S\left(i,j\right),\ S\left(i,j\right)+j-i\right)_{ \downarrow}=\left(S\left(i,j\right)+1,\ S\left(i,j\right)+j-i\right).\] We observe the following two facts: 1. We have \[S\left(i,j\right)+1\leq S\left(i,j+1\right)\qquad\quad\text{if }\left(i,j+1\right)\in Y \left(\mu\right).\] (46) 2. [Proof.] Assume that \(\left(i,j+1\right)\in Y\left(\mu\right)\). Since the entries of \(S\) weakly increase left-to-right in each row (because \(S\) is a semistandard tableau), we then have \(S\left(i,j\right)\leq S\left(i,j+1\right)\). We must prove that \(S\left(i,j\right)+1\leq S\left(i,j+1\right)\). Assume the contrary. Thus, \(S\left(i,j\right)+1>S\left(i,j+1\right)\), so that \(S\left(i,j\right)+1\geq S\left(i,j+1\right)+1\) (since \(S\left(i,j\right)+1\) and \(S\left(i,j+1\right)\) are integers). In other words, \(S\left(i,j\right)\geq S\left(i,j+1\right)\). Combining this with \(S\left(i,j\right)\leq S\left(i,j+1\right)\), we obtain \(S\left(i,j\right)=S\left(i,j+1\right)\). However, from \(\left(i,j+1\right)\in Y\left(\mu\right)\), we obtain \(\left(i,j+1\right)_{+S}\in F\) (since we have \(F=\left\{c_{+S}\ \mid\ c\in Y\left(\mu\right)\right\}\)). The definition of \(\left(i,j+1\right)_{+S}\) yields \[\left(i,j+1\right)_{+S} =\left(S\left(i,j+1\right),\ S\left(i,j+1\right)+\left(j+1\right) -i\right)\] \[=\left(S\left(i,j\right),\ S\left(i,j\right)+\left(j+1\right)-i \right)\qquad\quad\left(\text{since }S\left(i,j+1\right)=S\left(i,j\right)\right)\] \[=\left(S\left(i,j\right),\ S\left(i,j\right)+j-i+1\right)\qquad \quad\left(\text{since }\left(j+1\right)-i=j-i+1\right)\] \[=d_{\rightarrow}\qquad\quad\left(\text{since }d_{\rightarrow}=\left(S\left(i,j\right),\ S\left(i,j\right)+j-i+1\right)\right)\] \[\notin F\qquad\quad\left(\text{since }F\text{ contains none of }d_{\downarrow},d_{\rightarrow},d_{\searrow}\right).\] This contradicts \(\left(i,j+1\right)_{+S}\in F\). This contradiction shows that our assumption was false. Hence, (46) is proved.] 2. We have \[S\left(i,j\right)+1<S\left(i+1,j\right)\qquad\quad\text{if}\ \left(i+1,j\right)\in Y \left(\mu\right).\] (47) [Proof.: Assume that \(\left(i+1,j\right)\in Y\left(\mu\right)\). Since the entries of \(S\) strictly increase top-to-bottom in each column (because \(S\) is a semistandard tableau), we then have \(S\left(i,j\right)<S\left(i+1,j\right)\). Thus, \(S\left(i,j\right)\leq S\left(i+1,j\right)-1\) (since \(S\left(i,j\right)\) and \(S\left(i+1,j\right)\) are integers). In other words, \(S\left(i,j\right)+1\leq S\left(i+1,j\right)\). We must prove that \(S\left(i,j\right)+1<S\left(i+1,j\right)\). Assume the contrary. Thus, \(S\left(i,j\right)+1\geq S\left(i+1,j\right)\). Combining this with \(S\left(i,j\right)+1\leq S\left(i+1,j\right)\), we obtain \(S\left(i,j\right)+1=S\left(i+1,j\right)\). However, from \(\left(i+1,j\right)\in Y\left(\mu\right)\), we obtain \(\left(i+1,j\right)_{+S}\in F\) (since we have \(F=\left\{c_{+S}\ \mid\ c\in Y\left(\mu\right)\right\}\)). 
The definition of \(\left(i+1,j\right)_{+S}\) yields \[\left(i+1,j\right)_{+S} =\left(S\left(i+1,j\right),\ S\left(i+1,j\right)+j-\left(i+1 \right)\right)\] \[=\left(S\left(i,j\right)+1,\ S\left(i,j\right)+1+j-\left(i+1 \right)\right)\] \[\qquad\qquad\qquad\qquad\left(\text{since }S\left(i+1,j\right)=S \left(i,j\right)+1\right)\] \[=\left(S\left(i,j\right)+1,\ S\left(i,j\right)+j-i\right)\qquad \qquad\left(\text{since }1+j-\left(i+1\right)=j-i\right)\] \[=d_{\downarrow}\qquad\qquad\left(\text{since }d_{\downarrow}= \left(S\left(i,j\right)+1,\ S\left(i,j\right)+j-i\right)\right)\] \[\notin F\qquad\quad\left(\text{since }F\text{ contains none of }d_{\downarrow},d_{\rightarrow},d_{\searrow}\right).\] This contradicts \(\left(i+1,j\right)_{+S}\in F\). This contradiction shows that our assumption was false. Hence, (47) is proved.] Now, let us increase the entry \(S\left(i,j\right)\) of the tableau \(S\) by \(1\), while leaving all other entries unchanged. The resulting filling of \(Y\left(\mu\right)\) will be called \(T\). Formally speaking, \(T\) is thus the map from \(Y\left(\mu\right)\) to \(\left\{1,2,3,\ldots\right\}\) given by \[T\left(i,j\right) =S\left(i,j\right)+1\qquad\quad\text{and} \tag{48}\] \[T\left(c\right) =S\left(c\right)\qquad\quad\text{for all }c\in Y\left(\mu\right)\text{ distinct from }\left(i,j\right). \tag{49}\] It is easy to see (using (46) and (47)) that \(T\) is again a semistandard tableau14. Thus, \(T\in\text{SSYT}\left(\mu\right)\). From (48), we easily see that \[\left(i,j\right)_{+T}=\left(\left(i,j\right)_{+S}\right)_{\searrow} \tag{50}\] \({}^{15}\). Since \(\left(i,j\right)_{+S}=d\), we can rewrite this as \(\left(i,j\right)_{+T}=d_{\searrow}\). However, Lemma 6.18 shows that the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(\mathbf{D}\left(S\right)\) by replacing the box \(\left(i,j\right)_{+S}\) by the box \(\left(i,j\right)_{+T}\) (since (49) holds). In other words, the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(F\) by replacing the box \(d\) by the box \(d_{\searrow}\) (since \(\mathbf{D}\left(S\right)=F\) and \(\left(i,j\right)_{+S}=d\) and \(\left(i,j\right)_{+T}=d_{\searrow}\)). In other words, the diagram \(\mathbf{D}\left(T\right)\) can be obtained from \(F\) by the excited move \(\mathbf{e}\) (since the replacement of the box \(d\) by \(d_{\searrow}\) in the diagram \(F\) is precisely the excited move \(\mathbf{e}\) (by the definition of the box \(d\))). However, we know that the diagram \(E\) is also obtained from \(F\) by the excited move \(\mathbf{e}\). Thus, both diagrams \(E\) and \(\mathbf{D}\left(T\right)\) can be obtained from \(F\) by the excited move \(\mathbf{e}\). Since any given excited move has only one possible outcome, we thus conclude that these diagrams \(E\) and \(\mathbf{D}\left(T\right)\) are identical. In other words, \(E=\mathbf{D}\left(T\right)\). Hence, we have found a tableau \(T\in\mathrm{SSYT}\left(\mu\right)\) such that \(E=\mathbf{D}\left(T\right)\). Thus, Claim 2 is proved for our \(E\). This completes the induction step. The induction proof of Claim 2 is now complete. Claim 2 easily yields the following: Finally, we claim that the entries of \(T\) strictly increase top-to-bottom in each column. Indeed, by the construction of \(T\), this will follow from the analogous property of \(S\), as long as we can show that the increased entry \(T\left(i,j\right)\) is still smaller than its neighboring entry \(T\left(i+1,j\right)\) (assuming that \(\left(i+1,j\right)\in Y\left(\mu\right)\)). 
But we can easily show this: If \(\left(i+1,j\right)\in Y\left(\mu\right)\), then \[T\left(i+1,j\right) =S\left(i+1,j\right)\qquad\text{ (by (\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq **Claim 3**: _Let \(E\in\left\{\text{all excitations of }Y\left(\mu\right)\right\}\). Then, there is a tableau \(T\in\text{SSYT}\left(\mu\right)\) such that \(E=\mathbf{D}\left(T\right)\)._ Proof of Claim 3.: We have \(E\in\left\{\text{all excitations of }Y\left(\mu\right)\right\}\). Thus, \(E\) is an excitation of \(Y\left(\mu\right)\). In other words, \(E\) can be obtained from \(Y\left(\mu\right)\) by a finite sequence of excited moves. Consider this sequence, and let \(n\) be its length. Thus, \(E\) can be obtained from \(Y\left(\mu\right)\) by a sequence of \(n\) excited moves. Hence, Claim 2 shows that there is a tableau \(T\in\text{SSYT}\left(\mu\right)\) such that \(E=\mathbf{D}\left(T\right)\). This proves Claim 3. Let us now consider the map \[\text{SSYT}\left(\mu\right) \to\left\{\text{all excitations of }Y\left(\mu\right)\right\},\] \[T \mapsto\mathbf{D}\left(T\right)\] once again. This map is injective (by Claim 1) and surjective (by Claim 3). Hence, it is bijective, i.e., is a bijection. This completes the proof of Lemma 6.20. Proof of Lemma 6.21.: We know that \(\mathcal{E}\left(\lambda/\mu\right)\) is a subset of \(\left\{\text{all excitations of }Y\left(\mu\right)\right\}\), whereas \(\mathcal{F}\left(\lambda/\mu\right)\) is a subset of \(\text{SSYT}\left(\mu\right)\). Lemma 6.20 yields that the map \[\text{SSYT}\left(\mu\right) \to\left\{\text{all excitations of }Y\left(\mu\right)\right\},\] \[T \mapsto\mathbf{D}\left(T\right)\] is well-defined and is a bijection. According to Lemma 6.19 **(c)**, a semistandard tableau \(T\in\text{SSYT}\left(\mu\right)\) belongs to \(\mathcal{F}\left(\lambda/\mu\right)\) if and only if its image \(\mathbf{D}\left(T\right)\) under this map belongs to \(\mathcal{E}\left(\lambda/\mu\right)\). Hence, the subset \(\mathcal{F}\left(\lambda/\mu\right)\) of \(\text{SSYT}\left(\mu\right)\) corresponds precisely to the subset \(\mathcal{E}\left(\lambda/\mu\right)\) of \(\left\{\text{all excitations of }Y\left(\mu\right)\right\}\) under this map. Therefore, restricting this map to \(\mathcal{F}\left(\lambda/\mu\right)\), we obtain a bijection16 Footnote 16: Here is the argument in more detail: Let \(\Phi\) be the map \[\text{SSYT}\left(\mu\right) \to\left\{\text{all excitations of }Y\left(\mu\right)\right\},\] \[T \mapsto\mathbf{D}\left(T\right).\] As we have seen above, this map \(\Phi\) is a bijection. Thus, \(\Phi\) is injective and surjective. Moreover, each \(T\in\mathcal{F}\left(\lambda/\mu\right)\) satisfies \(\mathbf{D}\left(T\right)\in\mathcal{E}\left(\lambda/\mu\right)\) (by Lemma 6.19 **(c)**). Hence, the map \[\Psi:\mathcal{F}\left(\lambda/\mu\right) \to\mathcal{E}\left(\lambda/\mu\right),\] \[T \mapsto\mathbf{D}\left(T\right)\] is well-defined. Consider this map \(\Psi\). Clearly, \(\Psi\) is a restriction of the map \(\Phi\) (since both maps are given by the same formula), and thus is injective (since \(\Phi\) is injective). Let us next show that \(\Psi\) is surjective. Indeed, let \(E\in\mathcal{E}\left(\lambda/\mu\right)\). Then, \(E\in\left\{\text{all excitations of }Y\left(\mu\right)\right\}\). 
Hence, there exists a \(T\in\text{SSYT}\left(\mu\right)\) such that \(E=\Phi\left(T\right)\) (since \(\Phi\) is surjective). Consider this \(T\). Then, \(E=\Phi\left(T\right)=\mathbf{D}\left(T\right)\) (by the definition of \(\Phi\)), so that \(\mathbf{D}\left(T\right)=E\in\mathcal{E}\left(\lambda/\mu\right)\). Hence, Lemma 6.19 **(c)** shows that \(T\in\mathcal{F}\left(\lambda/\mu\right)\). Thus, \(\Psi\left(T\right)\) is well-defined and equals \(\mathbf{D}\left(T\right)\) (by the definition of \(\Psi\)). Therefore, \(\Psi\left(T\right)=\mathbf{D}\left(T\right)=E\). This shows that \(E\) is a value of the map \(\Psi\). This proves Lemma 6.21. Proof of Corollary 6.22.: The definition of \(\mathbf{s}_{\lambda}\left[\mu\right]\) yields \[\mathbf{s}_{\lambda}\left[\mu\right]=\sum_{D\in\mathcal{E}\left(\lambda/\mu \right)}\ \prod_{(i,j)\in D}\left(x_{i}+y_{j}\right)=\sum_{T\in\mathcal{F}\left(\lambda/ \mu\right)}\ \prod_{(i,j)\in\mathbf{D}\left(T\right)}\left(x_{i}+y_{j}\right)\] (here, we have substituted \(\mathbf{D}\left(T\right)\) for \(D\) in the sum, since Lemma 6.21 shows that the map \[\mathcal{F}(\lambda/\mu) \to\mathcal{E}(\lambda/\mu),\] \[T \mapsto\mathbf{D}\left(T\right)\] is a bijection). Hence, \[\mathbf{s}_{\lambda}\left[\mu\right]=\sum_{T\in\mathcal{F}\left(\lambda/\mu \right)}\ \ (here, we have substituted \(j\) for \(i+c\) in the second sum). This proves Lemma 7.4. Proof of Lemma 7.5.: Let \(a\) and \(c\) be integers, and let \(b\) be a positive integer. If \(a\leq 0\), then the claim of the lemma boils down to \(1=(x_{b}+y_{a+b+c-1})\cdot 0+1\) (by (16)), which is obviously true. Thus, from now on, we WLOG assume that \(a>0\). We can rewrite the definition of \(h(a,b,c)\) as follows: \[h(a,b,c) =\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a};\end{subarray}}\ \prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] \[=\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a};\end{subarray}}\ \prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c})+ \sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a};\end{subarray}}\ \prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] (here we have broken up the sum into a part with \(i_{a}=b\) and a part with \(i_{a}\neq b\)). 
In view of \[\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a};\\ i_{a}=b\end{subarray}}\prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] \[=\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a};\\ i_{a}=b\end{subarray}}\underbrace{(x_{i_{a}}+y_{i_{a}+(a-1)+c})}_{=x_{i_{a}}+y _{i_{a}+i_{a}+c-1}}\prod_{j=1}^{a-1}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] \[\qquad\qquad(\text{here, we have split off the $j=a$ factor from the product})\] \[=(x_{b}+y_{a+b+c-1})\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i _{a})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a};\\ i_{a}=b\end{subarray}}\prod_{j=1}^{a-1}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] \[=(x_{b}+y_{a+b+c-1})\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots, i_{a-1})\in[b]^{a-1};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a-1}\leq b\end{subarray}}\prod_{j=1}^{a-1}(x _{i_{j}}+y_{i_{j}+(j-1)+c})\] \[\qquad\qquad\qquad\left(\begin{array}{c}\text{here, we substituted $(i_{1},i_{2},\ldots,i_{a-1},b)$ for $(i_{1},i_{2},\ldots,i_{a})$}\\ \qquad\qquad\qquad\text{in the sum, since the condition $i_{a}=b$}\\ \qquad\qquad\text{uniquely determines the entry $i_{a}$}\end{array}\right)\] \[=(x_{b}+y_{a+b+c-1})\underbrace{\sum_{\begin{subarray}{c}(i_{1},i_{2 },\ldots,i_{a-1})\in[b]^{a-1};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{a-1}\end{subarray}}\prod_{j=1}^{a-1}(x_{i_{j }}+y_{i_{j}+(j-1)+c})}_{=h(a-1,b,c)}\] \[\qquad\qquad\qquad\left(\begin{array}{c}\text{here, we replaced the condition $i_{1}\leq i_{2}\leq\cdots\leq i_{a-1}\leq b$ under}\\ \qquad\qquad\text{the summation sign by the condition $i_{1}\leq i_{2}\leq\cdots\leq i_{a-1}$,}\\ \qquad\qquad\text{which is equivalent because $i_{1},i_{2},\ldots,i_{a-1}\in[b]$}\end{array}\right)\] \[=(x_{b}+y_{a+b+c-1})\,h(a-1,b,c)\] and \[\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{d})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{d};\\ i_{d}\neq b\end{subarray}}\ \prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] \[=\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{d})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{d};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{d};\end{subarray}}\ \prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c}) \left(\begin{array}{c}\text{since the condition }i_{a}\neq b\\ \text{ is equivalent to }i_{a}<b\\ \text{ when }(i_{1},i_{2},\ldots,i_{a})\in[b]^{a}\end{array}\right)\] \[=\sum_{\begin{subarray}{c}(i_{1},i_{2},\ldots,i_{d})\in[b]^{a};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{d}<b\end{subarray}}\ \prod_{j=1}^{a}(x_{i_{j}}+y_{i_{j}+(j-1)+c})\] \[=h(a,b-1,c) (\text{by the definition of }h(a,b-1,c))\,\] we can rewrite this as \[h(a,b,c)=(x_{b}+y_{a+b+c-1})\,h(a-1,b,c)+h(a,b-1,c).\] This proves the lemma. Proof of Lemma 7.6.: First, we observe that the lemma holds for \(b=0\). Indeed, for \(b=0\), it is claiming that \(h\left(a,0,c\right)-h\left(a,0,c-1\right)=\left(y_{a+c-1}-y_{c}\right)h\left( a-1,0,c\right)\). But this follows by comparing \[h\left(a,0,c\right)-h\left(a,0,c-1\right) =[a=0]-[a=0]\] (by (17)) \[=0\] with \[\left(y_{a+c-1}-y_{c}\right)h\left(a-1,0,c\right) =\underbrace{\left(y_{a+c-1}-y_{c}\right)}_{=0\text{ if }a=1} \underbrace{\left[a-1=0\right]}_{=0\text{ if }a\neq 1}\] (by (17)) \[=0.\] Thus, the lemma is proved for \(b=0\). Furthermore, the lemma clearly holds for \(a<0\), since all three h-polynomials are \(0\) in this case. Thus, we can WLOG assume that \(a\geq 0\). Hence, \(a+b\geq 0\) (since both \(a\) and \(b\) are \(\geq 0\)). We shall now prove the lemma by induction on \(a+b\). 
The _base case_ (\(a+b=0\)) is clear, since \(a+b=0\) entails \(b=0\) (in light of \(a\geq 0\)), but we already have proved the lemma for \(b=0\). For the _induction step_, we fix a positive integer \(N\). Assume (as the induction hypothesis) that the lemma holds whenever \(a+b=N-1\). We must now prove that the lemma holds whenever \(a+b=N\). Thus, we fix integers \(a,b\geq 0\) satisfying \(a+b=N\). Our goal is to prove the equality (19). We WLOG assume that \(b\neq 0\), since we already have proved the lemma for \(b=0\). Thus, \(b\geq 1\) (since \(b\) is a nonnegative integer), so that \(b-1\geq 0\). Hence, by the induction hypothesis, we can apply the lemma to \(b-1\) instead of \(b\) (since \(a+(b-1)=\underbrace{a+b}_{=N}-1=N-1\)). As a result, we obtain \[h\left(a,b-1,c\right)-h\left(a,b-1,c-1\right)=\left(y_{a+b+c-2}-y_{c}\right) \cdot h\left(a-1,b-1,c\right). \tag{51}\] Furthermore, by the induction hypothesis, we can apply the lemma to \(a-1\) instead of \(a\) (since \((a-1)+b=\underbrace{a+b}_{=N}-1=N-1\)). As a result, we obtain \[h\left(a-1,b,c\right)-h\left(a-1,b,c-1\right)=\left(y_{a+b+c-2}-y_{c}\right) \cdot h\left(a-2,b,c\right). \tag{52}\] On the other hand, Lemma 7.5 yields \[h\left(a,b,c\right)=\left(x_{b}+y_{a+b+c-1}\right)\cdot h\left(a-1,b,c\right) +h\left(a,b-1,c\right). \tag{53}\] Furthermore, Lemma 7.5 (applied to \(a-1\) instead of \(a\)) yields \[h\left(a-1,b,c\right)=\left(x_{b}+y_{a+b+c-2}\right)\cdot h\left(a-2,b,c\right) +h\left(a-1,b-1,c\right). \tag{54}\] Finally, Lemma 7.5 (applied to \(c-1\) instead of \(c\)) yields \[h\left(a,b,c-1\right)=\left(x_{b}+y_{a+b+c-2}\right)\cdot h\left(a-1,b,c-1 \right)+h\left(a,b-1,c-1\right). \tag{55}\] Let us set \[t :=h\left(a,b,c\right),\] \[p :=h\left(a-1,b,c\right), q :=h\left(a,b-1,c\right), r :=h\left(a,b,c-1\right),\] \[u :=h\left(a,b-1,c-1\right), v :=h\left(a-1,b,c-1\right), w :=h\left(a-1,b-1,c\right),\] \[s :=h\left(a-2,b,c\right),\] \[x :=x_{b}, y :=y_{c}, y^{\prime} :=y_{a+b+c-1}, y^{\prime\prime} :=y_{a+b+c-2}.\] Then, the equalities (51), (52), (53), (54) and (55) can be rewritten as follows: \[q-u =\left(y^{\prime\prime}-y\right)\cdot w; \tag{56}\] \[p-v =\left(y^{\prime\prime}-y\right)\cdot s;\] (57) \[t =\left(x+y^{\prime}\right)\cdot p+q;\] (58) \[p =\left(x+y^{\prime\prime}\right)\cdot s+w;\] (59) \[r =\left(x+y^{\prime\prime}\right)\cdot v+u. \tag{60}\] Recall that our goal is to prove the equality (19). 
In view of the notations we have just introduced, we can rewrite this equality as \[t-r=\left(y^{\prime}-y\right)\cdot p.\] But it is not hard to derive this equality from the five equalities (56)-(60): Namely, subtracting (60) from (58), we obtain \[t-r=\left(\left(x+y^{\prime}\right)\cdot p+q\right)-\left(\left(x+y^{\prime\prime}\right)\cdot v+u\right)=\left(x+y^{\prime}\right)\cdot p-\left(x+y^{\prime\prime}\right)\cdot v+\underbrace{\left(q-u\right)}_{\substack{=\left(y^{\prime\prime}-y\right)\cdot w\\ \text{(by (56))}}}=\left(x+y^{\prime}\right)\cdot p-\left(x+y^{\prime\prime}\right)\cdot v+\left(y^{\prime\prime}-y\right)\cdot w.\] Since (57) yields \(v=p-\left(y^{\prime\prime}-y\right)\cdot s\), we can rewrite this further as \[t-r=\left(x+y^{\prime}\right)\cdot p-\left(x+y^{\prime\prime}\right)\cdot\left(p-\left(y^{\prime\prime}-y\right)\cdot s\right)+\left(y^{\prime\prime}-y\right)\cdot w=\left(y^{\prime}-y^{\prime\prime}\right)\cdot p+\left(y^{\prime\prime}-y\right)\cdot\underbrace{\left(\left(x+y^{\prime\prime}\right)\cdot s+w\right)}_{\substack{=p\\ \text{(by (59))}}}=\left(y^{\prime}-y^{\prime\prime}\right)\cdot p+\left(y^{\prime\prime}-y\right)\cdot p=\left(y^{\prime}-y\right)\cdot p.\] Thus, \(t-r=\left(y^{\prime}-y\right)\cdot p\). As explained above, this is precisely the equality (19) that we set out to prove. This completes the induction step, and thus Lemma 7.6 is proved by induction.
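Both Lemma 7.5 and the identity (19) just established are identities between explicitly defined polynomials, so they can be spot-checked mechanically. The following sketch is only a sanity check: it implements \(h\left(a,b,c\right)\) directly as the sum over weakly increasing tuples in \([b]^{a}\) recalled in the proof of Lemma 7.5, and verifies both recursions symbolically on a few small inputs using `sympy` (all names are ad hoc).

```python
# Illustrative sanity check (not part of the proof) of the recursions:
# Lemma 7.5:  h(a,b,c) = (x_b + y_{a+b+c-1}) * h(a-1,b,c) + h(a,b-1,c)   (b >= 1),
# (19):       h(a,b,c) - h(a,b,c-1) = (y_{a+b+c-1} - y_c) * h(a-1,b,c),
# with h defined by brute force as the sum over weakly increasing tuples in [b]^a.
from itertools import combinations_with_replacement
from sympy import IndexedBase, expand

x, y = IndexedBase('x'), IndexedBase('y')

def h(a, b, c):
    if a < 0:
        return 0
    total = 0
    for tup in combinations_with_replacement(range(1, b + 1), a):
        term = 1
        for j, i_j in enumerate(tup, start=1):
            term *= x[i_j] + y[i_j + (j - 1) + c]
        total += term
    return total    # a = 0 contributes the single empty product, i.e. 1

for (a, b, c) in [(1, 2, 0), (2, 2, 1), (2, 3, -1), (3, 2, 2)]:
    assert expand(h(a, b, c)
                  - (x[b] + y[a + b + c - 1]) * h(a - 1, b, c)
                  - h(a, b - 1, c)) == 0
    assert expand(h(a, b, c) - h(a, b, c - 1)
                  - (y[a + b + c - 1] - y[c]) * h(a - 1, b, c)) == 0
print("recursions verified on the sample inputs")
```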
Proof of Corollary 7.8.: Lemma 7.6 (applied to \(a+1\) and \(c+1\) instead of \(a\) and \(c\)) yields \[h\left(a+1,b,c+1\right)-h\left(a+1,b,c+1-1\right)=\left(y_{\left(a+1\right)+b+\left(c+1\right)-1}-y_{c+1}\right)\cdot h\left(a+1-1,b,c+1\right).\] In view of \(a+1-1=a\) and \(c+1-1=c\) and \(\left(a+1\right)+b+\left(c+1\right)-1=a+b+c+1\), we can simplify this to \[h\left(a+1,b,c+1\right)-h\left(a+1,b,c\right)=\left(y_{a+b+c+1}-y_{c+1}\right)\cdot h\left(a,b,c+1\right).\] In other words, \[h\left(a+1,b,c+1\right)=h\left(a+1,b,c\right)+\left(y_{a+b+c+1}-y_{c+1}\right)\cdot h\left(a,b,c+1\right).\] This proves Corollary 7.8.

### To Section 8

Proof of Lemma 8.7.: **(a)** Let \(\sigma\in S_{n}\). We must prove the equality (22). This equality easily boils down to \(0=0\) when \(\sigma\) is not legitimate17 Footnote 17: _Proof._ Assume that \(\sigma\) is not legitimate. Then, there are no \(\sigma\)-arrays (by Definition 8.5 **(c)**). Hence, the sum \(\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right)\) is an empty sum and thus equals \(0\). On the other hand, at least one of the factors \(h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right]\) is \(0\) (again because \(\sigma\) is not legitimate). Therefore, this whole product is \(0\). In other words, \(\prod\limits_{i=1}^{n}h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right]=0\). Since we also know that \(\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right)\) equals \(0\), we thus conclude that the equality (22) boils down to \(0=0\). Thus, for the rest of this proof, we WLOG assume that \(\sigma\) is legitimate. For any \(\sigma\)-array \(T\), we have \[w\left(T\right)=\prod\limits_{\left(i,j\right)\in P\left(\sigma\right)}u_{T\left(i,j\right),\ j-i}\] (by the definition of \(w\left(T\right)\)). Thus, \[\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right)=\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}\ \prod\limits_{\left(i,j\right)\in P\left(\sigma\right)}u_{T\left(i,j\right),\ j-i}. \tag{61}\]
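The equality (22) being proved can likewise be verified by brute force on small data. The sketch below is illustrative only: for one concrete choice of \(\mu\), \(\mathbf{b}\) and \(\sigma\) (given in one-line notation), it enumerates all \(\mathbf{b}\)-flagged \(\sigma\)-arrays row by row, sums their weights \(w\left(T\right)\), and compares the result with the product \(\prod_{i=1}^{n}h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right]\), where \(h_{b;\,a}\left[i\right]\) is the sum, over weakly increasing tuples \(\left(t_{1},\ldots,t_{a}\right)\in[b]^{a}\), of \(\prod_{j}u_{t_{j},\,j-i}\) (as used later in this proof). All names are ad hoc.

```python
# Illustrative brute-force check (ad hoc names) of the identity (22) for one sigma.
from itertools import combinations_with_replacement, product
from sympy import IndexedBase, expand

u = IndexedBase('u')

def prod_(factors):
    out = 1
    for f in factors:
        out *= f
    return out

def h_bracket(b, a, i):
    """h_{b; a}[i] = sum over weakly increasing tuples in [b]^a of prod_j u_{t_j, j-i}."""
    return sum(prod_(u[t, j - i] for j, t in enumerate(tup, start=1))
               for tup in combinations_with_replacement(range(1, b + 1), a))

# Example data: mu = (2, 1), n = 2, flagging b = (2, 2), sigma = (2, 1) in one-line form.
mu, b, sigma = (2, 1), (2, 2), (2, 1)
n = len(mu)
q = [mu[sigma[i - 1] - 1] - sigma[i - 1] + i for i in range(1, n + 1)]   # row lengths of P(sigma)

# Left side: enumerate all b-flagged sigma-arrays row by row and sum their weights w(T).
rows = [list(combinations_with_replacement(range(1, b[sigma[i - 1] - 1] + 1), q[i - 1]))
        for i in range(1, n + 1)]
lhs = sum(prod_(u[t, j - i]
                for i, row in enumerate(T, start=1)
                for j, t in enumerate(row, start=1))
          for T in product(*rows))

# Right side: the product of h-brackets from the right hand side of (22).
rhs = prod_(h_bracket(b[sigma[i - 1] - 1], q[i - 1], i) for i in range(1, n + 1))

assert expand(lhs - rhs) == 0
print("identity (22) verified for this sigma")
```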
For each \(i\in[n]\), the \(i\)-th row of the diagram \(P\left(\sigma\right)\) has \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\) many boxes (by the definition of \(P\left(\sigma\right)\)). In other words, for each \(i\in[n]\), the \(i\)-th row of the diagram \(P\left(\sigma\right)\) has \(q_{i}\) many boxes (since \(q_{i}=\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\)). These boxes occupy the columns \(1,2,\ldots,q_{i}\) (since the rows of \(P\left(\sigma\right)\) are left-aligned). Hence, these boxes are \[\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,q_{i}\right).\] Thus, altogether, the boxes of \(P\left(\sigma\right)\) are \[\left(1,1\right),\ \left(1,2\right),\ \ldots,\ \left(1,q_{1}\right),\] \[\left(2,1\right),\ \left(2,2\right),\ \ldots,\ \left(2,q_{2}\right),\] \[\ldots,\] \[\left(n,1\right),\ \left(n,2\right),\ \ldots,\ \left(n,q_{n}\right).\] Therefore, the product sign \(\prod\limits_{\left(i,j\right)\in P\left(\sigma\right)}\) can be rewritten as \(\prod\limits_{i=1}^{n}\ \ \prod\limits_{j=1}^{q_{i}}\). Hence, the equality (61) rewrites as \[\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right)=\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}\ \ \prod\limits_{i=1}^{n}\ \ \prod\limits_{j=1}^{q_{i}}u_{T\left(i,j\right),\ j-i}. \tag{62}\] Now, recall that for each \(i\in[n]\), the \(i\)-th row of the diagram \(P\left(\sigma\right)\) has \(q_{i}\) many boxes, and these boxes occupy the columns \(1,2,\ldots,q_{i}\). In a \(\sigma\)-array, these \(q_{i}\) boxes have to be filled with positive integers \(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\) that satisfy \(a_{i,1}\leq a_{i,2}\leq\cdots\leq a_{i,q_{i}}\) (since the entries of a \(\sigma\)-array must be weakly increasing along each row). Moreover, in a \(\mathbf{b}\)-flagged \(\sigma\)-array, these entries \(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\) must belong to the set \(\left[b_{\sigma\left(i\right)}\right]\) (since every entry of \(T\) in the \(i\)-th row must be \(\leq b_{\sigma\left(i\right)}\)). Thus, a \(\mathbf{b}\)-flagged \(\sigma\)-array is simply a way to fill the \(i\)-th row of \(P\left(\sigma\right)\) with positive integers \(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\in\left[b_{\sigma\left(i\right)}\right]\) satisfying \(a_{i,1}\leq a_{i,2}\leq\cdots\leq a_{i,q_{i}}\) for each \(i\in[n]\). If we denote our \(\sigma\)-array by \(T\), then the latter integers \(a_{i,j}\) are simply its respective entries \(T\left(i,j\right)\).
Thus,
\[\sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}\ \ \prod\limits_{i=1}^{n}\ \ \prod\limits_{j=1}^{q_{i}}u_{T\left(i,j\right),\ j-i}\]
\[=\sum\limits_{\begin{subarray}{c}\left(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\right)\in\left[b_{\sigma\left(i\right)}\right]^{q_{i}}\text{ for each }i\in[n];\\ a_{i,1}\leq a_{i,2}\leq\cdots\leq a_{i,q_{i}}\text{ for each }i\in[n]\end{subarray}}\ \ \prod\limits_{i=1}^{n}\ \ \prod\limits_{j=1}^{q_{i}}u_{a_{i,j},\ j-i}\]
\[=\prod\limits_{i=1}^{n}\ \ \sum\limits_{\begin{subarray}{c}\left(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\right)\in\left[b_{\sigma\left(i\right)}\right]^{q_{i}};\\ a_{i,1}\leq a_{i,2}\leq\cdots\leq a_{i,q_{i}}\end{subarray}}\ \prod\limits_{j=1}^{q_{i}}u_{a_{i,j},\ j-i} \tag{63}\]
(by the product rule).18

Footnote 18: Here is a more rigorous way of deriving this equality: Consider the map
\[\left\{\mathbf{b}\text{-flagged }\sigma\text{-arrays}\right\}\to\prod_{i=1}^{n}\left\{\left(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\right)\in\left[b_{\sigma\left(i\right)}\right]^{q_{i}}\ \ \middle|\ \ a_{i,1}\leq a_{i,2}\leq\cdots\leq a_{i,q_{i}}\right\}\]
that sends each \(\mathbf{b}\)-flagged \(\sigma\)-array \(T\) to the \(n\)-tuple
\[\left(\left(T\left(1,1\right),\ T\left(1,2\right),\ \ldots,\ T\left(1,q_{1}\right)\right),\ \left(T\left(2,1\right),\ T\left(2,2\right),\ \ldots,\ T\left(2,q_{2}\right)\right),\ \ldots,\ \left(T\left(n,1\right),\ T\left(n,2\right),\ \ldots,\ T\left(n,q_{n}\right)\right)\right).\]
This map is a bijection.

However, for each \(i\in[n]\), we have
\[\sum_{\begin{subarray}{c}\left(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\right)\in\left[b_{\sigma\left(i\right)}\right]^{q_{i}};\\ a_{i,1}\leq a_{i,2}\leq\cdots\leq a_{i,q_{i}}\end{subarray}}\prod_{j=1}^{q_{i}}u_{a_{i,j},\ j-i}\]
\[=\sum_{\begin{subarray}{c}\left(t_{1},t_{2},\ldots,t_{q_{i}}\right)\in\left[b_{\sigma\left(i\right)}\right]^{q_{i}};\\ t_{1}\leq t_{2}\leq\cdots\leq t_{q_{i}}\end{subarray}}\prod_{j=1}^{q_{i}}u_{t_{j},\ j-i}\qquad\quad\left(\begin{array}{c}\text{here, we have renamed}\\ \text{the index }\left(a_{i,1},a_{i,2},\ldots,a_{i,q_{i}}\right)\\ \text{as }\left(t_{1},t_{2},\ldots,t_{q_{i}}\right)\end{array}\right)\]
\[=h_{b_{\sigma\left(i\right)};\ q_{i}}\left[i\right]\qquad\quad\left(\text{by the definition of }h_{b_{\sigma\left(i\right)};\ q_{i}}\left[i\right]\right)\]
\[=h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right]\qquad\quad\left(\text{since }q_{i}=\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\right).\]
Thus, we can rewrite (63) as
\[\sum_{\begin{subarray}{c}T\text{ is a }\mathbf{b}\text{-flagged}\\ \sigma\text{-array}\end{subarray}}\ \ \prod_{i=1}^{n}\ \ \prod_{j=1}^{q_{i}}u_{T\left(i,j\right),\ j-i}=\prod_{i=1}^{n}h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right].\]
In view of (62), this rewrites as
\[\sum_{\begin{subarray}{c}T\text{ is a }\mathbf{b}\text{-flagged}\\ \sigma\text{-array}\end{subarray}}w\left(T\right)=\prod_{i=1}^{n}h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right].\]
This proves Lemma 8.7**(a)**.

**(b)** Every square matrix \(A\) satisfies \(\det\left(A^{T}\right)=\det A\). In other words, every square matrix \(\left(a_{i,j}\right)_{i,j\in[n]}\) satisfies \(\det\left(a_{j,i}\right)_{i,j\in[n]}=\det\left(a_{i,j}\right)_{i,j\in[n]}\).
Applying this to \(a_{i,j}=h_{b_{j};\ \mu_{j}-j+i}\left[i\right]\), we obtain
\[\det\left(h_{b_{i};\ \mu_{i}-i+j}\left[j\right]\right)_{i,j\in[n]}=\det\left(h_{b_{j};\ \mu_{j}-j+i}\left[i\right]\right)_{i,j\in[n]}\]
\[=\sum_{\sigma\in S_{n}}\left(-1\right)^{\sigma}\prod_{i=1}^{n}h_{b_{\sigma\left(i\right)};\ \mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i}\left[i\right]\qquad\quad\left(\text{by the Leibniz formula for the determinant}\right)\]
\[=\sum_{\sigma\in S_{n}}\left(-1\right)^{\sigma}\sum_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right)\qquad\quad\left(\text{by Lemma 8.7 (a)}\right).\]
This proves Lemma 8.7**(b)**.

Proof of Lemma 8.9.: Lemma 8.7 yields
\[\det\left(h_{b_{i};\ \mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}=\sum_{\sigma\in S_{n}}\left(-1\right)^{\sigma}\sum_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}w\left(T\right)\]
\[=\sum_{\sigma\in S_{n}}\ \sum_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}\left(-1\right)^{\sigma}w\left(T\right). \tag{64}\]
However, the \(\mathbf{b}\)-flagged twisted arrays are precisely the pairs \(\left(\sigma,T\right)\) in which \(\sigma\in S_{n}\) and \(T\) is a \(\mathbf{b}\)-flagged \(\sigma\)-array. Thus, the summation sign \(\sum\limits_{\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged twisted array}}\) can be rewritten as the nested summation \(\sum\limits_{\sigma\in S_{n}}\ \sum\limits_{T\text{ is a }\mathbf{b}\text{-flagged }\sigma\text{-array}}\). Hence,
\[\sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right)=\sum_{\sigma\in S_{n}}\ \sum_{T\text{ is a }\mathbf{b}\text{-flagged}\atop\sigma\text{-array}}\left(-1\right)^{\sigma}w\left(T\right).\]
Comparing this with (64), we obtain
\[\det\left(h_{b_{i};\ \mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}=\sum_{\begin{subarray}{c}\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}\end{subarray}}\left(-1\right)^{\sigma}w\left(T\right).\]
Thus, Lemma 8.9 is proved.

Proof of Lemma 8.13.: We know that \(\left(\sigma,T\right)\) is a twisted array. In other words, we have \(\sigma\in S_{n}\), and \(T\) is a \(\sigma\)-array.

**(a)** Assume that \(\sigma\neq\operatorname{id}\). Then, there exists some \(i\in\left\{1,2,\ldots,n-1\right\}\) such that \(\sigma\left(i\right)>\sigma\left(i+1\right)\) (since otherwise, we would have \(\sigma\left(1\right)\leq\sigma\left(2\right)\leq\cdots\leq\sigma\left(n\right)\), which would readily lead to \(\sigma=\operatorname{id}\)). Consider this \(i\).

We have \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\cdots\) (since \(\mu\) is a partition). Subtracting the chain of inequalities \(1<2<3<\cdots\) from \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\cdots\), we obtain \(\mu_{1}-1>\mu_{2}-2>\mu_{3}-3>\cdots\). In other words, if \(u\) and \(v\) are two positive integers satisfying \(u<v\), then \(\mu_{u}-u>\mu_{v}-v\). Applying this to \(u=\sigma\left(i+1\right)\) and \(v=\sigma\left(i\right)\), we obtain \(\mu_{\sigma\left(i+1\right)}-\sigma\left(i+1\right)>\mu_{\sigma\left(i\right)}-\sigma\left(i\right)\) (since \(\sigma\left(i+1\right)<\sigma\left(i\right)\)).
Hence,
\[\underbrace{\mu_{\sigma\left(i+1\right)}-\sigma\left(i+1\right)}_{>\mu_{\sigma\left(i\right)}-\sigma\left(i\right)}+\left(i+1\right)>\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+\underbrace{\left(i+1\right)}_{>i}>\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i.\]
In other words, the \(\left(i+1\right)\)-th row of the diagram \(P\left(\sigma\right)\) is longer than the \(i\)-th row (since Definition 8.5**(b)** shows that the \(\left(i+1\right)\)-th row has length \(\mu_{\sigma\left(i+1\right)}-\sigma\left(i+1\right)+\left(i+1\right)\), while the \(i\)-th has length \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\)). Thus, the easternmost box of the \(\left(i+1\right)\)-th row of the diagram \(P\left(\sigma\right)\) is an outer failure of \(\left(\sigma,T\right)\) (because its northern neighbor lies too far east to be contained in the \(i\)-th row of \(P\left(\sigma\right)\)). Therefore, \(\left(\sigma,T\right)\) has an outer failure. This proves Lemma 8.13**(a)**.

**(b)** Assume that \(\sigma=\operatorname{id}\) and \(T\notin\operatorname{SSYT}\left(\mu\right)\). For each \(i\in\left[n\right]\), we have \(\sigma\left(i\right)=i\) (since \(\sigma=\operatorname{id}\)) and thus
\[\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i=\mu_{i}-i+i=\mu_{i}.\]
Hence, the diagram \(P\left(\sigma\right)\) is precisely the Young diagram \(Y\left(\mu\right)\) (just compare their definitions). Therefore, \(T\) is a filling of \(Y\left(\mu\right)\) (since \(T\) is a filling of \(P\left(\sigma\right)\)) with positive integers that weakly increase left-to-right along each row (since \(T\) is a \(\sigma\)-array). If the entries of \(T\) also strictly increased top-to-bottom down each column, then it would follow that \(T\) is a semistandard tableau, whence \(T\in\operatorname{SSYT}\left(\mu\right)\); but this would contradict \(T\notin\operatorname{SSYT}\left(\mu\right)\). Hence, the entries of \(T\) cannot strictly increase top-to-bottom down each column. Thus, there must be at least one column of \(T\) in which the entries do not strictly increase top-to-bottom. In other words, there must be at least one column of \(T\) that has two adjacent boxes \(\left(i-1,j\right)\) and \(\left(i,j\right)\) satisfying \(T\left(i-1,j\right)\geq T\left(i,j\right)\). Consider these two boxes. Then, \(\left(i,j\right)\) is an inner failure of \(\left(\sigma,T\right)\) (by the definition of an inner failure). Therefore, \(\left(\sigma,T\right)\) has an inner failure. This proves Lemma 8.13**(b)**.

**(c)** Assume that \(\left(\sigma,T\right)\) is unfailing. Thus, \(\left(\sigma,T\right)\) has no failures. If we had \(\sigma\neq\operatorname{id}\), then \(\left(\sigma,T\right)\) would have an outer failure (by Lemma 8.13**(a)**), which would contradict the fact that \(\left(\sigma,T\right)\) has no failures. Thus, we cannot have \(\sigma\neq\operatorname{id}\). Hence, we have \(\sigma=\operatorname{id}\). If we had \(T\notin\operatorname{SSYT}\left(\mu\right)\), then \(\left(\sigma,T\right)\) would have an inner failure (by Lemma 8.13**(b)**), which would contradict the fact that \(\left(\sigma,T\right)\) has no failures. Thus, we cannot have \(T\notin\operatorname{SSYT}\left(\mu\right)\). Hence, we have \(T\in\operatorname{SSYT}\left(\mu\right)\). This completes the proof of Lemma 8.13**(c)**.

Proof of Lemma 8.14.: Let \(\operatorname{id}\) denote the identity permutation in \(S_{n}\).
Then, the diagram \(P\left(\operatorname{id}\right)\) is a left-aligned diagram whose \(i\)-th row has \[\mu_{\operatorname{id}\left(i\right)}-\operatorname{id}\left(i \right)+i =\mu_{i}-i+i\qquad\quad\left(\text{since }\operatorname{id}\left(i\right)=i\right)\] \[=\mu_{i}\] boxes for each \(i\in\left[n\right]\). But this precisely describes the Young diagram \(Y\left(\mu\right)\). Hence, \(P\left(\operatorname{id}\right)=Y\left(\mu\right)\). If \(T\in\operatorname{SSYT}\left(\mu\right)\) is a semistandard tableau, then the pair \(\left(\operatorname{id},T\right)\) is clearly a twisted array (since \(T\) is a filling of the diagram \(Y\left(\mu\right)=P\left(\operatorname{id}\right)\), and its entries weakly increase left-to-right along each row). Moreover, this twisted array \(\left(\operatorname{id},T\right)\) has no outer failures (since its shape \(P\left(\operatorname{id}\right)=Y\left(\mu\right)\) is the Young diagram of a partition) and no inner failures (since \(T\) is semistandard, so that the entries of \(T\) strictly increase down each column). In other words, this twisted array \(\left(\operatorname{id},T\right)\) is unfailing. Thus, we obtain a map \[\Phi:\operatorname{SSYT}\left(\mu\right) \to\left\{\text{unfailing twisted arrays}\right\},\] \[T \mapsto\left(\operatorname{id},T\right).\] This map \(\Phi\) is injective (obviously) and surjective (by Lemma 8.13 **(c)**)19 Footnote 19: Here is the proof of the surjectivity of \(\Phi\) in some more detail: Let \((\sigma,T)\) be an unfailing twisted array. Then, Lemma 8.13 **(c)** shows that \(\sigma=\operatorname{id}\) and \(T\in\operatorname{SSYT}\left(\mu\right)\). Hence, \(\Phi\left(T\right)=\left(\operatorname{id},T\right)=\left(\sigma,T\right)\) (since \(\operatorname{id}=\sigma\)). Therefore, \((\sigma,T)=\Phi\left(T\right)\). Thus, \((\sigma,T)\) is an image under the map \(\Phi\). Forget that we fixed \((\sigma,T)\). We thus have shown that each unfailing twisted array \((\sigma,T)\) is an image under the map \(\Phi\). In other words, \(\Phi\) is surjective. Moreover, a semistandard tableau \(T\in\operatorname{SSYT}\left(\mu\right)\) is \(\mathbf{b}\)-flagged if and only if the corresponding twisted array \(\Phi\left(T\right)=\left(\operatorname{id},T\right)\) is \(\mathbf{b}\)-flagged20 Footnote 20: _Proof._ Let \(T\in\operatorname{SSYT}\left(\mu\right)\) be a semistandard tableau. Then, the tableau \(T\) is \(\mathbf{b}\)-flagged if and only if it satisfies the condition \[\left(T\left(i,j\right)\leq b_{i}\hskip 28.452756pt\text{for all }\left(i,j\right)\in Y \left(\mu\right)\right). \tag{65}\] On the other hand, the twisted array \(\left(\operatorname{id},T\right)\) is \(\mathbf{b}\)-flagged if and only if the \(\operatorname{id}\)-array \(T\) is \(\mathbf{b}\)-flagged, i.e., if and only if it satisfies the condition \[\left(T\left(i,j\right)\leq b_{\operatorname{id}\left(i\right)}\hskip 28.452756pt \text{for all }\left(i,j\right)\in P\left(\operatorname{id}\right)\right). \tag{66}\] But the two conditions (65) and (66) are equivalent (since \(Y\left(\mu\right)=P\left(\operatorname{id}\right)\) and \(b_{i}=b_{\operatorname{id}\left(i\right)}\) for each \(i\)). Thus, the tableau \(T\) is \(\mathbf{b}\)-flagged if and only if \(\left(\operatorname{id},T\right)\) is \(\mathbf{b}\)-flagged. In other words, the tableau \(T\) is \(\mathbf{b}\)-flagged if and only if \(\Phi\left(T\right)\) is \(\mathbf{b}\)-flagged (since \(\Phi\left(T\right)=\left(\operatorname{id},T\right)\)). 
Hence, the map \(\Phi\) restricts to a bijection between the \(\mathbf{b}\)-flagged semistandard tableaux \(T\in\operatorname{SSYT}\left(\mu\right)\) and the \(\mathbf{b}\)-flagged unfailing twisted arrays. This proves Lemma 8.14.

Proof of Lemma 8.17.: Remark 8.12 shows that \(i>1\) (since \(\left(i,j\right)\) is a failure of \(\left(\sigma,T\right)\)), but we also have \(i\in\left[n\right]\) (since each box of \(P\left(\sigma\right)\) has the form \(\left(p,q\right)\) with \(p\in\left[n\right]\)). Thus, \(i-1\in\left[n\right]\).

Definition 8.15 yields \(\sigma^{\prime}=\sigma\circ s_{i-1}\). Thus, the permutation \(\sigma^{\prime}\) is obtained from \(\sigma\) by swapping the values at \(i-1\) and \(i\). In other words, we have
\[\sigma^{\prime}\left(i-1\right)=\sigma\left(i\right)\qquad\text{ and }\qquad\sigma^{\prime}\left(i\right)=\sigma\left(i-1\right)\]
and
\[\sigma^{\prime}\left(k\right)=\sigma\left(k\right)\qquad\text{ for each }k\in\left[n\right]\setminus\left\{i,i-1\right\}. \tag{67}\]
We set
\[\rho_{k}:=\mu_{\sigma\left(k\right)}-\sigma\left(k\right)+k\qquad\text{ and }\qquad\rho_{k}^{\prime}:=\mu_{\sigma^{\prime}\left(k\right)}-\sigma^{\prime}\left(k\right)+k\]
for each \(k\in\left[n\right]\). Now, we claim the following:

_Claim 1_: We have \(\rho_{k}^{\prime}=\rho_{k}\) for each \(k\in\left[n\right]\setminus\left\{i,i-1\right\}\).

_Claim 2_: We have \(\rho_{i}^{\prime}=\rho_{i-1}+1\).

_Claim 3_: We have \(\rho_{i-1}^{\prime}=\rho_{i}-1\).

Proof of Claim 1.: Let \(k\in\left[n\right]\setminus\left\{i,i-1\right\}\). Then, (67) yields \(\sigma^{\prime}\left(k\right)=\sigma\left(k\right)\). Now, recall that \(\rho_{k}^{\prime}=\mu_{\sigma^{\prime}\left(k\right)}-\sigma^{\prime}\left(k\right)+k\) and \(\rho_{k}=\mu_{\sigma\left(k\right)}-\sigma\left(k\right)+k\). The right hand sides of these two equalities are identical (since \(\sigma^{\prime}\left(k\right)=\sigma\left(k\right)\)). Hence, their left hand sides are identical as well. In other words, \(\rho_{k}^{\prime}=\rho_{k}\). This proves Claim 1.

Proof of Claim 2.: The definition of \(\rho_{i-1}\) yields
\[\rho_{i-1}=\mu_{\sigma\left(i-1\right)}-\sigma\left(i-1\right)+\left(i-1\right).\]
The definition of \(\rho_{i}^{\prime}\) yields
\[\rho_{i}^{\prime}=\mu_{\sigma^{\prime}\left(i\right)}-\sigma^{\prime}\left(i\right)+i\]
\[=\mu_{\sigma\left(i-1\right)}-\sigma\left(i-1\right)+\underbrace{i}_{=\left(i-1\right)+1}\qquad\quad\left(\text{since }\sigma^{\prime}\left(i\right)=\sigma\left(i-1\right)\right)\]
\[=\underbrace{\mu_{\sigma\left(i-1\right)}-\sigma\left(i-1\right)+\left(i-1\right)}_{=\rho_{i-1}}+1\]
\[=\rho_{i-1}+1.\]
Thus, Claim 2 is proved.

Proof of Claim 3.: Similar to the above proof of Claim 2.

Now, \(T\) is a \(\sigma\)-array (since \(\left(\sigma,T\right)\) is a twisted array). Hence, the permutation \(\sigma\) is legitimate (since \(\sigma\)-arrays only exist when \(\sigma\) is legitimate). In other words, each \(k\in\left[n\right]\) satisfies \(\mu_{\sigma\left(k\right)}-\sigma\left(k\right)+k\geq 0\) (by the definition of "legitimate"). In other words, each \(k\in\left[n\right]\) satisfies
\[\rho_{k}\geq 0 \tag{68}\]
(since \(\rho_{k}\) was defined to be \(\mu_{\sigma\left(k\right)}-\sigma\left(k\right)+k\)).

Recall that the \(k\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\mu_{\sigma\left(k\right)}-\sigma\left(k\right)+k\) boxes for each \(k\in\left[n\right]\) (by the definition of \(P\left(\sigma\right)\)).
In other words, the \(k\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{k}\) boxes for each \(k\in\left[n\right]\) (since \(\rho_{k}=\mu_{\sigma\left(k\right)}-\sigma\left(k\right)+k\)). In particular, the \(i\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{i}\) boxes. Since this \(i\)-th row contains at least one box (namely, the bottommost leftmost failure \(\left(i,j\right)\)), we thus obtain \(\rho_{i}\geq 1\). Claim 3 yields \(\rho_{i-1}^{\prime}=\rho_{i}-1\geq 0\) (since \(\rho_{i}\geq 1\)). Claim 2 yields \(\rho_{i}^{\prime}=\rho_{i-1}+1\geq\rho_{i-1}\geq 0\) (by (68)). Claim 1 yields that every \(k\in\left[n\right]\setminus\left\{i,i-1\right\}\) satisfies \(\rho_{k}^{\prime}=\rho_{k}\geq 0\) (by (68)). Combining the preceding three sentences, we obtain \[\rho_{k}^{\prime}\geq 0\qquad\text{ for each }k\in\left[n\right]. \tag{69}\] In other words, \(\mu_{\sigma^{\prime}\left(k\right)}-\sigma^{\prime}\left(k\right)+k\geq 0\) for each \(k\in\left[n\right]\) (since \(\rho_{k}^{\prime}\) was defined to be \(\mu_{\sigma^{\prime}\left(k\right)}-\sigma^{\prime}\left(k\right)+k\)). In other words, the permutation \(\sigma^{\prime}\) is legitimate (by the definition of "legitimate"). Thus, the diagram \(P\left(\sigma^{\prime}\right)\) is well-defined. By its definition, this diagram \(P\left(\sigma^{\prime}\right)\) consists of the boxes \(\left(k,\ell\right)\) with \(k\in\left[n\right]\) and \(\ell\leq\mu_{\sigma^{\prime}\left(k\right)}-\sigma^{\prime}\left(k\right)+k\). In other words, the diagram \(P\left(\sigma^{\prime}\right)\) consists of the boxes \(\left(k,\ell\right)\) with \(k\in\left[n\right]\) and \(\ell\leq\rho_{k}^{\prime}\) (since \(\rho_{k}^{\prime}\) was defined to be \(\mu_{\sigma^{\prime}\left(k\right)}-\sigma^{\prime}\left(k\right)+k\)). Next we claim the following: _Claim 4:_ We have \(\rho_{i}\geq j\) and \(\rho_{i-1}\geq j-1\). Proof of Claim 4.: The rows of the diagram \(P\left(\sigma\right)\) are left-aligned. Thus, from \(\left(i,j\right)\in P\left(\sigma\right)\), it follows that the \(j\) boxes \(\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,j\right)\) all belong to \(P\left(\sigma\right)\). Hence, the \(i\)-th row of \(P\left(\sigma\right)\) has at least \(j\) boxes. In other words, \(\rho_{i}\geq j\) (since the \(i\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{i}\) boxes). It remains to prove that \(\rho_{i-1}\geq j-1\). Indeed, assume the contrary. Thus, \(\rho_{i-1}<j-1\). Hence, \(j-1>\rho_{i-1}\geq 0\) (by (68)), so that \(j-1\geq 1\). Therefore, \(\left(i,j-1\right)\) is one of the \(j\) boxes \(\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,j\right)\). Since the latter \(j\) boxes all belong to \(P\left(\sigma\right)\), we thus conclude that \(\left(i,j-1\right)\in P\left(\sigma\right)\). If we had \(\left(i-1,j-1\right)\notin P\left(\sigma\right)\), then the box \(\left(i,j-1\right)\) would be an outer failure of \(\left(\sigma,T\right)\) (because \(i>1\)), which would contradict the fact that \(\left(i,j\right)\) is a **leftmost** failure of \(\left(\sigma,T\right)\) (since the failure \(\left(i,j-1\right)\) would lie further west than \(\left(i,j\right)\)). Thus, we cannot have \(\left(i-1,j-1\right)\notin P\left(\sigma\right)\). In other words, we must have \(\left(i-1,j-1\right)\in P\left(\sigma\right)\). The rows of \(P\left(\sigma\right)\) are left-aligned. 
Thus, from \(\left(i-1,j-1\right)\in P\left(\sigma\right)\), it follows that the \(j-1\) boxes \(\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,j-1\right)\) all belong to \(P\left(\sigma\right)\). Hence, the \(\left(i-1\right)\)-st row of \(P\left(\sigma\right)\) has at least \(j-1\) boxes. In other words, \(\rho_{i-1}\geq j-1\) (since the \(\left(i-1\right)\)-st row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{i-1}\) boxes (because the \(k\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{k}\) boxes for each \(k\in[n]))\). Thus, the proof of Claim 4 is complete. Let us now recall how \(T^{\prime}\) was defined: Namely, we obtain \(T^{\prime}\) from \(T\) by swapping the top floor of \(c\) with the bottom floor of \(c\) (see Definition 8.15 for the definitions of "top floor" and "bottom floor"). Thus, the entries of \(T^{\prime}\) are precisely the entries of \(T\), but not all in the same positions; namely, the entries of the top floor have moved one unit southeast whereas the entries of the bottom floor have moved one unit northwest. We shall give names to these entries according to how they move: * The entries in the top floor of \(c\) will be called _falling entries_, since they move down (more precisely, southeast) to their new positions in \(T^{\prime}\). * The entries in the bottom floor of \(c\) will be called _rising entries_, since they move up (more precisely, northwest) to their new positions in \(T^{\prime}\). * All other entries of \(T\) will be called _staying entries_, as they keep their positions in \(T^{\prime}\). (Strictly speaking, it is the boxes, not the entries, that we should be naming, but we hope that this language is clear enough.) From the definition of \(T^{\prime}\), it is not immediately clear that \(T^{\prime}\) is a \(\sigma^{\prime}\)-array, or even that the entries of \(T^{\prime}\) appear contiguously in the rows (meaning that the rows have no "holes", i.e., boxes that don't contain any entries). Let us prove this step by step, starting with some properties of \(T\): _Claim 5:_ The \(i\)-th row of \(T\) contains exactly \(j\) staying entries and \(\rho_{i}-j\) rising entries. Proof of Claim 5.: We saw in the proof of Claim 4 that the \(j\) boxes \(\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,j\right)\) all belong to \(P\left(\sigma\right)\). Hence, the \(i\)-th row of \(T\) contains \(j\) staying entries (namely, the entries in these \(j\) boxes). The remaining entries in the \(i\)-th row of \(T\) are rising entries (since the bottom floor of the failure \(c=\left(i,j\right)\) begins with the next box \(\left(i,j+1\right)\)). How many are there? Recall that the \(i\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{i}\) boxes. Thus, the \(i\)-th row of \(T\) contains \(\rho_{i}\) entries. Since it contains \(j\) staying entries, we thus conclude that it contains \(\rho_{i}-j\) rising entries. This completes the proof of Claim 5. _Claim 6:_ The \(\left(i-1\right)\)-st row of \(T\) contains exactly \(j-1\) staying entries and \(\rho_{i-1}-\left(j-1\right)\) falling entries. Proof of Claim 6.: Recall that the \(k\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{k}\) boxes for each \(k\in[n]\). Hence, the \(\left(i-1\right)\)-st row of \(P\left(\sigma\right)\) contains \(\rho_{i-1}\) boxes. These boxes are \(\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,\rho_{i-1}\right)\). 
Since Claim 4 yields \(\rho_{i-1}\geq j-1\), we know that the first \(j-1\) of these boxes are \(\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,j-1\right)\). Thus, the \(\left(i-1\right)\)-st row of \(T\) contains \(j-1\) staying entries (namely, the entries in these \(j-1\) boxes). Since it contains \(\rho_{i-1}\) entries in total (because the \(\left(i-1\right)\)-st row of \(P\left(\sigma\right)\) contains \(\rho_{i-1}\) boxes), it follows that the remaining \(\rho_{i-1}-\left(j-1\right)\) entries in this row are falling entries (since the top floor of the failure \(c=\left(i,j\right)\) begins with the next box \(\left(i-1,j\right)\)). This completes the proof of Claim 6.

Claim 5 shows that the rightmost staying entry in the \(i\)-th row of \(T\) is \(T\left(i,j\right)\). Claim 6 shows that the rightmost staying entry in the \(\left(i-1\right)\)-st row of \(T\) (if such an entry exists at all) is \(T\left(i-1,j-1\right)\).

_Claim 7:_ The entries in the \(i\)-th row of \(T^{\prime}\) occupy precisely the boxes \(\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,\rho_{i}^{\prime}\right)\).

Proof of Claim 7.: The \(i\)-th row of \(T^{\prime}\) has two kinds of entries: the staying entries of the \(i\)-th row of \(T\) (which remain in their places in \(T^{\prime}\)), and the falling entries of the \(\left(i-1\right)\)-st row of \(T\) (which are moved to the \(i\)-th row by the flip operation). There are \(j\) of the former (by Claim 5) and \(\rho_{i-1}-\left(j-1\right)\) of the latter (by Claim 6). Moreover, the former occupy the boxes \(\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,j\right)\) in \(T^{\prime}\), whereas the latter occupy the boxes \(\left(i,j+1\right),\ \left(i,j+2\right),\ \ldots,\ \left(i,j+\rho_{i-1}-\left(j-1\right)\right)\) in \(T^{\prime}\) (by the definition of the flip operation, and because there are \(\rho_{i-1}-\left(j-1\right)\) of them). Thus, altogether, there are
\[j+\left(\rho_{i-1}-\left(j-1\right)\right)=\rho_{i-1}+1=\rho_{i}^{\prime}\qquad\quad\text{(by Claim 2)}\]
of these entries, and they occupy the boxes
\[\underbrace{\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,j\right)}_{\text{staying entries}},\ \underbrace{\left(i,j+1\right),\ \left(i,j+2\right),\ \ldots,\ \left(i,j+\rho_{i-1}-\left(j-1\right)\right)}_{\text{falling entries}},\]
i.e., the boxes
\[\underbrace{\left(i,1\right),\ \left(i,2\right),\ \ldots,\ \left(i,j\right)}_{\text{staying entries}},\ \underbrace{\left(i,j+1\right),\ \left(i,j+2\right),\ \ldots,\ \left(i,\rho_{i}^{\prime}\right)}_{\text{falling entries}}\]
(since \(j+\left(\rho_{i-1}-\left(j-1\right)\right)=\rho_{i}^{\prime}\)). This proves Claim 7.

_Claim 8:_ The entries in the \(\left(i-1\right)\)-st row of \(T^{\prime}\) occupy precisely the boxes \(\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,\rho_{i-1}^{\prime}\right)\).

Proof of Claim 8.: The \(\left(i-1\right)\)-st row of \(T^{\prime}\) has two kinds of entries: the staying entries of the \(\left(i-1\right)\)-st row of \(T\) (which remain in their places in \(T^{\prime}\)), and the rising entries of the \(i\)-th row of \(T\) (which are moved to the \(\left(i-1\right)\)-st row by the flip operation). There are \(j-1\) of the former (by Claim 6) and \(\rho_{i}-j\) of the latter (by Claim 5). Moreover, the former occupy the boxes \(\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,j-1\right)\) in \(T^{\prime}\), whereas the latter occupy the boxes \(\left(i-1,j\right),\ \left(i-1,j+1\right),\ \ldots,\ \left(i-1,\left(j-1\right)+\left(\rho_{i}-j\right)\right)\) in \(T^{\prime}\) (by the definition of the flip operation, and because there are \(\rho_{i}-j\) of them).
Thus, altogether, there are
\[\left(j-1\right)+\left(\rho_{i}-j\right)=\rho_{i}-1=\rho_{i-1}^{\prime}\qquad\quad\text{(by Claim 3)}\]
of these entries, and they occupy the boxes
\[\underbrace{\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,j-1\right)}_{\text{staying entries}},\ \underbrace{\left(i-1,j\right),\ \left(i-1,j+1\right),\ \ldots,\ \left(i-1,\left(j-1\right)+\left(\rho_{i}-j\right)\right)}_{\text{rising entries}},\]
i.e., the boxes
\[\underbrace{\left(i-1,1\right),\ \left(i-1,2\right),\ \ldots,\ \left(i-1,j-1\right)}_{\text{staying entries}},\ \underbrace{\left(i-1,j\right),\ \left(i-1,j+1\right),\ \ldots,\ \left(i-1,\rho_{i-1}^{\prime}\right)}_{\text{rising entries}}\]
(since \(\left(j-1\right)+\left(\rho_{i}-j\right)=\rho_{i-1}^{\prime}\)). This proves Claim 8.

_Claim 9:_ For each \(k\in\left[n\right]\), the entries in the \(k\)-th row of \(T^{\prime}\) occupy precisely the boxes \(\left(k,1\right),\ \left(k,2\right),\ \ldots,\ \left(k,\rho_{k}^{\prime}\right)\).

Proof of Claim 9.: Let \(k\in\left[n\right]\). We must prove that the entries in the \(k\)-th row of \(T^{\prime}\) occupy precisely the boxes \(\left(k,1\right),\ \left(k,2\right),\ \ldots,\ \left(k,\rho_{k}^{\prime}\right)\). For \(k=i\), this follows from Claim 7. For \(k=i-1\), this follows from Claim 8. Thus, we can WLOG assume that \(k\) equals neither \(i\) nor \(i-1\). Assume this. Hence, the \(k\)-th row of \(T^{\prime}\) has the same entries (in the same positions) as the \(k\)-th row of \(T\) (because the flip operation affects only the \(i\)-th and \(\left(i-1\right)\)-st rows). Moreover, \(k\in\left[n\right]\setminus\left\{i,i-1\right\}\) (since \(k\) equals neither \(i\) nor \(i-1\)) and thus \(\rho_{k}^{\prime}=\rho_{k}\) (by Claim 1). Recall that the \(k\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{k}\) boxes, and these boxes are \(\left(k,1\right),\ \left(k,2\right),\ \ldots,\ \left(k,\rho_{k}\right)\) (since the diagram \(P\left(\sigma\right)\) is left-aligned). Thus, the \(k\)-th row of \(T\) has \(\rho_{k}\) entries, and these entries occupy the boxes \(\left(k,1\right),\ \left(k,2\right),\ \ldots,\ \left(k,\rho_{k}\right)\) (since \(T\) is a filling of \(P\left(\sigma\right)\)). Therefore, the same is true for \(T^{\prime}\) instead of \(T\) (since the \(k\)-th row of \(T^{\prime}\) has the same entries (in the same positions) as the \(k\)-th row of \(T\)). In other words, the entries in the \(k\)-th row of \(T^{\prime}\) occupy precisely the boxes \(\left(k,1\right),\ \left(k,2\right),\ \ldots,\ \left(k,\rho_{k}^{\prime}\right)\). This proves Claim 9.

Claim 9 shows that the entries of \(T^{\prime}\) occupy precisely the boxes \(\left(k,1\right),\ \left(k,2\right),\ \ldots,\ \left(k,\rho_{k}^{\prime}\right)\) for all \(k\in\left[n\right]\). In other words, they occupy precisely the boxes \(\left(k,\ell\right)\) with \(k\in\left[n\right]\) and \(\ell\leq\rho_{k}^{\prime}\). Thus, \(T^{\prime}\) is a filling of the diagram \(P\left(\sigma^{\prime}\right)\) (since the diagram \(P\left(\sigma^{\prime}\right)\) consists of the boxes \(\left(k,\ell\right)\) with \(k\in\left[n\right]\) and \(\ell\leq\rho_{k}^{\prime}\)).

Next, we shall show that the entries of \(T^{\prime}\) weakly increase left-to-right along each row. Again, we prove this via several simple claims:

_Claim 10:_ If \(j-1\geq 1\), then \(\left(i-1,j-1\right)\in P\left(\sigma\right)\) and \(\left(i,j-1\right)\in P\left(\sigma\right)\) and \(T\left(i-1,j-1\right)<T\left(i,j-1\right)\).
Proof of Claim 10.: Assume that \(j-1\geq 1\). Claim 6 shows that the \(\left(i-1\right)\)-st row of \(T\) contains exactly \(j-1\) staying entries. Hence, in particular, it has length \(\geq j-1\). Thus, \(\left(i-1,j-1\right)\in P\left(\sigma\right)\) (since \(j-1\geq 1\)). Also, from \(\left(i,j\right)\in P\left(\sigma\right)\) and \(j-1<j\), we obtain \(\left(i,j-1\right)\in P\left(\sigma\right)\) (again since \(j-1\geq 1\)). Recall that \((i,j)\) is a **leftmost** failure of \((\sigma,T)\). Thus, \((\sigma,T)\) has no failures that lie further west than \((i,j)\). Hence, the box \((i,j-1)\) cannot be a failure of \((\sigma,T)\) (since it lies further west than \((i,j)\)). In particular, this box \((i,j-1)\) cannot be an inner failure of \((\sigma,T)\). In other words, we cannot have \(T\left(i-1,j-1\right)\geq T\left(i,j-1\right)\) (by the definition of an inner failure). Hence, we must have \(T\left(i-1,j-1\right)<T\left(i,j-1\right)\). This finishes the proof of Claim 10. _Claim 11:_ The entries in the \(i\)-th row of \(T^{\prime}\) weakly increase left-to-right. Proof of Claim 11.: The \(i\)-th row of \(T^{\prime}\) has two kinds of entries: the staying entries of the \(i\)-th row of \(T\) (which remain in their places in \(T^{\prime}\)), and the falling entries of the \((i-1)\)-st row of \(T\) (which are moved to the \(i\)-th row by the flip operation). Of course, the staying entries all lie to the left of the falling entries. But \(T\) is a \(\sigma\)-array, and thus the entries of \(T\) weakly increase left-to-right along each row. Thus, in particular, the entries in the \(i\)-th row of \(T\) weakly increase left-to-right, and so do the entries in the \((i-1)\)-st row of \(T\). Therefore, the staying entries in the \(i\)-th row of \(T^{\prime}\) weakly increase left-to-right (since they are copied unchanged from \(T\)), and the falling entries in the \(i\)-th row of \(T^{\prime}\) also weakly increase left-to-right (since they have been moved from the \((i-1)\)-st row of \(T\), without changing their order). If we can furthermore show that the rightmost staying entry in this row is \(\leq\) to the leftmost falling entry21, then we will be able to conclude from this that **all** entries in the \(i\)-th row of \(T^{\prime}\) weakly increase left-to-right. Thus, we only need to show that the rightmost staying entry in the \(i\)-th row of \(T^{\prime}\) is \(\leq\) to the leftmost falling entry. Footnote 21: We are implicitly assuming that there **is** a rightmost staying entry and there **is** a leftmost falling entry. But this is unproblematic, because if one of these entries does not exist, then Claim 11 follows immediately. In other words, we need to show that \(T\left(i,j\right)\leq T\left(i-1,j\right)\) (since the rightmost staying entry in the \(i\)-th row of \(T^{\prime}\) is \(T\left(i,j\right)\) ) 22, whereas the leftmost falling entry is \(T\left(i-1,j\right)\)). But this is easy: Since \(T\left(i-1,j\right)\) is well-defined in the first place, we must have \(\left(i-1,j\right)\in P\left(\sigma\right)\), and therefore the failure \(c=\left(i,j\right)\) of \((\sigma,T)\) cannot be an outer failure. Hence, \(c\) must be an inner failure (since \(c\) is a failure). Thus, \(T\left(i-1,j\right)\geq T\left(i,j\right)\) (by the definition of an inner failure). In other words, \(T\left(i,j\right)\leq T\left(i-1,j\right)\). As we explained, this completes the proof of Claim 11. _Claim 12:_ The entries in the \((i-1)\)-st row of \(T^{\prime}\) weakly increase left-to-right. 
Proof of Claim 12.: The \((i-1)\)-st row of \(T^{\prime}\) has two kinds of entries: the staying entries of the \((i-1)\)-st row of \(T\) (which remain in their places in \(T^{\prime}\)), and the rising entries of the \(i\)-th row of \(T\) (which are moved to the \((i-1)\)-st row by the flip operation). Of course, the staying entries all lie to the left of the rising entries. But \(T\) is a \(\sigma\)-array, and thus the entries of \(T\) weakly increase left-to-right along each row. Thus, in particular, the entries in the \(i\)-th row of \(T\) weakly increase left-to-right, and so do the entries in the \((i-1)\)-st row of \(T\). Therefore, the staying entries in the \((i-1)\)-st row of \(T^{\prime}\) weakly increase left-to-right (since they are copied unchanged from \(T\)), and the rising entries in the \((i-1)\)-st row of \(T^{\prime}\) also weakly increase left-to-right (since they have been moved from the \(i\)-th row of \(T\), without changing their order). If we can furthermore show that the rightmost staying entry in this row is \(\leq\) to the leftmost rising entry23, then we will be able to conclude from this that **all** entries in the \((i-1)\)-st row of \(T^{\prime}\) weakly increase left-to-right. Thus, we only need to show that the rightmost staying entry in the \((i-1)\)-st row of \(T^{\prime}\) is \(\leq\) to the leftmost rising entry. Footnote 23: We are implicitly assuming that there **is** a rightmost staying entry and there **is** a leftmost rising entry. But this is unproblematic, because if one of these entries does not exist, then Claim 12 follows immediately. In other words, we need to show that \(T\left(i-1,j-1\right)\leq T\left(i,j+1\right)\) (since the rightmost staying entry in the \((i-1)\)-st row of \(T^{\prime}\) is \(T\left(i-1,j-1\right)\)24, whereas the left-most rising entry is \(T\left(i,j+1\right)\)). But this is easy: Since the entry \(T\left(i-1,j-1\right)\) is well-defined in the first place, we have \(j-1\geq 1\). Thus, Claim 10 yields \(\left(i-1,j-1\right)\in P\left(\sigma\right)\) and \(\left(i,j-1\right)\in P\left(\sigma\right)\) and \(T\left(i-1,j-1\right)<T\left(i,j-1\right)\). Furthermore, \(T\left(i,j-1\right)\leq T\left(i,j+1\right)\) (since the entries in the \(i\)-th row of \(T\) weakly increase left-to-right). Hence, \(T\left(i-1,j-1\right)<T\left(i,j-1\right)\leq T\left(i,j+1\right)\). As we explained, this completes the proof of Claim 12. Footnote 24: because the rightmost staying entry in the \((i-1)\)-st row of \(T\) is \(T\left(i-1,j-1\right)\) _Claim 13:_ For each \(k\in[n]\), the entries in the \(k\)-th row of \(T^{\prime}\) weakly increase left-to-right. Proof of Claim 13.: Let \(k\in[n]\). We must prove that the entries in the \(k\)-th row of \(T^{\prime}\) weakly increase left-to-right. If \(k=i\), then this follows from Claim 11. If \(k=i-1\), then this follows from Claim 12. Thus, we WLOG assume that \(k\) equals neither \(i\) nor \(i-1\). Hence, the \(k\)-th row of \(T^{\prime}\) has the same entries (in the same positions) as the \(k\)-th row of \(T\) (because the flip operation affects only the \(i\)-th and \((i-1)\)-st rows). But the entries in the \(k\)-th row of \(T\) weakly increase left-to-right (since \(T\) is a \(\sigma\)-array). Hence, the entries in the \(k\)-th row of \(T^{\prime}\) weakly increase left-to-right (since the \(k\)-th row of \(T^{\prime}\) has the same entries (in the same positions) as the \(k\)-th row of \(T\)). This proves Claim 13. 
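Since the flip operation is completely explicit, it can also be experimented with directly. The following rough Python sketch (a toy illustration of the construction described above, not a verified implementation of Definition 8.15) encodes a twisted array as a permutation in one-line notation together with its list of rows, locates the bottommost leftmost failure, swaps the top and bottom floors, and checks on one small example several of the facts established in this proof (Claims 7 to 13, as well as parts (c) and (e) below): the flipped array fills \(P\left(\sigma^{\prime}\right)\), its rows weakly increase, its multiset of (entry, diagonal) pairs and hence its weight are unchanged, and flipping twice returns the original pair.

```python
# A rough illustrative sketch of the flip operation (toy code, not part of the
# proof).  A twisted array is encoded as (sigma, rows): sigma in one-line
# notation (sigma(k) = sigma[k-1]) and rows[k-1] = list of entries of row k.
mu = [2, 1]                      # toy partition
n = len(mu)

def row_lengths(sigma):
    # length of row k of P(sigma): mu_{sigma(k)} - sigma(k) + k
    return [mu[sigma[k - 1] - 1] - sigma[k - 1] + k for k in range(1, n + 1)]

def bottommost_leftmost_failure(sigma, rows):
    rho = row_lengths(sigma)
    failures = []
    for i in range(2, n + 1):                      # failures only occur in rows i > 1
        for j in range(1, rho[i - 1] + 1):
            if j > rho[i - 2]:                     # no northern neighbour: outer failure
                failures.append((i, j))
            elif rows[i - 2][j - 1] >= rows[i - 1][j - 1]:   # inner failure
                failures.append((i, j))
    if not failures:
        return None
    jmin = min(j for (_, j) in failures)                     # leftmost column...
    return max((i, j) for (i, j) in failures if j == jmin)   # ...then bottommost row

def flip(sigma, rows):
    # assumes (sigma, rows) is failing
    i, j = bottommost_leftmost_failure(sigma, rows)
    new_sigma = list(sigma)
    new_sigma[i - 2], new_sigma[i - 1] = sigma[i - 1], sigma[i - 2]   # sigma o s_{i-1}
    new_rows = [list(r) for r in rows]
    top_floor = rows[i - 2][j - 1:]        # boxes (i-1, j), (i-1, j+1), ...
    bottom_floor = rows[i - 1][j:]         # boxes (i, j+1), (i, j+2), ...
    new_rows[i - 2] = rows[i - 2][:j - 1] + bottom_floor   # rising entries move northwest
    new_rows[i - 1] = rows[i - 1][:j] + top_floor          # falling entries move southeast
    return new_sigma, new_rows

def diagonals(rows):
    # multiset of (entry, j - i) pairs, which determines the weight w(T)
    return sorted((e, j - i) for i, row in enumerate(rows, 1) for j, e in enumerate(row, 1))

# toy failing twisted array: sigma = id, T has an inner failure at (2, 1)
sigma, rows = [1, 2], [[1, 1], [1]]
sigma2, rows2 = flip(sigma, rows)
print(sigma2, rows2)                                   # prints: [2, 1] [[], [1, 1, 1]]
assert [len(r) for r in rows2] == row_lengths(sigma2)  # rows2 fills P(sigma')
assert all(list(r) == sorted(r) for r in rows2)        # rows weakly increase
assert diagonals(rows2) == diagonals(rows)             # same weight w(T)
assert flip(sigma2, rows2) == (sigma, rows)            # flipping twice returns the original
```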
Recall that \(T^{\prime}\) is a filling of the diagram \(P\left(\sigma^{\prime}\right)\) with positive integers. Moreover, these integers weakly increase left-to-right along each row (by Claim 13). Hence, \(T^{\prime}\) is a \(\sigma^{\prime}\)-array (by the definition of a \(\sigma^{\prime}\)-array). In other words, \((\sigma^{\prime},T^{\prime})\) is a twisted array. Next, we claim: _Claim 14:_ The box \((i,j)\) is a failure of the twisted array \((\sigma^{\prime},T^{\prime})\). Proof of Claim 14.: We know that \(i>1\) and \((i,j)\in P\left(\sigma^{\prime}\right)\) (since the entry \(T\left(i,j\right)\) of \(T\) does not change under the flip operation). If \((i-1,j)\notin P\left(\sigma^{\prime}\right)\), then the box \((i,j)\) is therefore an outer failure of \((\sigma^{\prime},T^{\prime})\), and thus Claim 14 is proved. Hence, for the rest of this proof, we WLOG assume that \((i-1,j)\in P\left(\sigma^{\prime}\right)\). Therefore, \(T^{\prime}\left(i-1,j\right)\) is a rising entry and equals \(T\left(i,j+1\right)\) (by the construction of \(T^{\prime}\)). Meanwhile, \(T^{\prime}\left(i,j\right)\) is a staying entry and thus equals \(T\left(i,j\right)\). However, the entries in the \(i\)-th row of \(T\) weakly increase left-to-right (since \(T\) is a \(\sigma\)-array). Thus, \(T\left(i,j\right)\leq T\left(i,j+1\right)\). In other words, \(T^{\prime}\left(i,j\right)\leq T^{\prime}\left(i-1,j\right)\) (since \(T^{\prime}\left(i,j\right)\) equals \(T\left(i,j\right)\) whereas \(T^{\prime}\left(i-1,j\right)\) equals \(T\left(i,j+1\right)\)). In other words, \(T^{\prime}\left(i-1,j\right)\geq T^{\prime}\left(i,j\right)\). Hence, the box \(\left(i,j\right)\) is an inner failure of \(\left(\sigma^{\prime},T^{\prime}\right)\) (by the definition of an inner failure). This proves Claim 14. Claim 14 shows that the twisted array \(\left(\sigma^{\prime},T^{\prime}\right)\) has a failure, i.e., is failing. Thus, Lemma 8.17**(a)** is proved. **(b)** We make two further claims: _Claim 15:_ The twisted array \(\left(\sigma^{\prime},T^{\prime}\right)\) has no failures in columns \(1,2,\ldots,j-1\). Proof of Claim 15.: We can restate the definition of a failure in terms of the entries: A failure of a twisted array \(\left(\tau,S\right)\) is either an inner failure (i.e., an entry of \(S\) that is smaller than or equal to its northern neighbor25) or an outer failure (i.e., an entry of \(S\) that has no northern neighbor even though it is not in the first row). Thus, whether or not a given box \(\left(p,q\right)\) is a failure of a twisted array \(\left(\tau,S\right)\) depends only on the entries in the boxes \(\left(p,q\right)\) and \(\left(p-1,q\right)\) of \(S\). In particular, it depends only on the entries in the \(q\)-th column of \(S\). Therefore, for any given integer \(q\geq 1\), the failures of a twisted array \(\left(\tau,S\right)\) in the \(q\)-th column depend only on the entries in the \(q\)-th column of \(S\). Footnote 25: Formally, of course, a failure is not an entry but rather the box containing this entry; but we ignore this distinction to keep the writing clearer. Hence, if a \(\tau\)-array \(S\) and a \(\tau^{\prime}\)-array \(S^{\prime}\) have the same entries26 in the \(q\)-th column (for a given \(q\geq 1\)), then the two twisted arrays \(\left(\tau,S\right)\) and \(\left(\tau^{\prime},S^{\prime}\right)\) have the same failures in the \(q\)-th column. Footnote 26: “The same entries” means “the same entries in the same position”. 
Now, recall that \(T^{\prime}\) is obtained from \(T\) by swapping the top floor of \(c=(i,j)\) with the bottom floor. This swap operation does not affect the columns \(1,2,\ldots,j-1\) (since all the entries being swapped belong to columns \(j,j+1,j+2,\ldots\)). Thus, in the columns \(1,2,\ldots,j-1\), the array \(T^{\prime}\) has the same entries as \(T\) (and in the same positions as well). Therefore, in the columns \(1,2,\ldots,j-1\), the twisted array \(\left(\sigma^{\prime},T^{\prime}\right)\) has the same failures as \(\left(\sigma,T\right)\) (because if a \(\tau\)-array \(S\) and a \(\tau^{\prime}\)-array \(S^{\prime}\) have the same entries in the \(q\)-th column (for a given \(q\geq 1\)), then the two twisted arrays \(\left(\tau,S\right)\) and \(\left(\tau^{\prime},S^{\prime}\right)\) have the same failures in the \(q\)-th column). Since \(\left(\sigma,T\right)\) has no failures in these columns (because \(\left(i,j\right)\) is a **leftmost** failure of \(\left(\sigma,T\right)\)), we thus conclude that \(\left(\sigma^{\prime},T^{\prime}\right)\) has no failures in these columns either. This proves Claim 15. _Claim 16:_ The twisted array \(\left(\sigma^{\prime},T^{\prime}\right)\) has no failures in column \(j\) lying south of the box \(\left(i,j\right)\). Proof of Claim 16.: Recall that \((i,j)\) is the **bottommost** leftmost failure of \((\sigma,T)\). Thus, the twisted array \((\sigma,T)\) has no failures in column \(j\) lying south of the box \((i,j)\). In other words, the twisted array \((\sigma,T)\) has no failures in the boxes \((i+1,j)\,,\ (i+2,j)\,,\ (i+3,j)\,,\ \ldots\). (Of course, a box that does not belong to \(P\left(\sigma\right)\) is not considered as a failure of \((\sigma,T)\).) Recall the following fact (which we proved in our proof of Claim 15): Whether or not a given box \((p,q)\) is a failure of a twisted array \((\tau,S)\) depends only on the entries in the boxes \((p,q)\) and \((p-1,q)\) of \(S\). Therefore, the failures of a twisted array \((\tau,S)\) in the boxes \((i+1,j)\,,\ (i+2,j)\,,\ (i+3,j)\,,\ \ldots\) depend only on the entries of \(S\) in the boxes \((i,j)\,,\ (i+1,j)\,,\ (i+2,j)\,,\ \ldots\). Hence, if a \(\tau\)-array \(S\) and a \(\tau^{\prime}\)-array \(S^{\prime}\) have the same entries27 in the boxes \((i,j)\,,\ (i+1,j)\,,\ (i+2,j)\,,\ \ldots\), then the two twisted arrays \((\tau,S)\) and \((\tau^{\prime},S^{\prime})\) have the same failures in the boxes \((i+1,j)\,,\ (i+2,j)\,,\ (i+3,j)\,,\ \ldots\). Footnote 27: “The same entries” means “the same entries in the same position”. Now, recall that \(T^{\prime}\) is obtained from \(T\) by swapping the top floor of \(c=(i,j)\) with the bottom floor. This swap operation only affects the entries in the boxes \[(i-1,j)\,,\ \ (i-1,j+1)\,,\ \ (i-1,j+2)\,,\ \ \ldots\] and in the boxes \[(i,j+1)\,,\ \ (i,j+2)\,,\ \ (i,j+3)\,,\ \ \ldots\,.\] Thus, the only box **in column**\(j\) that is affected by this swap operation is the box \((i-1,j)\). In particular, the entries in boxes \((i,j)\,,\ (i+1,j)\,,\ (i+2,j)\,,\ \ldots\) are not affected by this swap operation. Hence, the arrays \(T\) and \(T^{\prime}\) have the same entries in these boxes \((i,j)\,,\ (i+1,j)\,,\ (i+2,j)\,,\ \ldots\). 
Consequently, the twisted arrays \((\sigma,T)\) and \((\sigma^{\prime},T^{\prime})\) have the same failures in the boxes \((i+1,j)\,,\ (i+2,j)\,,\ (i+3,j)\,,\ \ldots\) (because if a \(\tau\)-array \(S\) and a \(\tau^{\prime}\)-array \(S^{\prime}\) have the same entries in the boxes \((i,j)\,,\ (i+1,j)\,,\ (i+2,j)\,,\ \ldots\), then the two twisted arrays \((\tau,S)\) and \((\tau^{\prime},S^{\prime})\) have the same failures in the boxes \((i+1,j)\,,\ (i+2,j)\,,\ (i+3,j)\,,\ \ldots\). Since we know that the twisted array \((\sigma,T)\) has no failures in these boxes, we thus conclude that the twisted array \((\sigma^{\prime},T^{\prime})\) has no failures in these boxes either. In other words, \((\sigma^{\prime},T^{\prime})\) has no failures in the \(j\)-th column lying south of the box \((i,j)\). This proves Claim 16. The box \(c=(i,j)\) is a failure of the twisted array \((\sigma^{\prime},T^{\prime})\) (by Claim 14). It is thus a leftmost failure of \((\sigma^{\prime},T^{\prime})\) (since Claim 15 shows that \((\sigma^{\prime},T^{\prime})\) has no failures in any column to its west), and therefore the bottommost leftmost failure of \((\sigma^{\prime},T^{\prime})\) (since Claim 16 shows that \((\sigma^{\prime},T^{\prime})\) has no failures in the \(j\)-th column to the south of \((i,j)\)). This proves Lemma 8.17**(b)**. **(c)** Lemma 8.17**(b)** shows that the box \(c\) is again the bottommost leftmost failure of \((\sigma^{\prime},T^{\prime})\). Hence, the twisted array \(\operatorname{flip}\left(\sigma^{\prime},T^{\prime}\right)\) is obtained from \((\sigma^{\prime},T^{\prime})\) by exchanging the values of \(\sigma^{\prime}\) on \(i-1\) and \(i\) and by swapping the top floor and the bottom floor of \(c\). But \((\sigma^{\prime},T^{\prime})\) was obtained from \((\sigma,T)\) in the exact same way - i.e., by exchanging the values of \(\sigma\) on \(i-1\) and \(i\) and by swapping the top floor and the bottom floor of \(c\). The nature of these operations is such that doing them twice in succession returns the \(\sigma\) and the \(T\) to their original values (because exchanging the values of \(\sigma\) on \(i-1\) and \(i\) twice in succession returns the original \(\sigma\), whereas swapping the top floor and the bottom floor twice in succession returns all entries to their original places). Thus, flip \(\left(\sigma^{\prime},T^{\prime}\right)=\left(\sigma,T\right)\). This proves Lemma 8.17**(c)**. **(d)** The definition of \(\sigma^{\prime}\) yields \(\sigma^{\prime}=\sigma\circ s_{i-1}\). But it is well-known that the simple transposition \(s_{i-1}\) has sign \(\left(-1\right)^{s_{i-1}}=-1\). Furthermore, it is well-known that any two permutations \(\alpha,\beta\in S_{n}\) satisfy \(\left(-1\right)^{\alpha\circ\beta}=\left(-1\right)^{\alpha}\cdot\left(-1 \right)^{\beta}\). Thus, \(\left(-1\right)^{\sigma\circ s_{i-1}}=\left(-1\right)^{\sigma}\cdot\underbrace {\left(-1\right)^{s_{i-1}}}_{=-1}=\left(-1\right)^{\sigma}\cdot\left(-1 \right)=-\left(-1\right)^{\sigma}\). In other words, \(\left(-1\right)^{\sigma^{\prime}}=-\left(-1\right)^{\sigma}\) (since \(\sigma^{\prime}=\sigma\circ s_{i-1}\)). Thus, Lemma 8.17**(d)** is proved. **(e)** We shall abuse notation somewhat and speak of entries when we really mean boxes. If \(e\) is the entry of \(T\) in a given box \(\left(p,q\right)\), then we let \(d\left(e\right)\) denote the number \(q-p\) (so that the entry \(e\) belongs to the \(d\left(e\right)\)-th diagonal). 
Strictly speaking, this depends on the box \(\left(p,q\right)\), not on the entry \(e\), but we will pretend that each entry "remembers" what box it is in. We shall refer to the number \(d\left(e\right)\) as the _diagonal position_ of \(e\) in \(T\).

Now, the definition of the weight \(w\left(T\right)\) yields
\[w\left(T\right)=\prod_{\left(p,q\right)\in P\left(\sigma\right)}u_{T\left(p,q\right),\;q-p}=\prod_{e\text{ is an entry of }T}u_{e,\;d\left(e\right)} \tag{70}\]
(where we still pretend that each entry "remembers" its box, so that equal entries in different positions create different factors of the product). Similarly,
\[w\left(T^{\prime}\right)=\prod_{e\text{ is an entry of }T^{\prime}}u_{e,\;d\left(e\right)}, \tag{71}\]
where \(d\left(e\right)\) now refers to the diagonal position of \(e\) in \(T^{\prime}\).

Now, recall that \(T^{\prime}\) is obtained from \(T\) by swapping the top floor of \(c\) with the bottom floor of \(c\). During this swapping operation, some entries of \(T\) get moved (namely, the rising entries are moved by \(1\) unit to the northwest, whereas the falling entries are moved by \(1\) unit to the southeast), but their diagonal positions remain unchanged (since a move by \(1\) unit to the northwest or southeast does not change the diagonal position). Hence, the array \(T^{\prime}\) contains the same entries as \(T\), possibly in different positions but in the same diagonal positions. Thus,
\[\prod_{e\text{ is an entry of }T^{\prime}}u_{e,\;d\left(e\right)}=\prod_{e\text{ is an entry of }T}u_{e,\;d\left(e\right)}.\]
In view of (70) and (71), we can rewrite this as \(w\left(T^{\prime}\right)=w\left(T\right)\). This proves Lemma 8.17**(e)**.

**(f)** Assume that \(\left(\sigma,T\right)\) is \(\mathbf{b}\)-flagged. We must prove that \(\left(\sigma^{\prime},T^{\prime}\right)\) is \(\mathbf{b}\)-flagged. In other words, we must prove that each entry in the \(k\)-th row of \(T^{\prime}\) is \(\leq b_{\sigma^{\prime}\left(k\right)}\) for each \(k\in\left[n\right]\) (by the definition of "\(\mathbf{b}\)-flagged \(\sigma\)-array").

We have assumed that \(\left(\sigma,T\right)\) is \(\mathbf{b}\)-flagged. In other words, the \(\sigma\)-array \(T\) is \(\mathbf{b}\)-flagged. In other words,
\[\text{each entry in the $k$-th row of $T$ is }\leq b_{\sigma\left(k\right)} \tag{72}\]
for each \(k\in\left[n\right]\) (by the definition of "\(\mathbf{b}\)-flagged"). Again, we proceed by a series of claims:

_Claim 17:_ All staying entries in the \(i\)-th row of \(T\) are \(\leq b_{\sigma^{\prime}\left(i\right)}\).

Proof of Claim 17.: Let \(e\) be a staying entry in the \(i\)-th row of \(T\). We must prove that \(e\leq b_{\sigma^{\prime}\left(i\right)}\). The staying entries in the \(i\)-th row of \(T\) are \(T\left(i,1\right)\), \(T\left(i,2\right)\), \(\ldots\), \(T\left(i,j\right)\). Hence, \(e=T\left(i,\ell\right)\) for some \(\ell\in\left[j\right]\) (since \(e\) is a staying entry in the \(i\)-th row of \(T\)). Consider this \(\ell\). But \(T\) is a \(\sigma\)-array, and thus the entries of \(T\) weakly increase left-to-right along each row. Hence, \(T\left(i,\ell\right)\leq T\left(i,j\right)\) (since \(\ell\leq j\)). It thus remains to prove that \(T\left(i,j\right)\leq b_{\sigma^{\prime}\left(i\right)}\) (because once this is proved, it will follow that \(e=T\left(i,\ell\right)\leq T\left(i,j\right)\leq b_{\sigma^{\prime}\left(i\right)}\), which is precisely our goal).
In other words, it remains to prove that \(T\left(i,j\right)\leq b_{\sigma\left(i-1\right)}\) (since \(\sigma^{\prime}\left(i\right)=\sigma\left(i-1\right)\)). Recall that \(c=\left(i,j\right)\) is a failure of \(\left(\sigma,T\right)\). Hence, we are in one of the following two cases: _Case 1:_ The failure \(c=\left(i,j\right)\) is an inner failure of \(\left(\sigma,T\right)\). _Case 2:_ The failure \(c=\left(i,j\right)\) is an outer failure of \(\left(\sigma,T\right)\). Let us consider Case 1 first. In this case, \(c=\left(i,j\right)\) is an inner failure of \(\left(\sigma,T\right)\). Hence, \(T\left(i-1,j\right)\geq T\left(i,j\right)\) (by the definition of an inner failure), so that \(T\left(i,j\right)\leq T\left(i-1,j\right)\). But (72) shows that each entry in the \(\left(i-1\right)\)-st row of \(T\) is \(\leq b_{\sigma\left(i-1\right)}\). Hence, in particular, \(T\left(i-1,j\right)\leq b_{\sigma\left(i-1\right)}\) (since \(T\left(i-1,j\right)\) is an entry in the \(\left(i-1\right)\)-st row of \(T\)). Therefore, \(T\left(i,j\right)\leq T\left(i-1,j\right)\leq b_{\sigma\left(i-1\right)}\). Thus we have proved \(T\left(i,j\right)\leq b_{\sigma\left(i-1\right)}\) in Case 1. Let us now consider Case 2. In this case, \(c=\left(i,j\right)\) is an outer failure of \(\left(\sigma,T\right)\). Thus, \(\left(i-1,j\right)\notin P\left(\sigma\right)\). Hence, the \(\left(i-1\right)\)-st row of \(P\left(\sigma\right)\) has fewer than \(j\) boxes (since the rows of \(P\left(\sigma\right)\) are left-aligned). Recall that the \(k\)-th row of the diagram \(P\left(\sigma\right)\) contains \(\rho_{k}\) boxes for each \(k\in\left[n\right]\). Hence, the \(\left(i-1\right)\)-st row of \(P\left(\sigma\right)\) has \(\rho_{i-1}\) boxes. Thus, \(\rho_{i-1}<j\) (since the \(\left(i-1\right)\)-st row of \(P\left(\sigma\right)\) has fewer than \(j\) boxes). But Claim 4 yields \(\rho_{i}\geq j\). Hence, \(\rho_{i}\geq j>\rho_{i-1}\) (since \(\rho_{i-1}<j\)). Next, we claim that \(\sigma\left(i\right)\leq\sigma\left(i-1\right)\). Indeed, assume the contrary. Thus, \(\sigma\left(i\right)>\sigma\left(i-1\right)\), so that \(\sigma\left(i\right)\geq\sigma\left(i-1\right)+1\). Now, recall that \(\mu_{1}\geq\mu_{2}\geq\mu_{3}\geq\cdots\) (since \(\mu\) is a partition). Hence, \(\mu_{p}\geq\mu_{q}\) for any two positive integers \(p\) and \(q\) satisfying \(p<q\). Applying this to \(p=\sigma\left(i-1\right)\) and \(q=\sigma\left(i\right)\), we obtain \(\mu_{\sigma\left(i-1\right)}\geq\mu_{\sigma\left(i\right)}\) (since \(\sigma\left(i-1\right)<\sigma\left(i\right)\)). In other words, \(\mu_{\sigma\left(i\right)}\leq\mu_{\sigma\left(i-1\right)}\). Now, the definition of \(\rho_{i}\) yields \[\rho_{i} =\underbrace{\mu_{\sigma\left(i\right)}}_{\leq\mu_{\sigma\left(i \right)}}-\underbrace{\sigma\left(i\right)}_{\geq\sigma\left(i-1\right)+1}+ \underbrace{i}_{=\left(i-1\right)+1}\] \[\leq\mu_{\sigma\left(i-1\right)}-\left(\sigma\left(i-1\right)+1 \right)+\left(\left(i-1\right)+1\right)\] \[=\mu_{\sigma\left(i-1\right)}-\sigma\left(i-1\right)+\left(i-1 \right)=\rho_{i-1}\] (since \(\rho_{i-1}\) is defined to be \(\mu_{\sigma\left(i-1\right)}-\sigma\left(i-1\right)+\left(i-1\right)\)). But this contradicts \(\rho_{i}>\rho_{i-1}\). This contradiction shows that our assumption was false. Hence, \(\sigma\left(i\right)\leq\sigma\left(i-1\right)\) is proved. But the flagging \(\mathbf{b}\) is weakly increasing; thus, \(b_{1}\leq b_{2}\leq b_{3}\leq\cdots\). 
In other words, \(b_{p}\leq b_{q}\) for any two positive integers \(p\) and \(q\) satisfying \(p\leq q\). Applying this to \(p=\sigma\left(i\right)\) and \(q=\sigma\left(i-1\right)\), we obtain \(b_{\sigma\left(i\right)}\leq b_{\sigma\left(i-1\right)}\) (since \(\sigma\left(i\right)\leq\sigma\left(i-1\right)\)). But (72) shows that each entry in the \(i\)-th row of \(T\) is \(\leq b_{\sigma\left(i\right)}\). Hence, in particular, \(T\left(i,j\right)\leq b_{\sigma\left(i\right)}\) (since \(T\left(i,j\right)\) is an entry in the \(i\)-th row of \(T\)). Therefore, \(T\left(i,j\right)\leq b_{\sigma\left(i\right)}\leq b_{\sigma\left(i-1\right)}\). Thus we have proved \(T\left(i,j\right)\leq b_{\sigma\left(i-1\right)}\) in Case 2. We have now proved \(T\left(i,j\right)\leq b_{\sigma\left(i-1\right)}\) in both Cases 1 and 2. Hence, \(T\left(i,j\right)\leq b_{\sigma\left(i-1\right)}\) always holds. As explained above, this completes the proof of Claim 17. **Claim 18**: _All falling entries in the \(\left(i-1\right)\)-st row of \(T\) are \(\leq b_{\sigma^{\prime}\left(i\right)}\)._ Proof of Claim 18.: All entries in the \(\left(i-1\right)\)-st row of \(T\) are \(\leq b_{\sigma\left(i-1\right)}\) (by (72), applied to \(k=i-1\)). In other words, all entries in the \(\left(i-1\right)\)-st row of \(T\) are \(\leq b_{\sigma^{\prime}\left(i\right)}\) (since \(\sigma^{\prime}\left(i\right)=\sigma\left(i-1\right)\)). Hence, in particular, all falling entries in the \(\left(i-1\right)\)-st row of \(T\) are \(\leq b_{\sigma^{\prime}\left(i\right)}\). This proves Claim 18. **Claim 19**: _All staying entries in the \(\left(i-1\right)\)-st row of \(T\) are \(\leq b_{\sigma^{\prime}\left(i-1\right)}\)._ Proof of Claim 19.: Let \(e\) be a staying entry in the \(\left(i-1\right)\)-st row of \(T\). We must prove that \(e\leq b_{\sigma^{\prime}\left(i-1\right)}\). The staying entries in the \(\left(i-1\right)\)-st row of \(T\) are \(T\left(i-1,1\right)\), \(T\left(i-1,2\right)\), \(\ldots\), \(T\left(i-1,j-1\right)\). Hence, \(e=T\left(i-1,\ell\right)\) for some \(\ell\in\left[j-1\right]\) (since \(e\) is a staying entry in the \(\left(i-1\right)\)-st row of \(T\)). Consider this \(\ell\). From \(\ell\in\left[j-1\right]\), we obtain \(\ell\leq j-1\) and thus \(j-1\geq\ell\geq 1\). Hence, Claim 10 yields \(\left(i-1,j-1\right)\in P\left(\sigma\right)\) and \(\left(i,j-1\right)\in P\left(\sigma\right)\) and \(T\left(i-1,j-1\right)<T\left(i,j-1\right)\). But \(T\) is a \(\sigma\)-array, and thus the entries of \(T\) weakly increase left-to-right along each row. Hence, from \(\ell\leq j-1\), we obtain \(T\left(i-1,\ell\right)\leq T\left(i-1,j-1\right)\). Altogether, we now have \(e=T\left(i-1,\ell\right)\leq T\left(i-1,j-1\right)<T\left(i,j-1\right)\). But (72) shows that each entry in the \(i\)-th row of \(T\) is \(\leq b_{\sigma\left(i\right)}\). Hence, in particular, \(T\left(i,j-1\right)\leq b_{\sigma\left(i\right)}\) (since \(T\left(i,j-1\right)\) is an entry in the \(i\)-th row of \(T\)). Therefore, \(e<T\left(i,j-1\right)\leq b_{\sigma\left(i\right)}\). This rewrites as \(e<b_{\sigma^{\prime}\left(i-1\right)}\) (since \(\sigma^{\prime}\left(i-1\right)=\sigma\left(i\right)\)). Hence, of course, \(e\leq b_{\sigma^{\prime}\left(i-1\right)}\). This completes the proof of Claim 19. **Claim 20**: _All rising entries in the \(i\)-th row of \(T\) are \(\leq b_{\sigma^{\prime}\left(i-1\right)}\)._ Proof of Claim 20.: All entries in the \(i\)-th row of \(T\) are \(\leq b_{\sigma(i)}\) (by (72), applied to \(k=i\)). 
In other words, all entries in the \(i\)-th row of \(T\) are \(\leq b_{\sigma^{\prime}(i-1)}\) (since \(\sigma^{\prime}\left(i-1\right)=\sigma\left(i\right)\)). Hence, in particular, all rising entries in the \(i\)-th row of \(T\) are \(\leq b_{\sigma^{\prime}(i-1)}\). This proves Claim 20. _Claim 21:_ Let \(k\in[n]\). Then, all entries in the \(k\)-th row of \(T^{\prime}\) are \(\leq b_{\sigma^{\prime}(k)}\). Proof of Claim 21.: The \(i\)-th row of \(T^{\prime}\) has two kinds of entries: the staying entries of the \(i\)-th row of \(T\) (which remain in their places in \(T^{\prime}\)), and the falling entries of the \((i-1)\)-st row of \(T\) (which are moved to the \(i\)-th row by the flip operation). Both kinds of entries are \(\leq b_{\sigma^{\prime}(i)}\) (by Claim 17 and Claim 18, respectively). Hence, all entries in the \(i\)-th row of \(T^{\prime}\) are \(\leq b_{\sigma^{\prime}(i)}\). In other words, Claim 21 is proved for \(k=i\). The \((i-1)\)-st row of \(T^{\prime}\) has two kinds of entries: the staying entries of the \((i-1)\)-st row of \(T\) (which remain in their places in \(T^{\prime}\)), and the rising entries of the \(i\)-th row of \(T\) (which are moved to the \((i-1)\)-st row by the flip operation). Both kinds of entries are \(\leq b_{\sigma^{\prime}(i-1)}\) (by Claim 19 and Claim 20, respectively). Hence, all entries in the \((i-1)\)-st row of \(T^{\prime}\) are \(\leq b_{\sigma^{\prime}(i-1)}\). In other words, Claim 21 is proved for \(k=i-1\). We have now proved Claim 21 for \(k=i\) and for \(k=i-1\). Hence, for the rest of this proof of Claim 21, we WLOG assume that \(k\) equals neither \(i\) nor \(i-1\). Hence, the entries in the \(k\)-th row of \(T^{\prime}\) are precisely the entries in the \(k\)-th row of \(T\) (because the flip operation affects only the \(i\)-th and \((i-1)\)-st rows). But all the latter entries are \(\leq b_{\sigma(k)}\) (by (72)). Hence, all the former entries are \(\leq b_{\sigma(k)}\) as well. So we have shown that all entries in the \(k\)-th row of \(T^{\prime}\) are \(\leq b_{\sigma(k)}\). But \(k\in[n]\setminus\{i,i-1\}\) (since \(k\) equals neither \(i\) nor \(i-1\)). Hence, \(\sigma^{\prime}\left(k\right)=\sigma\left(k\right)\) (by (67)). Recall that all entries in the \(k\)-th row of \(T^{\prime}\) are \(\leq b_{\sigma(k)}\). In other words, all entries in the \(k\)-th row of \(T^{\prime}\) are \(\leq b_{\sigma^{\prime}(k)}\) (since \(\sigma^{\prime}\left(k\right)=\sigma\left(k\right)\)). This proves Claim 21. Claim 21 shows that the \(\sigma^{\prime}\)-array \(T^{\prime}\) is \(\mathbf{b}\)-flagged (by the definition of "\(\mathbf{b}\)-flagged \(\sigma\)-array"). In other words, the twisted array \((\sigma^{\prime},T^{\prime})\) is \(\mathbf{b}\)-flagged. This proves Lemma 8.17 **(f)**. The proof of Lemma 8.18 uses the flip operation (defined in Lemma 8.17) to cancel all the unwanted addends (i.e., the addends corresponding to failing twisted arrays \((\sigma,T)\)) from the sum on the left hand side. This is best formalized using the following general cancellation rule: **Lemma 15.1**.: Let \(A\) be a finite set. Let \(R\) be a ring. For each \(a\in A\), let \(m_{a}\) be an integer, and let \(r_{a}\) be an element of \(R\). Let \(f:A\to A\) be a bijection. Assume that each \(a\in A\) satisfies \[m_{f(a)}=-m_{a} \tag{73}\] and \[r_{f(a)}=r_{a}. \tag{74}\] Then, \[\sum\limits_{a\in A}m_{a}r_{a}=0.\] Proof of Lemma 15.1.: The map \(f\) is a bijection, thus injective and surjective. 
For each \(a\in A\), the number \(m_{a}\) is an integer, thus is either \(>0\) or \(=0\) or \(<0\). Hence, we can split up the sum \(\sum\limits_{a\in A}m_{a}r_{a}\) as follows:
\[\sum_{a\in A}m_{a}r_{a}=\sum_{\substack{a\in A;\\ m_{a}>0}}m_{a}r_{a}+\sum_{\substack{a\in A;\\ m_{a}=0}}\underbrace{m_{a}}_{=0}r_{a}+\sum_{\substack{a\in A;\\ m_{a}<0}}m_{a}r_{a}=\sum_{\substack{a\in A;\\ m_{a}>0}}m_{a}r_{a}+\sum_{\substack{a\in A;\\ m_{a}<0}}m_{a}r_{a}. \tag{75}\]
Now, let us define two subsets
\[A_{+}:=\left\{a\in A\ \mid\ m_{a}>0\right\}\qquad\text{and}\qquad A_{-}:=\left\{a\in A\ \mid\ m_{a}<0\right\}\]
of \(A\). Then, the summation signs \(\sum\limits_{\substack{a\in A;\\ m_{a}>0}}\) and \(\sum\limits_{\substack{a\in A;\\ m_{a}<0}}\) can be rewritten as \(\sum\limits_{a\in A_{+}}\) and \(\sum\limits_{a\in A_{-}}\), respectively. Therefore, we can rewrite (75) as
\[\sum_{a\in A}m_{a}r_{a}=\sum_{a\in A_{+}}m_{a}r_{a}+\sum_{a\in A_{-}}m_{a}r_{a}. \tag{76}\]
Each \(a\in A_{+}\) satisfies \(f\left(a\right)\in A_{-}\)28. Hence, the map
\[g:A_{+}\to A_{-},\qquad a\mapsto f\left(a\right)\]
is well-defined.

Footnote 28: Proof. Let \(a\in A_{+}\). Thus, \(a\in A\) and \(m_{a}>0\) (by the definition of \(A_{+}\)). However, (73) yields \(m_{f\left(a\right)}=-m_{a}<0\) (since \(m_{a}>0\)). Hence, \(f\left(a\right)\in A\) and \(m_{f\left(a\right)}<0\). In other words, \(f\left(a\right)\in A_{-}\) (by the definition of \(A_{-}\)), qed.

This map \(g\) is a restriction of the injective map \(f\), and thus is injective itself. Furthermore, \(g\) is surjective29. Hence, \(g\) is a bijection (since \(g\) is both injective and surjective). Therefore, we can substitute \(g\left(a\right)\) for \(a\) in the sum \(\sum\limits_{a\in A_{-}}m_{a}r_{a}\). We thus obtain
\[\sum_{a\in A_{-}}m_{a}r_{a}=\sum_{a\in A_{+}}\underbrace{m_{g\left(a\right)}r_{g\left(a\right)}}_{=m_{f\left(a\right)}r_{f\left(a\right)}}=\sum_{a\in A_{+}}\underbrace{m_{f\left(a\right)}}_{=-m_{a}}\ \underbrace{r_{f\left(a\right)}}_{=r_{a}}=\sum_{a\in A_{+}}\left(-m_{a}\right)r_{a}=-\sum_{a\in A_{+}}m_{a}r_{a}.\]
Therefore,
\[\sum_{a\in A_{+}}m_{a}r_{a}+\sum_{a\in A_{-}}m_{a}r_{a}=0.\]
In light of this, we can rewrite (76) as \(\sum\limits_{a\in A}m_{a}r_{a}=0\). This proves Lemma 15.1.

Proof of Lemma 8.18.: Each twisted array \(\left(\sigma,T\right)\) is either failing or unfailing. Hence,
\[\sum_{\substack{\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)=\sum_{\substack{\left(\sigma,T\right)\text{ is a failing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)+\sum_{\substack{\left(\sigma,T\right)\text{ is an unfailing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right). \tag{77}\]
Our next goal is to show that the first sum on the right hand side is \(0\).

Let \(A\) be the set of all failing \(\mathbf{b}\)-flagged twisted arrays. This set \(A\) is finite30.
Footnote 30: Proof.: Let \(b_{\max}:=\max\left\{b_{1},b_{2},\ldots,b_{n}\right\}\) (this should be understood to mean \(0\) when \(n=0\)). If \(\left(\sigma,T\right)\) is a \(\mathbf{b}\)-flagged twisted array, then \(\sigma\) is a permutation in \(S_{n}\), whereas \(T\) is a filling of the diagram \(P\left(\sigma\right)\) with entries from the set \(\left\{1,2,\ldots,b_{\max}\right\}\) (since the "\(\mathbf{b}\)-flagged" condition forces each entry of \(T\) to be \(\leq b_{i}\) for an appropriate \(i\in[n]\), and thus to be \(\leq b_{\max}\)). Clearly, this leaves only finitely many options for \(\sigma\) and only finitely many options for \(T\). Thus, there are finitely many \(\mathbf{b}\)-flagged twisted arrays. Hence, a fortiori, there are only finitely many failing \(\mathbf{b}\)-flagged twisted arrays. In other words, the set \(A\) is finite.

Moreover, the following follows easily from Lemma 8.17:

_Claim 1:_ For each \(\left(\sigma,T\right)\in A\), we have \(\text{flip}\left(\sigma,T\right)\in A\).

Proof of Claim 1.: Let \(\left(\sigma,T\right)\in A\). Thus, \(\left(\sigma,T\right)\) is a failing \(\mathbf{b}\)-flagged twisted array (by the definition of \(A\)). Let \(\left(\sigma^{\prime},T^{\prime}\right)\) be the pair \(\text{flip}\left(\sigma,T\right)\). Then, Lemma 8.17 **(a)** shows that the pair \(\left(\sigma^{\prime},T^{\prime}\right)\) is again a failing twisted array, and furthermore, Lemma 8.17 **(f)** shows that this twisted array \(\left(\sigma^{\prime},T^{\prime}\right)\) is again \(\mathbf{b}\)-flagged. Thus, \(\left(\sigma^{\prime},T^{\prime}\right)\) is a failing \(\mathbf{b}\)-flagged twisted array. In other words, \(\left(\sigma^{\prime},T^{\prime}\right)\in A\) (by the definition of \(A\)). In other words, \(\text{flip}\left(\sigma,T\right)\in A\) (since \(\left(\sigma^{\prime},T^{\prime}\right)\) is the pair \(\text{flip}\left(\sigma,T\right)\)). This proves Claim 1.

Thanks to Claim 1, we can define a map
\[\text{flip}:A\to A,\qquad\left(\sigma,T\right)\mapsto\text{flip}\left(\sigma,T\right).\]
Lemma 8.17 **(c)** shows that this map is inverse to itself (i.e., if we apply it twice in succession to some twisted array \(\left(\sigma,T\right)\), then we obtain the original \(\left(\sigma,T\right)\) back). Hence, this map is invertible, i.e., a bijection.

For each element \(a=\left(\sigma,T\right)\) of \(A\), we define the integer \(m_{a}:=\left(-1\right)^{\sigma}\) and the element \(r_{a}:=w\left(T\right)\) of \(R\) (where \(R\) is as in Definition 7.3 **(a)**). Then, we claim the following:

_Claim 2:_ Each \(a\in A\) satisfies \(m_{\text{flip}\left(a\right)}=-m_{a}\) and \(r_{\text{flip}\left(a\right)}=r_{a}\).

Proof of Claim 2.: Let \(a\in A\). Write \(a\) as \(\left(\sigma,T\right)\). Thus, \(\left(\sigma,T\right)=a\in A\). Hence, \(\left(\sigma,T\right)\) is a failing \(\mathbf{b}\)-flagged twisted array (by the definition of \(A\)). Let \(\left(\sigma^{\prime},T^{\prime}\right)\) be the pair \(\text{flip}\left(\sigma,T\right)\). Then, \(\left(\sigma^{\prime},T^{\prime}\right)=\text{flip}\left(\sigma,T\right)=\text{flip}\left(a\right)\) (since \(\left(\sigma,T\right)=a\)). From \(a=\left(\sigma,T\right)\), we obtain \(m_{a}=m_{\left(\sigma,T\right)}=\left(-1\right)^{\sigma}\) (by the definition of \(m_{\left(\sigma,T\right)}\)) and \(r_{a}=r_{\left(\sigma,T\right)}=w\left(T\right)\) (by the definition of \(r_{\left(\sigma,T\right)}\)).
Similarly, from \(\text{flip}\left(a\right)=\left(\sigma^{\prime},T^{\prime}\right)\), we obtain \(m_{\text{flip}\left(a\right)}=\left(-1\right)^{\sigma^{\prime}}\) and \(r_{\text{flip}\left(a\right)}=w\left(T^{\prime}\right)\).

However, Lemma 8.17 **(d)** yields \(\left(-1\right)^{\sigma^{\prime}}=-\left(-1\right)^{\sigma}\). In other words, \(m_{\text{flip}\left(a\right)}=-m_{a}\) (since \(m_{\text{flip}\left(a\right)}=\left(-1\right)^{\sigma^{\prime}}\) and \(m_{a}=\left(-1\right)^{\sigma}\)).

Furthermore, Lemma 8.17 **(e)** yields that \(w\left(T^{\prime}\right)=w\left(T\right)\). In other words, \(r_{\text{flip}\left(a\right)}=r_{a}\) (since \(r_{\text{flip}\left(a\right)}=w\left(T^{\prime}\right)\) and \(r_{a}=w\left(T\right)\)). This completes the proof of Claim 2.

Thus we know that \(\text{flip}:A\to A\) is a bijection and satisfies Claim 2. Hence, we can apply Lemma 15.1 to \(f=\text{flip}\). As a result, we obtain
\[\sum_{a\in A}m_{a}r_{a}=0.\]
In view of
\[\sum_{a\in A}m_{a}r_{a}=\sum_{\left(\sigma,T\right)\in A}\underbrace{m_{\left(\sigma,T\right)}}_{\substack{=\left(-1\right)^{\sigma}\\ \text{(by the definition of }m_{\left(\sigma,T\right)}\text{)}}}\ \underbrace{r_{\left(\sigma,T\right)}}_{\substack{=w\left(T\right)\\ \text{(by the definition of }r_{\left(\sigma,T\right)}\text{)}}}\qquad\left(\begin{array}{c}\text{here, we have renamed the}\\ \text{summation index }a\text{ as }\left(\sigma,T\right)\end{array}\right)\]
\[=\sum_{\left(\sigma,T\right)\in A}\left(-1\right)^{\sigma}w\left(T\right)=\sum_{\substack{\left(\sigma,T\right)\text{ is a failing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)\qquad\left(\text{since }A\text{ is the set of all failing }\mathbf{b}\text{-flagged twisted arrays}\right),\]
we can rewrite this as
\[\sum_{\substack{\left(\sigma,T\right)\text{ is a failing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)=0.\]
Thus, (77) becomes
\[\sum_{\substack{\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)=\underbrace{\sum_{\substack{\left(\sigma,T\right)\text{ is a failing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)}_{=0}+\sum_{\substack{\left(\sigma,T\right)\text{ is an unfailing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)=\sum_{\substack{\left(\sigma,T\right)\text{ is an unfailing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right).\]
This proves Lemma 8.18.

Proof of Theorem 8.3.: Lemma 8.9 yields
\[\det\left(h_{b_{i};\ \mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}=\sum_{\substack{\left(\sigma,T\right)\text{ is a }\mathbf{b}\text{-flagged}\\ \text{twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)=\sum_{\substack{\left(\sigma,T\right)\text{ is an unfailing}\\ \mathbf{b}\text{-flagged twisted array}}}\left(-1\right)^{\sigma}w\left(T\right)\qquad\text{(by Lemma 8.18)}\]
\[=\sum_{T\in\text{FSSYT}\left(\mu,\mathbf{b}\right)}\ \ \prod_{\left(i,j\right)\in Y\left(\mu\right)}u_{T\left(i,j\right),\ j-i}\qquad\text{(by Lemma 8.14)}.\]
This proves Theorem 8.3.

Proof of Proposition 8.2.: We set
\[u_{i,j}:=x_{i}+y_{i+j}\qquad\text{for each }\left(i,j\right)\in\mathbb{Z}\times\mathbb{Z}.\]
Then, for any \(b\in\mathbb{N}\) and \(q,d\in\mathbb{Z}\), the elements \(h_{b;\ q}\left[d\right]\) defined in Theorem 8.3 are given by
\[h_{b;\ q}\left[d\right]=\sum_{\substack{\left(i_{1},i_{2},\ldots,i_{q}\right)\in\left[b\right]^{q};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{q}}}\ \prod_{j=1}^{q}\underbrace{u_{i_{j},\ j-d}}_{\substack{=x_{i_{j}}+y_{i_{j}+\left(j-1\right)+\left(1-d\right)}\\ \text{(since }j-d=\left(j-1\right)+\left(1-d\right)\text{)}}}=\sum_{\substack{\left(i_{1},i_{2},\ldots,i_{q}\right)\in\left[b\right]^{q};\\ i_{1}\leq i_{2}\leq\cdots\leq i_{q}}}\ \prod_{j=1}^{q}\left(x_{i_{j}}+y_{i_{j}+\left(j-1\right)+\left(1-d\right)}\right)=h\left(q,\ \ b,\ \ 1-d\right) \tag{78}\]
(by Definition 7.3 **(c)**). Furthermore, the definition of the \(u_{i,j}\) yields
\[\sum_{T\in\text{FSSYT}\left(\mu,\mathbf{b}\right)}\ \ \prod_{\left(i,j\right)\in Y\left(\mu\right)}\underbrace{u_{T\left(i,j\right),\ j-i}}_{=x_{T\left(i,j\right)}+y_{T\left(i,j\right)+j-i}}=\sum_{T\in\text{FSSYT}\left(\mu,\mathbf{b}\right)}\ \ \prod_{\left(i,j\right)\in Y\left(\mu\right)}\left(x_{T\left(i,j\right)}+y_{T\left(i,j\right)+j-i}\right).\]
Hence,
\[\sum_{T\in\text{FSSYT}\left(\mu,\mathbf{b}\right)}\ \ \prod_{\left(i,j\right)\in Y\left(\mu\right)}\left(x_{T\left(i,j\right)}+y_{T\left(i,j\right)+j-i}\right)=\sum_{T\in\text{FSSYT}\left(\mu,\mathbf{b}\right)}\ \ \prod_{\left(i,j\right)\in Y\left(\mu\right)}u_{T\left(i,j\right),\ j-i}=\det\left(\underbrace{h_{b_{i};\ \mu_{i}-i+j}\left[j\right]}_{\substack{=h\left(\mu_{i}-i+j,\ b_{i},\ 1-j\right)\\ \text{(by (78))}}}\right)_{i,j\in\left[n\right]}\qquad\text{(by Theorem 8.3)}\]
\[=\det\left(h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)\right)_{i,j\in\left[n\right]}.\]
This proves Proposition 8.2.

### To Section 9

Proof of Lemma 9.2.: Let \(k\in[n]\). Applying (24) to \(A=P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\), we obtain
\[\det\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)=\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}\underbrace{\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)_{k,\ell}}_{=Q_{k,\ell}}\det\underbrace{\left(\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)_{\sim k,\sim\ell}\right)}_{=P_{\sim k,\sim\ell}}=\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}Q_{k,\ell}\det\left(P_{\sim k,\sim\ell}\right). \tag{79}\]
Forget that we fixed \(k\). We thus have proved the equality (79) for each \(k\in[n]\). Summing this equality over all \(k\in[n]\), we find
\[\sum_{k=1}^{n}\det\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)=\sum_{k=1}^{n}\ \ \sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}Q_{k,\ell}\det\left(P_{\sim k,\sim\ell}\right).\]
An analogous argument (using Laplace expansion along columns rather than rows) shows that
\[\sum_{k=1}^{n}\det\left(P\underset{\mathrm{col}}{\overset{k}{\leftarrow}}Q\right)=\sum_{k=1}^{n}\ \ \sum_{\ell=1}^{n}\left(-1\right)^{\ell+k}Q_{\ell,k}\det\left(P_{\sim\ell,\sim k}\right)=\sum_{\ell=1}^{n}\ \ \sum_{k=1}^{n}\left(-1\right)^{\ell+k}Q_{\ell,k}\det\left(P_{\sim\ell,\sim k}\right)=\sum_{k=1}^{n}\ \ \sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}Q_{k,\ell}\det\left(P_{\sim k,\sim\ell}\right)\]
(here, we have renamed the summation indices \(\ell\) and \(k\) as \(k\) and \(\ell\)). Comparing these two equalities, we obtain
\[\sum_{k=1}^{n}\det\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)=\sum_{k=1}^{n}\det\left(P\underset{\mathrm{col}}{\overset{k}{\leftarrow}}Q\right).\]
This proves Lemma 9.2.

Proof of Lemma 9.4.: Let \(P\) be the \(n\times n\)-matrix \(\left(u_{i,j}\right)_{i,j\in[n]}\), and let \(Q\) be the \(n\times n\)-matrix \(\left(u_{i,j+1}\right)_{i,j\in[n]}\). Consider the matrices \(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\) and \(P\underset{\mathrm{col}}{\overset{k}{\leftarrow}}Q\) defined in Lemma 9.2. It is easy to see the following:

_Claim 1:_ Every \(k\in[n]\) satisfies
\[P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q=\left(u_{i,j+[k=i]}\right)_{i,j\in[n]} \tag{80}\]
and
\[P\underset{\mathrm{col}}{\overset{k}{\leftarrow}}Q=\left(u_{i,j+[k=j]}\right)_{i,j\in[n]}. \tag{81}\]

Proof of Claim 1.: Let \(k\in[n]\).
As we recall, the matrix \(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\) is obtained from \(P\) by replacing the \(k\)-th row by the \(k\)-th row of \(Q\). Hence, its \((i,j)\)-th entry is given by \[\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)_{i,j}=\begin{cases} P_{i,j},&\text{if }i\neq k;\\ Q_{i,j},&\text{if }i=k\end{cases} \tag{82}\] for all \(i,j\in[n]\). Now, let \(i,j\in[n]\). Then, the element \(u_{i,j+[k=i]}\) equals \(u_{i,j}\) when \(i\neq k\) (because in this case, we have \(k\neq i\) and thus \([k=i]=0\) and therefore \(j+[k=i]=j+0=j\) and thus \(u_{i,j+[k=i]}=u_{i,j}\)), but equals \(u_{i,j+1}\) when \(i=k\) (because in this case, we have \(k=i\) and thus \([k=i]=1\) and thus \(u_{i,j+[k=i]}=u_{i,j+1}\)). Hence, \[u_{i,j+[k=i]}=\begin{cases}u_{i,j},&\text{if }i\neq k;\\ u_{i,j+1},&\text{if }i=k.\end{cases}\] On the other hand, (82) yields \[\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)_{i,j}= \begin{cases}P_{i,j},&\text{if }i\neq k;\\ Q_{i,j},&\text{if }i=k\end{cases}=\begin{cases}u_{i,j},&\text{if }i\neq k;\\ u_{i,j+1},&\text{if }i=k\end{cases}\] (since \(P_{i,j}=u_{i,j}\) (by the definition of \(P\)) and \(Q_{i,j}=u_{i,j+1}\) (by the definition of \(Q\))). Comparing these two equalities, we obtain \[\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)_{i,j}=u_{i, j+[k=i]}.\] Forget that we fixed \(i,j\). We thus have proved the equality \(\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q\right)_{i,j}=u_{i,j+[k=i]}\) for all \(i,j\in[n]\). In other words, \(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q=\left(u_{i,j+[k=i]} \right)_{i,j\in[n]}\). This proves (80). Similarly, (81) can be shown. Thus, Claim 1 follows. However, Lemma 9.2 says that \[\sum_{k=1}^{n}\det\left(P\underset{\mathrm{row}}{\overset{k}{\leftarrow}}Q \right)=\sum_{k=1}^{n}\det\left(P\underset{\mathrm{col}}{\overset{k}{ \leftarrow}}Q\right).\] In view of (80) and (81), we can rewrite this as \[\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}\right)_{i,j\in[n]}=\sum_{k=1}^{n}\det \left(u_{i,j+[k=j]}\right)_{i,j\in[n]}. \tag{83}\] However, the right hand side of this equality can be simplified thanks to the following claim: _Claim 2:_ Let \(k\in[n-1]\). Then, \[\det\left(u_{i,j+\left[k=j\right]}\right)_{i,j\in[n]}=0. \tag{84}\] Proof of Claim 2.: We have \([k=k]=1\) and \([k=k+1]=0\). Thus, for each \(i\in[n]\), we have \(u_{i,k+\left[k=k\right]}=u_{i,k+1}\) and \(u_{i,(k+1)+\left[k=k+1\right]}=u_{i,(k+1)+0}=u_{i,k+1}\). Comparing these two equalities, we see that \[u_{i,k+\left[k=k\right]}=u_{i,(k+1)+\left[k=k+1\right]}\qquad\text{for each }i\in[n]\,.\] In other words, the \(k\)-th and \((k+1)\)-st columns of the matrix \(\left(u_{i,j+\left[k=j\right]}\right)_{i,j\in[n]}\) are equal. Hence, this matrix has two equal columns, and thus its determinant vanishes. This proves Claim 2. 
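For a quick illustration of Claim 2 (a small worked example, not needed for the argument): in the case \(n=2\) and \(k=1\), the matrix \(\left(u_{i,j+\left[k=j\right]}\right)_{i,j\in\left[2\right]}\) is
\[\begin{pmatrix}u_{1,2}&u_{1,2}\\ u_{2,2}&u_{2,2}\end{pmatrix},\]
whose two columns are visibly equal, so that its determinant is indeed \(0\), in agreement with (84).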
Now, (83) becomes
\[\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}\right)_{i,j\in[n]}=\sum_{k=1}^{n}\det\left(u_{i,j+[k=j]}\right)_{i,j\in[n]}=\sum_{k=1}^{n-1}\underbrace{\det\left(u_{i,j+[k=j]}\right)_{i,j\in[n]}}_{\substack{=0\\ \text{(by (84))}}}+\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}.\]
This proves Lemma 9.4.

Proof of Lemma 9.6.: Let \(U\) be the \(n\times n\)-matrix \(\left(u_{i,j}\right)_{i,j\in[n]}\). For each \(k\in[n]\), let \(U_{k}\) be the \(n\times n\)-matrix
\[U_{k}:=\left(u_{i,j+[k=i]}-p_{i}u_{i,j}\,[k=i]\right)_{i,j\in[n]}, \tag{89}\]
and let \(V_{k}\) be the \(n\times n\)-matrix \(\left(u_{i,j+[k=i]}\right)_{i,j\in[n]}\) (here, \(p_{1},p_{2},\ldots,p_{n}\) are as in Lemma 9.6).

Let \(k\in[n]\). The matrices \(U\), \(U_{k}\) and \(V_{k}\) agree in all rows other than the \(k\)-th one (since \([k=i]=0\) for all \(i\neq k\)). Hence, for each \(\ell\in[n]\), we have \(\left(U_{k}\right)_{\sim k,\sim\ell}=\left(V_{k}\right)_{\sim k,\sim\ell}=U_{\sim k,\sim\ell}\). Moreover, the entries of the \(k\)-th rows of \(U_{k}\) and of \(V_{k}\) are \(\left(U_{k}\right)_{k,\ell}=u_{k,\ell+1}-p_{k}u_{k,\ell}\) and \(\left(V_{k}\right)_{k,\ell}=u_{k,\ell+1}\) for all \(\ell\in[n]\).

Now, applying (24) to \(A=V_{k}\), we obtain
\[\det\left(V_{k}\right)=\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}\underbrace{\left(V_{k}\right)_{k,\ell}}_{=u_{k,\ell+1}}\det\underbrace{\left(\left(V_{k}\right)_{\sim k,\sim\ell}\right)}_{=U_{\sim k,\sim\ell}}=\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}u_{k,\ell+1}\det\left(U_{\sim k,\sim\ell}\right).\]
Similarly, applying (24) to \(A=U_{k}\), we obtain
\[\det\left(U_{k}\right)=\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}\left(u_{k,\ell+1}-p_{k}u_{k,\ell}\right)\det\left(U_{\sim k,\sim\ell}\right)=\underbrace{\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}u_{k,\ell+1}\det\left(U_{\sim k,\sim\ell}\right)}_{=\det\left(V_{k}\right)}-\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{k}u_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right),\]
so that
\[\det\left(U_{k}\right)=\det\left(V_{k}\right)-\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{k}u_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right). \tag{92}\]
Forget that we fixed \(k\). Thus, for each \(k\in[n]\), we have defined two matrices \(V_{k}\) and \(U_{k}\) and proved the relation (92) between their determinants.
Now, summing the equality (92) for all \(k\in[n]\), we obtain
\[\sum_{k=1}^{n}\det\left(U_{k}\right)=\sum_{k=1}^{n}\left(\det\left(V_{k}\right)-\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{k}u_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right)\right)=\sum_{k=1}^{n}\det\left(V_{k}\right)-\sum_{k=1}^{n}\ \ \sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{k}u_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right).\]
In view of
\[\sum_{k=1}^{n}\det\underbrace{\left(V_{k}\right)}_{\substack{=\left(u_{i,j+[k=i]}\right)_{i,j\in[n]}\\ \text{(by the definition of }V_{k}\text{)}}}=\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}\right)_{i,j\in[n]}=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}\qquad\text{(by Lemma 9.4)}\]
and
\[\sum_{k=1}^{n}\ \ \sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{k}\underbrace{u_{k,\ell}}_{\substack{=U_{k,\ell}\\ \text{(by the definition of }U\text{)}}}\det\left(U_{\sim k,\sim\ell}\right)=\sum_{k=1}^{n}p_{k}\underbrace{\sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}U_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right)}_{\substack{=\det U\\ \text{(by (24), applied to }A=U\text{)}}}=\sum_{k=1}^{n}p_{k}\det U=\left(\sum_{k=1}^{n}p_{k}\right)\det\underbrace{U}_{=\left(u_{i,j}\right)_{i,j\in[n]}}=\left(\sum_{k=1}^{n}p_{k}\right)\det\left(u_{i,j}\right)_{i,j\in[n]}, \tag{93}\]
this becomes
\[\sum_{k=1}^{n}\det\left(U_{k}\right)=\sum_{k=1}^{n}\det\left(V_{k}\right)-\underbrace{\sum_{k=1}^{n}\ \ \sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{k}u_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right)}_{=\left(\sum\limits_{k=1}^{n}p_{k}\right)\det\left(u_{i,j}\right)_{i,j\in[n]}}=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}-\left(\sum_{k=1}^{n}p_{k}\right)\det\left(u_{i,j}\right)_{i,j\in[n]}.\]
In view of (89), we can rewrite this as
\[\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}-p_{i}u_{i,j}\,[k=i]\right)_{i,j\in[n]}=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}-\left(\sum_{k=1}^{n}p_{k}\right)\det\left(u_{i,j}\right)_{i,j\in[n]}.\]
This proves Lemma 9.6.

Proof of Lemma 9.8.: Let \(A\) be the matrix \(\left(a_{i,j}\right)_{i,j\in[n]}\). Then, \(A_{\sim n,\sim n}=\left(a_{i,j}\right)_{i,j\in[n-1]}\). Now, (24) (applied to \(k=n\)) yields
\[\det A=\sum_{\ell=1}^{n}\left(-1\right)^{n+\ell}\underbrace{A_{n,\ell}}_{\substack{=a_{n,\ell}\\ \text{(by the definition of }A\text{)}}}\det\left(A_{\sim n,\sim\ell}\right)=\sum_{\ell=1}^{n}\left(-1\right)^{n+\ell}a_{n,\ell}\det\left(A_{\sim n,\sim\ell}\right)=\sum_{\ell=1}^{n-1}\left(-1\right)^{n+\ell}\underbrace{a_{n,\ell}}_{\substack{=0\\ \text{(by the assumption of Lemma 9.8)}}}\det\left(A_{\sim n,\sim\ell}\right)+\underbrace{\left(-1\right)^{n+n}}_{=1}a_{n,n}\det\underbrace{\left(A_{\sim n,\sim n}\right)}_{=\left(a_{i,j}\right)_{i,j\in[n-1]}}\]
\[=\underbrace{\sum_{\ell=1}^{n-1}\left(-1\right)^{n+\ell}\,0\,\det\left(A_{\sim n,\sim\ell}\right)}_{=0}+a_{n,n}\cdot\det\left(a_{i,j}\right)_{i,j\in[n-1]}=a_{n,n}\cdot\det\left(a_{i,j}\right)_{i,j\in[n-1]}.\]
This proves Lemma 9.8.

### To Section 10

Proof of Lemma 10.2.: Let \(i\) be a positive integer. Then, \(\ell_{i}=\lambda_{i}-i\) (by the definition of \(\ell_{i}\)) and \(\ell_{i+1}=\lambda_{i+1}-(i+1)\) (by the definition of \(\ell_{i+1}\)). However, \(\lambda_{i}\geq\lambda_{i+1}\) (since \(\lambda\) is a partition). Hence, \(\lambda_{i}-i\geq\lambda_{i+1}-i>\lambda_{i+1}-i-1=\lambda_{i+1}-(i+1)\).
In view of \(\ell_{i}=\lambda_{i}-i\) and \(\ell_{i+1}=\lambda_{i+1}-(i+1)\), we can rewrite this as \(\ell_{i}>\ell_{i+1}\). Forget that we fixed \(i\). We thus have shown that \(\ell_{i}>\ell_{i+1}\) for every positive integer \(i\). In other words, \(\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Similarly, we can show that \(m_{1}>m_{2}>m_{3}>\cdots\) and \(\ell_{1}^{\prime}>\ell_{2}^{\prime}>\ell_{3}^{\prime}>\cdots\) and \(m_{1}^{\prime}>m_{2}^{\prime}>m_{3}^{\prime}>\cdots\). Thus, Lemma 10.2 is proved. Proof of Lemma 10.5.: If \(\nu\) is a partition that satisfies \(\mu\lessdot\nu\), then \(\nu\) can be obtained from \(\mu\) by incrementing exactly one entry of \(\mu\) by \(1\). In other words, \(\nu=\mu^{+k}\) for some positive integer \(k\). This \(k\) must furthermore satisfy \(k=1\) or \(\mu_{k}\neq\mu_{k-1}\) (since otherwise, \(\mu^{+k}\) is not a partition). In other words, this \(k\) must belong to \(\operatorname{ER}\left(\mu\right)\). Thus, we have shown that every partition \(\nu\) that satisfies \(\mu\lessdot\nu\) has the form \(\mu^{+k}\) for some \(k\in\operatorname{ER}\left(\mu\right)\). Conversely, it is clear that any partition \(\mu^{+k}\) with \(k\in\operatorname{ER}\left(\mu\right)\) is a partition \(\nu\) that satisfies \(\mu\lessdot\nu\). Combining these two facts, we see that the partitions \(\nu\) that satisfy \(\mu\lessdot\nu\) are precisely the partitions \(\mu^{+k}\) for the elements \(k\in\operatorname{ER}\left(\mu\right)\). Hence, Lemma 10.5 is proved. Proof of Lemma 10.6.: This follows immediately from the definition of \(\mu^{+k}\). Proof of Lemma 10.7.: We must prove that \(k\in[n]\). Assume the contrary. Thus, \(k>n\). Hence, \(\mu_{k}=0\) (since \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\)), so that the definition of \(m_{k}\) yields \(m_{k}=\underbrace{\mu_{k}}_{=0}-k=-k<-n\) (since \(k>n\)). However, the definition of \(\ell_{j}\) yields \(\ell_{j}=\underbrace{\lambda_{j}}_{\geq 0}-j\geq-j\geq-n\) (since \(j\leq n\)). This contradicts \(\ell_{j}=m_{k}<-n\). This contradiction shows that our assumption was false. Hence, Lemma 10.7 is proved. Proof of Lemma 10.8.: We have \(\ell_{j}\in\Delta\left(\mu\right)=\left\{m_{1},m_{2},m_{3},\ldots\right\}\). In other words, \(\ell_{j}=m_{k}\) for some positive integer \(k\). Consider this \(k\). Lemma 10.7 yields \(k\in[n]\) (since \(j\in[n]\)). Thus, there exists at least one \(i\in[n]\) satisfying \(\ell_{j}=m_{i}\) (namely, \(i=k\)). Lemma 10.2 yields \(m_{1}>m_{2}>m_{3}>\cdots\). Hence, the numbers \(m_{1},m_{2},m_{3},\ldots\) are distinct. Thus, there exists at most one \(i\in[n]\) satisfying \(\ell_{j}=m_{i}\). Since we also know that there exists at least one such \(i\), we thus conclude that there exists exactly one such \(i\). Hence, the sum \(\sum\limits_{\begin{subarray}{c}i\in[n];\\ \ell_{j}=m_{i}\end{subarray}}1\) contains exactly one addend, so that it simplifies to \(1\). This proves Lemma 10.8. Proof of Lemma 10.9.: Assume the contrary. Thus, \(k>n\). Hence, \(\lambda_{k}=0\) (since \(\lambda=\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)\)). Thus, the definition of \(\ell_{k}\) yields \(\ell_{k}=\underbrace{\lambda_{k}}_{=0}-k=-k\). Similarly, \(m_{k}=-k\). Comparing these equalities, we find \(m_{k}=\ell_{k}\notin\Delta\left(\mu\right)=\left\{m_{1},m_{2},m_{3},\ldots\right\}\), which contradicts \(m_{k}\in\left\{m_{1},m_{2},m_{3},\ldots\right\}\). This contradiction shows that our assumption was false. Hence, Lemma 10.9 is proved. 
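For a concrete illustration of the above notions (a small worked example, not used anywhere in the proofs): for the partition \(\mu=\left(2,2,1,0,0,\ldots\right)\), we have \(\operatorname{ER}\left(\mu\right)=\left\{1,3,4\right\}\) (since \(\mu_{1}=\mu_{2}\) rules out \(k=2\), and \(\mu_{k-1}=\mu_{k}=0\) rules out every \(k\geq 5\)), and the partitions \(\nu\) satisfying \(\mu\lessdot\nu\) are exactly \(\mu^{+1}=\left(3,2,1\right)\), \(\mu^{+3}=\left(2,2,2\right)\) and \(\mu^{+4}=\left(2,2,1,1\right)\), in agreement with Lemma 10.5. Likewise, for \(\lambda=\left(2,2\right)\) and \(\mu=\left(2,1\right)\) (with \(n=2\)), we have \(\ell_{1}=1\), \(\ell_{2}=0\), \(\ell_{3}=-3,\ldots\) and \(m_{1}=1\), \(m_{2}=-1\), \(m_{3}=-3,\ldots\), so that \(\Delta\left(\lambda\right)=\left\{1,0,-3,-4,\ldots\right\}\) and \(\Delta\left(\mu\right)=\left\{1,-1,-3,-4,\ldots\right\}\); moreover, \(b_{1}=1\) and \(b_{2}=2\), which agrees with Lemma 10.11 (applied to \(i=1\) and \(j=1\), since \(m_{1}=\ell_{1}\)).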
Proof of Lemma 10.10.: We have \[\sum\limits_{\begin{subarray}{c}i,j\in[n];\\ \ell_{j}=m_{i}\end{subarray}}g\left(j\right) =\sum\limits_{j\in[n]}\sum\limits_{\begin{subarray}{c}i\in[n];\\ \ell_{j}=m_{i}\end{subarray}}g\left(j\right)\] \[=\sum\limits_{\begin{subarray}{c}i\in[n];\\ \ell_{j}\in\Delta(\mu)\end{subarray}}\sum\limits_{\begin{subarray}{c}i\in[n]; \\ \ell_{j}=m_{i}\end{subarray}}g\left(j\right)\ +\ \sum\limits_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\notin\Delta(\mu)\end{subarray}}\sum\limits_{\begin{subarray}{c}i\in[n ];\\ \ell_{j}=m_{i}\end{subarray}}g\left(j\right)\] \[=\sum\limits_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\in\Delta(\mu)\end{subarray}}g\left(j\right)\cdot\sum\limits_{ \begin{subarray}{c}i\in[n];\\ \ell_{j}=m_{i}\end{subarray}}1\ \ \ \ \ \ \ \ +\sum\limits_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\notin\Delta(\mu)\end{subarray}}\underbrace{\left(\text{empty sum}\right)}_{=0}\] \[=\sum\limits_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\in\Delta(\mu)\end{subarray}}g\left(j\right)+\sum\limits_{ \begin{subarray}{c}j\in[n];\\ \ell_{j}\notin\Delta(\mu)\end{subarray}}0\] \[=\sum\limits_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\in\Delta(\mu)\end{subarray}}g\left(j\right)\,. \tag{94}\] An analogous argument (with the labels \(i,j,\lambda,\mu,m_{k},\ell_{k},g\) replaced by \(j,i,\mu,\lambda,\ell_{k},m_{k},f\)) shows that \[\sum\limits_{\begin{subarray}{c}j,i\in[n];\\ m_{i}=\ell_{j}\end{subarray}}f\left(i\right)=\sum\limits_{\begin{subarray}{c }i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}f\left(i\right). \tag{95}\] However, the summation sign \(\sum\limits_{\begin{subarray}{c}i,j\in[n];\\ \ell_{j}=m_{i}\end{subarray}}\) can be rewritten as \(\sum\limits_{\begin{subarray}{c}j,i\in[n];\\ m_{i}=\ell_{j}\end{subarray}}\) (since \(\ell_{j}=m_{i}\) is equivalent to \(m_{i}=\ell_{j}\)). Thus, \[\sum\limits_{\begin{subarray}{c}i,j\in[n];\\ \ell_{j}=m_{i}\end{subarray}}g\left(j\right)=\sum\limits_{\begin{subarray}{c }j,i\in[n];\\ m_{i}=\ell_{j}\end{subarray}}\underbrace{g\left(j\right)}_{=f\left(i\right)}\ =\sum\limits_{ \begin{subarray}{c}j,i\in[n];\\ m_{i}=\ell_{j}\end{subarray}}f\left(i\right).\] In other words, the left hand sides of the equalities (94) and (95) are equal. Hence, their right hand sides are equal as well. In other words, we have \[\sum\limits_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\in\Delta(\mu)\end{subarray}}g\left(j\right)=\sum\limits_{ \begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}f\left(i\right).\] This proves Lemma 10.10. Proof of Lemma 10.11.: The definition of \(b_{i}\) yields \[b_{i}=\max\left\{k\geq 0\mid\lambda_{k}-k\geq\mu_{i}-i\right\}. \tag{96}\] Lemma 10.2 yields \(\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Hence, the numbers \(\ell_{1},\ell_{2},\ldots,\ell_{j}\) are \(\geq\ell_{j}\), whereas the numbers \(\ell_{j+1}\), \(\ell_{j+2}\), \(\ell_{j+3},\ldots\) are not. Therefore, \(\max\left\{k\geq 0\mid\ell_{k}\geq\ell_{j}\right\}=j\). In view of \[\ell_{j}=m_{i}=\mu_{i}-i\qquad\quad\text{(by the definition of $m_{i}$)}\] and \[\ell_{k}=\lambda_{k}-k\qquad\quad\text{(by the definition of $\ell_{k}$)}\,,\] we can rewrite this as \(\max\left\{k\geq 0\mid\lambda_{k}-k\geq\mu_{i}-i\right\}=j\). Hence, (96) can be rewritten as \(b_{i}=j\). This proves Lemma 10.11. Proof of Lemma 10.12.: If \(i\) and \(j\) are two positive integers satisfying \(m_{i}=\ell_{j}\), then \(x_{b_{i}}=x_{j}\) (since Lemma 10.11 yields \(b_{i}=j\)). 
Hence, Lemma 10.10 (applied to \(f\left(i\right)=x_{b_{i}}\) and \(g\left(j\right)=x_{j}\)) yields
\[\sum_{\substack{i\in\left[n\right];\\ m_{i}\in\Delta\left(\lambda\right)}}x_{b_{i}}=\sum_{\substack{j\in\left[n\right];\\ \ell_{j}\in\Delta\left(\mu\right)}}x_{j}=\underbrace{\sum_{j\in\left[n\right]}x_{j}}_{=x_{1}+x_{2}+\cdots+x_{n}=\sum\limits_{i=1}^{n}x_{i}}-\sum_{\substack{j\in\left[n\right];\\ \ell_{j}\notin\Delta\left(\mu\right)}}x_{j}=\sum_{i=1}^{n}x_{i}-\sum_{\substack{j\in\left[n\right];\\ \ell_{j}\notin\Delta\left(\mu\right)}}x_{j}.\]
In other words,
\[\sum_{i=1}^{n}x_{i}-\sum_{\substack{i\in\left[n\right];\\ m_{i}\in\Delta\left(\lambda\right)}}x_{b_{i}}=\sum_{\substack{j\in\left[n\right];\\ \ell_{j}\notin\Delta\left(\mu\right)}}x_{j}=\sum_{\substack{k\in\left[n\right];\\ \ell_{k}\notin\Delta\left(\mu\right)}}x_{k}\]
(here, we have renamed the summation index \(j\) as \(k\)). This proves Lemma 10.12.

Proof of Lemma 10.13.: Set \(k=m_{i}+1+b_{i}\). We make the following two observations:

* We have
\[\lambda_{b_{i}}\geq k\text{ if }b_{i}\geq 1. \tag{97}\]
[Proof: Assume that \(b_{i}\geq 1\). Then, \(b_{i}\) is a positive integer. Hence, Lemma 6.11 (applied to \(j=b_{i}\)) yields that \(\lambda_{b_{i}}-b_{i}\geq\mu_{i}-i\) (since \(b_{i}\leq b_{i}\)). In view of \(m_{i}=\mu_{i}-i\), we can rewrite this as \(\lambda_{b_{i}}-b_{i}\geq m_{i}\). In other words, \(m_{i}\leq\lambda_{b_{i}}-b_{i}\). Furthermore, \(m_{i}\notin\Delta\left(\lambda\right)=\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\). In other words, \(m_{i}\neq\ell_{k}\) for every positive integer \(k\). In other words, \(m_{i}\neq\lambda_{k}-k\) for every positive integer \(k\) (since \(\ell_{k}\) is defined to be \(\lambda_{k}-k\)). Applying this to \(k=b_{i}\), we obtain \(m_{i}\neq\lambda_{b_{i}}-b_{i}\). Combined with \(m_{i}\leq\lambda_{b_{i}}-b_{i}\), this yields \(m_{i}<\lambda_{b_{i}}-b_{i}\). In other words, \(m_{i}+b_{i}<\lambda_{b_{i}}\). Equivalently, \(\lambda_{b_{i}}>m_{i}+b_{i}\). Since both sides of this inequality are integers, we thus obtain \(\lambda_{b_{i}}\geq m_{i}+b_{i}+1=m_{i}+1+b_{i}=k\). This proves (97).]

* We have
\[\lambda_{j}<k\text{ for each integer }j>b_{i}. \tag{98}\]
[Proof: We don't have \(b_{i}+1\leq b_{i}\). Hence, Lemma 6.11 (applied to \(j=b_{i}+1\)) yields that we don't have \(\lambda_{b_{i}+1}-\left(b_{i}+1\right)\geq\mu_{i}-i\) either. In other words, we have \(\lambda_{b_{i}+1}-\left(b_{i}+1\right)<\mu_{i}-i\). Thus,
\[\lambda_{b_{i}+1}<\underbrace{\mu_{i}-i}_{\substack{=m_{i}\\ \text{(by the definition of }m_{i}\text{)}}}+\left(b_{i}+1\right)=m_{i}+\left(b_{i}+1\right)=m_{i}+1+b_{i}=k.\]
Now, for each integer \(j>b_{i}\), we have \(j\geq b_{i}+1\) and thus \(\lambda_{j}\leq\lambda_{b_{i}+1}\) (since \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\cdots\)) and therefore \(\lambda_{j}\leq\lambda_{b_{i}+1}<k\). This proves (98).]

Combining (97) with (98), we see that \(b_{i}\) is the largest \(j\geq 1\) satisfying \(\lambda_{j}\geq k\) (where we agree that if no such \(j\) exists, then we consider the largest such \(j\) to be \(0\)). In other words,
\[b_{i}=\max\left\{j\geq 1\mid\lambda_{j}\geq k\right\}. \tag{99}\]
Also, (98) (applied to \(j=b_{i}+1\)) yields \(\lambda_{b_{i}+1}<k\) (since \(b_{i}+1>b_{i}\)), so that \(k>\lambda_{b_{i}+1}\geq 0\). Thus, \(k\) is a positive integer. In other words, \(k\geq 1\).
From (13), we thus obtain \[\lambda_{k}^{t}=\max\left\{j\geq 1\mid\lambda_{j}\geq k\right\}=b_{i}\] (by (99)). Now, the definition of \(\ell_{k}^{t}\) yields \[\ell_{k}^{t}=\underbrace{\lambda_{k}^{t}}_{=b_{i}}-\underbrace{k}_{=m_{i}+1+b _{i}}=b_{i}-(m_{i}+1+b_{i})=-1-m_{i}.\] We have thus shown that \(k\geq 1\) and \(\ell_{k}^{t}=-1-m_{i}\). In other words, \(m_{i}+1+b_{i}\geq 1\) and \(\ell_{m_{i}+1+b_{i}}^{t}=-1-m_{i}\) (since \(k=m_{i}+1+b_{i}\)). This proves Lemma 10.13. Proof of Lemma 10.14.: Recall that \(\lambda_{0}=\infty\) by convention. We also set \(\ell_{0}:=\infty\). We defined \(\ell_{i}\) by the equality \(\ell_{i}=\lambda_{i}-i\) for all \(i\geq 1\). This equality holds for \(i=0\) as well (since \(\ell_{0}=\infty\) and \(\lambda_{0}=\infty\)), and thus it holds for all \(i\in\mathbb{N}\). In other words, \(\ell_{k}=\lambda_{k}-k\) for all \(k\in\mathbb{N}\). The definition of \(b_{j}\) yields \[b_{j} =\max\left\{k\geq 0\mid\lambda_{k}-k\geq\mu_{j}-j\right\}\] \[=\max\left\{k\geq 0\mid\ell_{k}\geq m_{j}\right\} \tag{100}\] (since \(\ell_{k}=\lambda_{k}-k\) and \(m_{j}=\mu_{j}-j\)). Similarly, from the definition of \(b_{j-1}\), we obtain \[b_{j-1}=\max\left\{k\geq 0\mid\ell_{k}\geq m_{j-1}\right\}. \tag{101}\] However, \(\mu_{j-1}=\mu_{j}\), so that \(\mu_{j-1}-(j-1)=\mu_{j}-(j-1)=\mu_{j}-j+1\). In other words, \(m_{j-1}=m_{j}+1\) (since \(m_{j-1}=\mu_{j-1}-(j-1)\) and \(m_{j}=\mu_{j}-j\)). Thus, the inequality \(\ell_{k}\geq m_{j-1}\) (for any given \(k\geq 0\)) is equivalent to \(\ell_{k}\geq m_{j}+1\), which in turn is equivalent to \(\ell_{k}>m_{j}\) (since \(m_{j}\) is not \(\infty\)). Thus, we can rewrite (101) as \[b_{j-1}=\max\left\{k\geq 0\mid\ell_{k}>m_{j}\right\}. \tag{102}\] Lemma 10.2 yields \(\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Since \(\ell_{0}>\ell_{1}\) (because \(\ell_{0}=\infty\) while \(\ell_{1}\) is finite), we can extend this to \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). **(a)** Assume that \(m_{j}\notin\Delta\left(\lambda\right)\). Thus, \(m_{j}\notin\Delta\left(\lambda\right)=\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\). In other words, for every \(k\geq 1\), we have \(m_{j}\neq\ell_{k}\) (by the definition of \(\ell_{k}\)). This also holds for \(k=0\) (since \(\ell_{0}=\infty\)), and thus holds for each \(k\geq 0\). In other words, we have \(\ell_{k}\neq m_{j}\) for each \(k\geq 0\). But this, in turn, entails that the weak inequality \(\ell_{k}\geq m_{j}\) is equivalent to the strict inequality \(\ell_{k}>m_{j}\) for any \(k\geq 0\). Hence, the right hand sides of the equalities (100) and (102) are equal. Therefore, the left hand sides of these equalities are equal as well. In other words, \(b_{j}=b_{j-1}\). This proves Lemma 10.14**(a)**. **(b)** Assume that \(m_{j}\in\Delta\left(\lambda\right)\). Thus, \(m_{j}\in\Delta\left(\lambda\right)=\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\). In other words, there exists some \(i\geq 1\) such that \(m_{j}=\ell_{i}\). Consider this \(i\). We can rewrite (100) as \[b_{j}=\max\left\{k\geq 0\mid\ell_{k}\geq\ell_{i}\right\} \tag{103}\] (since \(m_{j}=\ell_{i}\)). For the same reasons, we can rewrite (102) as \[b_{j-1}=\max\left\{k\geq 0\mid\ell_{k}>\ell_{i}\right\}. \tag{104}\] But we have \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Hence, the numbers \(\ell_{0},\ell_{1},\ldots,\ell_{i}\) are \(\geq\ell_{i}\), whereas the numbers \(\ell_{i+1},\ell_{i+2},\ell_{i+3},\ldots\) are not. Therefore, \(\max\left\{k\geq 0\mid\ell_{k}\geq\ell_{i}\right\}=i\). 
In view of this, we can rewrite (103) as \(b_{j}=i\).

We have \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Hence, the numbers \(\ell_{0},\ell_{1},\ldots,\ell_{i-1}\) are \(>\ell_{i}\), whereas the numbers \(\ell_{i},\ell_{i+1},\ell_{i+2},\ldots\) are not. Therefore, \(\max\left\{k\geq 0\mid\ell_{k}>\ell_{i}\right\}=i-1\). In view of this, we can rewrite (104) as \(b_{j-1}=i-1\). Hence, \(i=b_{j-1}+1\). Therefore, \(b_{j}=i=b_{j-1}+1\). This proves Lemma 10.14 **(b)**.

Proof of Lemma 10.15.: Recall that \(\lambda_{0}=\infty\) by convention. Let us also set \(\ell_{0}:=\infty\). We thus have
\[\ell_{p}=\lambda_{p}-p\qquad\text{for all }p\geq 0\]
(indeed, this is clear for \(p=0\), and is the definition of \(\ell_{p}\) for \(p>0\)). Also, \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\) (this is proved as in the proof of Lemma 10.14 above).

Let \(i\) be a positive integer. The definition of \(m_{i}\) yields \(m_{i}=\mu_{i}-i\). Also, Lemma 10.6 yields \(\left(\mu^{+k}\right)_{i}=\mu_{i}+[k=i]\). Subtracting \(i\) from both sides of this equality, we obtain
\[\left(\mu^{+k}\right)_{i}-i=\mu_{i}+[k=i]-i=\underbrace{\mu_{i}-i}_{=m_{i}}+[k=i]=m_{i}+[k=i]\,. \tag{105}\]
The definition of \(b_{i}\) yields
\[b_{i}=\max\left\{k\geq 0\mid\lambda_{k}-k\geq\mu_{i}-i\right\}=\max\left\{p\geq 0\mid\lambda_{p}-p\geq\mu_{i}-i\right\}=\max\left\{p\geq 0\mid\ell_{p}\geq\mu_{i}-i\right\} \tag{106}\]
(since each \(p\geq 0\) satisfies \(\ell_{p}=\lambda_{p}-p\)). The same argument (applied to \(\mu^{+k}\) and \(b_{i}^{*}\) instead of \(\mu\) and \(b_{i}\)) yields
\[b_{i}^{*}=\max\left\{p\geq 0\mid\ell_{p}\geq\left(\mu^{+k}\right)_{i}-i\right\}.\]
Using (105), we can rewrite this as
\[b_{i}^{*}=\max\left\{p\geq 0\mid\ell_{p}\geq m_{i}+\left[k=i\right]\right\}. \tag{107}\]
Also, using \(\mu_{i}-i=m_{i}\), we can rewrite (106) as
\[b_{i}=\max\left\{p\geq 0\mid\ell_{p}\geq m_{i}\right\}. \tag{108}\]

**(c)** Assume that \(i\neq k\). Then, \(k\neq i\), so that \(\left[k=i\right]=0\) and thus \(m_{i}+\left[k=i\right]=m_{i}\). Hence, the right hand sides of the equalities (107) and (108) are equal. Therefore, their left hand sides are equal as well. In other words, \(b_{i}^{*}=b_{i}\). This proves Lemma 10.15 **(c)**.

**(a)** Assume that \(m_{k}\notin\Delta\left(\lambda\right)\). We must prove that \(b_{i}^{*}=b_{i}\). If \(i\neq k\), then this follows from part **(c)**. Thus, we WLOG assume that \(i=k\). Hence, \(k=i\), so that \(\left[k=i\right]=1\). Also, recall that \(m_{k}\notin\Delta\left(\lambda\right)\). In other words, \(m_{i}\notin\Delta\left(\lambda\right)\) (since \(k=i\)).

We shall now prove the following:

_Claim 1:_ Let \(p\geq 0\) be an integer. Then, the statements "\(\ell_{p}\geq m_{i}+\left[k=i\right]\)" and "\(\ell_{p}\geq m_{i}\)" are equivalent.

Proof of Claim 1.: This equivalence is obvious if \(p=0\) (because in this case, we have \(\ell_{p}=\ell_{0}=\infty\), and thus both statements "\(\ell_{p}\geq m_{i}+\left[k=i\right]\)" and "\(\ell_{p}\geq m_{i}\)" are true). Hence, for the rest of this proof, we WLOG assume that \(p\neq 0\). Hence, \(p\geq 1\) (since \(p\) is an integer), and thus \(\ell_{p}\) is an integer.

If we had \(m_{i}=\ell_{p}\), then we would have \(m_{i}=\ell_{p}\in\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\) (since \(p\geq 1\) is an integer), which would contradict \(m_{i}\notin\Delta\left(\lambda\right)=\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\). Hence, we cannot have \(m_{i}=\ell_{p}\). Thus, we have \(m_{i}\neq\ell_{p}\).
In other words, \(\ell_{p}\neq m_{i}\). Now, we have the following chain of logical equivalences:
\[\left(\ell_{p}\geq m_{i}+\left[k=i\right]\right)\iff\left(\ell_{p}\geq m_{i}+1\right)\iff\left(\ell_{p}>m_{i}\right)\iff\left(\ell_{p}\geq m_{i}\right)\]
(here, the first equivalence holds since \([k=i]=1\); the second holds since \(\ell_{p}\) and \(m_{i}\) are integers; the third holds since we know that \(\ell_{p}\neq m_{i}\)). In other words, the statements "\(\ell_{p}\geq m_{i}+[k=i]\)" and "\(\ell_{p}\geq m_{i}\)" are equivalent. This proves Claim 1.

Now, the right hand sides of the equalities (107) and (108) are equal (because Claim 1 shows that the statements "\(\ell_{p}\geq m_{i}+[k=i]\)" and "\(\ell_{p}\geq m_{i}\)" are equivalent). Therefore, their left hand sides are equal as well. In other words, \(b_{i}^{*}=b_{i}\). This proves Lemma 10.15 **(a)**.

**(b)** Assume that \(m_{k}\in\Delta\left(\lambda\right)\). We must prove that \(b_{i}^{*}=b_{i}-[k=i]\). If \(i\neq k\), then this follows from part **(c)** (since \(i\neq k\) entails \(k\neq i\) and thus \([k=i]=0\), so that \(b_{i}-[k=i]=b_{i}-0=b_{i}\), but part **(c)** yields \(b_{i}^{*}=b_{i}=b_{i}-[k=i]\)). Thus, we WLOG assume that \(i=k\). Hence, \(k=i\), so that \([k=i]=1\).

We have \(i=k\), so that \(m_{i}=m_{k}\in\Delta\left(\lambda\right)=\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\). In other words, \(m_{i}=\ell_{j}\) for some positive integer \(j\). Consider this \(j\).

But we have \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Hence, the numbers \(\ell_{0},\ell_{1},\ldots,\ell_{j}\) are \(\geq\ell_{j}\), whereas the numbers \(\ell_{j+1},\ell_{j+2},\ell_{j+3},\ldots\) are not. Therefore, \(\max\left\{p\geq 0\mid\ell_{p}\geq\ell_{j}\right\}=j\). In view of \(m_{i}=\ell_{j}\), we can rewrite this as
\[\max\left\{p\geq 0\mid\ell_{p}\geq m_{i}\right\}=j.\]
This allows us to rewrite (108) as \(b_{i}=j\).

Again, recall that \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\). Hence, the numbers \(\ell_{0},\ell_{1},\ldots,\ell_{j-1}\) are \(>\ell_{j}\), whereas the numbers \(\ell_{j},\ell_{j+1},\ell_{j+2},\ldots\) are not. Therefore, \(\max\left\{p\geq 0\mid\ell_{p}>\ell_{j}\right\}=j-1\). In other words,
\[\max\left\{p\geq 0\mid\ell_{p}\geq\ell_{j}+1\right\}=j-1\]
(since the inequality \(\ell_{p}>\ell_{j}\) is equivalent to \(\ell_{p}\geq\ell_{j}+1\)). We can rewrite this further as
\[\max\left\{p\geq 0\mid\ell_{p}\geq m_{i}+[k=i]\right\}=j-1\]
(since \(m_{i}+\underbrace{[k=i]}_{=1}=\underbrace{m_{i}}_{=\ell_{j}}+1=\ell_{j}+1\)). This allows us to rewrite (107) as \(b_{i}^{*}=j-1\). Comparing this with \(\underbrace{b_{i}}_{=j}-\underbrace{[k=i]}_{=1}=j-1\), we obtain \(b_{i}^{*}=b_{i}-[k=i]\). This proves Lemma 10.15 **(b)**.

### To Section 11

Proof of Lemma 11.2.: It clearly suffices to prove that \(\mathbf{s}_{\lambda}\left[\nu\right]=0\) for any partition \(\nu\) that does not satisfy \(\nu\subseteq\lambda\). But this is obvious, because if \(\nu\) does not satisfy \(\nu\subseteq\lambda\), then the set \(\mathcal{E}\left(\lambda/\nu\right)\) is empty (by Lemma 2.9, applied to \(\mu=\nu\)), and thus we have
\[\mathbf{s}_{\lambda}\left[\nu\right]=\sum_{D\in\mathcal{E}\left(\lambda/\nu\right)}\ \ \prod_{(i,j)\in D}\left(x_{i}+y_{j}\right)=\left(\text{empty sum}\right)=0.\]

Proof of Lemma 11.3.: Lemma 10.5 yields that the partitions \(\nu\) that satisfy \(\mu\lessdot\nu\) are precisely the partitions \(\mu^{+k}\) for the elements \(k\in\operatorname{ER}\left(\mu\right)\).
Hence,
\[\sum_{\mu\lessdot\nu}\mathbf{s}_{\lambda}\left[\nu\right]=\sum_{k\in\operatorname{ER}\left(\mu\right)}\mathbf{s}_{\lambda}\left[\mu^{+k}\right] \tag{109}\]
(since the partitions \(\mu^{+k}\) for different \(k\in\operatorname{ER}\left(\mu\right)\) are furthermore distinct).

Recall that \(\mu_{n}=0\), so that \(\mu_{j}=0\) for all \(j\geq n\). Hence, every integer \(k>n\) satisfies \(\mu_{k-1}=\mu_{k}\) (since \(\mu_{k-1}=0\) and \(\mu_{k}=0\)) and thus \(k\notin\operatorname{ER}\left(\mu\right)\). In other words, every integer in \(\operatorname{ER}\left(\mu\right)\) is \(\leq n\). In other words, \(\operatorname{ER}\left(\mu\right)\subseteq\left[n\right]\).

However,
\[\sum_{k=1}^{n}\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\sum_{\substack{k\in\left[n\right];\\ k\in\operatorname{ER}\left(\mu\right)}}\mathbf{s}_{\lambda}\left[\mu^{+k}\right]+\sum_{\substack{k\in\left[n\right];\\ k\notin\operatorname{ER}\left(\mu\right)}}\underbrace{\mathbf{s}_{\lambda}\left[\mu^{+k}\right]}_{=0}=\sum_{k\in\operatorname{ER}\left(\mu\right)}\mathbf{s}_{\lambda}\left[\mu^{+k}\right].\]
Comparing this with (109), we find
\[\sum_{k=1}^{n}\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\sum_{\mu\lessdot\nu}\mathbf{s}_{\lambda}\left[\nu\right]=\sum_{\mu\lessdot\nu\subseteq\lambda}\mathbf{s}_{\lambda}\left[\nu\right]\]
(by Lemma 11.2). This proves Lemma 11.3.

Proof of Lemma 11.4.: Let \(\mathbf{b}^{*}=(b_{1}^{*},b_{2}^{*},b_{3}^{*},\ldots)\) be the flagging induced by \(\lambda/\mu^{+k}\).

**(a)** Assume that \(m_{k}\notin\Delta\left(\lambda\right)\). We are in one of the following two cases:

_Case 1:_ We have \(k\in\operatorname{ER}\left(\mu\right)\).

_Case 2:_ We have \(k\notin\operatorname{ER}\left(\mu\right)\).

Let us first consider Case 1. In this case, we have \(k\in\operatorname{ER}\left(\mu\right)\). Hence, \(\mu^{+k}\) is a partition.

For each \(i\geq 1\), we have \(b_{i}^{*}=b_{i}\) (by Lemma 10.15 **(a)**, since \(m_{k}\notin\Delta\left(\lambda\right)\)) and \(\left(\mu^{+k}\right)_{i}=\mu_{i}+\left[k=i\right]\) (by Lemma 10.6). However, \(\mu^{+k}\) is a partition, and \(\mathbf{b}^{*}=(b_{1}^{*},b_{2}^{*},b_{3}^{*},\ldots)\) is the flagging induced by \(\lambda/\mu^{+k}\). Thus, (23) (applied to \(\mu^{+k}\), \(\mathbf{b}^{*}\) and \(b_{i}^{*}\) instead of \(\mu\), \(\mathbf{b}\) and \(b_{i}\)) yields
\[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(h\left(\underbrace{\left(\mu^{+k}\right)_{i}}_{=\mu_{i}+\left[k=i\right]}-\,i+j,\ \ \underbrace{b_{i}^{*}}_{=b_{i}},\ \ 1-j\right)\right)_{i,j\in\left[n\right]}=\det\left(h\left(\mu_{i}-i+j+\left[k=i\right],\ \ b_{i},\ \ 1-j\right)\right)_{i,j\in\left[n\right]}.\]
Thus, Lemma 11.4 **(a)** is proved in Case 1.

Let us now consider Case 2. In this case, we have \(k\notin\operatorname{ER}\left(\mu\right)\). Hence, \(k\neq 1\) and \(\mu_{k-1}=\mu_{k}\). Thus, Lemma 10.14 **(a)** (applied to \(j=k\)) yields \(b_{k}=b_{k-1}\) (since \(m_{k}\notin\Delta\left(\lambda\right)\)). In other words, \(b_{k-1}=b_{k}\).
Now, consider the matrix \[\left(h\left(\mu_{i}-i+j+\left[k=i\right],\ \ b_{i},\ \ 1-j\right)\right)_{i,j \in\left[n\right]}.\] The \(\left(k-1\right)\)-st and \(k\)-th rows of this matrix are identical31 Footnote 31: Proof.: For each \(j\in\left[n\right]\), the \(j\)-th entry of the \(\left(k-1\right)\)-st row of this matrix is \[h\left(\underbrace{\mu_{k-1}}_{=\mu_{k}}-\left(k-1\right)+j+ \underbrace{\left[k=k-1\right]}_{=0},\ \ b_{k-1},\ \ 1-j\right)\] \[=h\left(\underbrace{\mu_{k}-\left(k-1\right)+j+0}_{=\mu_{k}-k+j+ 1},\ \ b_{k},\ \ 1-j\right)=h\left(\mu_{k}-k+j+1,\ \ b_{k},\ \ 1-j\right),\] whereas the \(j\)-th entry of the \(k\)-th row of this matrix is \[h\left(\mu_{k}-k+j+\underbrace{\left[k=k\right]}_{=1},\ \ b_{k},\ \ 1-j\right)=h \left(\mu_{k}-k+j+1,\ \ b_{k},\ \ 1-j\right).\] These two entries are clearly equal. Thus, the \(\left(k-1\right)\)-st and \(k\)-th rows of this matrix are identical. However, \(\mu^{+k}\) is a partition, and \(\mathbf{b}^{*}=(b_{1}^{*},b_{2}^{*},b_{3}^{*},\ldots)\) is the flagging induced by \(\lambda/\mu^{+k}\). Thus, (23) (applied to \(\mu^{+k}\), \(\mathbf{b}^{*}\) and \(b_{i}^{*}\) instead of \(\mu\), \(\mathbf{b}\) and \(b_{i}\)) yields \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right] =\det\left(h\left(\underbrace{\left(\mu^{+k}\right)_{j}}_{=\mu_{ i}+[k=i]}-i+j,\underbrace{b_{i}^{*}}_{=b_{i}-[k=i]},\ \ 1-j\right)\right)_{i,j\in[n]}\] \[=\det\left(h\left(\mu_{i}-i+j+[k=i]\,,\ \ b_{i}-[k=i]\,,\ \ 1-j\right) \right)_{i,j\in[n]}.\] Thus, Lemma 11.4**(b)** is proved in Case 1. Let us now consider Case 2. In this case, we have \(k\notin\operatorname{ER}\left(\mu\right)\). Hence, \(k\neq 1\) and \(\mu_{k-1}=\mu_{k}\). Thus, Lemma 10.14**(b)** (applied to \(j=k\)) yields \(b_{k}=b_{k-1}+1\) (since \(m_{k}\in\Delta\left(\lambda\right)\)). In other words, \(b_{k-1}=b_{k}-1\). Now, consider the matrix \[\left(h\left(\mu_{i}-i+j+[k=i]\,,\ \ b_{i}-[k=i]\,,\ \ 1-j\right)\right)_{i,j\in[n]}.\] The \(\left(k-1\right)\)-st and \(k\)-th rows of this matrix are identical32 Footnote 32: Proof.: For each \(j\in[n]\), the \(j\)-th entry of the \(\left(k-1\right)\)-st row of this matrix is \[h\left(\underbrace{\mu_{k-1}}_{=\mu_{k}}-(k-1)+j+[\underbrace{k =k-1}_{=0}],\ \ \underbrace{b_{k-1}}_{=b_{k}-1}-[\underbrace{k =k-1}_{=0}],\ \ 1-j\right)\] \[=h\left(\underbrace{\mu_{k}-(k-1)+j}_{=\mu_{k}-k+j+1},\ \ b_{k}-1,\ \ 1-j \right)=h\left(\mu_{k}-k+j+1,\ \ b_{k}-1,\ \ 1-j\right),\] whereas the \(j\)-th entry of the \(k\)-th row of this matrix is \[h\left(\mu_{k}-k+j+[\underbrace{k=k}_{=1}],\ \ b_{k}-[\underbrace{k =k}_{=1}],\ \ 1-j\right)=h\left(\mu_{k}-k+j+1,\ \ b_{k}-1,\ \ 1-j\right).\] These two entries are clearly equal. Thus, the \(\left(k-1\right)\)-st and \(k\)-th rows of this matrix are identical. Proof of Lemma 11.6.: Recall that \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) is the flagging induced by \(\lambda/\mu\), and that we have \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\). Hence, Corollary 8.19 yields \[\mathbf{s}_{\lambda}\left[\mu\right]=\det\left(\underbrace{h\left(\mu_{i}-i+j, \begin{array}{c}b_{i},\\ =u_{i,j}\end{array}\right)}_{\left(\text{by the definition of $u_{i,j}$}\right)} \right)_{i,j\in\left[n\right]}=\det\left(u_{i,j}\right)_{i,j\in\left[n\right]}.\] This proves (28). However, the same argument can be applied to \(n-1\) instead of \(n\) (since \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n-1}\right)\)), and thus we obtain (29). 
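For instance, in the case \(n=2\), substituting the definition \(u_{i,j}=h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)\) into (28) spells the determinant out as
\[\mathbf{s}_{\lambda}\left[\mu\right]=\det\begin{pmatrix}h\left(\mu_{1},\ \ b_{1},\ \ 0\right)&h\left(\mu_{1}+1,\ \ b_{1},\ \ -1\right)\\ h\left(\mu_{2}-1,\ \ b_{2},\ \ 0\right)&h\left(\mu_{2},\ \ b_{2},\ \ -1\right)\end{pmatrix}.\]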
Proof of Lemma 11.7.: We first recall that every \(i\in\left[n\right]\) satisfies \[\left[k=i\right]=\begin{cases}0,&\text{if $i\neq k$;}\\ 1,&\text{if $i=k$.}\end{cases}\] Hence, every \(i,j\in\left[n\right]\) satisfy \[u_{i,j+\left[k=i\right]}-p_{i}u_{i,j}\left[k=i\right]\] \[=\begin{cases}u_{i,j+0}-p_{i}u_{i,j}0,&\text{if $i\neq k$;}\\ u_{i,j+1}-p_{i}u_{i,j}1,&\text{if $i=k$}\end{cases}\] \[=\begin{cases}u_{i,j},&\text{if $i\neq k$;}\\ u_{i,j+1}-p_{i}u_{i,j},&\text{if $i=k$}\end{cases} \tag{110}\] (since \(u_{i,j+0}-p_{i}u_{i,j}0=u_{i,j+0}=u_{i,j}\) and \(u_{i,j}1=u_{i,j}\)). We shall use the notation \(A_{i,j}\) for the \((i,j)\)-th entry of a matrix \(A\). We are in one of the following two cases: _Case 1:_ We have \(m_{k}\in\Delta\left(\lambda\right)\). _Case 2:_ We have \(m_{k}\notin\Delta\left(\lambda\right)\). Let us first consider Case 1. In this case, we have \(m_{k}\in\Delta\left(\lambda\right)\). Thus, Lemma 11.4**(b)** yields \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(h\left(\mu_{i}-i+j+\left[ k=i\right],\begin{array}{c}b_{i}-\left[k=i\right],\begin{array}{c}1-j \right)\end{array}\right)_{i,j\in\left[n\right]}. \tag{111}\] Let \(A\) denote the matrix on the right hand side of this equality. Thus, (111) rewrites as \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det A. \tag{112}\] We shall now compute the \((i,j)\)-th entry \[A_{i,j}=h\left(\mu_{i}-i+j+\left[k=i\right],\begin{array}{c}b_{i}-\left[k=i \right],\begin{array}{c}1-j\end{array}\right)\] of the matrix \(A\) for any two numbers \(i,j\in\left[n\right]\): * When \(i\neq k\), we have \([k=i]=0\) and thus \[A_{i,j} =h\left(\mu_{i}-i+j+\underbrace{[k=i]}_{=0},\ \ b_{i}-\underbrace{[k=i]}_{=0},\ \ 1-j\right)\] \[=h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)=u_{i,j}\] (113) (by the definition of \(u_{i,j}\)). * When \(i=k\), we have \([k=i]=1\) and \(b_{i}>0\)33, and thus \[A_{i,j} =h\left(\mu_{i}-i+j+\underbrace{[k=i]}_{=1},\ \ b_{i}-\underbrace{[k=i]}_{=1},\ \ 1-j\right)\] \[=h\left(\mu_{i}-i+j+1,\ \ b_{i}-1,\ \ 1-j\right)\] \[=\underbrace{h\left(\mu_{i}-i+j+1,\ \ b_{i},\ \ -j\right)}_{=u_{i,j+1}}\] (by the definition of \(u_{i,j+1}\)) \[\qquad-\left(x_{b_{i}}+\underbrace{y_{1-j}}_{=0}\right)\cdot \underbrace{h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)}_{=u_{i,j}}\] \[\qquad\qquad\qquad\left(\begin{array}{l}\text{by Corollary \ref{cor:2.1}, applied to $a=\mu_{i}-i+j+1$}\\ \text{and $b=b_{i}$ and $c=1-j$ (since $b_{i}>0$)}\end{array}\right)\] \[=u_{i,j+1}-\underbrace{x_{b_{i}}}_{\begin{subarray}{c}=p_{i}\\ \text{since $i=k$ entails $m_{i}=m_{k}\in\Delta(\lambda)$}\end{subarray}}u_{i,j}\] \[=u_{i,j+1}-p_{i}u_{i,j}.\] (114) Combining the formulas (113) and (114), we conclude that the \((i,j)\)-th entry of \(A\) always equals \[A_{i,j}=\begin{cases}u_{i,j},&\text{if $i\neq k$;}\\ u_{i,j+1}-p_{i}u_{i,j},&\text{if $i=k$}\end{cases}=u_{i,j+[k=i]}-p_{i}u_{i,j} \left[k=i\right]\] (by (110)). Hence, \[A=\left(u_{i,j+[k=i]}-p_{i}u_{i,j}\left[k=i\right]\right)_{i,j\in[n]}.\] Therefore, (112) rewrites as \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(u_{i,j+\left[k=i\right]}-p_{ i}u_{i,j}\left[k=i\right]\right)_{i,j\in\left[n\right]}.\] Thus, Lemma 11.7 is proved in Case 1. Let us now consider Case 2. In this case, we have \(m_{k}\notin\Delta\left(\lambda\right)\). Thus, Lemma 11.4 **(a)** yields \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(h\left(\mu_{i}-i+j+\left[ k=i\right],\;\;b_{i},\;\;1-j\right)\right)_{i,j\in\left[n\right]}. \tag{115}\] Let \(A\) denote the matrix on the right hand side of this equality. 
Thus, (115) rewrites as \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det A. \tag{116}\] We shall now compute the \(\left(i,j\right)\)-th entry \[A_{i,j}=h\left(\mu_{i}-i+j+\left[k=i\right],\;\;b_{i},\;\;1-j\right)\] of the matrix \(A\) for any two numbers \(i,j\in\left[n\right]\): * When \(i\neq k\), we have \(\left[k=i\right]=0\) and thus \[A_{i,j} =h\left(\mu_{i}-i+j+\underbrace{\left[k=i\right]}_{=0},\;\;b_{i},\;\;1-j\right)\] \[=h\left(\mu_{i}-i+j,\;\;b_{i},\;\;1-j\right)=u_{i,j}\] (117) (by the definition of \(u_{i,j}\)). * When \(i=k\), we have \([k=i]=1\) and thus \[A_{i,j} =h\left(\mu_{i}-i+j+\underbrace{[k=i]}_{=1},\ \ b_{i},\ \ \ \underbrace{1-j}_{=-j+1}\right)\] \[=h\left(\mu_{i}-i+j+1,\ \ b_{i},\ \ -j+1\right)\] \[=\underbrace{h\left(\mu_{i}-i+j+1,\ \ b_{i},\ \ -j\right)}_{=u_{i,j+1}}\] (by the definition of \(u_{i,j+1}\)) \[\qquad+\left(y_{(\mu_{i}-i+j)+b_{i}+(-j)+1}-\underbrace{y_{-j+1} }_{=0}\right)\cdot\underbrace{h\left(\mu_{i}-i+j,\ \ b_{i},\ \ -j+1\right)}_{ =u_{i,j}}\] \[\qquad\qquad\left(\text{by Corollary \ref{cor:2},}\right.\] \[=u_{i,j+1}+\underbrace{y_{(\mu_{i}-i+j)+b_{i}+(-j)+1}}_{=y_{m_{ i}+1+b_{i}}}\cdot u_{i,j}\] \[\qquad\qquad\qquad\left(\text{since }(\mu_{i}-i+j)+b_{i}+(-j)+1=(\mu_{i}-i)+1+b_{i}=m_{i}+1+b_{i}\] (because \[\mu_{i}-i=m_{i}\])) \[=u_{i,j+1}+y_{m_{i}+1+b_{i}}\cdot u_{i,j}=u_{i,j+1}-\underbrace{ \left(-y_{m_{i}+1+b_{i}}\right)}_{=p_{i}}\qquad\cdot u_{i,j}\] (by the definition of \[p_{i}\], since \[i=k\] entails \[m_{i}=m_{k}\notin\Delta(\lambda)\] ) \[=u_{i,j+1}-p_{i}u_{i,j}.\] (118) Combining the formulas (117) and (118), we conclude that the \((i,j)\)-th entry of \(A\) always equals \[A_{i,j}=\begin{cases}u_{i,j},&\text{if }i\neq k;\\ u_{i,j+1}-p_{i}u_{i,j},&\text{if }i=k\end{cases}=u_{i,j+[k=i]}-p_{i}u_{i,j} \left[k=i\right]\] (by (110)). Hence, \[A=\left(u_{i,j+\left[k=i\right]}-p_{i}u_{i,j}\left[k=i\right]\right)_{i,j\in[n ]}.\] Therefore, (116) rewrites as \[\mathbf{s}_{\lambda}\left[\mu^{+k}\right]=\det\left(u_{i,j+\left[k=i\right]}-p _{i}u_{i,j}\left[k=i\right]\right)_{i,j\in[n]}.\] Thus, Lemma 11.7 is proved in Case 2. We have now proved Lemma 11.7 in both Cases 1 and 2. Consequently, Lemma 11.7 always holds. Proof of Lemma 11.8.: We have \[\sum_{k=1}^{n}\mathbf{s}_{\lambda}\left[\mu^{+k}\right]\] \[=\sum_{k=1}^{n}\det\left(u_{i,j+\left[k=i\right]}-p_{i}u_{i,j} \left[k=i\right]\right)_{i,j\in\left[n\right]}\qquad\quad\text{(by Lemma \ref{lem:s-1})}\] \[=\det\left(u_{i,j+\left[n=j\right]}\right)_{i,j\in\left[n\right] }-\left(\sum_{k=1}^{n}p_{k}\right)\underbrace{\det\left(u_{i,j}\right)_{i,j \in\left[n\right]}}_{\begin{subarray}{c}=\mathbf{s}_{\lambda}\left[\mu\right] \\ \text{(by \eqref{lem:s-1})}\end{subarray}}\qquad\quad\text{(by Lemma \ref{lem:s-1})}\] \[=\det\left(u_{i,j+\left[n=j\right]}\right)_{i,j\in\left[n\right] }-\left(\sum_{k=1}^{n}p_{k}\right)\mathbf{s}_{\lambda}\left[\mu\right].\] This proves the lemma. Proof of Lemma 11.9.: For each \(\ell\in\left[n-1\right]\), we have \(\underbrace{\mu_{n}}_{=0}-n+\ell=-n+\ell<0\) (since \(\ell\leq n-1<n\)) and \(\left[n=\ell\right]=0\) (since \(\ell\leq n-1<n\) and thus \(n\neq\ell\)). 
Hence, for each \(\ell\in\left[n-1\right]\), we have \[u_{n,\ell+\left[n=\ell\right]} =u_{n,\ell}\qquad\quad\left(\text{since }\ell+\underbrace{\left[n=\ell\right]}_{=0}=\ell\right)\] \[=h\left(u_{n}-n+\ell,\ b_{n},\ 1-\ell\right)\qquad\quad\text{(by the definition of }u_{n,\ell}\right)\] \[=0\qquad\quad\text{(by the definition of }h\left(a,b,c\right)\text{, since }\mu_{n}-n+\ell<0\text{)}\,.\] Therefore, we can apply Lemma 9.8 to \(a_{i,j}=u_{i,j+\left[n=j\right]}\). We thus obtain \[\det\left(u_{i,j+\left[n=j\right]}\right)_{i,j\in\left[n\right]}=u_{n,n+\left[ n=n\right]}\cdot\det\left(u_{i,j+\left[n=j\right]}\right)_{i,j\in\left[n-1 \right]}. \tag{119}\] However, \([n=n]=1\), so that \[u_{n,n+[n=n]} =u_{n,n+1}\] \[=h\left(\underbrace{\mu_{n}}_{=0}-n+(n+1)\,,\,\underbrace{b_{n}}_{ =n}\,,\,\underbrace{1-(n+1)}_{=-n}\right)\] (by the definition of \[u_{n,n+1}\] ) \[=h\left(\underbrace{-n+(n+1)}_{=1},\,\,n,\,-n\right)=h\left(1,\,n,\,-n\right)\] \[=\sum_{i=1}^{n}x_{i}+\sum_{j=-n+1}^{-n+n}\underbrace{y_{j}}_{ \begin{subarray}{c}=0\\ j\leq-n+n=0,\,\text{and}\\ \text{since }y_{i}=0\text{ for all }i\leq 0\end{subarray}}\qquad\qquad \qquad\text{(by Lemma \ref{lem:2.1})}\] \[=\sum_{i=1}^{n}x_{i}+\underbrace{\sum_{j=-n+1}^{-n+n}}_{=0}0= \sum_{i=1}^{n}x_{i}. \tag{120}\] Moreover, for every \(i,j\in[n-1]\), we have \(j\neq n\) (since \(j\leq n-1<n\)) and thus \([j=n]=0\), so that \[u_{i,j+[n=j]}=u_{i,j+0}=u_{i,j}.\] Hence, \[\left(u_{i,j+[n=j]}\right)_{i,j\in[n-1]}=\left(u_{i,j}\right)_{i,j\in[n-1]}. \tag{121}\] Using (120) and (121), we can rewrite (119) as \[\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}=\left(\sum_{i=1}^{n}x_{i}\right) \cdot\underbrace{\det\left(u_{i,j}\right)_{i,j\in[n-1]}}_{\begin{subarray}{c} =\mathbf{s}_{\lambda}[\mu]\\ \text{(by \eqref{eq:2.1})}\end{subarray}}=\left(\sum_{i=1}^{n}x_{i}\right) \cdot\mathbf{s}_{\lambda}\left[\mu\right].\] This proves Lemma 11.9. Proof of Lemma 11.10.: From \(\lambda_{n}=0\), we obtain \(\lambda_{n+1}=0\). Similarly, \(\mu_{n+1}=0\). Thus, Lemma 10.12 can be applied. We have \[\sum_{k=1}^{n}p_{k} =\sum_{i=1}^{n}p_{i}\] \[=\sum_{i=1}^{n}\begin{cases}x_{b_{i}},&\text{if }m_{i}\in\Delta \left(\lambda\right)\,;\\ -y_{m_{i}+1+b_{i}},&\text{if }m_{i}\notin\Delta\left(\lambda\right)\end{cases} \qquad\qquad\text{(by the definition of }p_{i}\text{)}\] \[=\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}x_{b_{i}}-\sum_{\begin{subarray}{c}i\in[ n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}.\] Thus, \[\sum_{i=1}^{n}x_{i}-\sum_{k=1}^{n}p_{k} =\sum_{i=1}^{n}x_{i}-\left(\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}x_{b_{i}}-\sum_{\begin{subarray}{c}i\in[n] ;\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\right)\] \[=\underbrace{\sum_{i=1}^{n}x_{i}-\sum_{\begin{subarray}{c}i\in[n] ;\\ m_{i}\in\Delta(\lambda)\\ \end{subarray}}x_{b_{i}}}_{=\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}+\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\] \[=\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\\ \text{by Lemma \ref{lem:2011}}\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}.\] This proves Lemma 11.10. Proof of Lemma 11.11.: Define \(u_{i,j}\) and \(p_{i}\) as in Convention 11.5. 
Lemma 11.3 yields \[\sum_{\mu<v\subseteq\lambda}\mathbf{s}_{\lambda}\left[\nu\right] =\sum_{k=1}^{n}\mathbf{s}_{\lambda}\left[\mu^{+k}\right]\] \[=\underbrace{\det\left(u_{i,j+\left[n=j\right]}\right)_{i,j\in[n] }}_{=\left(\sum\limits_{i=1}^{n}x_{i}\right)\cdot\mathbf{s}_{\lambda}\left[ \mu\right]}-\left(\sum_{k=1}^{n}p_{k}\right)\mathbf{s}_{\lambda}\left[\mu \right]\hskip 42.679134pt\text{(by Lemma \ref{lem:2011})}\] \[=\left(\sum_{i=1}^{n}x_{i}\right)\cdot\mathbf{s}_{\lambda}\left[ \mu\right]-\left(\sum_{k=1}^{n}p_{k}\right)\mathbf{s}_{\lambda}\left[\mu\right]\] \[=\left(\sum_{i=1}^{n}x_{i}-\sum_{k=1}^{n}p_{k}\right)\] \[=\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\\ \text{by Lemma \ref{lem:2011}}\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n]; \\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\] \[=\left(\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\\ \text{by Lemma \ref{lem:2011}}\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n]; \\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\right)\mathbf{s}_{ \lambda}\left[\mu\right].\] This proves Lemma 11.11. ### To Section 12 and Theorem 3.1 Proof of Lemma 12.2.: From \(\left(i,j\right)\in Y\left(\lambda\right)\), we obtain \(j\leq\lambda_{i}\), so that \(\lambda_{i}\geq j\). Hence, Lemma 5.3 (applied to \(i\) and \(j\) instead of \(j\) and \(i\)) yields \(\lambda_{j}^{t}\geq i\). Now, the definition of \(\ell_{i}\) yields \(\ell_{i}=\underbrace{\lambda_{i}}_{\geq j}-i\geq j-i\). Furthermore, the definition of \(\ell_{j}^{t}\) yields \(\ell_{j}^{t}=\lambda_{j}^{t}-j\), so that \(-\ell_{j}^{t}=-\left(\lambda_{j}^{t}-j\right)=j-\lambda_{j}^{t}\). In other words, \(j-\lambda_{j}^{t}=-\ell_{j}^{t}\). Moreover, we thus obtain \(-\ell_{j}^{t}=j-\underbrace{\lambda_{j}^{t}}_{\geq i}\leq j-i\). Thus, \(-\ell_{j}^{t}\leq j-i\leq\ell_{i}\) (since \(\ell_{i}\geq j-i\)). Next, we shall show that \(n\geq\ell_{j}^{t}\). Indeed, Lemma 5.3 (applied to \(j\) and \(n\) instead of \(i\) and \(j\)) shows that we have the equivalence \(\left(\lambda_{j}^{t}\geq n\right)\iff\left(\lambda_{n}\geq j\right)\). Since the statement \(\left(\lambda_{n}\geq j\right)\) is false (because \(\lambda_{n}=0<j\)), we thus conclude that the statement \(\left(\lambda_{j}^{t}\geq n\right)\) is false as well. In other words, we have \(\lambda_{j}^{t}<n\). However, the definition of \(\ell_{j}^{t}\) yields \(\ell_{j}^{t}=\lambda_{j}^{t}-\underbrace{j}_{>0}<\lambda_{j}^{t}<n\). Hence, \(n\geq\ell_{j}^{t}\), so that \(-n\leq-\ell_{j}^{t}\). Now, the definition of \(w_{k}\) yields \(w_{\ell_{i}}=\sum\limits_{k=-n}^{\ell_{i}}z_{k}\) and \(w_{-\ell_{j}^{t}-1}=\sum\limits_{k=-n}^{-\ell_{j}^{t}-1}z_{k}\). Hence, \[w_{\ell_{i}} =\sum\limits_{k=-n}^{\ell_{i}}z_{k}=\sum\limits_{\begin{subarray} {c}k=-n\\ =w_{-\ell_{j}^{t}-1}\end{subarray}}^{-\ell_{i}^{t}-1}z_{k}+\sum\limits_{k=- \ell_{j}^{t}}^{\ell_{i}}z_{k}\hskip 36.135pt\left(\text{since }-n\leq-\ell_{j}^{t}\leq\ell_{i}\right)\] \[=w_{-\ell_{j}^{t}-1}+\sum\limits_{k=-\ell_{j}^{t}}^{\ell_{i}}z_{k}.\] In other words, \[w_{\ell_{i}}-w_{-\ell_{j}^{t}-1}=\sum\limits_{k=-\ell_{j}^{t}}^{\ell_{i}}z_{k}. \tag{122}\] Meanwhile, the equality (6) (with \(c\), \(i\) and \(j\) renames as \(\left(i,j\right)\), \(k\) and \(p\)) says that \[h_{\lambda}\left(\left(i,j\right);z\right)=\sum\limits_{\left(k,p\right)\in H _{\lambda}\left(\left(i,j\right)\right)}z_{p-k}. \tag{123}\] Now, let us study the boxes \(\left(k,p\right)\) that belong to the hook \(H_{\lambda}\left(\left(i,j\right)\right)\). 
These are the boxes of \(Y\left(\lambda\right)\) that lie either in the \(i\)-th row to the east of \(\left(i,j\right)\) (including \(\left(i,j\right)\) itself), or in the \(j\)-th column to the south of \(\left(i,j\right)\). Thus, these boxes come in two types: 1. The boxes \(\left(k,p\right)\in H_{\lambda}\left(\left(i,j\right)\right)\) that belong to the \(i\)-th row (i.e., that satisfy \(k=i\)) are the boxes \[\left(i,j\right),\;\left(i,j+1\right),\;\left(i,j+2\right),\;\ldots,\;\left(i,\lambda_{i}\right)\] (since the \(i\)-th row of \(Y\left(\lambda\right)\) has \(\lambda_{i}\) boxes). 2. The boxes \(\left(k,p\right)\in H_{\lambda}\left(\left(i,j\right)\right)\) that do not belong to the \(i\)-th row (i.e., that satisfy \(k\neq i\)) are the boxes \[\left(i+1,j\right),\ \left(i+2,j\right),\ \left(i+3,j\right),\ \ldots,\ \left(\lambda_{j}^{t},j\right)\] (since the \(j\)-th column of \(Y\left(\lambda\right)\) has \(\lambda_{j}^{t}\) boxes (by (12), applied to \(k=j\))). Altogether, the boxes in \(H_{\lambda}\left(\left(i,j\right)\right)\) are therefore the boxes \[\left(i,j\right),\ \left(i,j+1\right),\ \left(i,j+2\right),\ \ldots,\ \left(i,\lambda_{i}\right),\] \[\left(i+1,j\right),\ \left(i+2,j\right),\ \left(i+3,j\right),\ \ldots,\ \left(\lambda_{j}^{t},j\right)\] (and clearly, all these boxes are distinct). Hence, \[\sum_{\left(k,p\right)\in H_{\lambda}\left(\left(i,j\right)\right)}z_{p-k}=\underbrace{\left(z_{j-i}+z_{\left(j+1\right)-i}+z_{\left(j+2\right)-i}+\cdots+z_{\lambda_{i}-i}\right)}_{=z_{j-i}+z_{\left(j-i\right)+1}+z_{\left(j-i\right)+2}+\cdots+z_{\ell_{i}}}+\underbrace{\left(z_{j-\left(i+1\right)}+z_{j-\left(i+2\right)}+\cdots+z_{j-\lambda_{j}^{t}}\right)}_{=z_{\left(j-i\right)-1}+z_{\left(j-i\right)-2}+\cdots+z_{-\ell_{j}^{t}}}\] (here, the first parenthesis collects the \(z_{p-k}\) for the boxes in the \(i\)-th row, whose contents \(p-k\) are \(j-i,\ \left(j-i\right)+1,\ \ldots,\ \lambda_{i}-i=\ell_{i}\), while the second parenthesis collects the \(z_{p-k}\) for the remaining boxes, whose contents are \(\left(j-i\right)-1,\ \left(j-i\right)-2,\ \ldots,\ j-\lambda_{j}^{t}=-\ell_{j}^{t}\)). Since \(-\ell_{j}^{t}\leq j-i\leq\ell_{i}\), these two parentheses together contain each \(z_{k}\) with \(-\ell_{j}^{t}\leq k\leq\ell_{i}\) exactly once. Hence, \[\sum_{\left(k,p\right)\in H_{\lambda}\left(\left(i,j\right)\right)}z_{p-k}=\sum_{k=-\ell_{j}^{t}}^{\ell_{i}}z_{k}=w_{\ell_{i}}-w_{-\ell_{j}^{t}-1}\] (by (122)). Comparing this with (123), we obtain \(h_{\lambda}\left(\left(i,j\right);z\right)=w_{\ell_{i}}-w_{-\ell_{j}^{t}-1}\). In other words, \(w_{\ell_{i}}-w_{-\ell_{j}^{t}-1}=h_{\lambda}\left(\left(i,j\right);z\right)\). This proves Lemma 12.2. Proof of Lemma 12.3.: We are in one of the following two cases: _Case 1:_ We have \(i\leq n\). _Case 2:_ We have \(i>n\). Let us first consider Case 1. In this case, \(i\leq n\). Thus, \(m_{i}=\underbrace{\mu_{i}}_{\geq 0}-\underbrace{i}_{\leq n}\geq-n\). Since \(\lambda\supseteq\mu\), we have \(\lambda_{i}\geq\mu_{i}\), and thus \(\ell_{i}\geq m_{i}\). However, the definition of the \(w_{k}\) yields \(w_{\ell_{i}}=\sum\limits_{k=-n}^{\ell_{i}}z_{k}\) and \(w_{m_{i}}=\sum\limits_{k=-n}^{m_{i}}z_{k}\). Hence, \[w_{\ell_{i}}-w_{m_{i}}=\sum\limits_{k=-n}^{\ell_{i}}z_{k}-\sum\limits_{k=-n}^{m_{i}}z_{k}=\sum\limits_{k=m_{i}+1}^{\ell_{i}}z_{k}\qquad\quad(\text{since }\ell_{i}\geq m_{i}\geq-n)\,. \tag{124}\] However, the integers \(j\geq 1\) satisfying \((i,j)\in Y\left(\lambda/\mu\right)\) are exactly the integers \(j\) such that \(\mu_{i}<j\leq\lambda_{i}\). 
Hence, \[\sum\limits_{\begin{subarray}{c}j\geq 1;\\ (i,j)\in Y\left(\lambda/\mu\right)\end{subarray}}z_{j-i} =\sum\limits_{j=\mu_{i}+1}^{\lambda_{i}}z_{j-i}\] \[=\sum\limits_{k=\mu_{i}-i+1}^{\lambda_{i}-i}z_{k}\qquad\quad \left(\begin{array}{c}\text{here, we have substituted }k\\ \text{for }j-i\text{ in the sum}\end{array}\right)\] \[=\sum\limits_{k=m_{i}+1}^{\ell_{i}}z_{k}\qquad\quad(\text{since }\mu_{i}-i=m_{i}\text{ and }\lambda_{i}-i=\ell_{i})\,.\] Comparing this with (124), we obtain \(w_{\ell_{i}}-w_{m_{i}}=\sum\limits_{\begin{subarray}{c}j\geq 1;\\ (i,j)\in Y\left(\lambda/\mu\right)\end{subarray}}z_{j-i}\). Hence, Lemma 12.3 is proved in Case 1. Let us now consider Case 2. In this case, \(i>n\). Thus, both \(\lambda_{i}\) and \(\mu_{i}\) equal \(0\) (by the definition of \(n\)), so that we have \(\lambda_{i}=\mu_{i}\), and therefore we have \(\ell_{i}=m_{i}\) (since \(\ell_{i}=\lambda_{i}-i\) and \(m_{i}=\mu_{i}-i\)). Hence, \(w_{\ell_{i}}=w_{m_{i}}\), so that \(w_{\ell_{i}}-w_{m_{i}}=0\). On the other hand, the diagram \(Y\left(\lambda/\mu\right)\) has no boxes in the \(i\)-th row (since \(\lambda_{i}=0\)), so that the sum \(\sum\limits_{\begin{subarray}{c}j\geq 1;\\ (i,j)\in Y\left(\lambda/\mu\right)\end{subarray}}z_{j-i}\) is empty and thus equals \(0\). Comparing this with \(w_{\ell_{i}}-w_{m_{i}}=0\), we obtain \(w_{\ell_{i}}-w_{m_{i}}=\sum\limits_{\begin{subarray}{c}j\geq 1;\\ (i,j)\in Y\left(\lambda/\mu\right)\end{subarray}}z_{j-i}\). Thus, Lemma 12.3 is proved in Case 2. We have now proved Lemma 12.3 in both possible cases. Proof of Lemma 12.4.: Lemma 10.10 (applied to \(f\left(i\right)=w_{m_{i}}\) and \(g\left(j\right)=w_{\ell_{j}}\)) yields \[\sum\limits_{\begin{subarray}{c}i\in\left[n\right];\\ m_{i}\in\Delta\left(\lambda\right)\end{subarray}}w_{m_{i}}=\sum\limits_{ \begin{subarray}{c}j\in\left[n\right];\\ \ell_{j}\in\Delta\left(\mu\right)\end{subarray}}w_{\ell_{j}} \tag{125}\] (since \(w_{m_{i}}=w_{\ell_{j}}\) whenever \(m_{i}=\ell_{j}\)). From \(\lambda_{n}=0\), we see that every \((i,j)\in Y\left(\lambda/\mu\right)\) satisfies \(i<n\) and thus \(i\leq n\) and therefore \(i\in[n]\). 
Hence, \[\sum_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}=\sum_{i\in[n]}\underbrace{\sum_{\begin{subarray}{c}j\geq 1;\\ (i,j)\in Y\left(\lambda/\mu\right)\end{subarray}}z_{j-i}}_{\begin{subarray}{c}=w_{\ell_{i}}-w_{m_{i}}\\ \text{(by Lemma 12.3)}\end{subarray}}=\sum_{i\in[n]}\left(w_{\ell_{i}}-w_{m_{i}}\right)=\sum_{i\in[n]}w_{\ell_{i}}-\sum_{i\in[n]}w_{m_{i}}.\] Splitting each of the two sums on the right hand side according to whether \(\ell_{i}\in\Delta\left(\mu\right)\) (respectively, \(m_{i}\in\Delta\left(\lambda\right)\)) or not, we can rewrite this as \[\sum_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}=\left(\sum_{\begin{subarray}{c}i\in[n];\\ \ell_{i}\in\Delta(\mu)\end{subarray}}w_{\ell_{i}}+\sum_{\begin{subarray}{c}i\in[n];\\ \ell_{i}\notin\Delta(\mu)\end{subarray}}w_{\ell_{i}}\right)-\left(\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}w_{m_{i}}+\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}\right).\] However, (125) shows that \[\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\in\Delta(\lambda)\end{subarray}}w_{m_{i}}=\sum_{\begin{subarray}{c}j\in[n];\\ \ell_{j}\in\Delta(\mu)\end{subarray}}w_{\ell_{j}}=\sum_{\begin{subarray}{c}i\in[n];\\ \ell_{i}\in\Delta(\mu)\end{subarray}}w_{\ell_{i}},\] so that these two sums cancel, and we are left with \[\sum_{(i,j)\in Y\left(\lambda/\mu\right)}z_{j-i}=\sum_{\begin{subarray}{c}i\in[n];\\ \ell_{i}\notin\Delta(\mu)\end{subarray}}w_{\ell_{i}}-\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}=\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}w_{\ell_{k}}-\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}.\] This proves Lemma 12.4. Proof of Lemma 12.5.: Let \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be the flagging induced by \(\lambda/\mu\). (This was introduced in Definition 6.8.) Recall that \(x_{1},x_{2},x_{3},\ldots\) and \(y_{1},y_{2},y_{3},\ldots\) have been indeterminates so far. But now let us instead set \[x_{i}:=w_{\ell_{i}}\qquad\text{ and }\qquad y_{i}:=-w_{-\ell_{i}^{t}-1}\qquad\text{ for each }i\geq 1.\] Thus, for any \((i,j)\in Y\left(\lambda\right)\), we have \[\underbrace{x_{i}}_{=w_{\ell_{i}}}+\underbrace{y_{j}}_{=-w_{-\ell_{j}^{t}-1}}=w_{\ell_{i}}+\left(-w_{-\ell_{j}^{t}-1}\right)=w_{\ell_{i}}-w_{-\ell_{j}^{t}-1}=h_{\lambda}\left((i,j)\,;z\right) \tag{126}\] (by Lemma 12.2). Now, Lemma 11.11 yields \[\left(\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\right)\mathbf{s}_{\lambda}\left[\mu\right]=\sum_{\mu\lessdot\nu\subseteq\lambda}\mathbf{s}_{\lambda}\left[\nu\right] \tag{127}\] (where the sum on the right hand side ranges over all partitions \(\nu\) that satisfy \(\mu\lessdot\nu\subseteq\lambda\)). However, for each \(k\in[n]\) satisfying \(\ell_{k}\notin\Delta\left(\mu\right)\), we have \(x_{k}=w_{\ell_{k}}\) (by the definition of \(x_{k}\)). 
Thus, \[\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}=\sum_{\begin{subarray}{c}k\in[n]; \\ \ell_{k}\notin\Delta(\mu)\end{subarray}}w_{\ell_{k}}. \tag{128}\] Furthermore, for each \(i\in[n]\) satisfying \(m_{i}\notin\Delta\left(\lambda\right)\), we have \(m_{i}+1+b_{i}\geq 1\) and \(\ell_{m_{i}+1+b_{i}}^{t}=-1-m_{i}\) (both by Lemma 10.13) and \[y_{m_{i}+1+b_{i}} =-w_{-\ell_{m_{i}+1+b_{i}}^{t}-1}\qquad\quad\left(\text{by the definition of }y_{m_{i}+1+b_{i}}\right)\] \[=-w_{-\left(-1-m_{i}\right)-1}\qquad\quad\left(\text{since }\ell_{m_{i}+1+b_{i}}^{t}=-1-m_{i}\right)\] \[=-w_{m_{i}}\qquad\quad\left(\text{since }-\left(-1-m_{i}\right)-1=m_{i}\right).\] Hence, \[\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}=\sum_{ \begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}(-w_{m_{i}})=-\sum_{\begin{subarray}{ c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}.\] Adding this equality to the equality (128), we find \[\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n ];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}} =\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}w_{\ell_{k}}+\left(-\sum_{ \begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}\right)\] \[=\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}w_{\ell_{k}}-\sum_{\begin{subarray}{c}i \in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}w_{m_{i}}\] \[=\sum_{\begin{subarray}{c}(i,j)\in\Upsilon(\lambda/\mu)\end{subarray} }z_{j-i} \tag{129}\] (by Lemma 12.4). Furthermore, if \(\nu\) is any partition, then the definition of \(\mathbf{s}_{\lambda}\left[\nu\right]\) yields \[\mathbf{s}_{\lambda}\left[\nu\right]=\sum\limits_{D\in\mathcal{E}\left(\lambda/ \nu\right)}\ \ \prod\limits_{\begin{subarray}{c}(i,j)\in D\\ \text{ }\begin{subarray}{c}(x_{i}+y_{j})\\ \text{ }\begin{subarray}{c}(y\text{ }126),\\ \text{ }\begin{subarray}{c}(i,j)\in D\subseteq Y(\lambda)\end{subarray} \end{subarray}\end{subarray}}\ \ \ =\sum\limits_{D\in\mathcal{E}\left(\lambda/\nu\right)}\ \ \prod\limits_{ \begin{subarray}{c}(i,j)\in D\end{subarray}}h_{\lambda}\left(\left(i,j\right);z\right)\] \[=\sum\limits_{E\in\mathcal{E}\left(\lambda/\nu\right)}\ \ \prod\limits_{c\in E}h_{ \lambda}\left(c;z\right)\ Proof of Theorem 3.1.: We induct on \(\left|Y\left(\lambda/\mu\right)\right|\): _Base case:_ Assume that \(\left|Y\left(\lambda/\mu\right)\right|=0\). Thus, \(\mu=\lambda\) (since \(\lambda\supseteq\mu\) by assumption), so that the set \(\mathrm{SYT}\left(\lambda/\mu\right)\) consists of a single tableau \(T\) (with no entries), and this tableau \(T\) satisfies \(\mathbf{z}_{T}=1\). Hence, \(\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=1\). 
On the other hand, from \(\mu=\lambda\), we obtain \[\mathcal{E}\left(\lambda/\mu\right)=\mathcal{E}\left(\lambda/\lambda\right)= \left\{Y\left(\lambda\right)\right\}\qquad\quad\text{(by Lemma \ref{lem:2.1})}\,,\] and thus \[\sum\limits_{E\in\mathcal{E}\left(\lambda/\mu\right)}\;\;\prod\limits_{c\in Y \left(\lambda\right)\backslash E}\frac{1}{h_{\lambda}\left(c;z\right)}=\prod \limits_{c\in Y\left(\lambda\right)\backslash Y\left(\lambda\right)}\frac{1}{ h_{\lambda}\left(c;z\right)}=\left(\text{empty product}\right)=1.\] Comparing this with \(\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=1\), we find \[\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=\sum \limits_{E\in\mathcal{E}\left(\lambda/\mu\right)}\;\;\prod\limits_{c\in Y \left(\lambda\right)\backslash E}\frac{1}{h_{\lambda}\left(c;z\right)}.\] Hence, Theorem 3.1 is proved in the case when \(\left|Y\left(\lambda/\mu\right)\right|=0\). The base case is thus finished. _Induction step:_ Fix a positive integer \(N\). Assume (as the induction hypothesis) that Theorem 3.1 holds for \(\left|Y\left(\lambda/\mu\right)\right|=N-1\). We now fix a skew partition \(\lambda/\mu\) with \(\left|Y\left(\lambda/\mu\right)\right|=N\). Our goal is to prove that Theorem 3.1 holds for this \(\lambda/\mu\). From \(\left|Y\left(\lambda/\mu\right)\right|=N>0\), we obtain \(\mu\neq\lambda\). Hence, Lemma 12.5 yields \[\sum\limits_{E\in\mathcal{E}\left(\lambda/\mu\right)}\;\;\prod \limits_{c\in Y\left(\lambda\right)\backslash E}\frac{1}{h_{\lambda}\left(c;z \right)}\] \[=\frac{1}{\sum\limits_{\left(i,j\right)\in Y\left(\lambda/\mu \right)}z_{j-i}}\;\;\sum\limits_{\mu\prec\nu\subseteq\lambda}\;\;\sum\limits_{ E\in\mathcal{E}\left(\lambda/\nu\right)}\;\;\prod\limits_{c\in Y\left( \lambda\right)\backslash E}\frac{1}{h_{\lambda}\left(c;z\right)}. \tag{132}\] On the other hand, Lemma 4.5**(b)** shows that \[\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\mu\right)}\mathbf{z}_{T}=\frac{1} {\sum\limits_{\left(i,j\right)\in Y\left(\lambda/\mu\right)}z_{j-i}}\cdot\sum \limits_{\mu\prec\nu\subseteq\lambda}\;\;\sum\limits_{T\in\mathrm{SYT}\left( \lambda/\nu\right)}\mathbf{z}_{T}. \tag{133}\] Now, if \(\nu\) is any partition satisfying \(\mu\lessdot\nu\subseteq\lambda\), then \(\left|Y\left(\lambda/\nu\right)\right|=\left|Y\left(\lambda/\mu\right)\right|-1\) (since \(\mu\lessdot\nu\) entails that the diagram \(Y\left(\nu\right)\) has exactly one more box than \(Y\left(\mu\right)\), and thus \(Y\left(\lambda/\nu\right)\) has one fewer box than \(Y\left(\lambda/\mu\right)\)), and thus \(\left|Y\left(\lambda/\nu\right)\right|=\underbrace{\left|Y\left(\lambda/\mu \right)\right|}_{=N}-1=N-1\), which allows us to apply our induction hypothesis to \(\nu\) instead of \(\mu\). 
As a consequence, we see that any such partition \(\nu\) satisfies \[\sum\limits_{T\in\mathrm{SYT}\left(\lambda/\nu\right)}\mathbf{z}_{T}=\sum \limits_{E\in\mathcal{E}\left(\lambda/\nu\right)}\;\;\prod\limits_{c\in Y\left( \lambda\right)\backslash E}\frac{1}{h_{\lambda}\left(c;z\right)}.\] Hence, we can rewrite (133) as \[\sum_{T\in\mathrm{SYT}(\lambda/\mu)}\mathbf{z}_{T}=\frac{1}{\sum\limits_{(i,j)\in \mathrm{SY}(\lambda/\mu)}z_{j-i}}\cdot\sum_{\mu\preccurlyeq v\subseteq\lambda} \ \ \sum_{E\in\mathcal{E}(\lambda/\nu)}\ \ \prod_{c\in\mathrm{SY}(\lambda)\setminus E}\frac{1}{h_{ \lambda}\left(c;z\right)}.\] Comparing this with (132), we obtain \[\sum_{T\in\mathrm{SYT}(\lambda/\mu)}\mathbf{z}_{T}=\sum_{E\in\mathcal{E}( \lambda/\mu)}\ \ \prod_{c\in\mathrm{SY}(\lambda)\setminus E}\frac{1}{h_{\lambda} \left(c;z\right)}.\] In other words, Theorem 3.1 holds for our \(\lambda/\mu\). This completes the induction step, and thus Theorem 3.1 is proved. ### To Section 13 and Theorem 5.8 Proof of Lemma 13.1.: We are in one of the following two cases: _Case 1_: We have \(\lambda_{j}\geq i\). _Case 2_: We have \(\lambda_{j}<i\). Let us first consider Case 1. In this case, we have \(\lambda_{j}\geq i\). Thus, Lemma 5.3 yields \(\lambda_{i}^{t}\geq j\). Hence, \(\underbrace{\lambda_{j}}_{\geq i}+\underbrace{\lambda_{i}^{t}}_{\geq j}-i-j \geq i+j-i-j=0>-1\), so that \(\lambda_{j}+\lambda_{i}^{t}-i-j\neq-1\). This proves Lemma 13.1 in Case 1. Let us now consider Case 2. In this case, we have \(\lambda_{j}<i\). Hence, we don't have \(\lambda_{j}\geq i\). According to Lemma 5.3, this shows that we don't have \(\lambda_{i}^{t}\geq j\) either. Hence, we have \(\lambda_{i}^{t}<j\), so that \(\lambda_{i}^{t}\leq j-1\) (since \(\lambda_{i}^{t}\) and \(j\) are integers). Now, \(\underbrace{\lambda_{j}}_{<i}+\underbrace{\lambda_{i}^{t}}_{\leq j-1}-i-j<i+j -1-i-j=-1\), so that \(\lambda_{j}+\lambda_{i}^{t}-i-j\neq-1\). This proves Lemma 13.1 in Case 2. Lemma 13.1 has now been proved in both cases, and thus always holds. Proof of Lemma 13.2.: Well-known and easy consequence of the definition of \(\lambda^{t}\). Proof of Lemma 13.3.: Assume the contrary. Thus, \(-1-p\in\Delta\left(\lambda^{t}\right)\). We have \(p\in\Delta\left(\lambda\right)=\left\{\lambda_{i}-i\ \mid\ i\geq 1\right\}\) (by the definition of \(\Delta\left(\lambda\right)\)). In other words, \(p=\lambda_{j}-j\) for some \(j\geq 1\). Consider this \(j\). Moreover, \(-1-p\in\Delta\left(\lambda^{t}\right)=\left\{\lambda_{i}^{t}-i\ \mid\ i\geq 1\dots\right\}\) (by the definition of \(\Delta\left(\lambda^{t}\right)\)). In other words, \(-1-p=\lambda_{i}^{t}-i\) for some \(i\geq 1\). Consider this \(i\). Adding the equalities \(p=\lambda_{j}-j\) and \(-1-p=\lambda_{i}^{t}-i\) together, we obtain \[p+\left(-1-p\right)=\left(\lambda_{j}-j\right)+\left(\lambda_{i}^{t}-i\right) =\lambda_{j}+\lambda_{i}^{t}-i-j\neq-1\] (by Lemma 13.1). But this contradicts the obvious equality \(p+\left(-1-p\right)=-1\). Hence, our assumption was wrong, and Lemma 13.3 is proved. Proof of Lemma 13.4.: This is just Lemma 10.9, with the letters \(\lambda\), \(\mu\), \(\ell_{j}\) and \(k\) renamed as \(\mu\), \(\lambda\), \(m_{j}\) and \(i\). Proof of Lemma 13.5.: We shall proceed in multiple steps. 1. The set \(\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\}\) is finite. 
[Proof:] Pick an \(n\in\mathbb{N}\) that is large enough that \(\lambda=\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)\) (that is, \(\lambda_{n+1}=0\)) and \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) (that is, \(\mu_{n+1}=0\)). (Such an \(n\) exists, since \(\lambda\) and \(\mu\) are partitions.) Then, Lemma 13.4 shows that every positive integer \(i\) that satisfies \(m_{i}\notin\Delta\left(\lambda\right)\) must satisfy \(i\in[n]\). In other words, \(\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\}\subseteq[n]\). Hence, the set \(\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\}\) is finite (since \([n]\) is finite).] 2. If \(i\) is a positive integer satisfying \(m_{i}\notin\Delta\left(\lambda\right)\), then \(m_{i}+1+b_{i}\) is a positive integer \(p\) satisfying \(\ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\). [Proof:] Let \(i\) be a positive integer satisfying \(m_{i}\notin\Delta\left(\lambda\right)\). Lemma 10.13 then shows that \(m_{i}+1+b_{i}\geq 1\) and \(\ell_{m_{i}+1+b_{i}}^{t}=-1-m_{i}\). From \(m_{i}+1+b_{i}\geq 1\), we see that \(m_{i}+1+b_{i}\) is a positive integer. Moreover, recall that \(\Delta\left(\mu\right)=\left\{m_{1},m_{2},m_{3},\ldots\right\}\), so that \(m_{i}\in\Delta\left(\mu\right)\). Hence, Lemma 13.3 (applied to \(\mu\) and \(m_{i}\) instead of \(\lambda\) and \(p\)) yields that \(-1-m_{i}\notin\Delta\left(\mu^{t}\right)\). Therefore, \(\ell_{m_{i}+1+b_{i}}^{t}=-1-m_{i}\notin\Delta\left(\mu^{t}\right)\). Hence, \(m_{i}+1+b_{i}\) is a positive integer \(p\) satisfying \(\ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\).] 3. The map \[\Phi:\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\} \rightarrow\left\{p\geq 1\ \mid\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\right\},\] \[i \mapsto m_{i}+1+b_{i}\] is well-defined. [Proof:] This is saying that if \(i\) is a positive integer satisfying \(m_{i}\notin\Delta\left(\lambda\right)\), then \(m_{i}+1+b_{i}\) is a positive integer \(p\) satisfying \(\ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\). But we just showed this in the previous step.] 4. This map \(\Phi\) is furthermore injective. [Proof:] Let \(u\) and \(v\) be two distinct elements of \(\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\}\) that satisfy \(\Phi\left(u\right)=\Phi\left(v\right)\). We must show that \(u=v\). The definition of \(\Phi\) yields \(\Phi\left(u\right)=m_{u}+1+b_{u}\). However, we have \(u\in\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\}\), so that \(u\geq 1\) and \(m_{u}\notin\Delta\left(\lambda\right)\). Hence, Lemma 10.13 (applied to \(i=u\)) shows that \(m_{u}+1+b_{u}\geq 1\) and \(\ell_{m_{u}+1+b_{u}}^{t}=-1-m_{u}\). In view of \(\Phi\left(u\right)=m_{u}+1+b_{u}\), we can rewrite these facts as \(\Phi\left(u\right)\geq 1\) and \(\ell_{\Phi\left(u\right)}^{t}=-1-m_{u}\). Similarly, we find \(\Phi\left(v\right)\geq 1\) and \(\ell_{\Phi\left(v\right)}^{t}=-1-m_{v}\). Now, consider the two equalities \(\ell_{\Phi\left(u\right)}^{t}=-1-m_{u}\) and \(\ell_{\Phi\left(v\right)}^{t}=-1-m_{v}\). Their left hand sides are equal, since \(\Phi\left(u\right)=\Phi\left(v\right)\). Thus, their right hand sides are equal as well. In other words, \(-1-m_{u}=-1-m_{v}\), so that \(m_{u}=m_{v}\). However, Lemma 10.2 yields \(m_{1}>m_{2}>m_{3}>\cdots\). In particular, the numbers \(m_{1},m_{2},m_{3},\ldots\) are distinct. Hence, from \(m_{u}=m_{v}\), we obtain \(u=v\). This completes the proof of the injectivity of \(\Phi\).] 5. 
We have \(|\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}|\leq\Big{|}\Big{\{}p \geq 1\ \ |\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\Big{\}}\Big{|}\). [_Proof:_ We have just shown that there is an injective map (namely, \(\Phi\)) from the set \(\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}\) to the set \(\Big{\{}p\geq 1\ \ |\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\Big{\}}\). Thus, the size of the former set is at most as large as the size of the latter. In other words, \(|\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}|\leq\Big{|}\Big{\{}p \geq 1\ \ |\ \ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\Big{\}}\Big{|}\).] 6. The sets \(\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}\) and \(\Big{\{}p\geq 1\ \ |\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\Big{\}}\) are finite and have the same size. [_Proof:_ We have just shown that \[|\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}| \leq\Big{|}\Big{\{}p\geq 1\ \ |\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\Big{\}}\Big{|}\] \[=\big{|}\{i\geq 1\ \ |\ \ \ell_{i}^{t}\notin\Delta\left(\mu^{t}\right)\}\big{|}\] (134) (here, we renamed the index \(p\) as \(i\)). But we can apply the same reasoning to the partitions \(\mu^{t}\) and \(\lambda^{t}\) instead of \(\lambda\) and \(\mu\). As a result, we obtain \[\big{|}\big{\{}i\geq 1\ \ |\ \ \ell_{i}^{t}\notin\Delta\left(\mu\right)\big{\}} \big{|}\leq\Big{|}\Big{\{}i\geq 1\ \ |\ \ \ m_{i}^{tt}\notin\Delta\left(\left(\lambda^{t}\right)^{t}\right)\Big{\}} \Big{|}\,,\] where we set \(m_{i}^{tt}=\left(\mu^{t}\right)_{i}^{t}-i\) for each \(i\geq 1\). This can be simplified to \[|\big{\{}i\geq 1\ \ |\ \ \ell_{i}^{t}\notin\Delta\left(\mu\right)\big{\}}|\leq|\{ i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}|\] (since Lemma 13.2 yields \(\left(\lambda^{t}\right)^{t}=\lambda\) and \(\left(\mu^{t}\right)^{t}=\mu\), so that \(m_{i}^{tt}=\underbrace{\left(\mu^{t}\right)_{i}^{t}}_{=\mu_{i}}\ \ \ \ \ -i=\mu_{i}-i=m_{i}\) for every \(i\geq 1\)). Combining this inequality with the inequality (134), we obtain \[|\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}| =\big{|}\big{\{}i\geq 1\ \ |\ \ \ell_{i}^{t}\notin\Delta\left(\mu^{t}\right)\big{\}}\big{|}\] \[=\Big{|}\Big{\{}p\geq 1\ \ |\ \ \ell_{p}^{t}\notin\Delta\left(\mu^{t} \right)\Big{\}}\Big{|}\,.\] Thus, the sets \(\{i\geq 1\ \ |\ \ m_{i}\notin\Delta\left(\lambda\right)\}\) and \(\Big{\{}p\geq 1\ \ |\ \ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\Big{\}}\) have the same size. Since the first of them is finite (by Step 1 above), we conclude that they are both are finite.] 7. The map \(\Phi\) is bijective. [Proof:] The sets \(\left\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\right\}\) and \(\left\{p\geq 1\ \mid\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\right\}\) are finite and have the same size (as we have just seen), and the map \(\Phi\) between these sets is injective (by Step 4 above). It is known that any injective map between two finite sets of the same size must be bijective. Applying this to the map \(\Phi\), we thus conclude that \(\Phi\) is bijective.] Thus, the map \(\Phi\) is a bijection. This proves Lemma 13.5 (since the map \(\Phi\) is exactly the map described in Lemma 13.5). Proof of Lemma 13.6.: Lemma 13.4 shows that every positive integer \(i\) that satisfies \(m_{i}\notin\Delta\left(\lambda\right)\) must satisfy \(i\in[n]\). 
Hence, the summation sign \(\sum\limits_{\begin{subarray}{c}i\geq 1;\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}\) is equivalent to \(\sum\limits_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}\). Therefore, \[\sum\limits_{\begin{subarray}{c}i\geq 1;\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}y_{m_{i}+1+b_{i}}=\sum \limits_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}y_{m_{i}+1+b_{i}}. \tag{135}\] On the other hand, Lemma 13.5 shows that the map \[\left\{i\geq 1\ \mid\ m_{i}\notin\Delta\left(\lambda\right)\right\} \rightarrow\left\{p\geq 1\ \mid\ \ell_{p}^{t}\notin\Delta\left(\mu^{t}\right)\right\},\] \[i \mapsto m_{i}+1+b_{i}\] is a bijection. Hence, we can substitute \(k\) for \(m_{i}+1+b_{i}\) in the sum \(\sum\limits_{\begin{subarray}{c}i\geq 1;\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}y_{m_{i}+1+b_{i}}\). We thus obtain \[\sum\limits_{\begin{subarray}{c}i\geq 1;\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}y_{m_{i}+1+b_{i}}=\sum \limits_{\begin{subarray}{c}k\geq 1;\\ \ell_{k}^{t}\notin\Delta\left(\mu^{t}\right)\end{subarray}}y_{k}.\] Comparing this with (135), we obtain \[\sum\limits_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta\left(\lambda\right)\end{subarray}}y_{m_{i}+1+b_{i}}=\sum \limits_{\begin{subarray}{c}k\geq 1;\\ \ell_{k}^{t}\notin\Delta\left(\mu^{t}\right)\end{subarray}}y_{k}.\] This proves Lemma 13.6. Proof of Theorem 5.8.: If \(k\) is a positive integer that satisfies \(\ell_{k}\notin\Delta\left(\mu\right)\), then \(k\in[n]\) (by Lemma 10.9). Thus, the summation sign "\(\sum\limits_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta\left(\mu\right)\end{subarray}}\)" is equivalent to the summation sign "\(\sum\limits_{\begin{subarray}{c}k\geq 1;\\ \ell_{k}\notin\Delta\left(\mu\right)\end{subarray}}\)". Hence, \[\sum\limits_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta\left(\mu\right)\end{subarray}}x_{k}=\sum\limits_{ \begin{subarray}{c}k\geq 1;\\ \ell_{k}\notin\Delta\left(\mu\right)\end{subarray}}x_{k}. \tag{136}\] Lemma 13.6 yields \[\sum_{\begin{subarray}{c}i\in[n];\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}=\sum_{ \begin{subarray}{c}k\geq 1;\\ \ell_{k}^{\dagger}\notin\Delta(\mu^{t})\end{subarray}}y_{k}. \tag{137}\] Now, Lemma 11.11 yields \[\left(\sum_{\begin{subarray}{c}k\in[n];\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}i\in[n] ;\\ m_{i}\notin\Delta(\lambda)\end{subarray}}y_{m_{i}+1+b_{i}}\right)\mathbf{s}_{ \lambda}\left[\mu\right]=\sum_{\mu<\nu\subseteq\lambda}\mathbf{s}_{\lambda} \left[\nu\right].\] In view of (136) and (137), we can rewrite this as \[\left(\sum_{\begin{subarray}{c}k\geq 1;\\ \ell_{k}\notin\Delta(\mu)\end{subarray}}x_{k}+\sum_{\begin{subarray}{c}k\geq 1 ;\\ \ell_{k}^{\dagger}\notin\Delta(\mu^{t})\end{subarray}}y_{k}\right)\mathbf{s}_{ \lambda}\left[\mu\right]=\sum_{\mu<\nu\subseteq\lambda}\mathbf{s}_{\lambda} \left[\nu\right].\] This proves Theorem 5.8. ## 16 Appendix: Odds and ends In this short section, we shall discuss how a few of the auxiliary results shown above can be extended or generalized. ### To Section 8 The most general result in Section 8 is Theorem 8.3. We can generalize it further by replacing the partition \(\mu\) by a skew partition \(\mu/\nu\) and adding a second flagging \(\mathbf{a}\). Here are the relevant definitions: **Definition 16.1**.: Let \(\mu/\nu\) be a skew partition. 
* A _semistandard tableau_ of shape \(\mu/\nu\) is defined just like a semistandard tableau of shape \(\mu\) (Definition 6.1), but with each "\(\mu\)" replaced by "\(\mu/\nu\)".
* Let \(\mathbf{a}=(a_{1},a_{2},a_{3},\ldots)\) and \(\mathbf{b}=(b_{1},b_{2},b_{3},\ldots)\) be two flaggings (i.e., sequences of positive integers). A semistandard tableau \(T\) of shape \(\mu/\nu\) is said to be \(\mathbf{b/a}\)-_flagged_ if and only if it satisfies \[a_{i}\leq T\left(i,j\right)\leq b_{i}\qquad\text{for all }\left(i,j\right)\in Y\left(\mu/\nu\right)\] (that is, all entries in row \(i\) are \(\geq a_{i}\) and \(\leq b_{i}\)). We let \(\operatorname{FSSYT}\left(\mu/\nu,\,\mathbf{b/a}\right)\) be the set of all \(\mathbf{b/a}\)-flagged semistandard tableaux of shape \(\mu/\nu\).

Now we can generalize Theorem 8.3 as follows:

**Theorem 16.2**.: Let \(R\) be a commutative ring. Let \(u_{i,j}\) be an element of \(R\) for each pair \(\left(i,j\right)\in\mathbb{Z}\times\mathbb{Z}\). For each \(a,b\in\mathbb{N}\) and \(r,q,d\in\mathbb{Z}\), we define an element \(h_{a,\ b;\ r,\ q}\left[d\right]\in R\) by \[h_{a,\ b;\ r,\ q}\left[d\right]:=\sum_{\begin{subarray}{c}\left(i_{r+1},i_{r+2},\ldots,i_{q}\right)\in\left[a,b\right]^{q-r};\\ i_{r+1}\leq i_{r+2}\leq\cdots\leq i_{q}\end{subarray}}\ \prod_{j=r+1}^{q}u_{i_{j},\ j-d}\] (where \(\left[a,b\right]:=\left\{a,a+1,a+2,\ldots,b\right\}\)). This sum is understood to be \(0\) if \(q<r\), and to be \(1\) if \(q=r\). Let \(\mu=\left(\mu_{1},\mu_{2},\ldots,\mu_{n}\right)\) and \(\nu=\left(\nu_{1},\nu_{2},\ldots,\nu_{n}\right)\) be two partitions such that \(\nu\subseteq\mu\). Let \(\mathbf{a}=\left(a_{1},a_{2},a_{3},\ldots\right)\) and \(\mathbf{b}=\left(b_{1},b_{2},b_{3},\ldots\right)\) be two weakly increasing flaggings. Then, \[\sum_{T\in\text{FSSYT}\left(\mu/\nu,\ \mathbf{b}/\mathbf{a}\right)}\ \ \prod_{\left(i,j\right)\in Y\left(\mu/\nu\right)}u_{T\left(i,j\right),\ j-i}=\det\left(h_{a_{j},\ b_{i};\ \nu_{j},\ \mu_{i}-i+j}\left[j\right]\right)_{i,j\in\left[n\right]}.\]

The proof of Theorem 16.2 is mostly analogous to our proof of Theorem 8.3, with some occasional complications due to the \(a_{i}\leq T\left(i,j\right)\) conditions and due to the presence of \(\nu\). Some changes need to be made in Definition 8.5: In the definition of a legitimate permutation \(\sigma\in S_{n}\), the inequality \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\geq 0\) must be replaced by \(\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\geq\nu_{i}\). In the definition of \(P\left(\sigma\right)\), the inequality \(j\leq\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\) must be replaced by \(\nu_{i}<j\leq\mu_{\sigma\left(i\right)}-\sigma\left(i\right)+i\). The \(\mathbf{b}\)-flagged \(\sigma\)-arrays should be replaced by "\(\mathbf{b}/\mathbf{a}\)-flagged \(\sigma\)-arrays", which are defined similarly but require that every entry of \(T\) in the \(i\)-th row is \(\leq b_{\sigma\left(i\right)}\) and \(\geq a_{i}\) (note the different subscripts!). The definition of an outer failure (Definition 8.10 **(a)**) should be adapted by replacing "\(\left(i-1,j\right)\notin P\left(\sigma\right)\)" by "\(\left(i-1,j\right)\notin P\left(\sigma\right)\cup Y\left(\nu\right)\)" (so that \(\left(i,j\right)\) does \(\mathbf{not}\) count as a failure if \(\nu_{i-1}\geq j\)). We leave it to the reader to check that the argument (most importantly, the proof of Lemma 8.17) survives all these changes (and some others that are forced by these). 
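As a small illustration of the notation \(h_{a,\ b;\ r,\ q}\left[d\right]\) in Theorem 16.2: for \(a=1\), \(b=2\), \(r=0\), \(q=2\) and \(d=0\), the weakly increasing tuples \(\left(i_{1},i_{2}\right)\in\left[1,2\right]^{2}\) are \(\left(1,1\right)\), \(\left(1,2\right)\) and \(\left(2,2\right)\), so that
\[h_{1,\ 2;\ 0,\ 2}\left[0\right]=u_{1,1}u_{1,2}+u_{1,1}u_{2,2}+u_{2,1}u_{2,2}.\]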
### To Section 9 As we said, Lemma 9.2 is just the \(r=1\) case of [10, SS319]. Here is the general case: **Lemma 16.3**.: Let \(P\) and \(Q\) be two \(n\times n\)-matrices over some commutative ring. For each subset \(K\) of \(\left[n\right]\), we let \(P\underset{\text{row}}{\overset{K}{\underset{\text{row}}{\rightleftarrows}}}Q\) denote the \(n\times n\)-matrix that is obtained from \(P\) by replacing the \(k\)-th row by the \(k\)-th row of \(Q\) for all \(k\in K\). (That is, the \(\left(i,j\right)\)-th entry of this matrix is \(\begin{cases}P_{i,j},&\text{ if }i\notin K;\\ Q_{i,j},&\text{ if }i\in K\end{cases}\) for every \(i,j\in\left[n\right]\).) For each subset \(K\) of \([n]\), we let \(P\underset{\mathrm{col}}{\overset{K}{\underset{\mathrm{col}}{\times}}}Q\) denote the \(n\times n\)-matrix that is obtained from \(P\) by replacing the \(k\)-th column by the \(k\)-th column of \(Q\) for all \(k\in K\). (That is, the \((i,j)\)-th entry of this matrix is \(\begin{cases}P_{i,j},&\text{ if }j\notin K;\\ Q_{i,j},&\text{ if }j\in K\end{cases}\) for every \(i,j\in[n]\).) Let \(r\in\{0,1,\ldots,n\}\). Then, \[\sum_{\begin{subarray}{c}K\subseteq[n];\\ |K|=r\end{subarray}}\det\left(P\underset{\mathrm{row}}{\overset{K}{\underset {\mathrm{row}}{\times}}}Q\right)=\sum_{\begin{subarray}{c}K\subseteq[n];\\ |K|=r\end{subarray}}\det\left(P\underset{\mathrm{col}}{\overset{K}{\underset {\mathrm{col}}{\times}}}Q\right).\] This can be proved in a similar way as we proved Lemma 9.2, but using Laplace expansion along multiple rows/columns (see, e.g., [10, Theorem 6.156]). Finally, we note that Lemma 9.6 has an analogue in which \(p_{i}\) is replaced by \(p_{j}\): **Lemma 16.4**.: Let \(n\) be a positive integer. Let \(R\) be a commutative ring. Let \(u_{i,j}\) be an element of \(R\) for each \(i\in[n]\) and each \(j\in[n+1]\). Let \(p_{1},p_{2},\ldots,p_{n}\) be \(n\) further elements of \(R\). Then, \[\sum_{k=1}^{n}\det\left(u_{i,j+[k=i]}-p_{j}u_{i,j}\,[k=i]\right)_ {i,j\in[n]}\] \[=\det\left(u_{i,j+[n=j]}\right)_{i,j\in[n]}-\left(\sum_{k=1}^{n}p _{k}\right)\det\left(u_{i,j}\right)_{i,j\in[n]}.\] Proof of Lemma 16.4.: Proceed exactly as in the above proof of Lemma 9.6, replacing the \(p_{j}\) by \(p_{i}\). 
The only nontrivial change is replacing the computation (93) by \[\sum_{k=1}^{n}\ \sum_{\ell=1}^{n}\left(-1\right)^{k+\ell}p_{\ell}u_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right)=\sum_{\ell=1}^{n}p_{\ell}\sum_{k=1}^{n}\left(-1\right)^{k+\ell}\underbrace{u_{k,\ell}}_{\begin{subarray}{c}=U_{k,\ell}\\ \text{(by the definition of }U\text{)}\end{subarray}}\det\left(U_{\sim k,\sim\ell}\right)\] \[=\sum_{\ell=1}^{n}p_{\ell}\underbrace{\sum_{k=1}^{n}\left(-1\right)^{k+\ell}U_{k,\ell}\det\left(U_{\sim k,\sim\ell}\right)}_{\begin{subarray}{c}=\det U\\ \text{(by Laplace expansion along the }\ell\text{-th column)}\end{subarray}}=\sum_{\ell=1}^{n}p_{\ell}\det U=\sum_{k=1}^{n}p_{k}\det U\] \[=\left(\sum_{k=1}^{n}p_{k}\right)\det\underbrace{U}_{=\left(u_{i,j}\right)_{i,j\in\left[n\right]}}=\left(\sum_{k=1}^{n}p_{k}\right)\det\left(u_{i,j}\right)_{i,j\in\left[n\right]}.\] ### To Section 13 Finally, we note that Lemma 13.3 has a converse: **Proposition 16.5**.: Let \(\lambda\) be any partition. Let \(p\in\mathbb{Z}\). Then, \(p\in\Delta\left(\lambda\right)\) if and only if \(-1-p\notin\Delta\left(\lambda^{t}\right)\). Proof of Proposition 16.5.: The "only if" direction was Lemma 13.3. It remains to prove the "if" direction. Thus, we assume that \(-1-p\notin\Delta\left(\lambda^{t}\right)\). Our goal is to prove that \(p\in\Delta\left(\lambda\right)\). We will use Convention 10.1. Set \(\lambda_{0}:=\infty\) and \(\ell_{0}:=\infty\). Thus, the equality \(\ell_{i}=\lambda_{i}-i\) holds not only for all \(i\geq 1\) (for which it follows from Convention 10.1), but also for \(i=0\). In other words, \(\ell_{i}=\lambda_{i}-i\) for each \(i\in\mathbb{N}\). Also, \(\ell_{0}>\ell_{1}>\ell_{2}>\ell_{3}>\cdots\) (this is proved as in the proof of Lemma 10.14 above). Hence, there are only finitely many \(i\in\mathbb{N}\) that satisfy \(\ell_{i}\geq p\). Moreover, there exists at least one such \(i\) (namely, \(i=0\), since \(\ell_{0}=\infty\geq p\)). Consider the **largest** such \(i\). Then, \(\ell_{i}\geq p\) but \(\ell_{i+1}<p\). We shall show that \(\ell_{i}=p\). Indeed, assume the contrary. Thus, \(\ell_{i}\neq p\). Combining this with \(\ell_{i}\geq p\), we obtain \(\ell_{i}>p\). In other words, \(\lambda_{i}-i>p\) (since \(\ell_{i}=\lambda_{i}-i\)). In other words, \(\lambda_{i}>i+p\). Hence, \(\lambda_{i}\geq i+p+1\) (since \(\lambda_{i}\) and \(i+p\) are integers or \(\infty\)). Also, we have \(\ell_{i+1}=\lambda_{i+1}-(i+1)=\lambda_{i+1}-i-1\) and thus \(\lambda_{i+1}-i-1=\ell_{i+1}<p\). In other words, \(\lambda_{i+1}-i<p+1\). Since \(\lambda_{i+1}-i\) and \(p+1\) are integers, this entails \(\lambda_{i+1}-i\leq(p+1)-1=p\). Therefore, \(\lambda_{i+1}\leq i+p<i+p+1\). Thus, \(i+p+1>\lambda_{i+1}\geq 0\). This shows that \(i+p+1\) is a positive integer. 
Thus, Lemma 5.3 (applied to \(i+p+1\) and \(i+1\) instead of \(i\) and \(j\)) yields the logical equivalence \[\left(\lambda_{i+p+1}^{t}\geq i+1\right)\iff\left(\lambda_{i+1}\geq i+p+1 \right).\] Since the statement \(\left(\lambda_{i+1}\geq i+p+1\right)\) is false (because \(\lambda_{i+1}<i+p+1\)), we thus conclude that the statement \(\left(\lambda_{i+p+1}^{t}\geq i+1\right)\) is also false. In other words, \(\lambda_{i+p+1}^{t}<i+1\). Hence, \(\lambda_{i+p+1}^{t}\leq i\) (since both sides of the inequality are integers). Moreover, Lemma 5.3 (applied to \(i+p+1\) and \(i\) instead of \(i\) and \(j\)) yields the logical equivalence34 Footnote 34: To be precise, this argument only works when \(i>0\). However, in the remaining case, the conclusion (\(\lambda_{i+p+1}^{t}\geq i\)) is obvious anyway (since \(\lambda_{i+p+1}^{t}\geq 0\)), so we don’t need this argument. \[\left(\lambda_{i+p+1}^{t}\geq i\right)\iff\left(\lambda_{i}\geq i+p+1\right).\] Since the statement \(\lambda_{i}\geq i+p+1\) holds, we thus conclude that \(\lambda_{i+p+1}^{t}\geq i\) holds as well. Combining \(\lambda_{i+p+1}^{t}\geq i\) with \(\lambda_{i+p+1}^{t}\leq i\), we find \(\lambda_{i+p+1}^{t}=i\). Now, the definition of \(\ell_{i+p+1}^{t}\) yields \[\ell_{i+p+1}^{t}=\underbrace{\lambda_{i+p+1}^{t}}_{=i}-(i+p+1)=i-(i+p+1)=-1-p.\] Hence, \[-1-p =\ell_{i+p+1}^{t}\in\left\{\ell_{1}^{t},\ell_{2}^{t},\ell_{3}^{t} \ldots\right\}\qquad\quad(\text{since $i+p+1$ is a positive integer})\] \[=\Delta\left(\lambda^{t}\right),\] which contradicts \(-1-p\notin\Delta\left(\lambda^{t}\right)\). This contradiction shows that our assumption was false. Hence, \(\ell_{i}=p\) is proved. Now, \(i\neq 0\) (since \(\ell_{i}=p\neq\infty=\ell_{0}\)) and thus \(i\geq 1\). Hence, \(\ell_{i}\in\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}\). Thus, from \(\ell_{i}=p\), we obtain \[p=\ell_{i}\in\left\{\ell_{1},\ell_{2},\ell_{3},\ldots\right\}=\Delta\left( \lambda\right),\] which is precisely what we desired to prove. Thus, the proof of Proposition 16.5 is complete. ## 17 Appendix: Deriving Proposition 8.2 from the literature In this short appendix, we shall briefly outline how Proposition 8.2 can be derived from some known results. ### Deriving Proposition 8.2 as a consequence of Gessel-Viennot First, we shall show how Proposition 8.2 can be obtained from [10, Theorem 3]. Let us use the notations of [10, Theorem 3], but set \(b_{i}:=1\) and \(\mu:=\varnothing\) and \(k:=n\). Furthermore, we define a labelling set \(L:=\mathbb{Z}^{2}\) (consisting of all possible boxes), a relabeling function \(\mathbf{f}\) given by \(f_{r}\left(s\right):=\left(r,s\right)\in L\), and a weight function \(w\) given by \(w\left(r,s\right)=x_{s}+y_{s+r}\) for each \(\left(r,s\right)\in L\) (where we set \(x_{i}=y_{i}=0\) for all \(i\leq 0\)). Then, \(w\left(f_{r}\left(s\right)\right)=x_{s}+y_{s+r}\) for any \(r\) and \(s\). Now, [10, Theorem 3] says that \[(\text{the sum of the weights of }\mathbf{f}\left(T\right)\text{ over all tableaux }T\text{ of shape }\lambda\] \[\qquad\qquad\text{satisfying }1\leq T_{i,j}\leq d_{i}\text{ for all }\left(i,j\right)\in Y\left(\lambda\right)\bigr{)}\] \[=\det\left(H_{\mathbf{f}}\left(-i+1,\ 1,\ \lambda_{j}-j+1,\ d_{j}\right) \right)_{i,j\in\left[n\right]}. \tag{138}\] However, the left hand side of this equality is easily seen to be exactly our \[\sum_{T\in\mathrm{FSSYT}(\mu,\mathbf{b})}\ \ \prod_{(i,j)\in Y(\mu)}\left(x_{T(i,j)}+y_{T(i,j)+j-i} \right),\] if we rename \(\lambda\) and \(d_{i}\) as \(\mu\) and \(b_{i}\). 
On the other hand, the right hand side of (138) (after the same renaming) becomes our \[\det\left(h\left(\mu_{j}-j+i,\ \ b_{j},\ \ 1-i\right)\right)_{i,j\in[n]},\] because every \(i,j\in[n]\) satisfy
\[H_{\mathbf{f}}\left(-i+1,\ 1,\ \lambda_{j}-j+1,\ d_{j}\right)\]
\[=\sum_{1\leq n_{-i+1}\leq n_{-i+2}\leq\cdots\leq n_{\lambda_{j}-j}\leq d_{j}}w\left(f_{-i+1}\left(n_{-i+1}\right)\right)w\left(f_{-i+2}\left(n_{-i+2}\right)\right)\cdots w\left(f_{\lambda_{j}-j}\left(n_{\lambda_{j}-j}\right)\right)\]
\[=\sum_{1\leq n_{-i+1}\leq n_{-i+2}\leq\cdots\leq n_{\lambda_{j}-j}\leq d_{j}}\ \ \prod_{g=-i+1}^{\lambda_{j}-j}\underbrace{w\left(f_{g}\left(n_{g}\right)\right)}_{=x_{n_{g}}+y_{n_{g}+g}}\]
\[=\sum_{1\leq n_{-i+1}\leq n_{-i+2}\leq\cdots\leq n_{\lambda_{j}-j}\leq d_{j}}\ \ \prod_{g=-i+1}^{\lambda_{j}-j}\left(x_{n_{g}}+y_{n_{g}+g}\right)\]
\[=\sum_{1\leq m_{1}\leq m_{2}\leq\cdots\leq m_{\lambda_{j}-j+i}\leq d_{j}}\ \ \prod_{g=1}^{\lambda_{j}-j+i}\left(x_{m_{g}}+y_{m_{g}+g-i}\right)\qquad\left(\text{here, we have shifted the indices by setting }m_{g}=n_{g-i}\right)\]
\[=\sum_{\begin{subarray}{c}\left(m_{1},m_{2},\ldots,m_{\lambda_{j}-j+i}\right)\in[d_{j}]^{\lambda_{j}-j+i};\\ m_{1}\leq m_{2}\leq\cdots\leq m_{\lambda_{j}-j+i}\end{subarray}}\ \ \prod_{g=1}^{\lambda_{j}-j+i}\left(x_{m_{g}}+y_{m_{g}+g-i}\right)\]
\[=h\left(\lambda_{j}-j+i,\ \ d_{j},\ \ 1-i\right)\qquad\qquad\left(\text{using our definition of }h\right).\]

Hence, the equality (138) becomes \[\sum_{T\in\mathrm{FSSYT}(\mu,\mathbf{b})}\ \ \prod_{(i,j)\in Y(\mu)}\left(x_{T(i,j)}+y_{T(i,j)+j-i}\right)=\det\left(h\left(\mu_{j}-j+i,\ \ b_{j},\ \ 1-i\right)\right)_{i,j\in[n]}=\det\left(h\left(\mu_{i}-i+j,\ \ b_{i},\ \ 1-j\right)\right)_{i,j\in[n]}\] (since \(\det\left(A^{T}\right)=\det A\) for any square matrix \(A\)). Proposition 8.2 thus follows.

### Deriving Proposition 8.2 from Chen-Li-Louck

In their study of flagged double Schur functions, Chen, Li and Louck have obtained a determinantal formula [ChLiLo02, second bullet point after Theorem 4.2] that can, too, be used to prove Proposition 8.2. We shall recall this formula, and then very tersely outline the derivation.

For any \(a,b,c\in\mathbb{Z}\), we define an element \(h_{a}\left(X_{b}/Y_{c}\right)\) of \(R\) by \[h_{a}\left(X_{b}/Y_{c}\right):=\left(\text{the coefficient of }t^{a}\text{ in the power series }\dfrac{\prod\limits_{i=1}^{c}\left(1+y_{i}t\right)}{\prod\limits_{j=1}^{b}\left(1-x_{j}t\right)}\in R\left[[t]\right]\right)\] (understanding empty products as \(1\) as usual). Then, [ChLiLo02, second bullet point after Theorem 4.2] (translated into our language, and upon the substitution \(y_{j}\mapsto-y_{j}\)) says that \[\sum\limits_{T\in\text{FSSYT}(\mu,\mathbf{b})}\ \ \prod\limits_{(i,j)\in Y(\mu)}\left(x_{T(i,j)}+y_{T(i,j)+j-i}\right)=\det\left(h_{\mu_{i}-i+j}\left(X_{b_{i}}/Y_{\lambda_{i}+b_{i}-i}\right)\right)_{i,j\in[n]}\] (with the notations of Proposition 8.2). In order to derive Proposition 8.2 from this, it suffices to show that \[h\left(a,b,c\right)=h_{a}\left(X_{b}/Y_{a+b+c-1}\right) \tag{139}\] for any \(a,b,c\in\mathbb{Z}\) satisfying \(c\leq 0\). This equality (139) can, in turn, be derived again from [ChLiLo02, second bullet point after Theorem 4.2] (applied to \(n=1\), \(\lambda_{1}=a\) and \(b_{1}=b\)). (To be more precise, this proves (139) for \(c=0\), but then the general case follows by shifting the \(y\)-variables.)
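As a quick sanity check of (139) in the smallest nontrivial case, take \(a=1\), \(b=2\) and \(c=0\): on the one hand, \[h\left(1,2,0\right)=\sum_{1\leq m_{1}\leq 2}\left(x_{m_{1}}+y_{m_{1}}\right)=\left(x_{1}+y_{1}\right)+\left(x_{2}+y_{2}\right),\] while on the other hand, \(h_{1}\left(X_{2}/Y_{1+2+0-1}\right)=h_{1}\left(X_{2}/Y_{2}\right)\) is the coefficient of \(t\) in \(\dfrac{\left(1+y_{1}t\right)\left(1+y_{2}t\right)}{\left(1-x_{1}t\right)\left(1-x_{2}t\right)}\), which is \(x_{1}+x_{2}+y_{1}+y_{2}\) as well.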
2308.02067
Bayesian Decision Curve Analysis with bayesDCA
Clinical decisions are often guided by clinical prediction models or diagnostic tests. Decision curve analysis (DCA) combines classical assessment of predictive performance with the consequences of using these strategies for clinical decision-making. In DCA, the best decision strategy is the one that maximizes the so-called net benefit: the net number of true positives (or negatives) provided by a given strategy. In this decision-analytic approach, often only point estimates are published. If uncertainty is reported, a risk-neutral interpretation is recommended: it motivates further research without changing the conclusions based on currently-available data. However, when it comes to new decision strategies, replacing the current Standard of Care must be carefully considered -- prematurely implementing a suboptimal strategy poses potentially irrecoverable costs. In this risk-averse setting, quantifying uncertainty may also inform whether the available data provides enough evidence to change current clinical practice. Here, we employ Bayesian approaches to DCA addressing four fundamental concerns when evaluating clinical decision strategies: (i) which strategies are clinically useful, (ii) what is the best available decision strategy, (iii) pairwise comparisons between strategies, and (iv) the expected net benefit loss associated with the current level of uncertainty. While often consistent with frequentist point estimates, fully Bayesian DCA allows for an intuitive probabilistic interpretation framework and the incorporation of prior evidence. We evaluate the methods using simulation and provide a comprehensive case study. Software implementation is available in the bayesDCA R package. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers adopt better-informed decisions.
Giuliano N. F. Cruz, Keegan Korthauer
2023-08-03T22:36:03Z
http://arxiv.org/abs/2308.02067v1
# Bayesian Decision Curve Analysis with bayesDCA ###### Abstract Clinical decisions are often guided by clinical prediction models or diagnostic tests. Decision curve analysis (DCA) combines classical assessment of predictive performance with the consequences of using these strategies for clinical decision-making. In DCA, the best decision strategy is the one that maximizes the so-called net benefit: the net number of true positives (or negatives) provided by a given strategy. In this decision-analytic approach, often only point estimates are published. If uncertainty is reported, a risk-neutral interpretation is recommended: it motivates further research without changing the conclusions based on currently-available data. However, when it comes to new decision strategies, replacing the current Standard of Care must be carefully considered - prematurely implementing a suboptimal strategy poses potentially irrecoverable costs. In this risk-averse setting, quantifying uncertainty may also inform whether the available data provides enough evidence to change current clinical practice. Here, we employ Bayesian approaches to DCA addressing four fundamental concerns when evaluating clinical decision strategies: (i) which strategies are clinically useful, (ii) what is the best available decision strategy, (iii) pairwise comparisons between strategies, and (iv) the expected net benefit loss associated with the current level of uncertainty. While often consistent with frequentist point estimates, fully Bayesian DCA allows for an intuitive probabilistic interpretation framework and the incorporation of prior evidence. We evaluate the methods using simulation and provide a comprehensive case study. Software implementation is available in the bayesDCA R package. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers adopt better-informed decisions. **Keywords:** decision curve analysis, Bayesian, R package, clinical prediction models, diagnostic tests, clinical decision-making Introduction In Decision Curve Analysis (DCA), we are typically interested in estimating the net benefit of adopting a given clinical decision strategy [1]. A decision strategy may be simply treating all patients under suspicion of a given condition because the condition is so deadly that any costs of a potentially unnecessary intervention are outweighed by devastating costs of neglecting necessary treatment, even if the risk is small - e.g., treating potentially aggressive cancer. In this "Treat all" strategy, intervention happens in all patients regardless of the true underlying disease status - e.g., whether the patient's cancer is aggressive or not. Conversely, an intervention may be high risk while the underlying condition is benign - e.g., surgical removal of a stable noncancerous brain tumour. In this case, a reasonable decision strategy is to not treat any patient - the "Treat none" strategy. In general, the relative risk conferred by the disease and the treatment does not always clearly side with either the "Treat all" or "Treat none" strategies. In addition, there is often uncertainty around a patient's disease status or prognosis. In this case, a decision strategy could be based on a predictive model that estimates, e.g., a patient's likelihood of having aggressive disease. If the patient's likelihood is above a decision threshold \(t\), then we intervene. 
Beyond the probability of having a disease right now (diagnostic setting), the threshold \(t\) could also be the probability of a future event like death, hospitalization, or disease progression (prognostic setting). The same idea serves for binary tests, in which case we intervene if the test is positive. In the context of DCA, the Net Benefit (NB) at the decision threshold \(t\) can be written as[1]: \[NB_{t}=\Big{(}TP_{t}-FP_{t}\cdot w_{t}\Big{)}/n \tag{1}\] where \(TP_{t}\) and \(FP_{t}\) are corresponding true and false positive counts, \(n\) is the total sample size, and \(w_{t}=t/(1-t)\). Given a decision threshold \(t\), this definition (1) fixes the weight of each true positive at 1, which mathematically implies a relative weight of \(w_{t}\) for each false positive. This allows decision analysis without the need to specify the absolute costs of each potential outcome (true and false positive/negative). Instead, we rely on a clinically-motivated decision threshold \(t\) which properly weights true and false positives/negatives based on the clinical context[2]. To consider a range of relative weights (e.g., due to disagreement between clinicians or even patients' preferences), the decision curve is then constructed by plotting \(NB_{t}\) for a reasonable range of decision thresholds \(t\). At each decision threshold, the optimal decision strategy is the one that maximizes the expected net benefit[1]. If using one strategy imposes a higher cost than using another, then its net benefit may be adjusted by subtracting a "test harm" term which represents the strategy-specific costs [3]. However, this approach requires an additional, subjective step of calculating the cost for each decision strategy under investigation. Moreover, strategy costs may be highly context-specific. For example, they may depend on resources available in a given location. This does not undermine the value of calculating the likely costs of each strategy, but it acknowledges that the challenges of this task may exceed the scope of model development and validation studies. When interpreting DCA results, there may be special considerations for making a decision that changes a well-accepted practice. Beyond the one-time cost of the implementation process itself, there is always the risk of implementing the "wrong" strategy whose apparent optimality was an artifact of chance. As with any other estimate, the observed net benefit is subject to random variation in the data. Addressing this uncertainty, however, depends on how we understand risk. Under risk neutrality, we only care about expected gains (or costs) and uncertainty quantification does not change which strategy should be used, although it can motivate further research[4, 3, 5]. On the other hand, prematurely replacing the current Standard of Care (SoC) with a new decision strategy poses potentially irrecoverable costs to individual patients, healthcare institutions, and even healthcare systems[6, 5]. This setting motivates risk aversion, which may require a more careful assessment of uncertainty to prevent premature implementation. The work presented here regards the Bayesian estimation of net benefit and is a priori indifferent as to whether end users are risk-neutral or not - a debate we do not aim to settle. To describe the full potential of the method, we include a case study that does not assume that risk neutrality is satisfied when assessing a new decision strategy to potentially replace the SoC. 
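As a concrete reading of definition (1), the short R sketch below computes the empirical (point-estimate) net benefit of a set of risk predictions over a grid of thresholds; the function name and the toy data are illustrative only and are not taken from the paper's analyses or from any package. This is the quantity whose uncertainty is at issue in what follows.

```r
# Illustrative sketch: empirical net benefit of a model's risk predictions
# at a grid of decision thresholds, following definition (1).
net_benefit <- function(pred, outcome, thresholds) {
  n <- length(outcome)
  sapply(thresholds, function(t) {
    pos <- pred > t                 # positive prediction at threshold t
    tp  <- sum(pos & outcome == 1)  # true positives
    fp  <- sum(pos & outcome == 0)  # false positives
    (tp - fp * t / (1 - t)) / n     # NB_t = (TP - FP * w_t) / n
  })
}

# Toy validation data: roughly 30% prevalence, predictions linked to one covariate
set.seed(1)
x       <- rnorm(200)
outcome <- rbinom(200, 1, plogis(-1 + x))
pred    <- plogis(-1 + 1.3 * x)
net_benefit(pred, outcome, thresholds = c(0.05, 0.10, 0.20, 0.30))
```

Plugging in a vector of ones for `pred` gives the Treat all curve, while the Treat none strategy has net benefit zero by construction.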
If we are too uncertain about the net benefit of each decision strategy under investigation, or if the net benefit gain from adopting a new decision strategy is negative with high probability, then more data may be desirable before changing current practice. Given the superiority of a decision strategy in terms of observed net benefit, different levels of uncertainty may be compatible with context-specific costs to ultimately justify implementation. Still, the risk-neutral reader is free to interpret uncertainty as a motivation for further research only, without changing the conclusions based on the observed estimates, if desired. Therefore, uncertainty quantification around estimates of net benefit and their differences allows two potential interpretations, depending on the reader's risk profile. Under risk neutrality, uncertainty informs whether more research is needed but does not change our decision based on the currently available point estimates. Under risk aversion, uncertainty may inform whether the available data provides enough evidence to change well-established clinical practice. Both interpretations of uncertainty in DCA motivate further research, but only the risk-averse wait for more data before changing clinical practice currently in place. In what follows, we employ Bayesian approaches to DCA addressing four fundamental concerns when evaluating clinical decision strategies: 1. Which strategies are clinically useful? 2. What is the best available decision strategy? 3. Direct pairwise comparisons between strategies. 4. What is the expected net benefit loss associated with the current level of uncertainty? Uncertainty around these concerns is natural to address in the Bayesian context, however the approach remains agnostic to its interpretation. While often consistent with the frequentist approach in terms of point estimates (e.g., under vague priors), the fully Bayesian estimation of net benefit allows for an intuitive probabilistic interpretation of DCA results as well as for the principled incorporation of prior evidence. Our proposal builds on the work from Wynants et al. (2018) [7] which proposed Bayesian DCA for evidence synthesis in meta-analysis of data from multiple settings (e.g., multiple hospitals). We adapt their binary outcome model to allow for the incorporation of prior information in the single-setting case and propose an alternative formulation for survival outcomes. We then compare the methodology with Frequentist alternatives using simulation and provide a case study with openly available data. The proposed approaches are implemented in the freely available bayesDCA R package. ## 2 Results ### General setting Suppose we have access to validation data to assess one or more clinical decision strategies, be it a pre-specified clinical prediction model or a binary diagnostic/prognostic test. We will use DCA to decide whether any of the strategies under investigation is clinically useful and which of them is the best. We will also examine the difference between strategies, which may be useful to evaluate alternatives under scenarios where one or more strategies is unavailable. Finally, we would like to have a sense of whether the current study is precise enough so that new studies with the same population are not necessary. In what follows, we first describe the Bayesian estimation of decision curves for binary outcomes and then extend it to survival outcomes. 
### Bayesian DCA for binary outcomes

The Net Benefit formulation in (1) can be rewritten in terms of the outcome prevalence (\(p\)) and the threshold-specific Sensitivity (Se) and Specificity (Sp): \[\text{NB}_{t}=\text{Se}_{t}\cdot p-(1-\text{Sp}_{t})\cdot(1-p)\cdot w_{t} \tag{2}\] During (external) validation of predictive models or binary tests, we can estimate the parameters above using a conjugate Beta-Bernoulli joint model for the indicator variables of positive predictions and disease status (see section _Bayesian DCA: model details_ for full model specification). Given conjugacy, the full posterior distribution of the parameter vector \((p,\,\text{Se}_{t},\,\text{Sp}_{t})\) is known in closed form: \[\text{Beta}(p|D+\alpha_{0},ND+\beta_{0})\ \times\ \text{Beta}(\text{Se}_{t}|TP_{t}+\alpha_{1},FN_{t}+\beta_{1})\ \times\ \text{Beta}(\text{Sp}_{t}|TN_{t}+\alpha_{2},FP_{t}+\beta_{2}) \tag{3}\] where \(TP_{t},\ FP_{t},\ TN_{t},\ FN_{t},\ D,\ \text{and}\ ND\) represent the total number of true and false positives, true and false negatives, and individuals with and without the disease, respectively. The \(\left(\alpha_{\cdot},\beta_{\cdot}\right)\) terms are parameters of the independent Beta prior distributions. Within bayesDCA, we set \(\alpha_{\cdot}=\beta_{\cdot}=1\) as a default, representing uniform priors on the \((0,1)\) interval, though the user may choose different priors as well. We suggest a more informative prior based on the expected relationship between the decision threshold and sensitivity/specificity in the Supplement (section _Informative priors in Bayesian DCA for binary outcomes_ and Supplementary Figures (S1-S2)).

In addition to conjugacy, the factorization in (3) is due to parameter orthogonality in the likelihood function and implies posterior independence. This means that Markov-Chain Monte Carlo (MCMC) is not needed: we can combine samples from the marginal posteriors to easily generate valid samples from the joint posterior, making estimation particularly fast - typically a fraction of a second for an entire DCA.

The model (3) can be seen as a single-setting version of the model from Wynants et al. (2018)[7], which models the parameters of interest on the logit scale using a multivariate normal (MVN) distribution. Given the meta-analysis context, there are multiple sensitivities, specificities, and prevalences to be considered at each threshold; their MVN formulation accounts for correlation across these parameters in a random-effects fashion. Here, however, there is only one triple of sensitivity, specificity, and prevalence at each threshold, so there is no correlation estimand to be modelled. In fact, jointly modelling positive predictions and disease outcomes makes the parameters orthogonal by construction, and, hence, their estimators are also independent (see section _Bayesian DCA: model details_). On the other hand, the model from Wynants et al. (2018)[7] benefits from partial pooling, being more appropriate for evidence synthesis in the multiple-setting context (in which case model (3) represents a complete pooling alternative). Finally, unlike sampling from (3), the Wynants et al. (2018)[7] method requires MCMC and is, therefore, expected to be considerably slower.

We use data from the GUSTO-I trial[8], a large randomized study involving thrombolytic treatments for Acute Myocardial Infarction, as an example. Figure 1 shows the resulting Bayesian DCA with the Frequentist counterpart superimposed.
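To fix ideas before looking at the figure, the sketch below draws from the Beta posteriors in (3) under the default uniform priors and pushes the draws through (2) to obtain a posterior decision curve. It reuses the toy `pred` and `outcome` vectors from the earlier sketch and is an illustration of the idea, not the bayesDCA implementation.

```r
# Illustrative sketch: a Bayesian decision curve from raw validation data,
# using the conjugate posterior (3) with uniform Beta(1, 1) priors,
# summarised by posterior means and 95% credible bounds.
bayes_dca_curve <- function(pred, outcome, thresholds, n_draws = 4000) {
  t(sapply(thresholds, function(t) {
    pos <- pred > t
    p  <- rbeta(n_draws, sum(outcome) + 1, sum(1 - outcome) + 1)            # prevalence
    se <- rbeta(n_draws, sum(pos & outcome == 1) + 1, sum(!pos & outcome == 1) + 1)  # sensitivity
    sp <- rbeta(n_draws, sum(!pos & outcome == 0) + 1, sum(pos & outcome == 0) + 1)  # specificity
    nb <- se * p - (1 - sp) * (1 - p) * t / (1 - t)                         # equation (2)
    c(threshold = t, mean = mean(nb), quantile(nb, c(0.025, 0.975)))
  }))
}

head(bayes_dca_curve(pred, outcome, thresholds = seq(0.01, 0.30, by = 0.01)))
```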
Figure 1: **Bayesian DCA captures uncertainty across the entire decision curve.** Bayesian DCA was computed using the bayesDCA R package, while the Frequentist alternative used the bootstrap-based rmda package. An example model was built using data from the GUSTO-I trial[8], while DCA was constructed with held-out data (\(N=500\), \(36\) events). The bootstrap intervals and point estimates collapse to zero as the threshold approaches the maximum risk prediction. The Treat all curve is nearly identical for both approaches and the Bayesian version is shown. Intervals correspond to \(95\%\) confidence and credible intervals for Frequentist and Bayesian methods, respectively.

Although point estimates mostly coincide, notice that the bootstrap-based DCA collapses to zero as the threshold increases - in particular, as the threshold approaches the maximum observed risk prediction. The same does not happen with the Bayesian approach, which continues to naturally propagate uncertainty through the net benefit equation even in the absence of events, though with an increasing influence of the prior distribution. This behaviour can be useful, e.g., in settings with small effective sample sizes such as when most risk predictions are concentrated on one side of the decision threshold. While a decision threshold higher than all risk predictions in the population does imply zero net benefit, in small samples this may happen by chance alone - in which case a positive net benefit would require more informative priors. Nonetheless, the bayesDCA R package warns the user if no events were observed above some decision threshold.

Notice that, at very high thresholds, high sensitivity and specificity are required to yield a positive net benefit because of the increasing weight of false positives (i.e., large \(w_{t}\)). Thus, unless under strong prior knowledge or in the face of substantial evidence of high sensitivity and specificity, the posterior distribution will tend toward net clinical harm in this region. The Bayesian DCA, therefore, indicates that harm is likely at higher thresholds for this example using vague priors.

Beyond uncertainty propagation, three key advantages come naturally with the Bayesian approach for binary outcomes implemented in bayesDCA. First, one can easily use full external information to estimate the prevalence parameter. In case-control studies, DCA needs to be adjusted for the population prevalence, which is usually done by plugging in a point estimate and completely ignoring uncertainty[10]. Here, we may simply sample \(p\) from the posterior (3) using data from an external cross-sectional study or the source population cohort. Equivalently, this could be seen as constructing an informative prior distribution from external prevalence information and sampling from the prior. Regardless of the source, the prevalence posterior distribution is then used to compute the net benefit. This takes into account the uncertainty in the prevalence parameter because we are using the raw prevalence data instead of plugging in a point estimate.

The second advantage that our Bayesian approach brings is the option to use informative priors to improve estimation - in terms of both uncertainty and point estimates. For instance, we know that Se is higher for small thresholds and lower for large thresholds - and that the reverse is true for Sp. The informative prior suggested in the
Supplement takes advantage of this reasoning (section _Informative priors in Bayesian DCA for binary outcomes_ and Supplementary Figures (S1-S2)). Additionally, prior elicitation is straightforward for binary tests since their sensitivity and specificity are fixed across thresholds. The third and most notable advantage of our Bayesian approach for DCA is the ability to interrogate posterior decision curves with an intuitive probabilistic interpretation. Since we have access to the full posterior distribution of the net benefit across all thresholds of interest, we can compute arbitrary functions to help us interpret the DCA output. Since this advantage applies to Bayesian DCA in general and not just to binary outcomes, we first propose a method of Bayesian DCA for survival outcomes and then describe the proposed probabilistic interpretation framework based on the interrogation of the posterior decision curves. ### Bayesian DCA for survival outcomes Many decision strategies address prognostic problems. For such survival outcomes, we must rewrite the net benefit formula to be able to account for censoring as follows \[\text{NB}_{t}^{\tau}=\left[1-S\left(\tau\,|\,\hat{r}_{\tau}>t\right)\,\right] \cdot\mathbb{P}\left[\hat{r}_{\tau}>t\right]-S\left(\tau\,|\,\hat{r}_{\tau}>t \right)\cdot\mathbb{P}\left[\hat{r}_{\tau}>t\right]\cdot w_{t} \tag{4}\] where \(\hat{r}_{\tau}\) is the predicted risk of the event of interest at time \(\tau\) (a probability in the case of a prognostic model and 0 or 1 in the case of a prognostic test). At time \(\tau\) and threshold \(t\), the probability of a positive prediction is given by \(\mathbb{P}\left[\hat{r}_{\tau}>t\right]\), and \(S\left(\tau\,|\,\hat{r}_{\tau}>t\right)\) is the survival probability given a positive prediction. To estimate (4), we jointly model survival times \(T\) (with censoring indicators \(C\)) and the indicator of positive predictions \(Z\) with Weibull and Bernoulli likelihoods, respectively (see section _Bayesian DCA: model details_ for full model specification). Under independent priors, the posterior distribution factorizes due to parameter orthogonality as: \[\pi(p,\mathbf{\theta}_{1}|\text{Data})\propto\pi(p|\mathcal{D}_{0})\times\pi(\mathbf{ \theta}_{1}|\mathcal{D}_{+}) \tag{5}\] where \(\mathcal{D}_{0}=\left\{Z_{i}\right\}_{i=1}^{n}\) is the set of positive prediction indicators, \(\mathcal{D}_{+}=\left\{T_{i},C_{i}\right\}_{i\in[n]:z_{i}=1}\) is the survival dataset for patients with positive predictions. Here, \(p=\mathbb{P}\left[\hat{r}_{\tau}>t\right]\) while \(\mathbf{\theta}_{1}=\left(\alpha_{1},\sigma_{1}\right)\) represents Weibull shape and scale parameters, respectively. Although the resulting posterior distribution does not have a closed form, it does benefit from orthogonality between the Weibull and the Bernoulli components, allowing us to estimate them separately. Hence, we put a Beta prior on \(\theta_{0}\) (uniform by default, as before) to take advantage of conjugacy, while \(\mathbf{\theta}_{1}\) is estimated with MCMC using Stan[11]. In bayesDCA, the default priors for the Weibull parameters are \[\alpha_{1} \sim\text{Half-Student-t}(5,0,1.5) \sigma_{1} \sim\text{Half-Student-t}(30,0,100) \tag{6}\] These priors put a nearly equal prior probability on increasing and decreasing hazards and are largely vague with respect to scale. bayesDCA also allows user specification of the prior parameters above and provides a Gamma prior option instead of Half-Student-t. 
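To show how these pieces fit together, here is a sketch (not the package's internal code) that converts posterior draws of the Weibull parameters for positive-prediction patients, together with draws of the positivity probability, into draws of the survival net benefit (4). It assumes the usual shape-scale Weibull parameterization, so that \(S(\tau)=\exp\{-(\tau/\sigma)^{\alpha}\}\), and the "draws" below are fabricated stand-ins for actual MCMC output; the horizon and counts are arbitrary.

```r
# Illustrative sketch: survival net benefit (4) from posterior draws.
survival_nb_draws <- function(shape, scale, p_pos, tau, t) {
  # shape, scale: posterior draws of the Weibull parameters among positives
  # p_pos:        posterior draws of P[r_hat_tau > t] (Beta posterior, as in the binary case)
  surv <- pweibull(tau, shape = shape, scale = scale, lower.tail = FALSE)  # S(tau | positive)
  (1 - surv) * p_pos - surv * p_pos * t / (1 - t)                          # equation (4)
}

# Toy example with made-up "posterior draws"
set.seed(1)
n_draws <- 4000
shape <- rgamma(n_draws, 40, 40)           # centred near 1 (roughly constant hazard)
scale <- rgamma(n_draws, 40, 40 / 30)      # centred near 30 months
p_pos <- rbeta(n_draws, 120 + 1, 380 + 1)  # around 24% positive predictions
quantile(survival_nb_draws(shape, scale, p_pos, tau = 12, t = 0.2),
         c(0.025, 0.5, 0.975))
```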
Once sampling is done, we then combine the draws from the posterior distributions of \(\mathbf{\theta}_{1}\) and \(\theta_{0}\) to compute the net benefit given in (4). We can now interrogate the posterior decision curves for all decision strategies under investigation. The entire interpretation framework proposed in the next section is immediately available for both survival and binary outcomes, highlighting once again the advantages of the proposed Bayesian approach. ### Probabilistic interpretation framework for Bayesian DCA The main advantage of Bayesian DCA is the ability to arbitrarily interrogate posterior distributions of decision curves. This allows for a probabilistic interpretation framework that helps understand the degree of uncertainty imposed by the currently available data on the observed decision curves. We may ask, for instance: what is the posterior probability that the model under investigation is useful at a given threshold? Following the definition in Wynants et al. (2018)[7], that is: \[\text{P}\Big{(}\text{useful}\Big{)}=\text{P}\Big{(}NB_{\text{model}}>\max \big{\{}NB_{\text{treat all}},NB_{\text{treat none}}\big{\}}\Big{)} \tag{7}\] where \(NB_{\text{treat none}}\) is always zero. However, if we have two or more competing models, another natural question may be: what is the probability that my model is the _best_ decision strategy available? For instance, for a given "model 1": \[\text{P}\Big{(}\text{best}\Big{)}=\text{P}\Big{(}NB_{\text{model}_{1}}>\max \big{\{}NB_{\text{treat all}},NB_{\text{treat none}},NB_{\text{model}_{2}}, NB_{\text{model}_{3}},\cdots\big{\}}\Big{)} \tag{8}\] Notice that Sadatsafavi et al. (2022)[12] define P(useful) as equation (8), whereas here we define P(useful) and P(best) separately. This is because the best decision strategy might not be available everywhere, so the usefulness of remainder strategies is still relevant in that case. The above definitions can be extended to an arbitrary number of models or tests and be computed across all decision thresholds. Within bayesDCA, one may also compute pairwise comparisons between, say, two models with very similar net benefits. For instance, compute the probability that a given strategy beats another by at least \(c\) net benefit units (i.e., net true positives): \[\text{P}\Big{(}NB_{\text{model}_{1}}-NB_{\text{model}_{2}}>c\Big{)}\hskip 28.452756ptc\geq 0 \tag{9}\] Upon full posterior interrogation, we may reach a better understanding of the net benefit profiles of the decision strategies under investigation: if there is large uncertainty around the decision curves, the above probabilities will be inconclusive (i.e., far below 100%), so we may opt to collect more data before making a decision. One way to directly quantify the expected consequences of the current level of uncertainty is to compute the Expected Value of Perfect Information (EVPI) for model validation[12]: \[EVPI=E_{\text{max}}-\text{max}_{E} \tag{10}\] \[E_{\text{max}}=\mathbb{E}\bigg{[}\max\Big{\{}NB_{\text{treat all}},NB_{\text{treat none}},NB_{\text{model}_{1}},NB_{\text{model}_{2}},\cdots\Big{\}} \bigg{]}\] \[\text{max}_{E}=\max\bigg{\{}\mathbb{E}\big{[}NB_{\text{treat all}}\big{]},\mathbb{E}\big{[}NB_{\text{treat none}}\big{]},\mathbb{E}\big{[}NB_{\text{model}_{1}}\big{]}, \mathbb{E}\big{[}NB_{\text{model}_{2}}\big{]},\cdots\bigg{\}}\] where the maximum in \(E_{\text{max}}\) is computed for each draw of the joint posterior distribution, whereas \(\text{max}_{E}\) is a maximum of posterior means. 
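All of these quantities are straightforward functions of the posterior draws. The sketch below shows one way to compute them at a single threshold, assuming a matrix of net benefit draws with one column per strategy; the column names and the fabricated draws in the toy usage are illustrative assumptions, and the code is not taken from bayesDCA. The same computation is simply repeated threshold by threshold along the decision curve.

```r
# Illustrative sketch: posterior summaries (7), (8) and (10) from a
# draws-by-strategies matrix `nb` of net benefit samples at one threshold.
interrogate_nb <- function(nb) {
  default_best <- pmax(nb[, "treat_all"], nb[, "treat_none"])
  models <- setdiff(colnames(nb), c("treat_all", "treat_none"))
  p_useful <- sapply(models, function(m) mean(nb[, m] > default_best))        # equation (7)
  p_best <- sapply(colnames(nb), function(s)
    mean(nb[, s] > apply(nb[, colnames(nb) != s, drop = FALSE], 1, max)))     # equation (8)
  evpi <- mean(apply(nb, 1, max)) - max(colMeans(nb))                         # equation (10)
  list(p_useful = p_useful, p_best = p_best, evpi = evpi)
}

# Pairwise comparison (9): with two models, e.g. mean(nb[, "model_1"] - nb[, "model_2"] > c)

# Toy usage with fabricated draws
set.seed(1)
nb <- cbind(treat_all  = rnorm(4000, 0.020, 0.010),
            treat_none = 0,
            model_1    = rnorm(4000, 0.035, 0.010))
interrogate_nb(nb)
```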
The EVPI may be seen as the expected net benefit loss due to the current level of uncertainty in the decision curves. For instance, if the EVPI is 0.1, picking the _observed_ best decision strategy is associated with an expected loss of 0.1 net true positives as compared to picking the _actual_ best decision strategy. It is important to notice another advantage of the parametric Bayesian approach to computing the EVPI. As a sample statistic, the EVPI is expected to decrease monotonically with the sample size[12]. In a simulation study using the GUSTO-I trial data, however, Sadatsafavi et al. (2023)[12] showed that small effective sample sizes may cause the observed EVPI to escape this monotonic behaviour, especially at very low or very high decision thresholds. One way to avoid this issue is to employ informative prior distributions in Bayesian DCA. Reproducing the simulation code from Sadatsafavi et al. (2023)[12], we show how our Bayesian approach can recover the expected monotonic behaviour of the EVPI in the Supplement (section _Informative priors preserve EVPI monotonic behaviour_ and Supplementary Figure (S3)). In the next section, we provide a simulation study of the empirical performance of the suggested Bayesian approaches. In the following section, we present a case study to highlight the bayesDCA workflow, where we fully interrogate the posterior decision curves to quantify uncertainty around the answers to the four fundamental questions mentioned above. Here we focus on binary outcomes, though the same workflow for survival outcomes is also implemented and easily accessible through the bayesDCA R package. ### Empirical performance of Bayesian DCA estimation #### 2.5.1 Simulation study for binary outcomes To test the approach for binary outcomes, we simulate a population with an underlying logistic regression model and select an example model to be evaluated using DCA. To represent a range of scenarios, we vary the outcome prevalence and the maximum achievable discrimination, measured by AUC, representing the setting's signal-to-noise ratio. To resemble a common scenario of overfitting, the example model is miscalibrated: its predictions are overly extreme (i.e., too close to zero or to one). Each simulation run emulates a different external validation or test dataset selected at random from the setting's population, with which we perform DCA. The sample size for each simulated dataset is set so that the expected number of events is 100. See section _Simulation details_ for a full description of the simulations. For each setting, we ran 1000 simulations performing both Bayesian DCA and bootstrap-based Frequentist DCA for comparison. Figure (2) shows the resulting distributions of estimation errors. Figure 2: Bayesian and Frequentist DCA for binary outcomes show similar distributions of point estimate errors. Bayesian DCA was computed using the bayesDCA R package, while the Frequentist alternative used the bootstrap-based rmda package. For each simulation run, DCA was performed for a fixed example model using a simulated test dataset of sample size corresponding to 100 expected events. A total of 1000 Monte Carlo repetitions was run for each setting. The setting AUC corresponds to its maximum achievable AUC. The example model for each setting was fixed to approximate the maximum discrimination of that setting but was miscalibrated (overly extreme risk predictions). 
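For readers who want to picture one simulation run, a sketch of this kind of data-generating process is given below; the single covariate, its coefficient, and the doubling of the linear predictor used to produce overly extreme predictions are assumptions made for illustration and do not reproduce the paper's exact simulation settings (those are described in _Simulation details_).

```r
# Sketch of one simulated validation dataset with a miscalibrated example model.
simulate_validation_data <- function(n, intercept, beta = 1) {
  x  <- rnorm(n)
  lp <- intercept + beta * x        # true linear predictor
  y  <- rbinom(n, 1, plogis(lp))    # observed binary outcome
  data.frame(
    outcome = y,
    pred    = plogis(2 * lp)        # miscalibrated (overly extreme) risk predictions
  )
}

# e.g. roughly 5% prevalence and about 100 expected events => n = 2000
set.seed(1)
dat <- simulate_validation_data(n = 2000, intercept = qlogis(0.05))
mean(dat$outcome)
```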
As expected, the distributions of point estimate errors are nearly identical for the Bayesian and the Frequentist approaches in almost all simulation settings and decision thresholds. In general, with 100 expected events, any influence of the default vague priors is unnoticeable. One exception is the setting with an AUC of 0.65 and a prevalence of 1%, in particular at a threshold of 75%. Given the low prevalence and discrimination, very high thresholds such as 75% often yield no positive prediction (i.e., no risk prediction above the decision threshold), so the effective sample size for that threshold is low and the prior matters more. This is the setting in which bootstrap-based point estimates and intervals are expected to collapse to zero, while the Bayesian estimates are regularized by the prior distribution. Still, the discrepancy between the two approaches is negligible in absolute terms - see Supplementary Figure (S4) for a plot of the point estimates in the original scale with the true net benefit overlayed.

We also assessed the empirical coverage of 95% uncertainty intervals for both Bayesian and Frequentist methods. Although not required for a valid Bayesian analysis, users of the bayesDCA R package may feel more comfortable having Frequentist calibration of credible intervals (Cr.I.), at least under vague priors.

Figure 3: **Bayesian and Frequentist DCA for binary outcomes show calibrated uncertainty intervals.** For both approaches, coverage matches nominal values in almost all cases. The points of miscalibration (AUC 0.65, prevalence 1%, thresholds of 10% or above) are due to very low rates of positive predictions (i.e., above the corresponding threshold). In this scenario, bootstrap intervals collapse to zero so extreme undercoverage is observed; Bayesian intervals are more dependent on the prior distribution, showing minimal overcoverage. Bayesian DCA was computed using the bayesDCA R package, while the Frequentist alternative used the bootstrap-based rmda package. For each simulation run, DCA was performed for a fixed example model using a simulated test dataset of sample size corresponding to 100 expected events. A total of 1000 Monte Carlo repetitions was run for each setting. The setting AUC corresponds to its maximum achievable AUC. The example model for each setting was fixed to approximate the maximum discrimination of that setting but was miscalibrated (overly extreme risk predictions).

As shown in Figure (3), the empirical coverage of both the Bayesian and Frequentist decision curves closely matches the nominal value of 95% for the entire range of decision thresholds in most simulation settings. The only exception is, once again, the setting with an AUC of 0.65 and a prevalence of 1%. At thresholds of 10% or above, the bootstrap intervals show increasing undercoverage, reaching an empirical coverage of nearly 0% for the 75% threshold. This happens because although the true net benefit is not exactly zero in this setting due to the presence of positive predictions in the population, the bootstrap intervals tend to collapse to zero due to the lack of positive predictions in the observed samples. In contrast, under the same problematic scenario, Bayesian intervals show only a slight overcoverage. Despite its vagueness, the default prior distribution yields valid credible intervals with reasonable Frequentist calibration. Moreover, the Bayesian credible intervals are not significantly wider than the Frequentist confidence intervals, except when the Frequentist method fails.
Overall, Supplementary Figure (S5) shows that the Bayesian intervals have reasonable width, even when the Frequentist counterpart collapses to zero. Finally, as shown in Supplementary Figure (S6), the Bayesian DCA for binary outcomes is orders of magnitude faster than its bootstrap-based Frequentist counterpart. Here, we are sampling 4000 draws from the posterior distribution (3) and using 500 bootstrap samples - the default in bayesDCA and rmda, respectively. Moreover, computation time significantly increases with the overall sample size for the bootstrap case, but not in the Bayesian case. Given the 100 expected events fixed for each simulation run, simulation settings with prevalences of 30%, 5%, and 1% imply overall sample sizes around 333, 2,000, and 10,000, respectively. While this is expected to impact bootstrap speed, the computation time for the proposed Bayesian DCA is virtually unaffected because (3) only depends on simple summary statistics. In our experience, running Bayesian DCA for binary outcomes with bayesDCA takes no more than a second in most cases (using a standard laptop with 12GB of RAM and no parallelization).

#### 2.5.2 Simulation study for survival outcomes

We follow the same simulation strategy as above. The underlying populations follow Weibull distributions with covariates satisfying the proportional hazards (PH) assumption, while censoring times are uniformly distributed. The decision strategies being investigated are fixed PH Cox models with exaggerated coefficients - i.e., miscalibrated due to overfitting. For each setting, we simulate 1000 datasets, with which we perform both Bayesian and Frequentist DCA using a twelve-month prediction horizon. We vary the underlying C-statistics and one-year survival rate. For brevity, we show the results for C-statistics 0.6 and 0.9 with a one-year survival of 10%. A full description of the simulations is provided in the section _Simulation details_, and further simulation settings are reported in the Supplementary Figures (S7-S9).

Figure 4: **Bayesian and Frequentist DCA for survival outcomes show similar estimation performance.** Bayesian DCA was computed using the bayesDCA R package, while the Frequentist alternative used the dcurves package. For each simulation run, DCA was performed for a fixed example model using a simulated test dataset of sample size corresponding to 100 expected events. A total of 1000 Monte Carlo repetitions was run for each setting. The setting C-statistic corresponds to its maximum achievable discrimination. The example model for each setting was fixed to approximate the maximum discrimination of that setting but was miscalibrated (overly extreme risk predictions). We did not find an easily-accessible Frequentist implementation of survival DCA with uncertainty intervals and, therefore, empirical coverage is shown for the Bayesian approach only. The prediction horizon is one year. Further simulation settings are in the Supplementary Figures (S7–S9).

As shown in Figure (4), the Bayesian point estimates generally behave similarly to the ones from the Frequentist approach. At decision thresholds of 50% or above, the effective sample size can be small which causes a slight bias for both approaches depending on the simulation setting - here seen at the 75% threshold for the Frequentist approach under C-statistic 0.9, see also Supplementary Figure (S7). Because we are not aware of any Frequentist implementation of DCA for survival outcomes
that provides uncertainty intervals, we show here coverage results for the Bayesian approach only. The Bayesian uncertainty intervals show reasonable empirical coverage overall, though we observed undercoverage for very high or very low thresholds depending on the simulation setting. While the setting with C-statistic 0.9 shown in Figure (4) represents the worst-case scenario, most coverage probabilities remained above 90% across all settings - Supplementary Figure (S8). Mean absolute percentage errors of point estimates were comparable between Bayesian and Frequentist approaches across all simulation settings - Supplementary Figure (S9).

In summary, our Bayesian approach offers an accurate alternative for estimating decision curves for both binary and survival outcomes. It also enables uncertainty quantification for survival outcomes, and provides much faster quantification of uncertainty for binary outcomes. These methods are implemented and easily accessible in the bayesDCA R package. While the simulations shown in this section apply the default weakly-informative priors, different priors may be specified by the user if desired to further improve estimation performance. The bayesDCA R package allows sampling from the prior only so that prior predictive checks are straightforward.

### Applied case study

Clinical prediction models are commonly employed to predict cancer diagnosis. For example, the ADNEX model predicts an individual's risk of having ovarian cancer with high discrimination (AUC > 0.9) and adequate calibration [13]. The model employs clinical and ultrasound features from patients under suspicion of ovarian cancer due to the known or suspected presence of adnexal masses. Previously, Wynants et al. (2019) warned about the importance of utility-based decision thresholds and used the ADNEX model as a motivating example [2]. The authors suggest that a reasonable decision threshold may be as low as 6% due to the high cost of false negatives which can cause late detection and treatment of aggressive cancer. Here, we will expand on their example using Bayesian DCA on a hypothetical scenario.

Suppose we perform an external validation study to assess the predictive performance of the ADNEX model. We wish to know whether the model should replace the Standard of Care (SoC) currently in place: a hypothetical diagnostic test with 81% sensitivity and 88% specificity. Using the publicly-available data from Wynants et al. (2019)[2] (\(N=2403\), 980 events), the Bayesian DCA results are shown in Figure (5). In terms of the point estimate of net benefit, the original ADNEX model is superior for most decision thresholds: under no additional costs to implement or use, ADNEX-based decisions would be the best strategy to follow. However, there is substantial uncertainty around the clinically-motivated threshold of 6%, where it is not immediately clear if the ADNEX superiority is simply due to chance. For example, implementing the model into clinical practice and using it as part of daily care may impose challenges on healthcare institutions with limited resources. Moreover, we may require a low degree of uncertainty before replacing the current Standard of Care (or default strategies) to prevent having to undo the implementation of a suboptimal strategy. Thus, inspecting the DCA alone, under the threshold of 6%, we might require more evidence confirming ADNEX superiority before implementing it into clinical practice.
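Since the hypothetical SoC test is simulated from its assumed sensitivity and specificity (see the caption of Figure 5 below), a minimal sketch of that step might look as follows; the function name and seed are illustrative choices and are not taken from the paper's code.

```r
# Sketch: simulating a binary Standard-of-Care test with 81% sensitivity and
# 88% specificity given the true disease status, so that it can be entered
# into the DCA alongside the model's risk predictions.
simulate_soc_test <- function(disease, sens = 0.81, spec = 0.88) {
  n <- length(disease)
  ifelse(disease == 1,
         rbinom(n, 1, sens),       # diseased: positive with probability = sensitivity
         rbinom(n, 1, 1 - spec))   # healthy: positive with probability = 1 - specificity
}

set.seed(1)
disease <- rbinom(2403, 1, 980 / 2403)   # prevalence matching the case-study data
soc     <- simulate_soc_test(disease)
```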
Figure 5: **Illustration of Bayesian DCA for the ADNEX model and hypothetical Standard of Care diagnostic test.** Bayesian DCA was computed using the bayesDCA R package and publicly-available data from Wynants et al. (2019)[2] (\(N=2403\), 980 events). A hypothetical Standard of Care diagnostic test was simulated to have 81% sensitivity and 88% specificity. Lines are posterior means and uncertainty intervals are 2.5% and 97.5% posterior percentiles (i.e., 95% credible intervals).

Additionally, uncertainty intervals from the ADNEX model and the hypothetical SoC test start overlapping at higher decision thresholds around 40%. The SoC test becomes superior in terms of estimated net benefit at very high thresholds. Here, we consider only decision thresholds below 50% due to the assumption that, in the case of ovarian cancer diagnosis, the cost of false negatives is generally larger than the cost of false positives. From its decision curve alone, it is clear that the ADNEX model is clinically useful for higher thresholds, but there is considerable uncertainty at thresholds below 10%. Besides visual inspection of the decision curves, how can we quantify our uncertainty about which decision strategies are clinically useful? We can answer this question by interrogating the posterior distribution: at the clinically-motivated threshold of 6%, there is over 99.9% posterior probability that the ADNEX model is clinically useful - Figure (6A). As the threshold increases, this posterior probability is consistently close to 100%, and the SoC test becomes useful as well. Importantly, we are only able to speak of P(useful) at all and inspect posterior distributions for any decision strategy because we are adopting a Bayesian approach [12]. No natural counterpart exists under the Frequentist bootstrap.

Figure 6: **Bayesian DCA estimates the probability that each decision strategy is useful or the best among all strategies considered.** Bayesian DCA was computed using the bayesDCA R package (\(N=2403\), 980 events). **(A)** P(useful) is the probability that a given decision strategy has a higher net benefit than the Treat all and Treat none strategies (i.e., beats the default strategies) and is computed from **(B)** the difference in NB between each strategy and Treat all/none. **(C)** P(best) is the probability that a given strategy has a higher net benefit than all the remaining strategies (i.e., beats its best competitor) and is computed from **(D)** the difference in NB between each strategy and the maximum net benefit among other competing strategies.

The results above may be counter-intuitive due to the overlapping uncertainty intervals in Figure (5). However, notice that P(useful) at 6% for the ADNEX model is determined by the difference between its net benefit and the one from the Treat all strategy - Figure (6B); note also the inset element within the plot for a zoomed view at the 6% threshold. Their posterior distributions are highly correlated (R=0.98) due to their shared prevalence parameter, so the estimated net benefit difference is very precise: 0.014 (95% Cr.I. 0.009 -- 0.018). However, to justify the implementation of the ADNEX model at the 6% threshold, we must be confident that this gain in net benefit also overcomes any additional cost of using the model in daily practice, if this isn't already factored in. Since the strategy cost may depend on the local context, we simply report the estimated net benefit difference and its uncertainty.
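To see why the difference against Treat all can be this precise even though the two curves overlap, consider a small sketch in which both net benefits are built from the same prevalence draws; only the prevalence counts match the case-study data, while the threshold-specific sensitivity and specificity counts below are made up for illustration.

```r
# Sketch: shared prevalence draws make NB(model) and NB(treat all) highly
# correlated, so their difference is much more precise than either curve alone.
set.seed(1)
t <- 0.06; w <- t / (1 - t); n_draws <- 4000
p  <- rbeta(n_draws, 980 + 1, 1423 + 1)   # prevalence draws (case-study counts)
se <- rbeta(n_draws, 970 + 1, 10 + 1)     # sensitivity at t = 6% (made-up counts)
sp <- rbeta(n_draws, 683 + 1, 740 + 1)    # specificity at t = 6% (made-up counts)

nb_model     <- se * p - (1 - sp) * (1 - p) * w
nb_treat_all <- p - (1 - p) * w           # Treat all: Se = 1, Sp = 0
cor(nb_model, nb_treat_all)               # typically very close to 1
quantile(nb_model - nb_treat_all, c(0.025, 0.5, 0.975))
```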
The consumer of the DCA results can then reason if they are confident enough that this estimated difference overcomes their context-specific costs. If not, more data may still be required prior to implementation. Although usefulness is important (i.e., being better than Treat all and Treat none), ideally we would like to use the best decision strategy available. Among the strategies under consideration, we can once again interrogate the posterior distribution to quantify our uncertainty about what is the best decision strategy at each decision threshold - Figures (6C) and (6D). We do this by comparing each strategy against the best of the remaining strategies (i.e., the "best competitor"). For a given strategy at a given threshold, if the difference in net benefit against its best competitor is positive, then this strategy is better than all remaining strategies. For instance, at the 6% threshold, the best competitor against the ADNEX model is the Treat all strategy - see Figure (5). In this case, therefore, P(best) and P(useful) coincide: over 99.9%. As expected, the probability that the ADNEX model is the best available strategy is virtually 100% for most thresholds. For higher thresholds, however, there is increasingly more overlap between the uncertainty intervals from the ADNEX model and the SoC test. This translates into an increasingly higher P(best) for the SoC test and progressively lower P(best) for the ADNEX model. At the 50% threshold, there is 84% posterior probability that the SoC test is the best decision strategy available. From Figures (5) and (6), using the ADNEX model or treating all the patients is likely the best we can do under low decision thresholds. However, the decision-maker may be interested in higher thresholds. For instance, we may classify patients as high-risk if their predicted probability of cancer is higher than the prevalence in the present dataset (41%)[2]. The motivation for this is to maintain the proportion of high-risk patients, according to the model predictions, close to the actual disease prevalence[14]. From Figure (6), we see that P(best) for the ADNEX model at the 41% threshold is far less convincing: around 74%. Moreover, the difference in net benefit between the ADNEX model and its best competitor (the SoC test in this case) is almost unnoticeable. Since we have full posterior distributions, we can directly compute a pairwise comparison between the ADNEX model and the SoC test, which is shown in Figure (7). The superiority of the ADNEX model over the SoC test is clear for small thresholds. However, for high thresholds, the difference is smaller. At the 41% threshold, the estimated difference is 0.003 (95% Cr.I. -0.014 -- 0.019). At this threshold, the SoC test is the best competitor against the ADNEX model so the probability that the ADNEX model is better than the SoC test is again 74% - matches the corresponding P(best) for ADNEX. If no additional strategy costs are considered, maximizing observed net benefit would lead us to favor the implementation of the ADNEX model. However, there is still a 100%-74% = 26% posterior probability that the current Standard of Care is better than the model for this decision threshold. In face of such large uncertainty, risk aversion may lead us to oppose model implementation unless more data is made available. Finally, one might wonder if more data are needed to fully characterize the best decision strategy across all thresholds in the target population. 
It might be that the consequences of uncertainty at, e.g., very high thresholds are too costly in terms of net benefit. With more data, maybe we could confirm if the ADNEX model is indeed superior at the 41% threshold, for instance. To directly quantify the consequences of the current level of uncertainty, Figure (8) shows the validation EVPI for the present case study. Figure 7: **Pairwise net benefit comparison between the ADNEX model and hypothetical SoC test.****(A)** Estimated difference in net benefit between ADNEX and the SoC test (positive y-axis favors ADNEX). **(B)** Posterior distribution of the difference for the specific threshold of 41% (observed prevalence), at which there is a 74% posterior probability that the ADNEX model is superior to the hypothetical SoC test. As a summary of the posterior distributions, the EVPI shows small, noisy variations. The highest EVPI across all the decision thresholds of interest is around 0.003. Whether this value is relevant depends, again, on the specific context[12]. For instance, if 1000 clinical decisions regarding the presence of ovarian cancer are made every year for a given population, then the current level of uncertainty is associated with missing at most three net true positive cases of ovarian cancer per year. If 10,000 decisions are made yearly, we could be missing out on up to 30 net true positives every year. In summary, using the ADNEX model is the best decision strategy across small thresholds assuming no additional strategy costs according to the data. If there exists additional cost of using the model and we accept the decision threshold of 6%, treating all patients or gathering additional data may be preferred. For the 41% threshold, even though the net benefit point estimate for the ADNEX model is slightly higher than for the SoC test, the uncertainty may be too high to justify replacing the Standard of Care currently in place if we are not risk-neutral. For decision thresholds of 42% and higher, we are increasingly confident that the current Standard of Care is the best decision strategy. Finally, collecting more data from the same population is expected to yield substantial net benefit gain if thousands of clinical decisions are made every year. Figure 8: **Consequence of the current level of uncertainty in the decision curves from the case study.** The validation Expected Value of Perfect Information (EVPI) shows the expected loss in net true positives due to estimation uncertainty. Discussion Bayesian decision curve analysis was first proposed in the context of meta-analysis[7]. An immediate advantage was the possibility of calculating P(useful), the probability that a decision strategy is clinically useful. The concept was later extended to the probability of being the best decision strategy available[12]. Here, we attempt to clarify terminology by defining the latter as P(best) instead - which differs from P(useful) if we are evaluating more than one predictive model or test at the same time. The key advantage of Bayesian DCA is the ability to fully interrogate posterior decision curves with an intuitive probabilistic interpretation. We may compute multiple quantities of interest to quantify uncertainty around the answers to fundamental questions such as which decision strategies are useful, what is the best decision strategy, make pairwise comparisons between strategies, and what is the expected consequence of the current level of uncertainty in terms of expected loss in net benefit. 
These may help the interpretation of DCA results: if the uncertainty is too large, then more data may be needed to properly choose strategies. On the other hand, the bootstrap-based approaches for DCA currently implemented are limited to estimating confidence intervals around net benefit [15]. Moreover, the coverage of these intervals is limited by the distribution of the predictions: under a low effective sample size, the bootstrap fails. This may pose a problem when the observed risk predictions are concentrated on one side of a given decision threshold (typically very high or very low thresholds), ultimately hiding the possibility of net harm. A low effective sample size may also cause the EVPI to misbehave, varying non-monotonically with the overall sample size - see Figure 5 in Sadatsafavi et al. (2022)[12]. Bayesian DCA offers an alternative to potentially overcome these limitations. From a Frequentist perspective, the main limitations of Bayesian DCA are the choice of priors and the interpretation of the uncertainty intervals. We chose weakly-informative priors as default in bayesDCA and provided a simulation study to address this valid concern. Under these priors, Bayesian intervals for the binary outcomes case showed near-perfect empirical coverage, with similar width to the bootstrap intervals. When stronger priors exist (e.g., from past studies), this knowledge may be used to obtain more accurate estimates. For the survival outcomes case, though empirical coverage of the Bayesian approach was reasonable overall, we did observe slight undercoverage in some simulation settings. This might be due to known biases in the estimation of the shape parameter of the Weibull distribution[16]. Future research is warranted to further improve this method, potentially involving reparametrization of the Weibull likelihood to further leverage parameter orthogonality[17]. Informative prior elicitation for Bayesian DCA, in both binary and survival cases, may also be further developed in future work. Another possible limitation of the Bayesian approach is the computation time due to sampling. In the binary outcomes case, we leveraged parameter orthogonality and posterior independence to allow particularly fast estimation. The bayesDCA implementation is multiple times faster than its bootstrap counterpart and typically takes less than a second, even for large sample sizes. In the survival outcomes case, we do need to use MCMC to estimate the Weibull parameters and, hence, the Bayesian approach is slower than in the binary outcomes case. From our experience, a typical DCA may take from three to five minutes in this case (using a standard laptop with 12GB of RAM and 8 cores, and running four MCMC chains in parallel taking 4000 draws each). There is debate regarding the value of uncertainty quantification in DCA[18]. From the risk-neutral point of view, it informs whether more data is warranted to confidently compare decision strategies, but wouldn't alter our choice of strategy given the data that is available at the moment of the clinical decision. However, when it comes to new decision strategies, the decision to replace the well-accepted Standard of Care may warrant a more conservative approach [6, 5]. Implementing a suboptimal strategy poses costs that are potentially irrecoverable. Challenges such as infrastructure development, physician adoption, and regulatory approvals cannot be frequently reversed. 
Importantly, patients harmed by new technologies due to premature implementation may face serious unwanted consequences. This is not to advocate for questionable inferential procedures such as threshold-specific p-values but to point out that caution may be warranted against the implementation of a new strategy that alters well-established clinical practice when, for instance, its P(useful) varies just above 50% for a range of decision thresholds of interest - the risk-averse approach. Nonetheless, the present work proposes a Bayesian approach for the estimation of net benefit that is indifferent to whether the end user operates under risk neutrality and how they interpret uncertainty in DCA. Still, the bayesDCA R package allows for an easy and comprehensive characterization of uncertainty in DCA, including its expected consequences through EVPI calculation. Judging whether implementation is appropriate given the observed level of uncertainty and context-specific costs depends on both the estimated net benefits and the decision maker's subjective aversion to the potential risk of implementing a suboptimal strategy. In sum, we propose a method for Bayesian decision curve analysis and provide a freely available implementation in the bayesDCA R package. We hope our contribution will be relevant for studies that involve the validation of clinical prediction models as well as for diagnostic and prognostic test accuracy studies. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers adopt informed decisions when choosing and implementing clinical decision strategies. ## 4 Methods details All analyses were performed with R version 4.2.3 within a fixed Docker image [19, 20]. All code and data to reproduce the results are available on GitHub ([https://github.com/giulianonetto/paperbayesdca](https://github.com/giulianonetto/paperbayesdca)). For the survival case, the bayesDCA R package employs Markov Chain Monte Carlo based on the No U-Turn Sampler implemented within Stan and accessed via the rstan R package [11, 21]. Processing of posterior samples and data visualization employed the tidyverse meta-package and the patchwork package [22, 23]. Parallel processing for the simulations was implemented using the furrr package [24]. Pipeline management was implemented using the targets package [25]. The bayesDCA R package is freely available at [https://github.com/giulianonetto/bayesdca](https://github.com/giulianonetto/bayesdca). ### GUSTO-I trial example An illustrative model was built using the GUSTO-I trial dataset[8]. From the full dataset (\(N=40,830\)), we held out a randomly selected validation set (\(N=500\), \(36\) events) and trained a simple logistic regression model on the remaining data based on age, systolic blood pressure, pulse, and Killip class (I - IV). We ran both Bayesian and Frequentist DCA of the fitted model on the validation data to provide an initial illustration under large uncertainty. ### Bayesian DCA: model details #### 4.2.1 Binary outcomes We now describe the model used to estimate the net benefit for each decision strategy at each decision threshold. For each patient \(i=1,2,\ldots,N\), suppose we observe the pair \((D_{i},Z_{i})\) where \(D_{i}=1\{i^{th}\) patient has disease\(\}\) and \(Z_{i}=1\{i^{th}\) patient has positive prediction\(\}\). 
We then model: \[D \sim\text{Bernoulli}(\theta_{0}) \tag{11}\] \[Z|D =1\sim\text{Bernoulli}(\theta_{1})\] \[Z|D =0\sim\text{Bernoulli}(\theta_{2})\] where \(\theta_{0}\) is the prevalence, \(\theta_{1}\) is the sensitivity, and \(\theta_{2}\) is 1 minus the specificity. The likelihood function for the observed data \(\mathcal{D}=\left\{\left(D_{i},Z_{i}\right)\right\}_{i=1}^{N}\) given the parameter vector \(\boldsymbol{\theta}=(\theta_{0}\ \theta_{1}\ \theta_{2})\) is: \[L(\mathcal{D}\,|\,\boldsymbol{\theta})=\prod_{i=1}^{N}p(d_{i},z_{i})=\prod_{i= 1}^{N}p(d_{i})p(z_{i}|d_{i})=\prod_{i=1}^{N}\text{Ber}(d_{i}|\theta_{0})\times \text{Ber}(z_{i}|\theta_{1})^{d_{i}}\times\text{Ber}(z_{i}|\theta_{2})^{1-d_{i}} \tag{12}\] The parameters \(\theta_{0}\), \(\theta_{1}\), and \(\theta_{2}\) are orthogonal because the likelihood function factorizes as: \[L(\mathcal{D}\,|\,\theta_{0},\theta_{1},\theta_{2})=\prod_{i=1}^{N}\text{Ber}( d_{i}|\theta_{0})\times\prod_{i=1}^{N}\text{Ber}(z_{i}|\theta_{1})^{d_{i}} \times\prod_{i=1}^{N}\text{Ber}(z_{i}|\theta_{2})^{1-d_{i}} \tag{13}\] Notice that each product term depends on only one parameter. From a Frequentist perspective, this implies that the maximum likelihood estimators for the parameters of interest are asymptotically independent[26]. In the Bayesian approach, under independent priors, the factorization above implies posterior independence. In our case, let \(\theta_{j}\sim\text{Beta}(\alpha_{j},\beta_{j})\) for \(j=0,1,2\) be our independent priors. The posterior distribution \(\pi(\boldsymbol{\theta}|\mathcal{D})\) is then proportional to: \[\left[\text{Beta}(\alpha_{0},\beta_{0})\prod_{i=1}^{N}\text{Ber}(d_{i}|\theta_ {0})\right]\times\left[\text{Beta}(\alpha_{1},\beta_{1})\prod_{i=1}^{N}\text{ Ber}(z_{i}|\theta_{1})^{d_{i}}\right]\times\left[\text{Beta}(\alpha_{2},\beta_{2}) \prod_{i=1}^{N}\text{Ber}(z_{i}|\theta_{2})^{1-d_{i}}\right]\] which is the numerator of the Bayes' Theorem formula and simplifies to \[\left[\theta_{0}^{D+\alpha_{0}-1}(1-\theta_{0})^{ND+\beta_{0}-1}\right]\times \left[\theta_{1}^{TP+\alpha_{1}-1}(1-\theta_{1})^{FN+\beta_{1}-1}\right]\times \left[\theta_{2}^{FP+\alpha_{2}-1}(1-\theta_{2})^{TN+\beta_{2}-1}\right] \tag{14}\] where \(D=\sum_{i}d_{i}\) and \(ND=N-D\) are the numbers of patients with and without the disease, respectively, \(TP=\sum_{i}d_{i}z_{i}\) represents the total number of true positives, \(FN=\sum_{i}d_{i}(1-z_{i})\) of false negatives, \(FP=\sum_{i}(1-d_{i})z_{i}\) of false positives, and \(TN=\sum_{i}(1-d_{i})(1-z_{i})\) of true negatives. The expression above can already be recognized as the joint density of three independent Beta random variables. 
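In practical terms, this factorization means that posterior net benefit draws can be obtained by sampling each Beta marginal separately and combining the draws. A minimal NumPy sketch of this idea follows; it is not part of bayesDCA, and the counts, threshold, and flat Beta(1, 1) priors are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 4000                     # number of posterior draws
t = 0.10                     # decision threshold
w = t / (1 - t)              # odds weight w_t

# Hypothetical validation counts at threshold t (illustrative only)
TP, FN, FP, TN = 30, 10, 80, 380
D, ND = TP + FN, FP + TN

# Independent Beta posteriors under flat Beta(1, 1) priors
prev = rng.beta(D + 1, ND + 1, S)     # theta_0 (prevalence)
sens = rng.beta(TP + 1, FN + 1, S)    # theta_1 (sensitivity)
spec = rng.beta(TN + 1, FP + 1, S)    # theta_3 = 1 - theta_2 (specificity)

# Posterior net benefit draws for the model, treat all, and treat none
nb_model = sens * prev - w * (1 - spec) * (1 - prev)
nb_all = prev - w * (1 - prev)
nb = np.column_stack([nb_model, nb_all, np.zeros(S)])

p_useful = np.mean(nb_model > np.maximum(nb_all, 0.0))
p_best = np.mean(nb.argmax(axis=1) == 0)
print(f"P(useful)={p_useful:.2f}  P(best)={p_best:.2f}")
```

Because no MCMC is involved, such direct Monte Carlo sampling is what allows the binary-outcomes case to run in well under a second even for large samples.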
Now, let \(\mathcal{B}(a,b)=\int_{0}^{1}t^{a-1}(1-t)^{b-1}\,dt\) be the Beta function, then: \[\begin{split} p(\mathcal{D})&=\int_{0}^{1}\int_{0}^{ 1}\int_{0}^{1}\pi(\theta_{0})\pi(\theta_{1})\pi(\theta_{2})\times L(\mathcal{D }\,|\,\boldsymbol{\theta})\,d\theta_{0}\,d\theta_{1}\,d\theta_{2}\\ &=\prod_{j=0}^{2}\mathcal{B}(\alpha_{j},\beta_{j})^{-1}\\ &\quad\times\int_{0}^{1}\left[\theta_{0}^{D+\alpha_{0}-1}(1- \theta_{0})^{ND+\beta_{0}-1}\right]d\theta_{0}\\ &\quad\times\int_{0}^{1}\left[\theta_{1}^{TP+\alpha_{1}-1}(1- \theta_{1})^{FN+\beta_{1}-1}\right]d\theta_{1}\\ &\quad\times\int_{0}^{1}\left[\theta_{2}^{FP+\alpha_{2}-1}(1- \theta_{2})^{TN+\beta_{2}-1}\right]d\theta_{2}\\ &=\prod_{j=0}^{2}\mathcal{B}(\alpha_{j},\beta_{j})^{-1}\times \mathcal{B}(D+\alpha_{0},ND+\beta_{0})\times\mathcal{B}(TP+\alpha_{1},FN+ \beta_{1})\times\mathcal{B}(FP+\alpha_{2},TN+\beta_{2})\end{split} \tag{15}\] Putting (14) and (15) together and noticing that the normalizing constant that multiplies expression (14) for the posterior distribution is \(\Big{[}\prod_{j=0}^{2}\mathcal{B}(\alpha_{j},\beta_{j})^{-1}\Big{]}/p( \mathcal{D})\), we have by Bayes' Theorem: \[\pi(\boldsymbol{\theta}|\mathcal{D})=\text{Beta}(\theta_{0}\,|\,D+\alpha_{0},ND+\beta_{0})\times\text{Beta}(\theta_{1}\,|\,TP+\alpha_{1},FN+\beta_{1}) \times\text{Beta}(\theta_{2}\,|\,FP+\alpha_{2},TN+\beta_{2}) \tag{16}\] Notice that the joint posterior distribution is the product of the marginal posterior distributions from each parameter - i.e., posterior independence. Hence, we can simply draw from the marginal posteriors and combine the marginal samples to form a draw from the joint posterior. This result holds for any sample size and not only asymptotically. Given samples from the joint posterior, we then compute the posterior net benefit as \(\text{NB}|\mathcal{D}=\theta_{1}\cdot\theta_{0}-w_{t}\cdot\theta_{2}\cdot(1- \theta_{0})\), where \(w_{t}=t/(1-t)\). The posterior net benefit for the treat all strategy is given by \(\text{NB}_{\text{all}}|\mathcal{D}=\theta_{0}-w_{t}\cdot(1-\theta_{0})\). Also, within bayesDCA we sample specificity \(\theta_{3}=1-\theta_{2}\) directly instead of \(\theta_{2}\) to make communication easier with end users - in agreement with equation (3) - and to make prior specification potentially more intuitive. Finally, notice that \(\theta_{0}\) is a common parameter shared by all thresholds and all decision strategies, whereas \(\theta_{1}\) and \(\theta_{2}\) are threshold- and strategy-specific. #### 4.2.2 Survival outcomes For each patient \(i=1,2,\ldots,N\), suppose we observe \((T_{i},C_{i},Z_{i})\) where \(T_{i}\) is the observed survival time for the \(i^{th}\) patient, \(C_{i}\) is the censoring indicator, and \(Z_{i}=1\{i^{th}\) patient has positive prediction\(\}\), i.e., \(Z_{i}\) is 1 if the predicted risk of an event at time horizon \(\tau\) is above the decision threshold. 
We then model: \[Z \sim\text{Bernoulli}(p)\] \[(T,C)\,|\,Z =1\sim\text{Weibull-Censored}(\alpha_{1},\sigma_{1}) \tag{17}\] \[(T,C)\,|\,Z =0\sim\text{Weibull-Censored}(\alpha_{2},\sigma_{2})\] The likelihood function for the observed data \(\mathcal{D}=\left\{(T_{i},C_{i},Z_{i})\right\}_{i=1}^{N}\) given the parameter vector \(\boldsymbol{\theta}=\left(p,\alpha_{1},\sigma_{1},\alpha_{2},\sigma_{2}\right)\) is: \[L\Big{(}\mathcal{D}\,|\,\boldsymbol{\theta}\Big{)} =\prod_{i=1}^{N}\text{W-Cens}(t_{i},c_{i}\,|\,\alpha_{1},\sigma_{1})^{z_{i}}\times\text{W-Cens}(t_{i},c_{i}\,|\,\alpha_{2},\sigma_{2})^{1-z_{i}}\times\text{Bern}(z_{i}\,|\,p)\] \[=\prod_{i=1}^{N}\text{W-Cens}(t_{i},c_{i}\,|\,\alpha_{1},\sigma_{1})^{z_{i}}\times\prod_{i=1}^{N}\text{W-Cens}(t_{i},c_{i}\,|\,\alpha_{2},\sigma_{2})^{1-z_{i}}\times\prod_{i=1}^{N}\text{Bern}(z_{i}\,|\,p) \tag{18}\] \[:=L_{1}\big{(}\mathcal{D}_{+}\,|\,\boldsymbol{\theta_{1}}\big{)}\times L_{2}\big{(}\mathcal{D}_{-}\,|\,\boldsymbol{\theta_{2}}\big{)}\times L_{3}\big{(}\mathcal{D}_{0}\,|\,p\big{)} \tag{19}\] where we represent the data as \(\mathcal{D}_{+}=\left\{T_{i},C_{i}\right\}_{i\in[n]:z_{i}=1}\) (survival data for patients with positive predictions), \(\mathcal{D}_{-}=\left\{T_{i},C_{i}\right\}_{i\in[n]:z_{i}=0}\) (survival under negative predictions), \(\mathcal{D}_{0}=\left\{Z_{i}\right\}_{i=1}^{n}\) (positive prediction indicators), and Weibull parameters as \(\boldsymbol{\theta_{1}}=(\alpha_{1},\sigma_{1})\) and \(\boldsymbol{\theta_{2}}=(\alpha_{2},\sigma_{2})\). Hence, under proper independent priors: \[\pi(\boldsymbol{\theta}\,|\,\mathcal{D})\propto\pi(\boldsymbol{\theta_{1}})L_{1}\big{(}\mathcal{D}_{+}\,|\,\boldsymbol{\theta_{1}}\big{)}\times\pi(\boldsymbol{\theta_{2}})L_{2}\big{(}\mathcal{D}_{-}\,|\,\boldsymbol{\theta_{2}}\big{)}\times\pi(p)L_{3}\big{(}\mathcal{D}_{0}\,|\,p\big{)} \tag{20}\] which then implies posterior independence, so we can sample each parameter independently from its marginal posterior. In particular, we place a conjugate Beta(a, b) prior on the Bernoulli parameter \(p\) so that we have a closed-form solution for its marginal posterior - default being \(a=1\) and \(b=1\). As there is no such closed form for the Weibull likelihood under right-censoring, we need to employ MCMC to sample from the marginal posterior of \(\boldsymbol{\theta_{1}}\). Within bayesDCA, this is implemented using Stan to sample \(\boldsymbol{\theta_{1}}\) as a parameter and \(p\) as a generated quantity[11]. The default priors for \(\boldsymbol{\theta_{1}}\) are as in (6), though Gamma priors are also allowed. Due to parameter orthogonality and posterior independence, we don't need to sample the nuisance parameter \(\boldsymbol{\theta_{2}}\). We can then compute the posterior conditional survival at the time horizon \(\tau\) as \(S=\exp\left[-\left(\tau/\sigma_{1}\right)^{\alpha_{1}}\right]\) and the posterior net benefit as \(\text{NB}\,|\,\mathcal{D}=\left(1-S\right)\cdot p-w_{t}\cdot S\cdot p\). Notice that here both \(S\) and \(p\) are threshold- and strategy-specific. For the treat-all strategy, \(p\) is a fixed number at 1. ### Simulation details For each simulation setting, we simulate a large population dataset (\(N=2\times 10^{6}\)) from which we randomly generate reasonably-sized samples to perform DCA. We set an expected value of 100 events for the simulated samples and varied the sample size according to each setting's outcome prevalence or incidence.
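Before turning to the comparison with existing implementations, the sketch below illustrates how the survival net benefit defined above is evaluated once posterior draws are available. The draws for \((\alpha_{1},\sigma_{1})\) and \(p\) are placeholders generated purely to show the arithmetic; in bayesDCA they come from Stan and from the conjugate Beta posterior, and the time horizon and threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 4000
tau, t = 12.0, 0.20            # arbitrary time horizon and decision threshold
w = t / (1 - t)

# Placeholder posterior draws (in bayesDCA: alpha1/sigma1 from Stan, p from its Beta posterior)
alpha1 = rng.normal(1.2, 0.05, n_draws)        # Weibull shape among positive predictions
sigma1 = rng.normal(30.0, 1.5, n_draws)        # Weibull scale among positive predictions
p = rng.beta(40 + 1, 160 + 1, n_draws)         # P(positive prediction)

surv = np.exp(-(tau / sigma1) ** alpha1)       # S(tau) for positive predictions
nb = (1 - surv) * p - w * surv * p             # posterior net benefit draws
# Treat-all uses the same formula with p = 1 and a Weibull fit to all patients.
lo, hi = np.quantile(nb, [0.025, 0.975])
print(f"E[NB]={nb.mean():.4f}  95% CrI=({lo:.4f}, {hi:.4f})")
```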
Bayesian DCA was compared with the Frequentist counterpart using available open-source software, including the packages rnda and dcurves [15, 27]. #### 4.3.1 Binary outcomes For the binary outcome simulations, the underlying data-generating process is as follows: \[y_{i} \sim\text{Bernoulli}(p_{i})\quad\text{ for }i=1,2,\ldots,N\] \[p_{i} =\text{logit}^{-1}(\mathbf{z}_{i}^{T}\mathbf{\beta})\,\quad\quad\widehat{p}_{i}= \text{logit}^{-1}(\mathbf{z}_{i}^{T}\widehat{\mathbf{\beta}})\] \[\mathbf{z}_{i}^{T} =\begin{bmatrix}1&x_{i1}&x_{i2}\end{bmatrix}\,\quad\quad x_{i1},\ x_{i2} \stackrel{{\text{iid}}}{{\sim}}\text{Exp}(1)\] where \(\beta\) is a vector of true coefficients used to generate the data, and \(\widehat{\beta}\) is a vector of "estimated" coefficients from the model under investigation - the model we are validating with DCA. The true underlying disease probabilities are represented by \(p_{i}\), and the estimated probabilities are \(\widehat{p}_{i}\). We choose values of \(\beta\) and \(\widehat{\beta}\) to yield settings with a range of values for outcome prevalence, signal-to-noise ratio, and model discrimination. The signal-to-noise ratio is represented by the maximum possible AUC any model could achieve in a given setting (i.e., the AUC computed using the true disease probabilities \(p_{i}\)). The discrimination of the model under investigation is the true AUC computed using \(\widehat{p}_{i}\) in the entire population data. Here, we choose \(\mathbf{\beta}\) and \(\widehat{\mathbf{\beta}}\) to fix the true prevalence at 1%, 5%, or 30%, and the maximum AUC at either 0.65 or 0.85 - these represent low and high signal-to-noise ratio settings, respectively, with varying prevalence. We choose \(\widehat{\mathbf{\beta}}\) so that the example model approximates the maximum AUC in a given setting as well as the true underlying prevalence, but with poor calibration slope (i.e., the coefficients for \(x_{i1}\) and \(x_{i2}\) are exaggerated). This resembles a common scenario of overfitting in risk prediction. The fixed parameters for each simulation setting are provided in Table 1. #### 4.3.2 Survival outcomes For the survival outcome simulations, we employed the simsurv package[28]. Briefly, we sample survival times from a Weibull distribution with specified shape and scale parameters as well as two standard normal covariates with fixed coefficients (under proportional hazards assumption). Then, we sample censoring times from a uniform distribution with support between zero and 24 months, representing a maximum follow-up time of two years. For each patient in the population, the observed time is the minimum between survival and censoring times. Table (2) shows the fixed parameters for each simulation setting. Notice that simsurv employs a different Weibull parameterization than the one used by Stan and model (18); while the shape parameter (\(\gamma\) in simsurv) is the same, the scale is denoted \(\lambda=\sigma^{-\alpha}\). Table (2) uses simsurv notation for easy reproduction. 
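To make this recipe concrete, the sketch below simulates one binary-outcome setting using the coefficients from the first row of Table 1 (maximum AUC 0.65, prevalence 1%). It is an illustration of the data-generating process described above, not the exact simulation code, which is available in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000                                   # population size used in the simulations

# Coefficients from the first row of Table 1 (AUC 0.65, prevalence 1%)
beta = np.array([-4.750, -np.log(1.50), np.log(1.50)])
beta_hat = np.array([-5.000, -np.log(1.50) * 1.25, np.log(1.50) * 1.25])

x1, x2 = rng.exponential(1.0, N), rng.exponential(1.0, N)
Z = np.column_stack([np.ones(N), x1, x2])

p_true = 1 / (1 + np.exp(-Z @ beta))            # true disease probabilities
p_hat = 1 / (1 + np.exp(-Z @ beta_hat))         # miscalibrated model predictions
y = rng.binomial(1, p_true)                     # binary outcomes

# A validation sample sized to give ~100 events in expectation
n = int(round(100 / p_true.mean()))
idx = rng.choice(N, n, replace=False)
print(f"prevalence={p_true.mean():.3f}  sample size={n}  events={y[idx].sum()}")
```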
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **AUC** & **Prev.** & \multicolumn{3}{c}{\(\mathbf{\beta}^{T}\)} & \multicolumn{3}{c}{\(\mathbf{\widehat{\beta}}^{T}\)} \\ \hline 0.65 & 1\% & \(\left(-4.750\right.\) & \(-\log\left[1.50\right]\) & \(\log\left[1.50\right]\) & \(\left(-5.000\right.\) & \(-\log\left[1.50\right]\) & \(\ast\) 1.25 & \(\log\left[1.50\right]\) & \(\ast\) 1.25 \\ 0.65 & 5\% & \(\left(-3.100\right.\) & \(-\log\left[1.50\right]\) & \(\log\left[1.50\right]\) & \(\left(-3.900\right.\) & \(-\log\left[1.50\right]\) & \(\ast\) 3.00 & \(\log\left[1.50\right]\) & \(\ast\) 3.00 \\ 0.65 & 30\% & \(\left(-0.900\right.\) & \(-\log\left[1.55\right]\) & \(\log\left[1.55\right]\) & \(\left(-1.200\right.\) & \(-\log\left[1.55\right]\) & \(\ast\) 3.00 & \(\log\left[1.55\right]\) & \(\ast\) 3.00 \\ 0.85 & 1\% & \(\left(-5.600\right.\) & \(-\log\left[2.57\right]\) & \(\log\left[2.57\right]\) & \(\left(-6.900\right.\) & \(-\log\left[2.57\right]\) & \(\ast\) 1.50 & \(\log\left[2.57\right]\) & \(\ast\) 1.50 \\ 0.85 & 5\% & \(\left(-3.755\right.\) & \(-\log\left[2.95\right]\) & \(\log\left[2.95\right]\) & \(\left(-7.300\right.\) & \(-\log\left[2.95\right]\) & \(\ast\) 3.00 & \(\log\left[2.95\right]\) & \(\ast\) 3.00 \\ 0.85 & 30\% & \(\left(-1.300\right.\) & \(-\log\left[4.50\right]\) & \(\log\left[4.50\right]\) & \(\left(-2.250\right.\) & \(-\log\left[4.50\right]\) & \(\ast\) 3.00 & \(\log\left[4.50\right]\) & \(\ast\) 3.00 \\ \hline \hline \end{tabular} \end{table} Table 1: **Simulation settings (binary outcomes).** AUC refers to the maximum AUC possible in a given setting and Prev. is the true underlying prevalence. The regression coefficients used to generate the data are \(\mathbf{\beta}\), while \(\mathbf{\widehat{\beta}}\) define the hypothetical models under validation with DCA. 
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **C** & \(S(1)\) & \(\gamma\) (shape) & \(\lambda\) (scale) & \(\mathbf{\beta}^{T}\) & \(\mathbf{\widehat{\beta}}^{T}\) \\ \hline 0.60 & 10\% & 1.22 & 0.12 & \(\left(\log\left[1.30\right]\) & \(\log\left[0.70\right]\) & \(\left(\log\left[1.30\right]\) & \(\log\left[0.70\right]\) & \(\ast\) 1.01 \\ 0.60 & 20\% & 1.07 & 0.12 & \(\left(\log\left[1.30\right]\) & \(\log\left[0.70\right]\) & \(\left(\log\left[1.30\right]\) & \(\log\left[0.70\right]\) & \(\ast\) 1.01 \\ 0.60 & 50\% & 0.7 & 0.12 & \(\left(\log\left[1.30\right]\) & \(\log\left[0.70\right]\) & \(\left(\log\left[1.30\right]\) & \(\log\left[0.70\right]\) & \(\ast\) 1.01 \\ 0.90 & 10\% & 4.60 & 0.0004 & \(\left(\log\left[1.95\right]\) & \(\log\left[0.05\right]\) & \(\left(\log\left[1.95\right]\) & \(\log\left[0.05\right]\) & \(\ast\) 1.25 \\ 0.90 & 20\% & 4.00 & 0.0004 & \(\left(\log\left[1.95\right]\) & \(\log\left[0.05\right]\) & \(\left(\log\left[1.95\right]\) & \(\log\left[0.05\right]\) & \(\ast\) 1.25 \\ 0.90 & 50\% & 2.90 & 0.0004 & \(\left(\log\left[1.95\right]\) & \(\log\left[0.05\right]\) & \(\left(\log\left[1.95\right]\) & \(\log\left[0.05\right]\) & \(\ast\) 1.25 \\ 0.95 & 10\% & 6.50 & 0.0004 & \(\left(\log\left[1.95\right]\) & \(\log\left[0.001\right]\) & \(\left(\log\left[1.95\right]\) & \(\log\left[0.001\right]\) & \(\ast\) 1.25 \\ 0.95 & 20\% & 5.40 & 0.0004 & \(\left(\log\left[1.95\right]\) & \(\log\left[0.001\right]\) & \(\left(\log\left[1.95\right]\) & \(\log\left[0.001\right]\) & \(\ast\) 1.25 \\ 0.95 & 50\% & 3.10 & 0.0004 & \(\left(\log\left[1.95\right]\) & \(\log\left[0.001\right]\) & \(\left(\log\left[1.95\right]\) & \(\log\left[0.001\right]\) & \(\ast\) 1.25 \\ \hline \hline \end{tabular} \end{table} Table 2: **Simulation settings (survival outcomes).** C refers to the maximum C-statistic possible in a given setting, and \(S(1)\) is the true underlying one-year survival rate. The \(\gamma\) and \(\lambda\) parameters match the definitions used by the simsurv R package. The regression coefficients used to generate the data are \(\mathbf{\beta}\), while \(\mathbf{\widehat{\beta}}\) define the hypothetical models under validation with DCA. Acknowledgements The authors thank Dr. Mohsen Sadatsafavi for extensive feedback and suggestions on drafts of this manuscript, as well as all members of the Korthauer lab for helpful comments. We also thank Dr. Andrew Vickers and Dr. Paul Gustafson for insightful discussion of the work. We also gratefully acknowledge the funding support from the BC Children's Hospital Research Institute Establishment Award (to KK).
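As a brief illustrative note on the validation EVPI used in the case study and in Sadatsafavi et al. (2022)[12]: it can be computed directly from a matrix of posterior net benefit draws, as the gap between the expected net benefit under perfect information and that of the strategy chosen from the current posterior means. The draws below are placeholders, not the case-study data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws = 4000

# Columns: posterior net benefit draws for (model, treat all, treat none) at one threshold.
nb = np.column_stack([
    rng.normal(0.055, 0.010, n_draws),   # model (placeholder)
    rng.normal(0.050, 0.008, n_draws),   # treat all (placeholder)
    np.zeros(n_draws),                   # treat none
])

evpi = np.mean(nb.max(axis=1)) - nb.mean(axis=0).max()
print(f"EVPI = {evpi:.4f} net true positives per decision")
```

Multiplying this per-decision value by the number of yearly decisions gives the interpretation used in the case study (for example, roughly three net true positives per 1000 decisions when the EVPI is around 0.003).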
2304.01863
Dynamics of Four Triple Systems
Orbital motions in four hierarchical stellar systems discovered by speckle interferometry are studied. Their inner orbits are relatively well constrained, while the long outer orbits are less certain. The eccentric and misaligned inner orbits in the early-type hierarchies Epsilon Cha (B9V, central star of the 5 Myr old association, P=6.4 yr, e=0.73), and I~385 (A0V, P~300 yr, e~0.8) suggest past dynamical interactions. Their nearly equal masses could be explained by a dynamical decay of a 2+2 quadruple progenitor consisting of four similar stars. However, there is no evidence of the associated recoil, so similar masses could be just a consequence of accretion from the same core. The other two hierarchies, HIP 32475 (F0IV, inner period 12.2 yr) and HIP 42910 (K7V, inner period 6.8 yr), have smaller masses and are double twins where both inner and outer mass ratios are close to one. A double twin could either result from a merger of one inner pair in a 2+2 quadruple or can be formed by a successive fragmentation followed by accretion.
Andrei Tokovinin
2023-04-04T15:11:36Z
http://arxiv.org/abs/2304.01863v1
# Dynamics of Four Triple Systems ###### Abstract Orbital motions in four hierarchical stellar systems discovered by speckle interferometry are studied. Their inner orbits are relatively well constrained, while the long outer orbits are less certain. The eccentric and misaligned inner orbits in the early-type hierarchies \(\epsilon\) Cha (B9V, central star of the 5 Myr old association, \(P=6.4\) yr, \(e=0.73\)), and I 385 (A0V, \(P\sim 300\) yr, \(e\sim 0.8\)) suggest past dynamical interactions. Their nearly equal masses could be explained by a dynamical decay of a 2+2 quadruple progenitor consisting of four similar stars. However, there is no evidence of the associated recoil, so similar masses could be just a consequence of accretion from the same core. The other two hierarchies, HIP 32475 (F0IV, inner period 12.2 yr) and HIP 42910 (K7V, inner period 6.8 yr), have smaller masses and are double twins where both inner and outer mass ratios are close to one. A double twin could either result from a merger of one inner pair in a 2+2 quadruple or can be formed by a successive fragmentation followed by accretion. binaries:visual stars:multiple stars:individual ## 1 Introduction Multiple stellar systems are very diverse, ranging from compact planar worlds, where three or four stars are tightly packed within 1 au, to wide systems of 0.1 pc scale, often found in non-hierarchical configurations; see Tokovinin (2021) for a review. Hierarchies with separations of 1-100 au, in the middle of this range, are more typical. Their dynamics (periods, eccentricities, mutual orbit orientation) bears imprints of the formation processes. However, the inner and outer orbits could be determined or constrained for only a tiny fraction of known triple systems, owing to long (centuries and millennia) outer periods and insufficient data. It is increasingly clear that hierarchies were formed via several different channels. In this work, orbits are determined for four such systems (Table 1), continuing similar studies reported in (Tokovinin, 2021; Tokovinin & Latham, 2020; Tokovinin, 2018; Tokovinin & Latham, 2017). Inner pairs in these systems were discovered a decade ago by speckle interferometry, and the data accumulated to date allow calculation of the first inner orbits. The outer orbits are not yet fully covered. Two systems (\(\epsilon\) Cha and I 385) have similar components of early spectral type arranged in apparently non-hierarchical configurations. Their inner orbits have large eccentricities, suggesting that dynamical interactions played a major role. The other two triples contain solar-type stars and are double twins where a pair of similar low-mass stars orbits the primary component with mass comparable to the mass of the pair. Despite apparent similarity, the two double twins have very different dynamics: the first has quasi-circular and aligned orbits, while in the other the inner orbit is highly eccentric. The input data and methods are briefly introduced in Section 2. Sections 3-5 are devoted to individual systems. Their possible formation scenarios are discussed in Section 6. ## 2 Data and Methods ### Speckle Interferometry In the hierarchies studied here inner subsystems have been discovered by speckle interferometry with the high-resolution camera (HRCam) working on the 4 m telescopes SOAR (Southern Astrophysical Research Telescope) and Blanco located in Chile. HRCam, in use since 2007, is based on electron-multiplication CCD detectors. The instrument, data processing,
2309.01647
Towards Robust Velocity and Position Estimation of Opponents for Autonomous Racing Using Low-Power Radar
This paper presents the design and development of an intelligent subsystem that includes a novel low-power radar sensor integrated into an autonomous racing perception pipeline to robustly estimate the position and velocity of dynamic obstacles. The proposed system, based on the Infineon BGT60TR13D radar, is evaluated in a real-world scenario with scaled race cars. The paper explores the benefits and limitations of using such a sensor subsystem and draws conclusions based on field-collected data. The results demonstrate a tracking error up to 0.21 +- 0.29 m in distance estimation and 0.39 +- 0.19 m/s in velocity estimation, despite the power consumption in the range of 10s of milliwatts. The presented system provides complementary information to other sensors such as LiDAR and camera, and can be used in a wide range of applications beyond autonomous racing.
Andrea Ronco, Nicolas Baumann, Marco Giordano, Michele Magno
2023-09-04T14:56:38Z
http://arxiv.org/abs/2309.01647v1
Towards Robust Velocity and Position Estimation of Opponents for Autonomous Racing Using Low-Power Radar ###### Abstract This paper presents the design and development of an intelligent subsystem that includes a novel low-power radar sensor integrated into an autonomous racing perception pipeline to robustly estimate the position and velocity of dynamic obstacles. The proposed system, based on the Infineon BGT60TR13D radar, is evaluated in a real-world scenario with scaled race cars. The paper explores the benefits and limitations of using such a sensor subsystem and draws conclusions based on field-collected data. The results demonstrate a tracking error up to 0.21 \(\pm\) 0.29 m in distance estimation and 0.39 \(\pm\) 0.19 m/s in velocity estimation, despite the power consumption in the range of 10s of milliwatts. The presented system provides complementary information to other sensors such as LiDAR and camera, and can be used in a wide range of applications beyond autonomous racing. sensors, embedded systems, radar, autonomous driving ## I Introduction The field of autonomous racing has gained significant attention in recent years, with the development of self-driving cars and the increasing popularity of motorsports, which opens the opportunity to enable knowledge transfer from academia to industry [1, 2, 3, 4]. A fundamental component of autonomous systems is perception, namely the part of the system that enables the vehicle to observe and acquire information on the surrounding environment, which is necessary to generate appropriate responses [5, 6, 7, 8]. LiDARs, cameras, and radars are the main exteroceptive sensors used for perception purposes on autonomous vehicles [9]. Cameras use visible light to capture images and record video. They can provide high-resolution images and color information, but they rely on good lighting conditions and can be affected by shadows and reflections. LiDARs use infrared lasers to create three-dimensional point clouds of the surrounding environment, measuring the time of flight of the laser beam from the sensor to the reflecting objects. Due to the rotating parts, LiDARs have a fairly high cost and large size. They only provide ranging information and they are severely affected by environmental conditions (rain, snowflakes, and fog), as well as by the reflectivity properties of the targets. For example, opaque black objects are often difficult to detect with this technology [10]. Radars use radio waves with various frequencies and modulations to detect objects in their field of view. Thanks to the larger radio wavelengths, radars are less affected by adverse weather conditions, and they can measure reliably through raindrops, snowflakes, and dust [11, 12]. Radars on the other hand are very robust against the aforementioned object reflectivity and further robust to adverse weather conditions, such as rain and snowflakes [13]. Depending on the technology they can also provide valuable relative velocity information and spatial position of objects, which is highly beneficial in racing contexts [2, 14]. Each one of these technologies has its own strengths and weaknesses, and they are often used in combination to provide a more comprehensive view of the environment. Emerging novel low-power radars are capable of high-resolution measurements with a peak power consumption in the order of 100s of milliwatts, and an average consumption down to less than \(10\,\mathrm{mW}\)[15]. 
Moreover, their low cost and reduced form factor, also thanks to the Antenna in Package (AiP), make them an attractive option for achieving robust velocity and position estimation of opponents in the context of autonomous driving [16] and racing, especially on small-scale vehicles. LiDAR and radar sensor fusion is a popular technique in autonomous driving, robotics, and other applications where accurate and reliable sensing is critical [11, 12, 13]. By combining the strengths of both technologies, it is possible to create a more comprehensive view of the environment, allowing for safer, more efficient, and more robust operations [11, 12]. Autonomous racing is gaining popularity in the research community as an application for novel perception and control algorithms [2, 6, 7, 17]. In particular, autonomous driving and racing on small-scale vehicles have been used to develop and test algorithms for autonomous vehicles in a safe and efficient way [6, 18]. One example of an autonomous vehicle project is the F1TENTH project [6, 19], which is a global engineering competition for university students that challenges teams to design, build, and race a single-seat racing car. The F1TENTH vehicle consists of various sensor modalities such as LiDAR, camera, radar, and optical flow to achieve a high level of autonomy in racing scenarios. The perception system of the F1TENTH vehicle is crucial for achieving a good racing performance [2] by accurately detecting opponents, estimating their positions and velocities, and avoiding collisions. In this context, the use of radar solutions for opponent detection and velocity estimation can be a valuable addition to the existing sensor suite of the F1TENTH vehicle. The adoption of low-power, small radar options enables the integration of the system in the vehicle without affecting the overall weight and aerodynamics and paves the way for wider adoption on small-scale vehicles by assessing its effectiveness in a real racing scenario. This paper presents a preliminary evaluation of a small-size, low-power radar sensor that operates in the \(60\,\mathrm{GHz}\) frequency band to assess its performance in autonomous vehicles and perception applications. While more established radar systems already exist for these tasks, we argue that the advantages of this small-scale, integrated solution could enable new classes of vehicles to take advantage of radar technology. We evaluate the sensor for range and velocity estimation with different targets and lay the groundwork for future work involving sensor fusion of LiDAR and radar data on our racing platform. The rest of the paper is organized as follows: In Section II we present existing work on the topic of autonomous driving and radar technology for perception and autonomous vehicles, and we summarize the contributions of the paper. In Section III we introduce a taxonomy of the most common radar technologies and provide the required theoretical background on radars. The evaluation setup is described in Section IV, while the results of our analysis are reported in Section V. We draw conclusions and present future work in Section VI. ## II Related Work Radars are commonly used in autonomous driving applications to provide information about the surrounding environment, including the position and velocity of other vehicles on the road. However, accurately estimating the position and velocity of opponents in autonomous racing can be challenging due to factors such as noisy data, complex dynamics of the opponents, and high speeds.
Radars showed promising results when deployed on cars, both for driving assistance devices and autonomous driving. Previous work showed how radar, especially when fused with other kinds of sensors, can reliably detect road boundaries and other vehicles [20], especially with adverse weather conditions [21]. Previous work already showed how _mmWave_ radars can be used to estimate the velocity and position of other vehicles in autonomous driving scenarios[22]. However, the radars used were automotive-grade radars, which are orders of magnitude more expensive and power-hungry than the novel _mmWave_ radars used in our work. Radars have also been used in autonomous racing, as reported in [23, 24], where the authors argue their use in the Indy Autonomous Challenge (IAC), a competition of fully autonomous race cars at the Indianapolis Motor Speedway, promoting innovation and technological advancements in autonomous vehicle technology [7]. On the notes of the IAC, the F1TENTH association is a student competition of 1:10 scaled race cars, with the similar goal of competing on miniature racetracks for _time-trials_ and _head-to-head_ races. Due to the compact dimension of the car, it is not possible to use automotive radars on this platform, and therefore opponent estimation has traditionally been carried out through LiDAR and/or camera-based sensing modalities. This paper wants to investigate novel low-power radars in a tiny form factor in autonomous vehicles' perception, in particular regarding other vehicles' detection, ranging, and exteroceptive velocity estimates. The contributions of this paper are as follows: * A novel low-power radar sensor is introduced and integrated into an autonomous racing perception pipeline. * The sensor is evaluated in a real-world scenario with scaled race cars. * Benefits and limitations of such a sensor are explored and conclusions are drawn on the base of field-collected data. ## III Radar Background Radar systems emit an electromagnetic wave signal (known as the illumination signal), which eventually hits and is reflected by the target. The reflected (echo) signal contains information about the target, that can be extracted. The properties of the illumination signal can differ significantly in frequency, modulation, and other characteristics, defining different radar technologies, each suitable for different applications. * Pulse radars transmit high-frequency signals in small bursts and exploit the propagation delay and the antenna placement to extract information about the position of the target. * Continuous Wave (CW) radars illuminate the target with continuous power and exploit the Doppler effect to estimate its velocity. * Frequency Modulated Continuous Wave (FMCW) radars add frequency modulation to the illumination signal. This property allows estimating simultaneously the velocity and the distance of the targets. The capability of estimating both the distance and the velocity of the target makes the FMCW type a common choice for automotive and industrial applications. A simplified block diagram of a typical FMCW radar system is shown in Fig. 1. The modulator is responsible for generating the correct waveform for the illumination signal. The signal is amplified to the desired power level and transmitted by one or more antennas. At the same time, the echo signal is picked up by the receiving antenna, amplified, and mixed with the transmitted signal. 
The mixing process generates a signal with a new frequency and phase equal to the difference between the frequency and phase of the two input signals. For example, given two input signals \[y_{1}(t)=A\exp(2\pi jtf_{1}+\phi_{1})\quad y_{2}(t)=A\exp(2\pi jtf_{2}+\phi_{2}) \tag{1}\] the output of the mixer will be \[x(t)=A\exp(2\pi jt(f_{2}-f_{1})+\phi_{2}-\phi_{1}) \tag{2}\] This signal is called the Intermediate Frequency (IF) signal, or beat signal. It is typically much lower in frequency with respect to the transmitted signal and can be sampled with a traditional Analog to Digital Converter (ADC). A common modulation for FMCW radars is the linear one, where the frequency is linearly increased with time. This signal is also often referred to as _chirp_ and can be expressed as \[s(t)=A_{tx}\exp\left(2\pi jt(f_{low}+St)\right) \tag{3}\] where \(A_{tx}\) is the signal amplitude, \(f_{low}\) is the starting frequency and \(S\) is the chirp slope, equivalent to \(\frac{f_{high}-f_{low}}{T_{c}}\) with \(T_{c}\) being the chirp duration. \(f_{high}-f_{low}\) is also defined as the modulation bandwidth \(B\). ### _Distance Measurement_ Given a reflective object at distance \(d\), the echo signal is received with a round-trip time delay \(\tau=2d/c\) where \(c\) is the speed of light. This effect is depicted in Fig. 2. This time delay is taken into account in the mixer, which will generate an IF signal equal to \[y(t)=A\exp\left(2\pi jt(S\tau)+\phi\right)\qquad\text{with}\qquad\tau=\frac{2d }{c} \tag{4}\] Equation (4) shows that the distance of the targets is proportional to the frequency of the IF signal, whose spectrum shows peaks corresponding to the target range. The spectrum can be evaluated with a Fast Fourier Transform (FFT), which is often called the _Range FFT_. From Fourier transform theory, we know that in a window of duration \(T_{c}\) we can only resolve frequencies larger than \(\frac{1}{T_{c}}\). From the round-trip time, we know that the minimum frequency is \(f_{min}=\frac{2d_{min}}{c}S\). From these two equations we can derive that the distance resolution \(d_{min}\) only depends on the bandwidth \(B\), as shown in Eq. (5). \[f_{min}>\frac{1}{T_{c}}\quad f_{min}=\frac{2d_{min}}{c}S\quad d_{min}>\frac{c} {2ST_{c}}=\frac{c}{2B} \tag{5}\] ### _Velocity_ The radial velocity of the target is estimated by observing the phase of two consecutive chirps with a small time spacing \(T_{s}\). Given a sufficiently small \(T_{s}\), the distance of the target in the range FFT will be unchanged across the chirps. However, the phase difference depends on the variation of the round trip time \(\Delta\tau\) \[\Delta\phi_{IF}=2\pi f_{low}\Delta\tau\qquad\qquad\Delta\phi_{IF}=\frac{4\pi \Delta d}{\lambda} \tag{6}\] From Eq. (6) we can derive the angular velocity caused by the moving target, which is equal to the phase difference across the two chirps. \[\omega:=\Delta\phi=\frac{4\pi vT_{s}}{\lambda}\qquad\qquad v=\frac{\lambda \Delta\phi}{4\pi T_{s}} \tag{7}\] The velocity of the target can be resolved without ambiguity when \(\Delta\phi<\pi\), which sets the limit for the maximum velocity as \(v<\frac{\lambda}{2T_{s}}\). In the case of multiple targets with different speeds at the same range, the velocity for both targets can be estimated by increasing the number of equi-spaced chirps \(N\). A sequence of \(N\) chirps is often referred to as a radar _frame_. The velocities can be derived with a complex FFT, called the _Doppler FFT_, applied to the phasors corresponding to each range.
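As a minimal illustration of the two-stage processing just described, the following NumPy sketch turns one radar frame (chirps by samples, here using the 64 x 64 frame layout, DC removal, Hann windowing, and zero-padding to 256 mentioned in Section V) into a range-Doppler map and picks the strongest target. The synthetic frame and all numeric parameters are placeholders rather than actual driver output.

```python
import numpy as np

rng = np.random.default_rng(4)
n_chirps, n_samples, n_fft = 64, 64, 256      # frame layout and zero-padded FFT size

# Placeholder frame: one point target plus noise (rows = chirps, cols = ADC samples)
t = np.arange(n_samples)
c = np.arange(n_chirps)[:, None]
frame = np.cos(2 * np.pi * (0.20 * t + 0.05 * c)) + 0.1 * rng.standard_normal((n_chirps, n_samples))

frame = frame - frame.mean(axis=1, keepdims=True)           # DC removal per chirp
frame = frame * np.hanning(n_samples)                       # Hann window on fast time

range_fft = np.fft.rfft(frame, n=n_fft, axis=1)             # range FFT (fast time)
rd_map = np.fft.fftshift(np.fft.fft(range_fft, n=n_fft, axis=0), axes=0)  # Doppler FFT (slow time)
power = np.abs(rd_map) ** 2

dopp_bin, range_bin = np.unravel_index(power.argmax(), power.shape)
print(f"peak at range bin {range_bin}, Doppler bin {dopp_bin - n_fft // 2}")
```

The resolution limits discussed next apply directly to the two FFTs in this sketch.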
From the properties of the FFT, two frequencies \(\omega_{1}\) and \(\omega_{2}\) can be separated if \(|\omega_{1}-\omega_{2}|>2\pi/N\). Taking Eq. (7) into consideration we can calculate the velocity resolution \(v_{min}\) \[\Delta\omega=\frac{4\pi vT_{s}}{\lambda}>\frac{2\pi}{N}\qquad\quad v_{min}:=v> \frac{\lambda}{2T_{s}N} \tag{8}\] ### _Angle of Arrival_ Radar systems can also estimate the Angle of Arrival (AoA) of the signal thanks to an array of receiving antennas. Large antenna arrays (with many receivers) can differentiate multiple targets with equal speed and distance, by evaluating a third FFT on the antenna dimension. Since the radar evaluated in this paper has a small antenna array (of only 2 receivers on azimuth and elevation), we omit the details of AoA evaluation. However, it is still worth mentioning that the angular resolution \(\Theta_{min}\) improves with the length of the array, following the relation \[\Theta_{min}=\frac{2}{N} \tag{9}\] where \(N\) is the number of antennas and \(d\) is the antenna spacing, which is optimal at \(d_{spacing}=\lambda/2\). Fig. 1: Simplified block diagram of a typical FMCW radar system. Fig. 2: Transmitted and Received signals in FMCW radars. Typically \(T_{c}\ll T_{s}\). ## IV Evaluation Setup This paper evaluates the capability of a novel low-power radar sensor from Infineon Technologies for autonomous racing on our F1TENTH platform. In particular, we focus on the evaluation of the capabilities for estimating the velocity and position of the opponents placed in front of the car, with the aim of improving the overtaking maneuvers. The selected sensor device operates in the \(60\,\mathrm{GHz}\) band and embeds one transmitting and three receiving antennas directly in the package (AiP), which simplifies the integration in existing systems by removing the need for high-frequency antenna design expertise. The 60 GHz frequency band and the associated bandwidth provide a range resolution of about \(3\,\mathrm{cm}\), which is suitable for our racing scenario. The peak transmission power is \(5\,\mathrm{dBm}\), which results in a limited maximum range below \(10\,\mathrm{m}\). However, the reduced form factor and the low-power nature of the device make it suitable for small-scale, battery-operated applications, such as nano drones and small vehicle models, which could exploit the same subsystem to detect other moving or static objects. To have an accurate evaluation we conducted two different experiments with a similar setup, which we describe shortly. Two vehicles were used for the evaluation. The radar subsystem was attached to the front of the first car, facing towards the front of the vehicle, and connected to the Intel NUC via USB. A custom Robot Operating System (ROS) driver was developed to interface the USB device, configure it, and retrieve the data. The second vehicle was placed in front, in the field of view of the radar, and acted as a target for the radar, as seen in Fig. 3. ### _Experimental evaluation_ The first experiment was designed to study the accuracy of the radar in tracking the distance and the velocity of an opponent car in the scenario with the lowest interference. For this setup, the car equipped with the radar did not drive. The second car was placed in front of the first one and manually controlled to drive away at different speeds. The radar data was logged alongside the odometry data of the second car, in order to have a partial ground truth of the velocity readings.
It must be noted that the odometry estimation for velocity is less accurate since it depends on indirect measurements. In the second experiment, both cars were driving at approximately the same speed, both manually controlled. This experiment served the purpose of observing the effect of the ego-velocity of the car on the radar data, which will be discussed in Section V. ## V Experimental Results For this evaluation, the signal processing on the radar signal was kept as simple as possible, only using standard FFT processing. This evaluation allowed us to properly identify the benefits that the additional radar data could bring, as well as the challenges of extracting the important information depending on the context. The radar is configured to produce radar frames at \(20\,\mathrm{Hz}\). Each frame is composed of \(64\) chirps, and each chirp is sampled \(64\) times at \(2\,\mathrm{MHz}\). The chirp timing is set in order to allow a maximum range of \(3.7\,\mathrm{m}\), a maximum velocity of \(8\,\mathrm{m}\,\mathrm{s}^{-1}\), and a velocity resolution of \(0.25\,\mathrm{m}\,\mathrm{s}^{-1}\). The current hardware is limited to a bandwidth of about \(5\,\mathrm{Mbit}\,\mathrm{s}^{-1}\) for radar data, which sets an upper bound on the resolution. DC removal and a Hanning window are used to improve the quality of the resulting map and reduce noise. The data is zero-padded to \(256\) samples before evaluating the spectra, in order to increase the resolution. A sample range-doppler map is shown in Fig. 4. ### _Static Measurements_ The static measurements are evaluated to characterize the sensor in the best case. They show that distance and velocity can be easily extracted from the range-doppler maps by tracking the max-energy point. This result shows that, despite the low power output, the radar is capable of detecting moving targets in its range reliably in optimal conditions. Fig. 5 shows a quantitative view of the range and velocity estimations from the range-doppler map with respect to the values estimated by the opponent car. The grey area marks the time when the opponent car was outside the maximum range of the radar, and therefore not visible. Fig. 3: Initial state of the evaluation setup with the initial distance \(d\) marked in red. Fig. 4: Sample range-doppler map from the static experiments. The moving target can be seen as a high-energy point. We estimated the range and velocity discrepancy for two different top velocities, each with five repeated experiments to reduce the effect of variability. The errors are evaluated only within the maximum range of the radar, and can be seen in Table I. ### _Dynamic Measurements_ In the case of a moving radar, the relative velocity of the environment with respect to the radar is not zero. This results in multiple targets in the range-doppler map, whose energy depends on properties such as the reflectivity of the material, the angle, and the distance to the radar. In our tests, most of the reflections are caused by the track boundaries. In Fig. 6 such targets appear on the side of negative velocity (2), forming the characteristic curved pattern. This is due to the environment being projected into the map. Objects in the far field appear at all distances, with a velocity approximately equal to the ego velocity of the car. However, the radial velocity of the track boundaries decreases as we consider points closer to the car since the relative angle also increases. This can be observed from the curved pattern.
Some reflections from the ground can also be observed at close range (1). In the frame shown, the opponent vehicle is still visible in the range-doppler map (3), as it is on the positive velocity side. However, proper tracking in the range-doppler space requires further processing, since the opponent could be masked by the environment reflections at times. This demonstrates the necessity of a more sophisticated filtering technique that will be addressed in future work. Specifically, in the context of autonomous racing, we believe we can exploit the knowledge of the environment and the odometry information of the car to isolate the opponent in the range-doppler space. At the same time, we expect that our method will increase the robustness of the LiDAR point cloud by validating LiDAR points against their expected relative velocity, allowing for LiDAR filtering within highly dynamic environments. Finally, the fusion method will also improve the ability to correctly classify static and dynamic obstacles in racing conditions. ## VI Conclusions and Future Work A novel low-power FMCW radar sensor was evaluated in the context of autonomous racing. We evaluated the accuracy of distance and velocity tracking in a radar-static scenario, showing that despite the radar's low transmission power, in the range of 10s of milliwatts, the sensor is capable of tracking the distance and velocity of the target with a relatively low tracking error of \(0.21\,\mathrm{m}\) and \(0.39\,\mathrm{m}\mathrm{s}^{-1}\), respectively. Dynamic experiments with the radar on a moving car were also conducted, in order to simulate a more realistic racing context. The subsequent results shed light on the challenges that a dynamic scenario will pose for the velocity estimation of the target, providing a starting point for incremental research on novel LiDAR-radar sensor fusion algorithms. ## Acknowledgment The authors would like to thank Steven Peter, who developed and tested the ROS driver to interface the radar sensor into the racing stack.
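To make the filtering direction sketched in Sections V and VI more tangible, one simple use of the car's odometry is to blank the Doppler band around the static-world velocity (approximately the negated ego velocity for far-field reflectors, as observed in Fig. 6) before peak-picking. The function below is only a schematic of that idea with assumed bin conventions and made-up inputs, not the racing-stack implementation; in particular, it does not remove near-range boundary returns, which appear at lower radial speeds.

```python
import numpy as np

def isolate_opponent(power, v_ego, v_res, guard_bins=2):
    """Suppress the Doppler band of far-field static reflectors, then peak-pick.

    power: range-Doppler power map, shape (n_doppler, n_range), zero velocity at the center row
    v_ego: ego velocity from odometry [m/s]; v_res: velocity per Doppler bin [m/s]
    """
    n_doppler = power.shape[0]
    static_bin = n_doppler // 2 + int(round(-v_ego / v_res))   # static world near -v_ego
    masked = power.copy()
    masked[max(static_bin - guard_bins, 0):min(static_bin + guard_bins + 1, n_doppler), :] = 0.0
    d_bin, r_bin = np.unravel_index(masked.argmax(), masked.shape)
    return r_bin, (d_bin - n_doppler // 2) * v_res             # range bin, relative velocity

# Toy usage with an arbitrary map and the 0.25 m/s velocity resolution quoted in Section V
toy_map = np.random.default_rng(5).random((256, 129))
print(isolate_opponent(toy_map, v_ego=2.0, v_res=0.25))
```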
2307.06484
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems
In this paper, we present a novel Single-class target-specific Adversarial attack called SingleADV. The goal of SingleADV is to generate a universal perturbation that deceives the target model into confusing a specific category of objects with a target category while ensuring highly relevant and accurate interpretations. The universal perturbation is stochastically and iteratively optimized by minimizing the adversarial loss that is designed to consider both the classifier and interpreter costs in targeted and non-targeted categories. In this optimization framework, ruled by the first- and second-moment estimations, the desired loss surface promotes high confidence and interpretation score of adversarial samples. By avoiding unintended misclassification of samples from other categories, SingleADV enables more effective targeted attacks on interpretable deep learning systems in both white-box and black-box scenarios. To evaluate the effectiveness of SingleADV, we conduct experiments using four different model architectures (ResNet-50, VGG-16, DenseNet-169, and Inception-V3) coupled with three interpretation models (CAM, Grad, and MASK). Through extensive empirical evaluation, we demonstrate that SingleADV effectively deceives the target deep learning models and their associated interpreters under various conditions and settings. Our experimental results show that the performance of SingleADV is effective, with an average fooling ratio of 0.74 and an adversarial confidence level of 0.78 in generating deceptive adversarial samples. Furthermore, we discuss several countermeasures against SingleADV, including a transfer-based learning approach and existing preprocessing defenses.
Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed
2023-07-12T23:07:06Z
http://arxiv.org/abs/2307.06484v1
# Single-Class Target-Specific Attack against Interpretable Deep Learning Systems ###### Abstract Developing a custom model specifically tailored to address domain-specific problems is crucial for achieving optimal performance. The interpretation of machine learning models plays a pivotal role in this development, aiding domain experts in gaining insights into the internal mechanisms of these models. However, adversarial attacks pose a significant threat to public trust by making interpretations of deep learning models confusing and difficult to understand. In this paper, we present a novel **Single-class** target-specific **ADV**ersarial attack called **SingleADV**. The goal of **SingleADV** is to generate a universal perturbation that deceives the target model into confusing a specific category of objects with a target category while ensuring highly relevant and accurate interpretations. The universal perturbation is stochastically and iteratively optimized by minimizing the adversarial loss that is designed to consider both the classifier and interpreter costs in targeted and non-targeted categories. In this optimization framework, ruled by the first- and second-moment estimations, the desired loss surface promotes high confidence and interpretation score of adversarial samples. By avoiding unintended misclassification of samples from other categories, SingleADV enables more effective targeted attacks on interpretable deep learning systems in both white-box and black-box scenarios. To evaluate the effectiveness of **SingleADV**, we conduct experiments using four different model architectures (ResNet-50, VGG-16, DenseNet-169, and Inception-V3) coupled with three interpretation models (CAM, Grad, and MASK). Through extensive empirical evaluation, we demonstrate that SingleADV effectively deceives the target deep learning models and their associated interpreters under various conditions and settings. Our experimental results show that SingleADV is effective, with an average fooling ratio of 0.74 and an adversarial confidence level of 0.78 in generating deceptive adversarial samples. Furthermore, we discuss several countermeasures against SingleADV, including a transfer-based learning approach and existing preprocessing defenses. Adversarial Machine Learning, Deep Learning, Interpretation Models, Single-Class Attack, IDLSes ## 1 Introduction Deep learning (DL) has made significant contributions and advancements across various domains, including computer vision [1, 2, 3], natural language processing, and numerous security-sensitive applications [4, 5]. The impressive performance of deep learning models on large datasets has gained significant attention from the research community. However, a fundamental challenge lies in comprehending the underlying factors that drive the outcomes of DL models, primarily due to their complex architectures. Consequently, converting the behavior of DL models into a more comprehensible format for end-users has become crucial. To address this issue, numerous interpretation techniques [1, 5, 6, 7] have been developed to make DL models more understandable. These techniques provide insight into how DL models make decisions, which can help users to trust and use these models more effectively. DL interpretability plays a crucial role in understanding the behavior of models and ensuring confidence in detecting adversarial inputs.
Thus, Interpretable Deep Learning Systems (IDLSes) have gained considerable attention, as they provide predictions and interpretations. However, recent studies have demonstrated the feasibility and practicality of creating adversarial examples (AE) that deceive both the target prediction model and its associated interpreters [8, 9, 10, 11, 12]. Consequently, IDLSes cannot guarantee robust security measures for detecting AEs. We introduce **SingleADV**, a single-class target-specific adversarial attack method designed to generate targeted perturbations that deceive IDLSes by causing misclassifications of an entire class of objects (referred to as the "source class") into a specific "target class." The perturbations are designed to maintain interpretations similar to those of benign inputs, making it difficult for IDLSes to detect them. In a targeted attack threat model, **SingleADV** generates stealthy perturbations that effectively deceive IDLSes, thereby hindering human involvement in analyzing the interpretation of adversarial inputs, as shown in Figure 1. Additionally, Figure 2 demonstrates that the employed interpreter can detect previous attacks, even in white-box scenarios. For example, Figure 1 shows that the existing universal perturbation attacks [10] can be detected due to the inconsistencies between the adversarial interpretations and the object of the image, which can be recognized by an "observer." On the other hand, our approach generates adversarial interpretations that are indistinguishable from benign ones, suggesting no manipulation of the input data. The main objective of **SingleADV** is to increase the success rate of adversarial attacks. This involves fooling the classifier and misleading its interpreter while reducing the likelihood of adversarial detection. **SingleADV** achieves this by generating perturbations in a fine-grained manner. This ensures that the perturbations impact the target category while minimizing their influence on other categories. Instead of employing the traditional targeted attack approach, where the model predicts the same label for all images, this work focuses on a single class in an adversarial scenario to reduce suspicion of an attack. We call this attack _"single-class attack."_ Although we apply this technique to an image classification task in this paper, it can be used in various security-sensitive applications, such as malware detection and facial recognition systems. Additionally, the generated adversarial samples cause the interpreter to produce false positives, resulting in interpretations similar to benign inputs. This aspect makes it challenging to identify the involvement of the adversary. This work investigates the effects of generating universal perturbations to launch a single-class attack in both white-box and black-box scenarios. Furthermore, it explores potential countermeasures and defenses against such attacks. **Our Contribution.** We present Single**ADV**, a single-class attack method designed to generate target-specific perturbations for inputs to fool the target deep learning models and deceive their coupled interpreters. Our method enables targeted attacks that are specific to a particular category. We evaluate the effectiveness of Single**ADV** on both the prediction and interpretation models using the ImageNet dataset [1]. 
Our contributions can be summarized as follows: * We propose a novel adversarial attack method called Single**ADV**, which leverages interpretation-derived techniques to perform targeted and category-specific fooling of DL models and their associated interpreters. Unlike traditional approaches focusing on individual input samples, our method extends the scope of adversarial effects to encompass an entire object category, thus limiting the impact within the chosen category. * We evaluate Single**ADV** in terms of the success rate for fooling the prediction model, the Intersection-over-Union (IoU) score for deceiving the coupled interpreter, and the leakage rate to measure the attack's impact on non-targeted classes. We demonstrate the effectiveness of our approach, _e.g.,_ an average fooling ratio of 0.74 and a corresponding adversarial confidence level of 0.78. * We conducted experiments using a knowledge distillation (teacher-student) approach to assess the practicality and effectiveness of Single**ADV** in a black-box setting. The results of our experiments confirm that Single**ADV** can be effectively applied in the black-box scenario, highlighting its practicality and effectiveness. * We analyze the effectiveness of existing general defense techniques to mitigate Single**ADV**. Additionally, we propose a novel adversarial training method that utilizes interpretation-derived information to enhance the robustness of DL models against adversarial attacks. This approach significantly improves the model's performance when faced with adversarial examples while maintaining its performance on benign examples. **Organization.** The rest of the paper is organized as follows: Section 2 highlights the relevant literature; Section 3 provides the fundamental concepts, description of the problem formulation, and the main algorithm of the attack; Section 4 and Section 5 provide the experiments and results in white-box and black-box settings, respectively; Section 6 proposes potential countermeasures; Section 7 discusses the limitations; and Section 8 offers the conclusion. ## 2 Related Work This section highlights previous studies related to our work, specifically in the domains of adversarial attacks and interpretation-guided attacks. Machine learning models face two primary threats: evasion attacks [4], which involve manipulating the model's behavior through data manipulation, and poisoning attacks [4], which weaken the target model by infecting the training data. This work focuses on the first type of threat in which we manipulate data to make a DL model misbehave while ensuring the preservation of correct interpretation. Unlike existing approaches, we specifically consider targeted attacks where the perturbations affect a specific class without influencing other classes. Our work is one of the pioneering studies exploring targeted attacks Fig. 1: Adversarial example that was generated using the ResNet-50 model with the CAM interpreter. The original image is a dog, but the adversarial example is misclassified as a goose. The other class images (tractor and wolf) are not impacted. The attacker achieved this by adding a single perturbation to the original image. The perturbation was carefully chosen to have a minimal impact on the interpretation of the image while still causing the model to misclassify it. against DL models using interpretability via a universal perturbation in both white-box and black-box settings. In the following, we highlight the related studies that are relevant to our work. 
Moosavi-Dezfooli _et al._[13] showed the existence of a universal and small-sized perturbation vector that can mislead the state-of-the-art deep neural networks. The work explored that there are single directions in the input space that can be used to generate universal noise. However, the universal perturbation can still be a noise-like pattern to the human eye when the interpretation is applied. The study [10] proposed that extended universal perturbation can exploit the explainability of models by carefully exploring the decision boundaries of deep models. The authors showed that their attack can be used to interpret the internal working process of DL models. Even though the attack is effective against the DL models, it is still vulnerable to interpreters. When an interpreter is adopted with a DL model, the involvement of the adversary can easily be detected (see Figure 1). The main idea of our attack is also based on generating universal perturbation [10]. Unlike those attacks, we consider misleading the interpretability along with classification while generating adversarial samples to increase the robustness and stealthiness of the attack. **Interpretability.** There have been numerous methods that have attempted to describe the inner working of deep learning models. Such methods operate by exploiting the characteristics of optimization methods (_e.g._, back-propagation), intermediate representations, input manipulation and feature perturbation, and the development of meta models. Interpretability can offer a sense of security when inspecting the attribution maps using human involvement as adversarial examples reflect inaccurate attribution maps. However, recent studies show that some interpretation models are detached from DL models, and they can be impacted by manipulations without affecting the DL model performance [16, 5, 17]. Other studies have demonstrated the validity and practicality of simultaneously attacking the deep learning models and their corresponding interpreters. These studies suggest that interpretability provides a limited sense of security [9]. A recent work [8] introduces an optimized attack to fool IDSs with limited perturbation using the edge information of the image. Similarly, our work focuses on attacking IDSs; however, we consider generating universal perturbation instead of generating perturbation for each input. The properties provided by Single**ADV** and previous studies are summarized in Table I. ## 3 Methods This section discusses several key concepts related to our work. We begin by presenting the problem formulation, which outlines the specific challenge we aim to address. We then describe the main algorithm for Single**ADV**. ### _Fundamental Concepts_ This section presents the concepts and notations, and key terms used throughout the paper. **DL Model.** As our paper mainly focuses on the classification task, let \(f(x)=y\in Y\) denote a classifier that assigns an input \(x\) to a category \(y\) from a set of categories \(Y\). 
**Interpreter.** Let \(g(x;f)=m\) denote an interpreter \(g\) that generates an attribution map (_i.e._, interpretation map) \(m\) that reflects the importance of features in the input sample \(x\) based on the output of the classifier \(f\), (_i.e._, the value of the \begin{table} \begin{tabular}{l|c c c c c c c c} \hline \hline \multirow{2}{*}{**Research Studies**} & **Adversar** & **Perturba** & **Universal** & **Targeting** & **Interpretation** & **Adaptive to** & **White-box** & **Black-box** \\ & **lal Attack** & **Iteration** & **Perturba-** & **Single** & **Based Attack** & **Interpreter** & **Attack** & **Attack** \\ \hline Moscow _et al._[13] & ✓ & ✓ & ✓ & ✓ & & & & ✓ \\ Khrulkov _et al._[14] & ✓ & ✓ & ✓ & & & & ✓ & \\ Hayes _et al._[15] & ✓ & ✓ & ✓ & & & & ✓ & \\ He _et al._[16] & ✓ & ✓ & & & ✓ & ✓ & ✓ & \\ Zhang _et al._[9] & ✓ & ✓ & & & ✓ & ✓ & ✓ & \\ Eldor _et al._[8] & ✓ & ✓ & & & ✓ & ✓ & ✓ & \\ Akhtar _et al._[10] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ **Ours** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of related works based on several aspects. Fig. 2: Single**ADV** vs. the existing attack [10] against ResNet-50 with CAM interpreter. Single**ADV** preserves benign attribution maps. Pred., Univ. Pert. and Adv. Int. stand for prediction, universal perturbation, and adversarial interpretation. \(i\)-th element in \(m\) reflects the importance of the \(i\)-th element in \(x\)). Based on the methods used to obtain interpretations of a model, interpretations can be divided into two types: \(\bullet\)**Pre-hoc Interpretability:** It is achieved by constructing self-explanatory models that integrate interpretability directly into their structures. In other words, this type of interpretability focuses on building DL models that can explain their behavior explicitly in terms of the inference process [7, 18]. The category includes decision tree, rule-based model, attention model, _etc._ \(\bullet\)**Post-hoc Interpretability:** Post-hoc Interpretability is based on the complexity-regulated DL model interpretation or adopting post-training methods [19, 20]. This type of interpretation requires another model to provide explanations for the current model. Our proposed attack specifically targets post-hoc interpretability, which involves the use of an interpreter that receives information from the target DL model (_e.g.,_ gradients) and generates an interpretation of how the target model classifies an input sample. One reason for choosing this type of interpreter is that it does not require any modifications to the architecture of the prediction model, thereby preserving its high prediction accuracy. **Threat Model.** This work considers both white-box and black-box attack scenarios. In the white-box attack setting, the adversary has complete access to the victim classifiers \(f\) and their interpreters \(g\). While white-box scenarios provide valuable insights into the strengths and weaknesses of the system, they are often impractical in real-world applications. Therefore, we also consider a black-box environment where the adversary has limited knowledge about the victim classifiers \(f^{\prime}\). ### _Target Interpretation Model_ The interpreters chosen for our experiment are representative of all state-of-the-art interpretation techniques. For example, Grad [21] shares the same formulations with DeepLift [22], SmoothGrad [23], _etc._, while CAM [6] belongs to the same family of representation-guided interpreters (_e.g.,_ GradCAM [24]). 
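For concreteness, the gradient- and representation-guided interpreter families just mentioned can be illustrated with a short PyTorch sketch; this is a minimal illustration using a torchvision ResNet-50, not the authors' implementation, and the exact formulations used in this paper are given below.

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1").eval()

def grad_map(x, y):
    """Gradient (saliency) interpreter: m_y = |d f_y(x) / d x|."""
    x = x.clone().requires_grad_(True)
    model(x)[0, y].backward()
    return x.grad.abs().squeeze(0)

def cam_map(x, y):
    """CAM interpreter: class-weighted sum of the last conv feature maps."""
    feats = {}
    handle = model.layer4.register_forward_hook(
        lambda mod, inp, out: feats.update(a=out.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    a = feats["a"][0]                # (C, H, W) activations of the last conv block
    w = model.fc.weight[y]           # (C,) fully-connected weights for class y
    return torch.einsum("c,chw->hw", w, a)

x = torch.randn(1, 3, 224, 224)      # placeholder input; use a normalized image in practice
y = int(model(x).argmax())
m_grad, m_cam = grad_map(x, y), cam_map(x, y)
```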
We also use MASK interpreter [25] from the perturbation-guided interpreters. Thus, the attack is applicable to other related interpreters. **CAM.** In **C**lass **A**ctivation **M**ap (CAM) [6], interpretation maps are generated using feature maps taken from the target DL classifier's intermediate layers. The significance of the areas of the samples is generated by reflecting the weights of the fully-connected layer on the features maps of the convolutional layer. Assume \(a_{i}(j,k)\) as the activation of a channel \(i\) in the last convolutional layer at a spatial position \((j,k)\), and \(\sum_{j,k}a_{i}(j,k)\) denote the outcome of global average pooling. So, the softmax function receives: \[\psi_{y}(x)=\sum_{j,k}\sum_{i}w_{i,y}\;a_{i}(j,k),\] and the attribution map \(m_{y}\) is as follows: \[m_{y}(j,k)=\sum_{i}w_{i,y}\;a_{i}(j,k),\] where \(w_{i,y}\) is the weight associated with the output class \(y\) for \(i\)-th channel. We construct interpretation maps by collecting and combining feature vectors from \(f\) up to its last CNN layer. **Grad.** To generate the importance of features of a given sample, the gradient of a DL classifier's outcome in terms of the given sample is computed by the interpreter. To be specific, based on the DL model \(f\) and its prediction \(y\) to a certain sample \(x\), the interpreter generates an interpretation map or also called attribution map \(m\) as \(m_{y}=\left|\frac{\partial f_{y}(x)}{\partial x}\right|\). Since the ReLU activation function is used in the target DL models, and all CNN-based models, the computed result of the Hessian matrix becomes all-zero. To find an optimal adversarial sample \(\hat{x}\), the gradient of the ReLU \(r(z)\) function is approximated as: \[h(z)\triangleq\left\{\begin{array}{ll}(z+\sqrt{z^{2}+\tau})^{\prime}=1+ \frac{z}{\sqrt{z^{2}+\tau}}&\text{for}\quad z<0\\ (\sqrt{z^{2}+\tau})^{\prime}=\frac{z}{\sqrt{z^{2}+\tau}}&\text{for}\quad z\geq 0 \end{array}\right.\] where \(h(z)\) approximates the gradient of ReLU \(r(z)\) with a small constant parameter \(\tau\) (_e.g.,_\(\tau=1e-4\)) [9]. **MASK.** The interpreter [25] generates interpretation maps by detecting changes in the prediction of a DL model while adding a minimal amount of noise to the sample. Specifically, the interpreter creates a mask \(m\) that is a binary matrix of the same size as the sample \(x\). In the matrix, 0 represents the area of the sample \(x\) where the feature is kept without noise. The value 1 in the matrix means that the area is replaced with Gaussian noise. The main objective of the interpreter is to find the smallest mask \(mask\) that makes a DL model's performance decrease greatly: \[\min_{\text{\emph{mask}}}:f_{y}(\phi(x;\text{\emph{mask}}))+\lambda\;\|1- \text{\emph{mask}}\|_{1}\quad s.t.\quad 0\leq\text{\emph{mask}}\leq 1, \tag{1}\] where \(\phi(x;\text{\emph{mask}})\) is the operator that generates perturbation to decrease the probability of the current prediction category \(y\) and the second term \(\lambda\;\|1-\text{\emph{mask}}\|\) helps the mask to be scattered. Since the MASK interpreter is an optimization function, applying the attack (as another optimization) is directly infeasible. We reformulate the attack as a bi-level optimization task (similar to the framework in [26, 9, 27]). ### _Problem Formulation_ Let \(S\) be a distribution over the dataset and \(\mathbf{s}\in\mathbb{R}^{d}\) indicates a sample from a distribution \(S\). 
For trained model \(f(s)\to y\), where \(y\) is the correct class, the main objective of adversarial attacks is to generate a perturbation \(p\in\mathbb{R}^{d}\) that satisfies the following constraint: \[f(\mathbf{s}+p)\to y_{t},where\;y\neq y_{t},\;\|p\|_{\ell_{p}}\leq\eta \tag{2}\] In Equation 2, confining \(y_{t}\) to a chosen category and \(\ell_{p}\)-norm vector to a pre-set value of \(\eta\) produces a targeted attack. Generating universal perturbation in adversarial attacks expands the domain of \(p\), which we denote as \(D(p)\). Given that \(|D(p)|\geq 1\), where \(\mid.\mid\) denotes the cardinality of a set, we maximize the objective of Equation 2 as follows: \[\begin{split}&\max\;\mathbf{P}_{D(p)}(f(\mathbf{s}+p)\to y_{t}) \geq\gamma,\\ & where\;\|p\|_{\ell_{p}}\leq\eta,and\;|D(\mathbf{p})|\geq 1\end{split} \tag{3}\] In Equation 3, **P** refers to the probability distribution of generating \(p\), and \(\gamma\) is the attack success threshold with a fixed value in the range of \([0,1]\). In Single**ADV** case, we consider an interpretation model to generate universal perturbation that keeps the adversarial interpretation result similar to the interpretation of the original samples. Hence, we have the following constraints: * Ensuring model misclassification to a pre-defined category: \(f(\mathbf{s}+p)\to y_{t}\), \(y\neq y_{t}\), where \(y_{t}\) is the target category. * Restricting the sample domain to the selected category: \(D(p)=\{\mathbf{s}\mid\mathbf{s}\sim\mathcal{S}_{\textit{subset}}\}\), where \(\mathcal{S}_{\textit{subset}}\) the distribution of the targeted category. * Restricting the effect of the perturbation on other categories' domain: \(\mathbf{P}_{D(p)}(f(\mathbf{s}+p)\to y_{t})<\gamma\), where \(\hat{D}(p)\) denotes the domain of samples from non-selected categories. * Triggering an interpreter \(g\) to generate target attribution maps: \(g(\mathbf{s}+p;f)\)\(\xrightarrow{\textit{similar}}m_{t}\), such that \(g(\mathbf{s};f)\to m_{t}\). The following subsection describes the attack algorithm and the considerations to meet the outlined constraints. ### _Computing the Perturbation_ Algorithm 1 describes the perturbation generation and follows the constraints mentioned in Subsection 3.3. As the objective of the algorithm is straightforward, a universal perturbation is calculated by steps taken to minimize the cost of the attack such that the used classifier and the interpreter increase the confidence of adversarial samples for the selected category while minimizing the difference between benign and adversarial attribution maps. The desired cost surface for high confidence and low interpretation loss is based on stochastic computation and is ruled by the first and second-moment estimations. The first and the second moment check if the computed perturbation prevents other categories from crossing their decision boundaries during the generation process. The computed perturbation should not interfere with the prediction of non-source classes. \(\ell_{\infty}\) norm is applied to bounce the perturbation norm. We explain each line of the algorithm in detail. The _typical_ attack is based on the white-box scenario as it requires the target model's parameters. Samples of \(X_{\textit{subset}}\) and \(\hat{X}\) are accumulated from \(D(p)\) and \(\hat{D}(p)\), _i.e._, samples from the selected categories and other categories, respectively. 
Like other parameters, \(\eta\) for \(\ell_{p}\)-norm of the perturbation, target category \(y_{t}\), target attribution map \(m_{t}\), batch size \(b\) for the optimization, fooling ratio \(\gamma\) (confidence level as a target category), pre-set first, and second-moment hyper-parameters are considered (_line 1_). In the algorithm, firstly, the selected and the other categories are sampled randomly into sets \(X_{x}\) and \(X_{o}\), and the cardinality of each set equals half of the batch size \(b\) (_line 3_). On _line 4_, all sets are perturbed by subtracting the currently estimated perturbation \(p_{t}\) from each of them (the operation is displayed as \(\ominus\)). Afterward, the perturbed sets are clipped to a valid range by the \(C\) function. On _line 6_, we calculate the ratio between the expected norms (it is referred to as \(\mathbb{E}\) in the algorithm) of the selected category gradients and other categories' gradients by calculating the expected gradient of an input using the selected and non-selected category samples, and the computed attribution maps of the interpreter. In the algorithm, \(L_{\textit{prd}}\) is the classification loss function that shows the difference between the model prediction and the target category. \(L_{\textit{init}}\) is the interpretation loss for calculating the difference between the current and target attribution maps: \(L_{\textit{init}}(g(x;f),m_{t}))=\|(g(x;f)-m_{t}\|_{2}^{2}\). \(\lambda\) balances the two factors. The value of the hyper-parameter depends on the interpreter. In our experiments, we explored different values of \(\lambda\) to account for different interpreters. ``` Data: Target model \(f\), interpreter \(g\), selected category samples \(X_{\textit{subset}}\), non-selected categories samples \(\hat{X}\)\(s.t.\)\(\hat{X}_{i}\) or \(X_{i}\in\mathbb{R}^{d}\), target category \(y_{t}\), target attribution map \(m_{t}\), perturbation norm \(\eta\), balance factor \(\lambda\), batch size \(b\), and fooling ratio \(\gamma\), \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). Result: Perturbation \(p\in\mathbb{R}^{d}\) Initialization: Setting \(p_{0},v_{0}\in\mathbb{R}^{d},\omega_{0}\in\mathbb{R}^{d}\) and \(i=0\); while\(\textit{fooling ratio}<\gamma\)do \(X_{x}\)\({}^{\textit{pred}}\)\(X_{x}\)\(\sim\)\(X_{o}\)\(\oplus\)\(\hat{X}_{x}\)\(:\)\(|X_{x}|=|X_{o}|=\frac{b}{\hat{\tau}}\); \(S_{x}\)\(\leftarrow\)\(\mathbb{C}(X_{x}\ominus p_{i})\), \(S_{o}\)\(\leftarrow\)\(\mathbb{C}(X_{o}\ominus p_{i})\); \(i\)\(\leftarrow\)\(i+1\); \(\delta\)\(\leftarrow\)\(\frac{\mathbb{E}_{x_{i}\in S_{x}}\big{[}\mathbb{V}_{x_{i}}(L_{\textit{prd}}(x_{i},y_{t}))+ \lambda L_{\textit{init}}(g(x_{i},f),m_{t}))\|_{2}\big{]}}{\mathbb{E}_{x_{i} \in S_{x}}\big{[}\mathbb{V}_{x_{i}}(L_{\textit{prd}}((x_{i},y))+\lambda L_{ \textit{init}}(g(x_{i},f),m_{t}))\|_{2}\big{]}}\); \(\xi_{i}\)\(\leftarrow\)\(\frac{1}{2}\Big{(}\mathbb{E}_{x_{i}\in S_{x}}\big{[}\nabla_{x_{i}}(L_{ \textit{prd}}(f(x_{i},y_{t}))+\) \(\lambda L_{\textit{init}}(g(x_{i};f),m_{t}))\big{]}+\)\(\delta\mathbb{E}_{x_{i}\in S_{o}}\big{[}\nabla_{x_{i}}(L_{\textit{prd}}(f(x_{i},y))+\) \(\lambda L_{\textit{init}}(g(x_{i};f),m_{t}))\big{]}\); \(\upsilon_{i}\)\(\leftarrow\)\(\beta_{1}\upsilon_{i-1}+(1-\beta_{1})\)\(\xi_{i}\); \(\omega_{i}\)\(\leftarrow\)\(\beta_{2}\omega_{i-1}+(1-\beta_{2})(\xi_{i}\odot\xi_{i})\); \(\bar{p}\)\(\leftarrow\)\(\frac{\sqrt{1-\beta_{1}^{2}}}{1-\beta_{1}^{2}}\). 
\(\textit{dag}\big{(}diag(\sqrt{\omega_{i}})^{-1}\upsilon_{i}\big{)}\); \(p_{i}\)\(\leftarrow\)\(p_{i-1}+\frac{\theta}{\|p\|_{\infty}}\); \(p_{i}\)\(\leftarrow\)\(sign(p_{i})\odot\min(|p_{i}|,\eta)\); end while ``` **Algorithm 1**Single**ADV** attack's main algorithm With those functions, we calculate the \(\xi_{i}\) as the average of the expected gradients for both source and non-source classes (_line 7_), which keeps the direction of the vector to obtain the targeted fooling for the source classes while preventing the fooling of non-source classes. At the same time, it considers the interpretation loss in choosing the optimal vector direction. Specifically, \(L_{\textit{init}}\) evaluates the difference between the current and the desired interpretation maps based on the chosen vector direction. This, in turn, helps achieve the desired result formulated in Subsection 3.3. Next, on _lines 8 and 9_, the first and the second raw moment (_i.e._, un-centered variance) of the computed gradient (referred to as \(\upsilon\) and \(\omega\) respectively) are calculated with moving averages exponentially (\(\odot\) represents the Hadamard product) for the effective stochastic optimization on the cost surface. On _line 10_, we conduct bias-corrected estimation since the second moment (_i.e._, moving average) is known to be heavily biased in the early stages of the optimization. The derivation of the expression in _line 10_ is explained in [10]. The perturbation is computed by the ratio between the moment estimates (\(\frac{\upsilon_{i}}{\sqrt{\omega_{i}}}\), where \(\omega_{i}\) represents the second mo ment), and as the notations are vectors, we convert vectors into diagonal matrices or diagonal matrices into vectors via \(diag(.)\) operation. Finally, we update the perturbation by restricting the \(\bar{p}\) with \(\ell_{\infty}\)-norm to keep the desired direction (_line 11_). The norm of the computed perturbation is restricted by \(\ell_{\infty}\)-ball projection at the end of each iteration to minimize the perturbation perceptibility by performing the Hadamard (\(\odot\)) product between the element-wise sign (\(sign(.)\)) and the minimum values for perturbation (_line 12_). The objective of Single**ADV** is not only to deceive the classifier by making it misclassify the designated source class samples while correctly predicting non-source classes but also to preserve the original interpretation for all samples of the source class. From an adversarial standpoint, this approach minimizes suspicion of the attack by manipulating a single class rather than making the classifier predict the same label for all images and providing different attribution maps. ## 4 Single**ADV** in White-box Settings This section presents the experimental results obtained in white-box settings. We evaluate the effectiveness of the generated perturbation from two perspectives: adversarial success and interpretation validity. ### _Experimental Settings_ **Dataset.** We use the ImageNet dataset [1] for our experiments. The training set consists of 1,300 samples per category from the ImageNet dataset. We use the training set within the attack framework to calculate the perturbation. The testing portion of the dataset, which includes 50 samples per category (both source and non-source categories), is used to evaluate the generated perturbation. To ensure accurate gradient directions, we randomly select samples that are correctly classified with \(\geq\)60% confidence from both the targeted and non-targeted categories. 
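A minimal sketch of this selection step is shown below; the helper and variable names are ours rather than the authors', and a PyTorch classifier over a labeled candidate pool is assumed.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_confident(model, images, labels, threshold=0.60):
    """Keep only samples the model classifies correctly with >= 60% confidence."""
    probs = F.softmax(model(images), dim=1)     # (N, num_classes)
    conf, pred = probs.max(dim=1)
    keep = (pred == labels) & (conf >= threshold)
    return images[keep], labels[keep]

# Applied separately to the source (targeted) and non-source candidate pools:
# X_subset, y_subset = select_confident(model, source_images, source_labels)
# X_hat,    y_hat    = select_confident(model, other_images,  other_labels)
```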
In this paper, we designate certain object classes as the targeted category while considering other classes as the non-targeted category. For instance, categories such as **panda**, **dog**, and **cup** are used as targeted categories (see Table II). **DL Models.** In the experiments, four state-of-the-art DL models are used for our targeted attack, which are **ResNet-50**[3] (74.90% top-1 accuracy and 92.10% top-5 accuracy on ImageNet dataset), **VGG-16**[2] (71.30% top-1 accuracy and 90.10% top-5 accuracy on ImageNet dataset), **DenseNet-169**[28] (76.20% top-1 accuracy and 93.15% top-5 accuracy on ImageNet dataset), and **Inception-V3**[29] (77.90 % top-1 accuracy and 93.70% top-5 accuracy on ImageNet dataset). The models are chosen in terms of performance and network architecture to help measure the effectiveness of our attack. **Interpretation Models. CAM**[6], **Grad**[21] and **MASK**[25] interpreters are utilized as the representative of the interpretation models. Their original open-source implementations are used for our experiment. **Attack Evaluation.** The effectiveness of the attack is evaluated by conducting several experiments and calculating some evaluation metrics in terms of attack success rate in fooling the classifier while maintaining a convincing interpretation. The evaluation aims to find answers to the following questions: _Is our technique effective in attacking DL models by targeting a single category?_ _Is our technique effectively misleading the interpreters by generating a similar interpretation to the benign sample?_ **Evaluation Metrics.** To assess the effectiveness of the proposed attack against the target IDLSes, we utilize various metrics. These metrics are employed to evaluate the attack's impact on both the classifiers and interpreters. The following metrics are used in our evaluation: **Against Classifiers:** * **Fooling Ratio:** This metric [13] measures the proportion of images that undergo the universal perturbation and have their labels changed. It indicates the attack's success in causing misclassifications by the target model, specifically into the target category. This metric provides a quantitative measure of the attack success against DL models. * **Misclassification Confidence:** This metric [9] measures the probability (_i.e., confidence score_) of an adversarial sample assigned by the target model to the target category (_i.e.,_ we calculate the average confidence scores of adversarial samples successfully misclassified). * **Classification Confidence:** This metric [9] evaluates the impact of universal perturbations on non-target categories. It measures the confidence scores of these categories when the perturbation is applied, revealing the extent to which the model's performance is affected while maintaining its effectiveness against the targeted category. * **Leakage Rate:** This metric [13] measures the impact of universal perturbations on non-source categories. It is calculated as the ratio of misclassified non-source category images to the total number of non-source category images used for testing when the universal perturbation is applied. A lower leakage rate indicates that the universal perturbations are more specific to the targeted category, while a higher leakage rate suggests a more general impact. **Against Interpreters:** * **Qualitative Comparison:** This method [9] is used to verify whether the results of interpretation are perceptually similar. 
Every interpretation map is manually checked to see if it is similar to its benign interpretation map or if the interpretation is reliable. * **IoU Test (Intersection-over-Union)**: This metric [30] is used to quantify the similarity between two arbitrary shapes. It encodes the shape properties of interpretation maps, _e.g., height, width, and location_ into region properties and calculates the intersection areas between the predictions and the ground truths. It is widely employed to evaluate object detection, segmentation, and tracking: \[\text{IoU}(m,m_{\circ})=|O(m)\bigcap O(m_{\circ})|\ /\ |O(m)\bigcup O(m_{\circ})|.\] In the formula, \(m\) represents the attribution map of samples when the universal perturbation is added and \(m_{\circ}\) is the attribution map of samples without any perturbation, and \(O(.)\) represents a binarization function. In our case, we compare an adversarial interpretation map with a benign interpretation map based on (shapes, positions, and areas), to which the metric can be applied. ### _Attack Effectiveness Against DL Models_ We first assess the effectiveness of the attack in terms of deceiving the classifiers. The results are summarized using the _fooling ratio, misclassification confidence_, and _leakage rate_ in Table II. The reported results (_i.e.,_ fooling ratio or attack success rate, misclassification confidence, and leakage rate) are based on the test samples that are not seen by the selected model and the attack algorithm. The success of the attack is demonstrated by fooling ResNet-50, VGG-16, DenseNet-169, and Inception-V3 trained on the ImageNet dataset [1]. Observing the results for four classifiers (_i.e.,_ ResNet-50, VGG-16, DenseNet-169, and Inception-V3), the attack generates a universal perturbation for different architectures with high fooling rates. More specifically, VGG-16 and ResNet-50 were deceived with the universal perturbation of Single**ADV** with a success rate of more than 70% for all target categories regardless of the interpreter. This means that the addition of the universal perturbation to any raw image in our test samples can deceive the target DL models more than seven times out of ten. The attack also achieved significantly better results with a higher than 60% fooling ratio in all interpreters when Densenet-169 and Inception-V3 were used as the target DL models. Among the target models, Inception-V3 was attacked with a relatively lower fooling ratio while VGG-16 achieved a higher fooling ratio in comparison with the other models. Based on the _miscal classification confidence_, the attack fooled target DL models with confidence scores higher than 70% regardless of the interpreters employed. Upper results can be seen when the attack is implemented against ResNet-50 and DenseNet-169, while the Inception-V3 model provides lower results across all interpreters. The main reason for the lower misclassification confidence scores of Inception-V3 could be due to the global averaging before the output layer, which reduces the computational cost and diminishes the effect of the perturbation on the output. According to the analysis of the leakage rate, the proposed adversarial algorithm appears to perform well. The computed leakage rate indicates that the algorithm's universal perturbation has a limited impact on non-source classes, with an average of 33% leakage rate across all interpreters while the existing attack [10] has an average leakage rate of 41.5%. 
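(For reference, the three classifier-side metrics reported in this section can be computed as in the following sketch; the variable names and the sign with which the perturbation is applied are assumptions for illustration.)

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def classifier_metrics(model, src_images, nonsrc_images, nonsrc_labels, delta, y_t):
    """Fooling ratio, misclassification confidence and leakage rate
    for a universal perturbation `delta` and target class `y_t`."""
    # delta is applied here by addition; use the same convention as during optimization.
    probs_src = F.softmax(model(src_images + delta), dim=1)
    conf, pred = probs_src.max(dim=1)
    fooled = pred == y_t
    fooling_ratio = fooled.float().mean().item()
    miscls_confidence = conf[fooled].mean().item() if fooled.any() else 0.0

    pred_non = model(nonsrc_images + delta).argmax(dim=1)
    leakage_rate = (pred_non != nonsrc_labels).float().mean().item()
    return fooling_ratio, miscls_confidence, leakage_rate
```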
This suggests that the perturbation is more specific to the targeted source class and has minimal impact on other non-source classes, which is a desirable characteristic. A low leakage rate implies that the algorithm's universal perturbation has a more targeted effect and is less likely to cause errors in the classification of non-source classes. The results of the classification confidence metric are presented in Figure 3, which depicts the impact of a universal perturbation on the confidence scores assigned by the DL models for non-source categories when various interpreters are used. To provide a basis for comparison, the scores obtained without any universal perturbation are also included. The figure shows that the universal perturbations generated using the CAM and Grad interpreters have a smaller impact on non-source categories than that of the MASK interpreter. Nevertheless, the results indicate that the perturbation has a minimal impact on the scores of non-target categories. Observing the performance of all classifiers under attack, Single**ADV** generates a universal perturbation for different architectures with high fooling rates. These results suggest that Single**ADV** has successfully fooled the target DL models considering a single-class targeted attack scenario. ### _Attack Effectiveness Against the Interpreters_ In this section, we evaluate the effectiveness of the attack to generate adversarial samples that produce attribution maps similar to the benign samples using the targeted interpreter. Firstly, we use qualitative comparison to verify if the produced attribution maps of adversarial samples are perceptually similar to their benign samples. We checked all the adversarial attribution maps and found that observing all the cases for all targeted categories, Single**ADV** attack generates universal perturbations that produce attribution maps on adversarial domains similar to or indistinguishable \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Interpreter**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Target 1**} & \multicolumn{2}{c}{**Target 2**} & \multicolumn{2}{c}{**Target 3**} & \multicolumn{2}{c}{**Average**} \\ \cline{3-14} & & & **Fooling** & **Miscalification** & **Leakage** & **Fooling** & **Miscalification** & **Leakage** & **Fooling** & **Miscalification** & **Leakage** & **Fooling** & **Miscalification** & **Leakage** \\ & & **Ratio** & **Confidence** & **Ratio** & **Confidence** & **Ratio** & **Confidence** & **Ratio** & **Confidence** & **Ratio** & **Confidence** & **Ratio** & **Confidence** & **Ratio** \\ \hline \multirow{3}{*}{**CAM**} & VGG-16 & 0.85 & 0.82 & 0.30 & 0.87 & 0.85 & 0.35 & 0.71 & 0.80 & 0.32 & 0.81 \(\pm\) 0.07 & 0.82 & 0.32 \\ & ResNet-50 & 0.86 & 0.86 & 0.36 & 0.79 & 0.82 & 0.34 & 0.75 & 0.79 & 0.35 & 0.80 \(\pm\) 0.05 & 0.82 & 0.35 \\ & DenseNet-169 & 0.78 & 0.81 & 0.34 & 0.75 & 0.80 & 0.33 & 0.69 & 0.77 & 0.36 & 0.74 \(\pm\) 0.05 & 0.79 & 0.34 \\ & Inception-V3 & 0.68 & 0.79 & 0.30 & 0.62 & 0.78 & 0.31 & 0.60 & 0.75 & 0.29 & 0.63 \(\pm\) 0.04 & 0.77 & 0.30 \\ \hline \multirow{3}{*}{**Grad**} & VGG-16 & 0.87 & 0.71 & 0.74 & 0.83 & 0.75 & 0.36 & 0.75 & 0.77 & 0.31 & 0.82 \(\pm\) 0.03 & 0.74 & 0.34 \\ & ResNet-50 & 0.83 & 0.88 & 0.38 & 0.83 & 0.85 & 0.37 & 0.72 & 0.78 & 0.35 & 0.74 \(\pm\) 0.02 & 0.83 & 0.37 \\ & DenseNet-169 & 0.80 & 0.86 & 0.35 & 0.73 & 0.82 & 0.33 & 0.70 & 0.76 & 0.35 & 0.74 \(\pm\) 0.02 & 0.81 & 0.34 \\ & Inception-V3 & 0.72 & 0.80 & 0.31 & 0.68 & 0.79 & 0.33 & 0.65 & 
0.72 & 0.30 & 0.68 \(\pm\) 0.01 & 0.77 & 0.31 \\ \hline \multirow{3}{*}{**MASK**} & VGG-16 & 0.81 & 0.79 & 0.36 & 0.76 & 0.77 & 0.34 & 0.72 & 0.70 & 0.33 & 0.58 \(\pm\) 0.04 & 0.75 & 0.34 \\ & ResNet-50 & 0.80 & 0.82 & 0.37 & 0.73 & 0.84 & 0.36 & 0.70 & 0.72 & 0.34 & 0.74 \(\pm\) 0.04 & 0.79 & 0.36 \\ \cline{1-1} & DenseNet-169 & 0.77 & 0.81 & 0.35 & 0.71 & 0.83 & 0.33 & 0.68 & 0.75 & 0.34 & 0.72 \(\pm\) 0.04 & 0.80 & 0.34 \\ \cline{1-1} & Inception-V3 & 0.69 & 0.76 & 0.30 & 0.64 & 0.79 & 0.34 & 0.63 & 0.70 & 0.31 & 0.68 \(\pm\) 0.03 & 0.75 & 0.32 \\ \hline \hline \end{tabular} \end{table} TABLE II: Fooling ratio, misclassification confidence, and leakage rate against several models using ImageNet. The results show source category \(\xrightarrow{\text{transformed}}\) target category for Target 1: Panda \(\rightarrow\) Cat, Target 2: Dog \(\rightarrow\) Goose, Target 3: Cup \(\rightarrow\) Wolf. Fig. 3: Classification confidence of adversarial samples (non-source categories) based on VGG-16, ResNet-50, DenseNet-169, and Inception-V3 with CAM, Grad, and MASK interpreters. The universal perturbation is for Dog \(\rightarrow\) Goose. from the attribution maps of the corresponding benign domain. Figure 4 displays a set of samples alongside their attribution maps based on the **CAM**, **Grad**, and **MASK** interpreters. The samples are selected randomly from the output set. In the figure, the first three columns display the benign samples, their attribution maps, and their prediction categories. The last three columns present universal perturbations generated based on a DL model and an adopted interpreter, adversarial attribution maps produced by adding the universal perturbations to the benign samples, and target prediction categories. As shown in the figure, the results support high similarity in terms of interpretations. By observing the produced attribution maps for adversarial samples in both targeted and non-targeted categories, Single**ADV** produces perturbations that only affect the target category in terms of prediction while maintaining accurate interpretations across all categories. Additionally, the attribution maps of benign samples and adversarial samples (when the universal perturbation is added to the benign images) are compared using the IoU score metric. The IoU score is a quantitative measure that compares model outputs to ground-truth data. In our case, model outputs are generated attribution maps of adversarial samples, while ground-truth data contains benign attribution maps. To estimate the IoU score, the attribution Fig. 4: Attribution maps of benign and adversarial samples based on VGG-16, ResNet-50 with CAM, Grad and MASK interpreters. In this sample, our target category is \(\text{Dog}\rightarrow\textbf{Goose}\). (Ben.Int. stands for Benign Interpretation, Adv.Int. stands for Adversarial Interpretation and Univ.Pert. stands for Universal Perturbation). Fig. 5: IoU scores for attribution maps of adversarial inputs using different thresholds. The results are based on CAM, Grad, and MASK interpreters for VGG-16, ResNet-50, DenseNet-169, and VGG-16. maps that contain floating numbers in the range of [0, 1] are binarized, _i.e._, their fundamental values are assigned one or zero based on a threshold value. All values higher than a threshold are assigned to 1; otherwise, the value is set to 0. 
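A compact NumPy sketch of this binarization and of the IoU computation is given below; the convention of returning 1.0 when both binarized maps are empty is our choice.

```python
import numpy as np

def iou(m_adv, m_benign, t):
    """IoU between two attribution maps binarized at threshold t."""
    a, b = (m_adv > t), (m_benign > t)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def mean_iou(m_adv, m_benign, thresholds=np.linspace(0.1, 0.9, 9)):
    """Average IoU over a set of binarization thresholds."""
    return float(np.mean([iou(m_adv, m_benign, t) for t in thresholds]))
```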
We applied different threshold values (_0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9_) to measure the IoU scores with attribution maps and calculated the average values of those IoU scores. Region of interest (RoI) is generally considered positive if it has an IoU score of 0.5 or above compared with the ground-truth sample. Thus, we can assume that adversarial interpretation maps with an IoU score equal to or above 0.5 based on their benign interpretation maps are credible. However, we also check if interpretation maps with low IoU scores are meaningful regarding their images. The IoU scores of attribution maps from different DL models are displayed in Figure 5. Using CAM and Grad interpreters, the achieved IoU scores of all models (_i.e._, VGG-16, ResNet-50, DenseNet-169, and Inception-V3) are above 60% across all thresholds when the universal perturbation is added to the samples on the test set. Using the MASK interpreter, the attack achieved significantly lower IoU scores across all DL models. The main reason for the lower IoU scores in the MASK is that benign and adversarial attribution maps do not share high similarities regarding the shape and size of highlighted areas. However, based on our inspection of many generated adversarial examples, we observed that the adversarial attribution maps are similar to the benign ones with respect to their highlighted region and position in sample images. Section 7 discusses other factors for the lower IoU scores in detail. The results show that the adversarial examples generate highly similar or meaningful attribution maps to the corresponding benign samples even when restricting the attribution maps to higher values. ## 5 SingleADV in Black-box Settings In Section 4, we conducted experiments on the proposed attack in a white-box environment. Since the white-box attacks have limited practicality in real-life cases, we also employ our proposed method in the black-box settings. In this section, we demonstrate the general applicability of SingleADV in the black-box settings. The main purpose of this section is to demonstrate the attack usage in a real-world scenario. ### _Methodology_ As we cannot directly apply SingleADV to attack a black-box model, we employ one of the techniques against the black-box models [27, 31]. We use a method called teacher-student learning, considered adequate for attacking DL models with black-box settings [32]. In teacher-student learning, there is a black-box DL model (referred to as a teacher model) is used to transfer significant knowledge to a DL model (referred to as a student model) [32]. Specifically, we train a student model based on the output of the teacher model. A well-trained student model can imitate the behavior of the teacher model. Thus, it can be guaranteed that adversarial examples generated against the student model can be directly transferred to the black-box model (teacher model) [31]. Figure 6 displays the overall pipeline of SingleADV in a black-box environment. In the beginning, we prepare a dataset that is unlabeled and relevant to the target black-box deep learning model (teacher model) task. We utilize a technique known as deep learning inference to feed the unlabeled dataset into the teacher model to label the dataset so that a student model can be trained to mimic a teacher model. Once the inference process is finished, we select an architecture for the student model and train it with the labeled dataset. 
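A condensed sketch of this labeling-and-training loop is given below; `query_teacher` stands in for the app's black-box inference API, and the optimizer settings are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def distill_student(unlabeled_loader, query_teacher, epochs=10, lr=1e-3):
    """Train a student model on labels produced by a black-box teacher."""
    student = vgg16(num_classes=1000)
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x in unlabeled_loader:
            with torch.no_grad():
                y = query_teacher(x)        # black-box inference: returns predicted labels
            opt.zero_grad()
            loss = loss_fn(student(x), y)
            loss.backward()
            opt.step()
    return student
```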
As we mentioned, the architecture of the student model should be complex enough to copy the behavior of the teacher model [31]. After training the student model, we adopt it for SingleADV to generate a universal perturbation based on the selected source and target categories. The SingleADV generates a single perturbation that triggers the student model to misclassify a specific category. Later, we transfer the induced perturbation to attack the teacher model and test whether the universal perturbation is valid. ### _Experimental Settings_ **Dataset.** In the experiment, we randomly select a deep learning app from Google Play to use its deep learning model as a target black-box model. We use an app called Bei Ke that is used to identify scenes. By tracking the APIs of the deep learning frameworks in the app using tools called Soot [33] and FlowDroid [34], we observe the working and invoking processes of the DL framework. The tools help us invoke the DL model of the app to send a sample and receive its response. We utilize the ImageNet dataset as an unlabeled dataset, which we find most relevant to the app task. We query the teacher model (_i.e._, the DL model within the app) to label the ImageNet dataset, which we find to be the most relevant to the app task. This process involves only the training dataset we used in previous experiments, _i.e._, 1,300 training images per class. This newly labeled dataset is utilized for training the student model. After the training is complete, 30 random samples from the test set are selected to test our approach against one universal perturbation of the target category. **Student Model.** In the teacher-student learning approach, a student model with a complex architecture is the correct option to imitate a teacher model well. In other words, the more complex a student model is, the higher the attack success rate is [31]. Considering this, we adopt the VGG model architecture (VGG-11 and VGG-16) as a student model. Fig. 6: The pipeline of SingleADV attack in a black-box environment, where the unlabeled dataset is labeled using the teacher model (a black-box model) with the help of a deep learning inference technique. SingleADV uses the student model to learn universal perturbation. **Interpreter.** We employ the CAM as the attack showed relatively significant results with the interpreter as shown in Figure 5. However, the target black-box model (teacher model) does not provide interpretability, and we cannot add the CAM as we do not have access. Therefore, we employ only the fooling ratio metric for the teacher model while calculating the fooling ratio and the similarity of attribution maps for the student model. To show the attack's effectiveness in generating similar interpretations with the adversarial samples as the ones for the benign samples, we perform a controlled experiment where we do not have access to the teacher model but generate interpretation maps from the student model. We adopt ResNet-50 as a teacher model and VGG-16 as a student model. For the reproducibility of our experiments, our code, data, and models are available at (_[https://github.com/InfoLab-SKKU/SingleClassADV_](https://github.com/InfoLab-SKKU/SingleClassADV_)). ### _Experimental Results_ Figure 7 depicts the results of the experiment. As we mentioned earlier, the app does not provide interpretability, and we tested the app in terms of the fooling ratio. In the figure, the fooling ratio and IoU test results are provided. 
The IoU test results are calculated from the attribution maps of the student models. The attack achieved over 60% and 70% fooling ratios, while the perturbation showed over 60% and 70% of similarity in producing adversarial interpretation maps, respectively. The same perturbation reaches about 30% and 50% in the fooling ratio when used against the teacher model. The experimental result, displayed in Figure 7, contains the actual black-box model; therefore, we cannot check the interpretation of the model. Considering the case, we performed a controlled experiment to generate interpretation maps on the teacher model to check if they share similarities with benign interpretation maps. Figure 8 displays the results of the experiment. The results show that the attack achieved about a 60% fooling ratio and over 60% IoU score which is more or less the same as the student model. According to the result of the controlled experiment, we can assume that the teacher model used in the first experiment (see Figure 7) shares a high similarity in interpretation when an interpreter is coupled. The results of both experiments show the effectiveness of SingleADV against attack black-box models. ## 6 Countermeasures In the following and based on our observations, we discuss several potential countermeasures against SingleADV. **Preprocessing methods.** Preprocessing approaches involve specific modifications to eliminate adversarial noise from samples before passing them to the DL model. The objective of preprocessing defenses is to enhance the robustness of DL models against adversarial samples, ensuring that the models can classify adversarial samples correctly with a little reduction in performance on benign images. It is important to note that relying on a single preprocessing defense technique may not be sufficient, as the attack can be adapted to bypass that particular defense. Therefore, employing multiple preprocessing techniques can help in removing the perturbation added to the samples. By focusing on different features of the samples, these defense techniques make it more challenging to generate robust adversarial samples that can bypass these defenses. In order to evaluate the effectiveness of defense techniques, including our proposed defense method, we apply them to adversarial samples and visualize their impact in Figure 9. These examples provide insights into how the defense models affect the adversarial samples and highlight the effectiveness of the defense techniques in mitigating the impact of SingleADV. Table III provides the fooling ratio of SingleADV when a pair of defense methods is applied to preprocess the adversarial samples generated by SingleADV. In the experiment, three defense techniques were included, namely bit depth reduction [35], median smoothing [36], and random resizing and padding (R&P) [37]. Each defense technique was applied with its default hyperparameters. The results demonstrate that employing two defense techniques together decreases the effectiveness of the attack. It is worth noting that the performance of the defense methods can be further improved by adjusting the settings and hyperparameters of the defense techniques. We surmise that optimizing the parameters makes it possible to enhance the performance of the defense methods in mitigating the impact of SingleADV. **Ensemble Interpreter.** The term "ensemble interpreter" refers to using a set of interpreters to provide a comprehensive view of a DL model. 
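(Returning briefly to the preprocessing defenses above, two of them can be chained as in the following sketch; the bit depth and kernel size are assumed defaults rather than the exact settings of [35, 36].)

```python
import numpy as np
from scipy.ndimage import median_filter

def bit_depth_reduction(x, bits=5):
    """Quantize pixel values in [0, 1] to the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smoothing(x, size=2):
    """Median filter over the spatial dimensions of an (H, W, C) image."""
    return median_filter(x, size=(size, size, 1))

def preprocess_defense(x):
    """Apply both defenses before handing the image to the classifier."""
    return median_smoothing(bit_depth_reduction(x))
```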
By employing multiple interpreters, each with its unique characteristics, a more comprehensive understanding of the DL model can be achieved [8]. Therefore adopting several interpreters can help detect if a sample is benign or adversarial. Future research direction can be in examining whether SingleADV can be adapted to counter IDLSes with ensemble interpreters. For example, the attack can be optimized to minimize the interpretation loss across multiple interpreters used by an IDLS. However, it is worth mentioning that the generation process of adversarial samples in SingleADV can be computationally expensive against IDLSes with ensemble interpreters. Therefore, the number of interpreters employed in an IDLS can play a significant role in defending against SingleADV. **Interpretation-based Adversarial Training.** The objective of this task is to develop robust classifiers that can effectively handle universal perturbations. To this end, we adopt a similar approach to the one in [38] and leverage concepts from robust optimization. Our strategy involves formulating the problem of universal adversarial training as a min-max optimization problem to construct highly resilient models to universal perturbations. By treating the generation of universal perturbations as an optimization \begin{table} \begin{tabular}{c c} \hline \hline **Pair of Defenses** & **Fooling Ratio** \\ \hline Bit-Depth Reduction - Median Smoothing & 0.13 \\ \hline Median Smoothing - R\&P & 0.07 \\ \hline R\&P - Bit-Depth Reduction & 0.10 \\ \hline \hline \end{tabular} \end{table} TABLE III: Fooling ratio of adversarial samples when two defense techniques are applied. The results are based on 30 samples of the targeted category (Target 2: Dog \(\rightarrow\)**Gouse**). task, we aim to find the optimal perturbation that maximally affects the model's robustness while minimizing its impact on the model's performance on benign inputs. To solve this optimization problem, we utilize alternating stochastic gradient methods. Algorithm 2 uses a single perturbation refined throughout all iterations. We only update the weights \(w\) and perturbations \(\delta\) once per training step. In the algorithm, the universal perturbation is cropped by identifying significant areas. This is achieved by converting the interpretation mask \(m\in M\) to a binary form \(m_{o}=\textit{Binarize}(m,t)\) using \(t\) (_i.e._, 0.3 for our experiment) so that a value of \(0\) indicating an irrelevant area and \(1\) indicating a relevant area. The resulting mask is then multiplied with the universal perturbation \(\delta\) using element-wise multiplication \(\odot\). This approach enables the model to learn and become more robust to the perturbation within the interpretation mask. For the CIFAR-10 experiment, we utilized a perturbation threshold \(\epsilon\) of 0.031, a batch size of 128, Momentum SGD with an initial learning rate of 0.1 that drops until 0.001 and trained for 500 epochs for the ResNet-20 model. The selection of the CIFAR-10 dataset and the ResNet-20 model for adversarial training and testing was driven by practical considerations such as performance, computational efficiency, standardization, and generalization properties. For the ImageNet experiment, we used pre-calculated universal perturbations (_i.e._, using the DenseNet-169 model with CAM interpreter from the experiments presented in Table 2), as a computationally efficient alternative. 
To enable adversarial training, we employed a fine-tuning approach to learn relevant features efficiently in the presence of perturbations. The fine-tuning process utilized a batch size of 32, Momentum SGD with an initial learning rate of 0.1 that decays to 0.001, a perturbation threshold of 0.04, and 100 epochs. Fig. 7: The fooling ratio and IoU scores based on the universal perturbation generated by Single**ADV** following the teacher-student learning approach. A student refers to a white-box DL model (Student 1: VGG-11, Student 2: VGG-16), and a teacher refers to the black-box DL model used by the app. Fig. 8: Results of the controlled experiment with the universal perturbation generated by Single**ADV**, where the teacher is a black-box DL model (ResNet-50); IoU results are based on the attribution maps generated by the CAM interpreter. In our experiments, we randomly selected the source and target categories for the CIFAR-10 dataset. However, when using the ImageNet dataset, we utilized a previously calculated universal perturbation as described in Table II. Due to the computational expense of generating a new universal perturbation for large datasets, we opted to use the existing perturbation for our experiments with ImageNet. To evaluate the effectiveness of interpretation-based adversarial training, we conducted experiments and measured the fooling ratio before and after applying the adversarial training technique. Table IV demonstrates that the fooling ratio significantly decreases after applying interpretation-based adversarial training. This indicates that the customized adversarial training approach improves the classifier's robustness against the attack.
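A minimal sketch of one training step of this interpretation-based universal adversarial training is shown below; the mask binarization threshold t = 0.3, the eps = 0.031 bound, and the alternating min-max structure follow the description above, while the perturbation step size, helper names, and the assumption that the interpreter returns detached maps in [0, 1] are ours.

```python
import torch
import torch.nn as nn

def adv_training_step(model, interpreter, x, y, delta, w_opt,
                      eps=0.031, delta_lr=0.01, t=0.3):
    """One alternating update of the model weights and the universal perturbation."""
    loss_fn = nn.CrossEntropyLoss()

    # Crop the universal perturbation to the salient regions of each input.
    m = interpreter(x, model)                 # attribution maps in [0, 1], detached
    m_bin = (m > t).float()                   # Binarize(m, t) with t = 0.3

    # Ascent step on delta (maximize the loss), then project to the eps-ball.
    delta = delta.detach().requires_grad_(True)
    loss = loss_fn(model(torch.clamp(x + m_bin * delta, 0.0, 1.0)), y)
    loss.backward()
    with torch.no_grad():
        delta = torch.clamp(delta + delta_lr * delta.grad.sign(), -eps, eps)

    # Descent step on the model weights, using the updated masked perturbation.
    w_opt.zero_grad()
    x_adv = torch.clamp(x + m_bin * delta, 0.0, 1.0).detach()
    loss_fn(model(x_adv), y).backward()
    w_opt.step()
    return delta
```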
2308.16012
Geometric integration on symmetric spaces
We consider geometric numerical integration algorithms for differential equations evolving on symmetric spaces. The integrators are constructed from canonical operations on the symmetric space, its Lie triple system (LTS), and the exponential from the LTS to the symmetric space. Examples of symmetric spaces are n-spheres and Grassmann manifolds, the space of positive definite symmetric matrices, Lie groups with a symmetric product, and elliptic and hyperbolic spaces with constant sectional curvatures. We illustrate the abstract algorithm with concrete examples. In particular for the n-sphere and the n-dimensional hyperbolic space the resulting algorithms are very simple and cost only O(n) operations per step.
Hans Munthe-Kaas
2023-08-30T13:13:14Z
http://arxiv.org/abs/2308.16012v1
# Geometric integration on symmetric spaces ###### Abstract. We consider geometric numerical integration algorithms for differential equations evolving on symmetric spaces. The integrators are constructed from canonical operations on the symmetric space, its Lie triple system (LTS), and the exponential from the LTS to the symmetric space. Examples of symmetric spaces are \(n\)-spheres and Grassmann manifolds, the space of positive definite symmetric matrices, Lie groups with a symmetric product, and elliptic and hyperbolic spaces with constant sectional curvatures. We illustrate the abstract algorithm with concrete examples. In particular for the \(n\)-sphere and the \(n\)-dimensional hyperbolic space the resulting algorithms are very simple and cost only \(\mathcal{O}(n)\) operations per step. ## 1. Introduction _Symmetric spaces_ are fundamental geometric objects in mathematics originating from the seminal work on non-Euclidean geometries by Gauss, Bolyai, Lobachevsky, Riemann, Klein and Lie in the 19'th century. Examples include: * The \(n\)-sphere \(S^{n}\), the surface of the unit ball in \(\mathbb{R}^{n+1}\). * Hyperbolic spaces \(H^{n}\) with constant negative sectional curvature. * Symmetric positive definite matrices with \(A\cdot B=AB^{-1}A\). * Lie groups \(G\) with a symmetric product \(g\cdot h=gh^{-1}g\) and subsets of \(G\) closed under \(\cdot\). * Homogeneous spaces \(\mathcal{M}=G/H\) where the Lie algebra of \(G\) splits as \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}\) such that \(\mathfrak{h}\) is the Lie algebra of \(H\) and \(\mathfrak{m}\) is a Lie triple system. * The real line \(\mathbb{R}\) with \(x\cdot y=2x-y\). * Grassman manifolds: the \(p\)-dimensional subspaces of \(\mathbb{R}^{n}\). Symmetric spaces have also appeared in various numerical algorithms, such as e.g. matrix computations and splines [16, 3, 5]. In this paper we are concerned with numerical algorithms for integrating dynamical systems evolving on symmetric spaces. Over the last two decades extensive theories have been developed for numerical integration of Lie groups and homogeneous spaces, so-called _Lie group integrators_[9]. A symmetric space can always be constructed as a homogeneous space, so Lie group integrators _can_ be used also for symmetric spaces. However, this approach neglects the finer geometric structures of the symmetric space. In particular, the natural metric of a symmetric space is preserved by a symmetric product, but generally not by a (left- or right) Lie group action. Furthermore, the tangent space of the symmetric space is a triple system closed under double Lie brackets, but not under the single bracket of the Lie algebra. The exponential map from the triple system to the symmetric space is often much cheaper to compute than the corresponding exponential map from the Lie algebra to the Lie group. So, there are good reasons to consider _intrinsic_ integration algorithms on symmetric spaces. In this paper we propose integration schemes entirely built from operations which can be canonically obtained from the definition of a symmetric space \((\mathcal{M},\cdot)\) as a manifold with a symmetric product. Algorithms are written out explicitly for several important examples. For the \(n\)-sphere and hyperbolic \(n\)-space, the computational cost of the algorithms are just \(\mathcal{O}(n)\) per step, using Rodrigues type formulae. The concrete algorithms for these cases can easily be implemented without comprehending the theoretical framework of this paper. ## 2. 
Canonical integration on symmetric spaces ### An outline of the algorithm We discuss the basic idea of canonical symmetric space integration in a general geometric language, assuming some familiarity with symmetric spaces. Detailed mathematical definitions are given in Section 2.2 and concrete examples in Section 3. The basic idea of the algorithm is very similar to the basic formulation of the RKMK algorithm on a Lie group \(G\), as presented in [15]. In Lie group case we seek the solution of a differential equation \(y^{\prime}(t)=f(y(t))y(t)\) for \(y(t)\in G\), where \(f\colon G\to\mathfrak{g}\) and \(\mathfrak{g}=T_{e}G\) is the Lie algebra. The equation is pulled back from \(G\) to the linear space \(\mathfrak{g}\) by the ansatz \(y(t)=\exp(\theta(t))\) for \(\theta(t)\in\mathfrak{g}\). A crucial point is that \(\theta(t)\) is the solution of the 'dexpinv' equation \(\theta^{\prime}(t)=\mathrm{dexp}_{\theta}^{-1}\,f(\exp(\theta))\), where \(\mathrm{dexp}\) denotes a trivialised tangent of the exponential mapping and \(\mathrm{dexp}_{\theta}^{-1}\) can be computed by commutators on \(\mathfrak{g}\). The RKMK Lie group integrator is obtained by solving the dexpinv equation by a Runge-Kutta method, moving back and forth between \(\mathfrak{g}\) and \(G\) using \(\exp\). Unlike a Lie group, which has the identity \(e\) as a special point, a symmetric space has no special point. We can choose any point \(o\in\mathcal{M}\) as a _base point_ for various constructions, as long as we ensure that the construction is geometrically independent of the chosen point. It should, however, be remarked that from the perspective matrix representations and efficient numerical linear algebra, certain base points might be preferred. We call \((\mathcal{M},\cdot,o)\) a _pointed symmetric space_. The tangent space \(\mathfrak{m}:=T_{o}\mathcal{M}\) has the algebraic structure of a _Lie Triple System_, \((\mathfrak{m},[\_\_\,])\), where \([\_,\_]\colon\mathfrak{m}\times\mathfrak{m}\times\mathfrak{m}\to\mathfrak{m}\) is the _triple bracket_, defined below. On \(\mathcal{M}\) there exists a canonical connection \(\nabla\) which is torsion free \(T=0\) and has constant (parallel) curvature \(\nabla R=0\). The algebra describing vector fields with the canonical connection on a symmetric space, called a Lie Admissible Triple algebra, is studied in [18]. The geodesic of this connection, starting at a tangent \(V\in\mathfrak{m}\) is denoted \(\mathrm{Exp}\colon\mathfrak{m}\to\mathcal{M}\). The connection defines _parallel transport_ of tangent vectors along the geodesic. For a given \(V\in\mathfrak{m}\), parallel transport yields a linear isomorphism \(\Gamma_{V}\colon\mathfrak{m}\to T_{\mathrm{Exp}(V)}\mathcal{M}\), which transports a tangent vector along the geodesic of \(V\) from \(t=0\) to \(t=1\). Crucial to our approach is the following characterisation of the tangent of the geodesic exponential Exp trivialised by parallel transport with \(\Gamma\): **Proposition 2.1**.: _For \(V,W\in\mathfrak{m}\) we have_ \[\frac{d}{dt}\bigg{|}_{t=0}\,\mathrm{Exp}(V+tW)=\Gamma_{V}\,\mathrm{dExp}_{V}\,W, \tag{1}\] _where \(\mathrm{dExp}_{V}\colon\mathfrak{m}\to\mathfrak{m}\) is given as_ \[\mathrm{dExp}_{V}=\left.\frac{\sinh(\sqrt{x})}{\sqrt{x}}\right|_{x=\mathrm{ad} _{V}^{2}}=\left.\sum_{n=0}^{\infty}\frac{x^{n}}{(2n+1)!}\right|_{x=\mathrm{ad} _{V}^{2}}, \tag{2}\] _for \(\mathrm{ad}_{V}^{2}W:=[W,V,V]\)._ Proof: See [7], Th. 4.1 p.215. A proof is also provided here, in Section 3.1. 
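As a quick consistency check of the series (2) and of the inverse series used in the next section, the following minimal Python/SymPy sketch (added here purely for illustration, it is not part of the original text, and the names are our own) verifies that the formal power series \(\sinh(\sqrt{x})/\sqrt{x}\) and \(\sqrt{x}/\sinh(\sqrt{x})\) are mutually inverse up to the truncation order.

```python
# Sanity check (ours, not from the paper): the dExp series and its inverse
# are mutually inverse as formal power series in x = ad_V^2.
import sympy as sp

x, y = sp.symbols('x y')
N = 4  # truncation order in x

# dExp series (2): sinh(sqrt(x))/sqrt(x) = sum_n x^n / (2n+1)!
dexp = sum(x**n / sp.factorial(2 * n + 1) for n in range(N + 1))

# inverse series: expand y/sinh(y) in y, then substitute y^2 -> x
inv_y = sp.series(y / sp.sinh(y), y, 0, 2 * N + 2).removeO()
dexpinv = sp.expand(inv_y.subs(y, sp.sqrt(x)))

print(dexpinv)                                        # 1 - x/6 + 7*x**2/360 - 31*x**3/15120 + ...
print(sp.expand(dexp * dexpinv).series(x, 0, N + 1))  # 1 + O(x**5)
```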
Consider the differential equation for \(y(t)\in\mathcal{M}\) given a vector field \(F\colon\mathcal{M}\to T\mathcal{M}\): \[y^{\prime}(t)=F(y(t)),\quad y(0)=o. \tag{3}\] Assuming \(y(t)\in U\) is within a normal neighbourhood \(o\in U\subset\mathcal{M}\), where \(\mathrm{Exp}\) is well-defined, we write \[y(t)=\mathrm{Exp}(\theta(t)),\quad\theta(t)\in\mathfrak{m}, \tag{4}\] and trivialise \(F\) as \[F(\mathrm{Exp}(\theta))=\Gamma_{\theta}f(\mathrm{Exp}(\theta)),\quad\text{for } f\colon U\to\mathfrak{m}. \tag{5}\] **Proposition 2.2**.: _The curve \(\theta(t)\in\mathfrak{m}\) satisfies the differential equation_ \[\theta^{\prime}(t)=\mathrm{dExp}_{\theta}^{-1}f(\mathrm{Exp}(\theta)),\quad \theta(0)=0, \tag{6}\] _for_ \[\mathrm{dExp}_{\theta}^{-1}=\frac{\sqrt{x}}{\sinh(\sqrt{x})}=1-\sum_{n=1}^{\infty} (2^{2n}-2)\frac{B_{2n}}{(2n)!}x^{n}=1-\frac{x}{6}+\frac{7x^{2}}{360}-\frac{31x^{ 3}}{15120}+\mathcal{O}(x^{4}), \tag{7}\] _where \(x=\mathrm{ad}_{\theta}^{2}=[\_,\theta,\theta]\) and \(B_{2n}\) are the Bernoulli numbers._ We obtain the first version of our algorithm by applying a Runge-Kutta (RK) method to \(\theta(t)\): ``` Result: \(\mathrm{Evolve}\ y^{\prime}(t)=F(y(t))\) from \(y_{0}=o\) to \(y_{1}\approx y(t=h)\). Choose time step \(h\) and \(\{a_{i,j}\}_{i,j=1}^{r}\), \(\{b_{j}\}_{j=1}^{r}\) coefficients of an RK method. for\(i=1,\ldots,r\)do \(\theta_{i}=\sum_{j=1}^{r}a_{i,j}K_{j}\) \(K_{i}=h\ \mathrm{dExp}_{\theta_{i}}^{-1}\Gamma_{\theta_{i}}^{-1}F(\mathrm{ Exp}(\theta_{i}))\) end \(y_{1}=\mathrm{Exp}(\sum_{j=1}^{r}b_{j}K_{j})\) ``` **Algorithm 1**Canonical integration step on symmetric space \((\mathcal{M},\cdot,o)\), with \(y_{0}=o\). _Remark_.: The perhaps most important case is _explicit RK methods_, where the coefficient matrix \(\{a_{i,j}\}\) is strictly lower triangular. In this case \(\theta_{1}=0\) and we find the others as \(\theta_{i}=\sum_{j=1}^{i-1}a_{i,j}K_{j}\). If the method is _implicit_, a system of equations must be solved to find \(K_{i}\) and \(\theta_{i}\). _Remark_.: The \(\mathrm{dExp}^{-1}\) series for the symmetric space exponential starts with the double bracket, while for the Lie group exponential the series starts with a single bracket. In [14] we showed that a naive Lie group method, with no correction for \(\mathrm{dExp}^{-1}\) can generally only achieve order 2. The symmetric space algorithm implemented without a \(\mathrm{dExp}^{-1}\) correction can achieve order 3. With higher order truncations of the \(\mathrm{dExp}^{-1}\) expansion, this method achieves the same order as the underlying RK method. This basic version of the method chooses the base point \(o=y_{0}\). If we want to choose a different base point \(o\neq y_{0}\), we can pull back the equation using an automorphism of \(\mathcal{M}\) sending \(o\) to \(y_{0}\). The automorphisms \(\mathrm{Aut}(\mathcal{M})\) are all mappings \(\tau\colon\mathcal{M}\to\mathcal{M}\) such that \(\tau(x\cdot y)=\tau(x)\cdot\tau(y)\). Let \(T\tau\colon\mathcal{TM}\to\mathcal{TM}\) denote the tangent map. ``` Result: \(\mathrm{Evolve}\ y^{\prime}(t)=f(y(t))\) from \(y_{0}\) to \(y_{1}\approx y(t=h)\). Choose any \(\tau_{y_{0}}\in\mathrm{Aut}(\mathcal{M})\) such that \(\tau_{y_{0}}(o)=y_{0}\). Compute \(\tilde{y}_{1}\) by applying Algorithm 1 on the pull-back equation \[\tilde{y}^{\prime}(t)=T\tau_{y_{o}}^{-1}F(\tau(\tilde{y}(t))),\quad\tilde{y}(0 )=o.\] Map back as \(y_{1}=\tau_{y_{0}}(\tilde{y}_{1})\). ``` **Algorithm 2**Canonical integration step on symmetric space \((\mathcal{M},\cdot,o)\) with \(y_{0}\neq o\). 
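To make the structure of Algorithm 1 concrete, here is a schematic Python implementation of one explicit RK step. It is a sketch added for illustration, not the authors' code; the callables `Exp`, `Gamma_inv`, `dExpinv` and `F`, and the flat test case at the bottom, are our own naming and test choices.

```python
import numpy as np

def canonical_step(F, Exp, Gamma_inv, dExpinv, zero, h, A, b):
    """One explicit RK step of Algorithm 1, started at the base point o.

    F         : trivialised vector field, maps a point of M to an element of m (eq. (5))
    Exp       : geodesic exponential m -> M at the base point o
    Gamma_inv : Gamma_inv(theta, w), inverse parallel transport of w back to m
    dExpinv   : dExpinv(theta, w), truncated series (7) applied to w
    zero      : the zero element of m
    A, b      : RK coefficients, A strictly lower triangular
    """
    K = []
    for i in range(len(b)):
        theta_i = sum(A[i][j] * K[j] for j in range(i)) if i > 0 else zero
        K.append(h * dExpinv(theta_i, Gamma_inv(theta_i, F(Exp(theta_i)))))
    return Exp(sum(b[j] * K[j] for j in range(len(b))))

# Flat sanity check: M = R^2 with x*y = 2x - y. Then Exp(v) = o + v and the
# transport and dExp^{-1} corrections are identities, so the step reduces to
# an ordinary RK step (here Heun's method) for y' = F(y).
o = np.array([1.0, 0.0])
y1 = canonical_step(lambda p: np.array([-p[1], p[0]]),   # F (circular motion)
                    lambda th: o + th,                   # Exp
                    lambda th, w: w,                     # Gamma_inv
                    lambda th, w: w,                     # dExpinv
                    np.zeros(2), 0.1,
                    [[0.0, 0.0], [1.0, 0.0]], [0.5, 0.5])
print(y1)   # close to (cos 0.1, sin 0.1)
```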
We return to a more detailed description of these algorithms in Section 2.2, after a basic introduction to symmetric spaces. ### Defining the canonical operations We start with a pointed symmetric space \((\mathcal{M},\cdot,o)\) and define all the operations of the algorithm from this structure. Concrete examples are found in Section 3. For a comprehensive treatment of the background theory, we refer to [12, 7]. **Definition 2.1** (Symmetric space).: A symmetric space is a manifold \((\mathcal{M},\cdot)\) with a product \(\cdot\colon\mathcal{M}\times\mathcal{M}\to\mathcal{M}\) such that \[x\cdot x =x\] \[x\cdot(x\cdot y) =y\] \[x\cdot(y\cdot z) =(x\cdot y)\cdot(x\cdot z)\] and every \(x\) has a neighbourhood \(U\) such that \(x\cdot y=y\) implies \(y=x\) for all \(y\in U\). As an example, any Lie group \(L\) with the symmetric product \(x\cdot y:=xy^{-1}x\) is a symmetric space denoted \((L^{+},\cdot)\). Another example is the sphere \(S^{n}=\{x\in\mathbb{R}^{n+1}\colon x^{T}x=1\}\) with the product \(x\cdot y=2xx^{T}y-y\). For each point \(x\in\mathcal{M}\) we have the symmetry \(\sigma_{x}\colon\mathcal{M}\to\mathcal{M}\) defined as \(\sigma_{x}(y):=x\cdot y\). The symmetry \(\sigma_{x}\in\operatorname{Aut}(\mathcal{M})\) is an involutive automorphism of the symmetric space \((\mathcal{M},\cdot)\), having \(x\) as an isolated fix point1. Note: 'involutive': \(\sigma_{x}(\sigma_{x}(y))=y\), 'automorphism': \(\sigma_{x}(y\cdot z)=\sigma_{x}(y)\cdot\sigma_{x}(z)\), and isolated 'fix-point \(x\)': \(\sigma_{x}(x)=x\), are exactly the contents of Definition 2.1. Footnote 1: To get a feeling for this structure, consider a Riemannian metric space with its geodesics. Moving from \(y\) to \(x\) along the joining geodesic twice the distance from \(y\) to \(x\), we arrive at \(\sigma_{x}(y)\). So \(\sigma_{x}\) is a reflection of the geodesics through the point \(x\). A Riemannian symmetric space has for each \(x\in\mathcal{M}\) such a symmetry \(\sigma_{x}\) which is a metric isometry and has \(x\) as an isolated fix-point. The product of two reflections is called a _displacement_. The _group of displacements_ is generated by all \(\sigma_{x}\sigma_{y}\) for \(x,y\in\mathcal{M}\) and denoted \(G=G(\mathcal{M})\). This is a normal subgroup of \(\operatorname{Aut}(\mathcal{M})\). Let \(o\in\mathcal{M}\) be a chosen point called the _base point_, and let \(\mathfrak{m}:=T_{o}\mathcal{M}\). We call \((\mathcal{M},\cdot,o)\) a _pointed symmetric space_. The map \(Q\colon\mathcal{M}\to G(\mathcal{M})\) defined as \(x\mapsto Q_{x}:=\sigma_{x}\sigma_{o}\) is called the _quadratic representation_. \(Q_{\mathcal{M}}\) generates all of \(G\) and it is a homomorphism of \((\mathcal{M},\cdot,o)\) onto \((G^{+},\cdot,e)\) with the symmetric product \(g\cdot h:=gh^{-1}g\), satisfying \[Q_{x\cdot y}=Q_{x}Q_{y}^{-1}Q_{x}=Q_{x}\cdot Q_{y},\quad Q_{o}=e\in G.\] The quadratic representation defines an action of \(\mathcal{M}\) on itself, \((x,y)\mapsto Q_{x}y\colon\mathcal{M}\times\mathcal{M}\to\mathcal{M}\). By differentiating \(Q\) at \(o\) we find the infinitesimal generator of the action as the vector field \(\xi_{V}\in\mathcal{X}\mathcal{M}\), defined for \(V\in\mathfrak{m}\) at \(y\in\mathcal{M}\) as \[\xi_{V}(y)=\left.\frac{d}{dt}\right|_{t=0}Q_{\gamma(t)}(y),\quad\gamma(0)=o,\quad\gamma^{\prime}(0)=\frac{1}{2}V. \tag{8}\] The factor \(\frac{1}{2}\) is due to the quadratic nature of \(Q\).
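The sphere example above is easy to verify numerically; the short script below (our own illustration, not from the paper) checks the three identities of Definition 2.1 for the product \(x\cdot y=2xx^{T}y-y\) at random points of \(S^{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_sphere(n=3):
    v = rng.normal(size=n)
    return v / np.linalg.norm(v)

def prod(x, y):
    # symmetric product on the sphere: x . y = 2 x x^T y - y (reflection through x)
    return 2.0 * x * (x @ y) - y

x, y, z = rand_sphere(), rand_sphere(), rand_sphere()
assert np.allclose(prod(x, x), x)                        # x . x = x
assert np.allclose(prod(x, prod(x, y)), y)               # x . (x . y) = y
assert np.allclose(prod(x, prod(y, z)),
                   prod(prod(x, y), prod(x, z)))         # x . (y . z) = (x . y) . (x . z)
assert np.isclose(np.linalg.norm(prod(x, y)), 1.0)       # the product stays on the sphere
print("sphere product satisfies the symmetric space identities")
```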
With this scaling we have \(\xi_{V}(o)=V\), so it defines an extension of a tangent at \(o\) to a vector field on all of \(\mathcal{M}\). Also recall that \(Q_{y}o\) sends \(o\) beyond \(y\) to the point on the opposite side of \(y\) on the geodesic, so we have to travel with half the speed to arrive at \(y\) for \(t=1\). **Definition 2.2** (Geodesic Exp).: The geodesic exponential at the base point is the mapping \(\operatorname{Exp}\colon\mathfrak{m}\to\mathcal{M}\) where \(\operatorname{Exp}(V)\) is defined as the \(t=1\) solution of the differential equation \[y^{\prime}(t)=\xi_{V}(y(t)),\quad y(0)=o. \tag{9}\] Let \((\mathbb{R},\cdot)\) be the real line with symmetric product \(s\cdot t=2s-t\). It can be shown that \(\operatorname{Exp}(s\cdot tV)=\operatorname{Exp}(sV)\cdot\operatorname{Exp}(tV)\). We summarise the most important properties of the symmetric space exponential: **Proposition 2.3**.: \(\operatorname{Exp}\colon(\mathbb{R},\cdot,0)\to(\mathcal{M},\cdot,o)\) _is the unique pointed symmetric space homomorphism with derivative \(V\in\mathfrak{m}\) at \(0\). It relates to the quadratic representation via_ \[\operatorname{Exp}(V)=Q_{\operatorname{Exp}(\frac{1}{2}V)}o.\] Note that \(Q_{\operatorname{Exp}(\frac{1}{2}tV)}o\) is a geodesic through \(o\), while \(Q_{\operatorname{Exp}(\frac{1}{2}tV)}p\) is generally _not_ a geodesic through \(p\) for \(p\neq o\). The maps \(Q_{\operatorname{Exp}(\frac{1}{2}tV)}\colon\mathcal{M}\to\mathcal{M}\) are \(t\)-parametrised families of automorphisms (isometries for Riemannian symmetric spaces) of \(\mathcal{M}\). In the case of the sphere \(S^{2}\), the family \(Q_{\operatorname{Exp}(\frac{1}{2}tV)}\colon S^{2}\to S^{2}\) rotates the sphere around an axis \(V\) perpendicular to the north pole. Starting at the north pole, this yields a great circle (geodesic). Starting at other points it yields (possibly) smaller circles orthogonal to the rotation axis \(V\). For a fixed \(y\in\mathcal{M}\), the tangent of the automorphism \(p\mapsto Q_{y}p\) is denoted \(TQ_{y}\colon T_{p}\mathcal{M}\to T_{Q_{y}p}\mathcal{M}\), \[TQ_{y}(W)=\left.\frac{d}{dt}\right|_{t=0}Q_{y}(\gamma(t)),\quad\gamma(0)=p,\quad\gamma^{\prime}(0)=W. \tag{10}\] Restricted to \(W\in\mathfrak{m}\), this defines parallel transport along the geodesic: **Definition 2.3** (Parallel transport of \(W\) along the geodesic of \(V\)).: For \(V,W\in\mathfrak{m}\) define \(\Gamma_{V}\colon\mathfrak{m}\to T_{\mathrm{Exp}(V)}\mathcal{M}\) as \[\Gamma_{V}W=TQ_{\mathrm{Exp}(\frac{1}{2}V)}(W).\] In the introduction we mentioned that symmetric spaces are equipped with a canonical connection \(\nabla\colon\mathcal{X}\mathcal{M}\times\mathcal{X}\mathcal{M}\to\mathcal{X}\mathcal{M}\) for which the name _'geodesic curves'_ acquires geometric meaning. For the Levi-Civita connection on a Riemannian symmetric space, the geodesics define (locally) shortest paths between points. The canonical connection is torsion free and has constant curvature. We do not need \(\nabla\) in this paper, so we will be brief on its construction. For two vector fields \(F,G\in\mathcal{X}\mathcal{M}\) define the connection at the base point \(o\) as the time derivative of the parallel transport, \[\nabla_{F}G(o)=\left.\frac{d}{dt}\right|_{t=0}\Gamma_{tF(o)}^{-1}G(\mathrm{Exp}(tF(o))).\] The definition can be extended to an arbitrary point \(y\in\mathcal{M}\) by moving around with \(\tau\in\mathrm{Aut}(\mathcal{M})\) such that \(\tau(o)=y\). We refer to [12, 10] for details.
We define the last operation we need to integrate on \(\mathcal{M}\), the _Lie triple system_ on \(\mathfrak{m}\). **Definition 2.4** (Triple bracket on \(\mathfrak{m}\)).: The tri-linear bracket \([\_,\_,]\colon\mathfrak{m}\times\mathfrak{m}\times\mathfrak{m}\to\mathfrak{m}\) is defined as \[[V,W,Z]=[[\xi_{V},\xi_{W}]_{J},\xi_{z}]_{J}(o), \tag{11}\] where \([\_,\_]_{J}\) is the Jacobi bracket of vector fields. The algebra \((\mathfrak{m},[\_,\_,])\) has the following structure: **Definition 2.5** (Lie triple system).: A _Lie triple system_ (Lts) \((\mathcal{A},[\cdot,\cdot,\cdot])\) is a vector space \(\mathcal{A}\) with a tri-linear bracket \([\cdot,\cdot,\cdot]\colon\mathcal{A}\times\mathcal{A}\times\mathcal{A}\to \mathcal{A}\) satisfying for all \(x,y,z,t,w\in\mathcal{A}\): \[[x,x,z]=0 \tag{13}\] \[[x,y,z]+[y,z,x]+[z,x,y]=0\] (14) \[[x,y,[z,t,w]]=[[x,y,z],t,w]+[z,[x,y,t],w]+[z,t,[x,y,w]]. \tag{12}\] We can combine Algorithms 1 and 2 in a single algorithm by using the quadratic representation to pull back \(y_{0}\) to \(\mathfrak{m}\) and \(\mathrm{Exp}\) to pull back from a neigbourhood of \(o\in\mathcal{M}\) to \(\mathfrak{m}\). This yields: **Proposition 2.4**.: _Given \(y^{\prime}(t)=F(y(t))\), \(y(0)=y_{0}\). Let \(s\in\mathcal{M}\) be such that \(Q_{s}o=y_{0}\) and let \(\theta(t)\in\mathfrak{m}\) such that \(y(t)=Q_{s}(\mathrm{Exp}(\theta(t)))\). Then \(\theta(t)\) satisfies_ \[\theta^{\prime}(t)=\mathrm{dExp}_{\theta}^{-1}\Gamma_{\theta}^{-1}TQ_{s}^{-1}F( y(t)),\theta(0)=0.\] Using a RK method on this equation yields the following algorithm: ``` Result:Evolve \(y^{\prime}(t)=F(y(t))\) from \(y(0)=y_{0}\) to \(y_{n}\approx y(t\!=\!T)\). Choose time step \(h=T/n\) and \(\{a_{i,j}\}_{i,j=1}^{r}\), \(\{b_{j}\}_{j=1}^{r}\) coefficients of an RK method. for\(\ell=0,\ldots,n-1\)do solve \(Q_{s_{\ell}}o=y_{\ell}\) for \(s_{\ell}\) for \(i=1,\ldots,r\)do \(\theta_{i}=\sum_{j=1}^{r}a_{i,j}K_{j}\) \(K_{i}=h\ \mathrm{dExp}_{\theta_{i}}^{-1}\Gamma_{\theta_{i}}^{-1}TQ_{s_{\ell}}^{-1}F \big{(}Q_{s_{\ell}}(\mathrm{Exp}(\theta_{i}))\big{)}\) end for \(y_{\ell+1}=Q_{s_{\ell}}\big{(}\mathrm{Exp}(\sum_{j=1}^{r}b_{j}K_{j})\big{)}\) end for ``` **Algorithm 3**CSSI (Canonical Symmetric Space Integrator) \((\mathcal{M},\cdot,o)\). ## 3. Examples ### Lie groups Let \(L\) be a Lie group with Lie algebra \((\mathfrak{g},[\_,]_{\mathfrak{g}})\). For \(V\in\mathfrak{g}\) and \(g\in G\) we write products in matrix style, e.g. \(Vg\) instead of the more elaborate \(TR_{g}(V)\). Let \(\exp\colon\mathfrak{g}\to L\) denote the Lie group exponential. Any Lie group \(L\) defines a pointed symmetric space denoted \(L^{+}:=(L,\cdot,o)\), where \(o=e\) is the identity of the Lie group and \(x\cdot y:=xy^{-1}x\). We obtain the quadratic representation \(Q_{x}=\sigma_{x}\sigma_{e}\) and \[Q_{x}y=xyx.\] For any \(V\in\mathfrak{g}\) and \(g\in L\), we have \(\xi_{V}(g)=\left.\frac{d}{dt}\right|_{t=0}Q_{\exp(\frac{1}{2}tV)}g=\frac{1}{2} \big{(}Vg+gV\big{)}\). The solution of \(y^{\prime}(t)=\xi_{V}(y(t))\), \(y(0)=g\) is \(y(t)=\exp(\frac{1}{2}tV)g\exp(\frac{1}{2}tV)\), in particular for \(g=e\) we have \(y(t)=\exp(tV)\), so we find \[\operatorname{Exp}(V)=\exp(V).\] Let \(V^{r}(g)=Vg\) and \(V^{\ell}(g)=gV\). We have \([V^{r},W^{\ell}]_{J}=0\), \([V^{r},W^{r}]_{J}=-[V,W]_{\mathfrak{g}}^{r}\) and \([V^{\ell},W^{\ell}]_{J}=[V,W]_{\mathfrak{g}}^{\ell}\). 
Since \(\xi_{V}=\frac{1}{2}(V^{r}+V^{\ell})\) we find the triple bracket \[[V,W,Z]=[[\xi_{V},\xi_{W}]_{J},\xi_{Z}]_{J}(o)=\frac{1}{4}[[V,W]_{\mathfrak{g }},Z]_{\mathfrak{g}}.\] The parallel transport is \[\Gamma_{V}W=\left.\frac{d}{dt}\right|_{t=0}Q_{\operatorname{Exp}(\frac{1}{2} V)}\operatorname{Exp}(tW)=\exp(\frac{1}{2}V)W\exp(\frac{1}{2}V).\] We compute \(\operatorname{dExp}\) and \(\operatorname{dExp}^{-1}\) explicitly. From (1) we have for \(V,W\in\mathfrak{g}\) \[\operatorname{dExp}_{V}W=\Gamma_{V}^{-1}\left.\frac{d}{dt}\right|_{t=0} \operatorname{Exp}(V+tW)=\exp(-\frac{1}{2}V)\left.\frac{d}{dt}\right|_{t=0} \exp(V+tW)\exp(-\frac{1}{2}V).\] In the following computation, let the linear operators \(x,y:\mathfrak{g}\to\mathfrak{g}\) be given as \(xW=[V,W]_{\mathfrak{g}}\) and \(yW=[W,V,V]=\frac{1}{4}[V,[V,W]_{\mathfrak{g}}]_{\mathfrak{g}}=\frac{1}{4}x^{2}W\). It is well known [9] that \(\left.\frac{d}{dt}\right|_{t=0}\exp(V+tW)=\operatorname{dexp}_{V}W\exp(V)\) where \(\operatorname{dexp}_{V}=(e^{x}-1)/x\). This yields, using \(\operatorname{Ad}_{\exp(V)}=\exp(\operatorname{ad}_{V})\): \[\operatorname{dExp}_{V}W =\exp(-\frac{1}{2}V)\frac{e^{x}-1}{x}W\exp(V)\exp(-\frac{1}{2}V) =\operatorname{Ad}_{\exp(-\frac{1}{2}V)}\frac{e^{x}-1}{x}W=e^{-\frac{x}{2}} \frac{e^{x}-1}{x}W\] \[=\frac{\sinh(\frac{x}{2})}{x/2}W=\frac{\sinh(\sqrt{y})}{\sqrt{y}}W.\] This establishes Proposition 2.1. It follows immediately that \(\operatorname{dExp}_{V}^{-1}=\frac{\sqrt{y}}{\sinh(\sqrt{y})}\). This results in the following matrix version of Algorithm 3. As a specific application consider \(y^{\prime}(t)=F(y(t))\), where \(y(t)\) is a symmetric positive definite (SPD) matrix and \(F(y)\) is symmetric. ``` Result:Evolve \(y^{\prime}(t)=F(y(t))\) from \(y(0)=y_{0}\) to \(y_{n}\approx y(t\!=\!T)\). Choose time step \(h=T/n\) and \(\{a_{i,j}\}_{i,j=1}^{r}\), \(\{b_{j}\}_{j=1}^{r}\) coefficients of an RK method. for\(\ell=0,\ldots,n-1\)do solve \(Q_{s_{\ell}}o=s_{\ell}s_{\ell}=y_{\ell}\) for \(s_{\ell}\) (matrix square root) for\(i=1,\ldots,r\)do \(\theta_{i}=\sum_{j=1}^{r}a_{i,j}\tilde{K}_{j}\) \(K_{i}=h\)\(\Gamma_{\theta_{i}}^{-1}TQ_{s_{\ell}}^{-1}F\big{(}Q_{s_{\ell}}( \operatorname{Exp}(\theta_{i}))\big{)}=h\)\(\exp(-\frac{\theta_{i}}{2})s_{\ell}^{-1}F\big{(}s_{\ell}\exp(\theta_{i})s_{ \ell}\big{)}s_{\ell}^{-1}\exp(-\frac{\theta_{i}}{2})\) \(\tilde{K}_{i}=\operatorname{dExp}_{\theta_{i}}^{-1}K_{i}=K_{i}-\frac{x}{6}K_{i} +\frac{7x^{2}}{360}K_{i}-\frac{31x^{3}}{15120}K_{i}+\cdots\) end for \(\theta_{\ell}=\sum_{j=1}^{r}b_{j}\tilde{K}_{j}\) \(y_{\ell+1}=Q_{s_{\ell}}\big{(}\operatorname{Exp}(\theta_{\ell})\big{)}=s_{\ell} \exp(\theta_{\ell})s_{\ell}\) end for ``` **Algorithm 4**CSGI (Canonical Symmetric Group Integrator) Here \(xK_{i}=\frac{1}{4}[[K_{i},\theta_{i}],\theta_{i}]\) (matrix triple commutator), and the \(\operatorname{dExp}^{-1}\) series is truncated to the order of the RK method. The matrix square root is in general not uniquely defined. We return to the choice of square root in the end of Section 3.1.1. #### 3.1.1. Automorphisms on Lie groups and symmetric decompositions For general matrix Lie groups, it is not clear to us if the symmetric space integration algorithm above has any advantages over the conventional Lie group integrators. However, there are many important cases where \(L^{+}\) contains a symmetric sub algebra. In these cases the conventional Lie group integrator destroys this subspace structure, while it is respected by the symmetric integration algorithm. 
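As a brief aside, the following Python sketch illustrates Algorithm 4 (CSGI) for flows of symmetric positive definite matrices. It is our own illustration rather than the authors' code: it uses classical RK4 with the \(\mathrm{dExp}^{-1}\) series truncated after the first commutator term, and the test field \(F(y)=Ay+yA^{T}\) is a toy choice whose exact solution \(y(t)=e^{tA}y_{0}e^{tA^{T}}\) is known and stays SPD.

```python
# Sketch (not the authors' code) of Algorithm 4 (CSGI) for y' = F(y) on SPD
# matrices, with classical RK4 and the dExp^{-1} series truncated after -x/6.
import numpy as np
from scipy.linalg import expm, sqrtm

A_rk = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b_rk = [1/6, 1/3, 1/3, 1/6]

def cssi_spd_step(F, y, h):
    s = np.real(sqrtm(y))                      # base point: Q_s o = s s = y
    s_inv = np.linalg.inv(s)
    Ktil = []
    for i in range(4):
        theta = sum(A_rk[i][j] * Ktil[j] for j in range(i)) if i > 0 else np.zeros_like(y)
        Em, Eh = expm(theta), expm(-0.5 * theta)
        # K_i = h Gamma^{-1} TQ_s^{-1} F(Q_s Exp(theta_i))
        K = h * Eh @ s_inv @ F(s @ Em @ s) @ s_inv @ Eh
        # x K = (1/4) [[K, theta], theta]  (matrix triple commutator)
        inner = K @ theta - theta @ K
        xK = 0.25 * (inner @ theta - theta @ inner)
        Ktil.append(K - xK / 6.0)
    theta = sum(b_rk[j] * Ktil[j] for j in range(4))
    return s @ expm(theta) @ s                 # y_{l+1} = s Exp(theta) s

# toy usage: compare with the exact flow of F(y) = A y + y A^T
rng = np.random.default_rng(1)
A = 0.3 * rng.normal(size=(3, 3))
F = lambda y: A @ y + y @ A.T
y, h, nsteps = np.eye(3), 0.1, 10
for _ in range(nsteps):
    y = cssi_spd_step(F, y, h)
y_exact = expm(nsteps * h * A) @ expm(nsteps * h * A).T
print("max error:", np.abs(y - y_exact).max())
print("still SPD:", np.all(np.linalg.eigvalsh(y) > 0))
```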
Given a Lie group \(L\) with Lie algebra \(\mathfrak{g}\) and the corresponding symmetric space \(L^{+}\). An involutive automorphism on \(L\) is \(S\colon L\to L\) such that \(S(xy)=S(x)S(y)\) and \(S^{2}=\mathrm{Id}\). There are two important subsets of \(L\) \[L^{S} =\{x\in L\colon S(x)=x\}\] \[L_{S} =\{x\in L\colon S(x)=x^{-1}\}.\] The invariant elements \(L^{S}\) is a subgroup of \(L\) and the alternating elements \(L_{S}\) is a symmetric subspace of \(L^{+}\). Furthermore, \(L_{S}\) is isomorphic to the homogeneous space \(L/L^{S}\) and _every_ connected symmetric space \(\mathcal{M}\) is isomorphic to a symmetric space of this kind. So this is really more than just an example, it is a generic case covering all connected symmetric spaces. On the algebra level the derivative of \(S\) at the identity yields an involutive Lie algebra automorphism \(dS\colon\mathfrak{g}\to\mathfrak{g}\), and the algebra splits as \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}\) where \(\mathfrak{h}\) is the \(+1\) eigenspace of \(dS\) and \(m\) the \(-1\) eigenspace. Here \(\mathfrak{h}\) is a Lie algebra and \(\mathfrak{m}\) a Lie triple system. The split parts induces a \(Z_{2}\) grading on \(\mathfrak{g}\), where the sub spaces satisfy \([\mathfrak{h},\mathfrak{h}]\subset\mathfrak{h}\), \([\mathfrak{h},\mathfrak{m}]\subset\mathfrak{m}\) and \([\mathfrak{m},\mathfrak{m}]\subset\mathfrak{h}\). There are many structured problems in matrix computations which fit into this format. A famous example is the group of all invertible matrices \(L=\mathrm{Gl}(n)\) with the automorphism \(S(A)=A^{-T}\). Here \(L^{S}\) is the group of orthogonal matrices and \(L_{S}\) the symmetric space of symmetric non-singular matrices. An important sub algebra is the symmetric space of symmetric positive definite (SPD) matrices. On the algebra level, \(dS(V)=-V^{T}\) splitting the space of matrices as \(\mathfrak{g}l(n)=\mathfrak{h}\oplus\mathfrak{m}\) where \(\mathfrak{h}\) are the skew symmetric and \(\mathfrak{m}\) the symmetric matrices. Other examples are discussed in [17, 16]. We return to the matrix square root in Algorithm 4. The final step is \[y_{\ell+1}=s_{\ell}\exp(\frac{\theta_{\ell}}{2})\exp(\frac{\theta_{\ell}}{2}) s_{\ell}=S(A_{\ell+1})^{-1}A_{\ell+1},\] where \(A_{\ell+1}=\exp(\frac{\theta_{\ell}}{2})s_{\ell}\). The _generalised polar decomposition_[17] is \(A_{\ell+1}=QP\), where \(S(Q)=Q\) and \(S(P)=P^{-1}\). For the SPD example, this is the classical polar decomposition where \(Q\) is orthogonal and \(P\) SPD. From this we find \[y_{\ell+1}=S(A_{\ell+1})^{-1}A_{\ell+1}=PP,\] and hence \(s_{\ell+1}=P\), the polar part of \(A_{\ell+1}\). This polar part matrix square root is unique in many cases, such as for SPD matrices. We finally note that in a dynamic situation, where \(s_{\ell+1}\) is not too far from \(s_{\ell}\), iterative techniques for the polar decomposition could be considered [8]. There are many such practical issues we will not pursue here, but we will address these in forthcoming papers. ### Spheres Let \((S^{n},\cdot,o)\) be the pointed symmetric space \(S^{n}=\{x\in\mathbb{R}^{n+1}\colon x^{T}x=1\}\) with \[x\cdot y:=2xx^{T}y-y\] The base point could be the north pole \(o=(0,0,\ldots,0,1)\) (or any other point). The automorphisms are the orthogonal group \(\mathrm{Aut}(S^{n})=O(n+1)\) generated by the symmetries \[\sigma_{x}=2xx^{T}-I\in O(n+1). 
\tag{15}\] The group of displacements \(G(S^{n})=\mathrm{SO}(n+1)\) is the group of pure rotations, and it is generated by the quadratic representation \[Q_{x}=\sigma_{x}\sigma_{o}=(2xx^{T}-I)(2oo^{T}-I),\] where \(Q_{x}\) is the rotation in the \(o-x\) plane through twice the angle from \(o\) to \(x\). Let \(\mathfrak{m}\) be the horizontal vectors at the north pole (the equatorial plane) \[\mathfrak{m}=T_{o}S^{n}=\{v\in\mathbb{R}^{n+1}\colon v^{T}o=0\}\simeq\mathbb{R}^{n},\] where we identify with \(\mathbb{R}^{n}\) by deleting the last \(0\) in the vector. The infinitesimal generator is \[\xi_{v}(p)=\left.\frac{d}{dt}\right|_{t=0}Q_{o+\frac{1}{2}tv}p=\left(vo^{T}-ov^{T}\right)p=\hat{v}p,\] where the hat map \(\hat{v}:=\left(vo^{T}-ov^{T}\right)\colon\mathfrak{m}\to\mathfrak{so}(n+1)\) is the unique map such that \(\hat{v}o=v\). From this we compute the triple bracket and the double adjoint on \(\mathfrak{m}\): \[[u,v,w] =[[\hat{u},\hat{v}],\hat{w}]o=\big{(}vw^{T}-v^{T}wI\big{)}u\] \[\mathrm{ad}_{v}^{2}w =[w,v,v]=\big{(}vv^{T}-v^{T}vI\big{)}w.\] For \(x=\mathrm{ad}^{2}v\) we find \(x^{2}=-v^{T}vx=(i\varphi)^{2}x\) for \(\varphi=||v||\). Since \(\frac{\sqrt{x}}{\sinh(\sqrt{x})}=1+\sum_{j=1}^{\infty}a_{j}x^{j}\), we find \[\mathrm{dExp}_{v}^{-1}=\frac{\sqrt{x}}{\sinh(\sqrt{x})}=I+\left(\frac{i\varphi}{\sinh(i\varphi)}-1\right)\frac{x}{-\varphi^{2}}=I+\left(\frac{\varphi}{\sin(\varphi)}-1\right)\pi_{v}^{\perp},\] where \(\pi_{v}^{\perp}=\frac{x}{-\varphi^{2}}=I-\frac{vv^{T}}{v^{T}v}\) is the projection onto the orthogonal complement of \(v\). This formula is not hard to understand geometrically. In \(\mathrm{dExp}_{v}^{-1}w\) the normal component of \(w\) is increased by the factor \(\frac{\varphi}{\sin\varphi}\). This is exactly the ratio of the circumference of a circle in the plane \(\mathfrak{m}\) with radius \(\varphi\) and the circumference of the constant latitude circle on the sphere at angle \(\varphi\) from the north pole. The exponential \(\mathrm{Exp}(v)\) can be found by solving \(y^{\prime}(t)=\xi_{v}(y(t))\), \(y(0)=o\) at \(t=1\), giving \(\mathrm{Exp}(v)=\exp(\hat{v})o\). We can compute the matrix exponential \(\exp(\hat{v})\) via Rodrigues' formula [9]. From \(\hat{v}^{3}=-\varphi^{2}\hat{v}\) for \(\varphi=(v^{T}v)^{\frac{1}{2}}\) we find \[\exp(\hat{v})=I+\frac{\sin(\varphi)}{\varphi}\hat{v}+\frac{1}{2}\frac{\sin^{2}(\varphi/2)}{(\varphi/2)^{2}}\hat{v}^{2}.\] This gives \[\mathrm{Exp}(v)=\sin(\varphi)\frac{v}{||v||}+\cos(\varphi)o,\] a rotation of \(o\) in the \(v\)-\(o\) plane through the angle \(\varphi\). We obtain the following spherical integrator, where we at each step choose a new base point \(o=y_{\ell}\). The algorithm should never take steps with \(||\theta_{i}||\geq\pi\), because of the singularity in \(\mathrm{dExp}^{-1}\). If this happens, the step size \(h\) must be reduced. ``` Result:Evolve \(y^{\prime}(t)=F(y(t))\) from \(y(0)=y_{0}\) to \(y_{n}\approx y(t\!=\!T)\), where \(||y(t)||=1\) and \(F(y)^{T}y=0\). Initialisation: Choose time step \(h=T/n\) and \(\{a_{i,j}\}_{i,j=1}^{r}\), \(\{b_{j}\}_{j=1}^{r}\) coefficients of RK.
for\(\ell=0,\ldots,n-1\)do for\(i=1,\ldots,r\)do \(\theta_{i}=\sum_{j=1}^{r}a_{i,j}\tilde{K}_{j}\) \(\varphi=||\theta_{i}||\) if\(\varphi==0\)then \(K_{i}=hF(y_{\ell})\) end if else \(E_{i}=\operatorname{Exp}(\theta_{i})=\frac{\sin\varphi}{\varphi}\theta_{i}+ \cos(\varphi)y_{\ell}\) \(s=\operatorname{Exp}(\theta_{i}/2)=(E_{i}+y_{\ell})/||E_{i}+y_{\ell}||\) \(v=h\ F\big{(}E_{i}\big{)}\) \(K_{i}=\Gamma_{\theta_{i}}^{-1}v=v-2ss^{T}v\) \(\tilde{K}_{i}=\operatorname{dExp}_{\theta_{i}}^{-1}K_{i}=K_{i}+\left(\frac{ \varphi}{\sin(\varphi)}-1\right)\left(K_{i}-\frac{\theta_{i}\theta_{i}^{T}}{ \varphi^{2}}K_{i}\right)\) end for end for \(\theta=\sum_{j=1}^{r}b_{j}\tilde{K}_{j}\) \(\varphi=||\theta||\) \(y_{\ell+1}=\operatorname{Exp}(\theta)=\frac{\sin\varphi}{\varphi}\theta+\cos (\varphi)y_{\ell}\) end for ``` **Algorithm 5**CSI (Canonical Spherical Integrator) The algorithm is not time symmetric for a time symmetric dynamical system, even if the underlying RK method is'self adjoint' (time symmetric). The reason for this is the choice of \(o=y_{\ell}\) at each step, which is not symmetric with respect to time reversal. An alternative is to choose the base point as the geodesic midpoint \(o=(y_{\ell}+y_{\ell+1})/||y_{\ell}+y_{\ell+1}||\). This yields a time symmetric integrator for time symmetric problems and self adjoint RK methods, see [19] for such methods in the Lie group case. But, of course, such methods are necessarily implicit. See remark on relations to the Spherical Midpoint Method [13] in Section 4. ### Hyperbolic spaces \(H^{n}\) is the unique simply connected \(n\)-dimensional Riemannian manifold of constant sectional curvature \(-1\). We will present this via the "hyperboloid model", an isometric embedding of \(H^{n}\) in \(n+1\) dimensional Minkowski space [6]. Let \(\big{\{}\,\mathbf{z}=(\underline{z};z_{0})\in\mathbb{R}^{n+1}\ \big{|}\ \ \underline{z}\in \mathbb{R}^{n},z_{0}\in\mathbb{R}\big{\}}\) be standard coordinates on Minkowski space, where \((\underline{z};z_{0})\) denotes a column vector with \(z_{0}\) in last position. Define the indefinite Minkowski inner product \[\langle\mathbf{y},\mathbf{z}\rangle=\mathbf{y}^{T}J\mathbf{z}=y_{0}z_{0}- \underline{y}^{T}\underline{z} \tag{16}\] where \(J=\operatorname{diag}(-1,\ldots,-1,+1)\). Special relativity is defined on spacetime \(\mathbb{R}^{4}\), where \(\underline{z}\in\mathbb{R}^{3}\) is space and \(z_{0}\in\mathbb{R}\) is time. The subset at unit distance from the origin splits in two connected hyperboloids \[H^{\pm}=\big{\{}\,\mathbf{z}=(\underline{z};z_{0})\in\mathbb{R}^{n+1}\ \big{|}\ \ \langle\mathbf{z},\mathbf{z}\rangle=1\big{\}}=\Big{\{}\,( \underline{z};z_{0})\ |\ \ z_{0}=\pm\sqrt{\underline{z}^{T}\underline{z}+1}\Big{\}}\,.\] We identify \(H^{n}:=H^{+}\). Let the _Lorentz group_\(O(n,1)\) be the matrix group preserving the Minkowski inner product: \[O(n,1):= \big{\{}\,A\in\operatorname{GL}(n+1,\mathbb{R})\ |\ \ \langle A \mathbf{y},A\mathbf{z}\rangle=\langle\mathbf{y},\mathbf{z}\rangle\text{ for all }\mathbf{y},\mathbf{z}\in\mathbb{R}^{n+1}\big{\}}\] \[= \big{\{}\,A\in\operatorname{GL}(n+1,\mathbb{R})\ |\ JAJ=A^{-T}\}\,.\] Polar decompositions in Lie groups are derived from involutive automorphisms on Lie groups [16]. The classical matrix polar decomposition \(A=US\), where \(U^{T}U=I\) is orthogonal2 and \(S\) is symmetric positive definite (SPD), is obtained from the involutive automorphism on \(\operatorname{GL}(n+1,\mathbb{R})\) given as \(\alpha(A)=A^{-T}\) by requiring \(\alpha(U)=U\) and \(\alpha(S)=S^{-1}\). 
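As a brief aside before completing the hyperbolic construction, the spherical integrator above can be illustrated directly. The sketch below (our own, not code from the paper) runs Algorithm 5 (CSI) on \(S^{2}\) with classical RK4 coefficients and a toy rigid-rotation field \(F(y)=a\times y\), whose exact solution is known and for which the preservation of \(||y||=1\) can be checked.

```python
# Sketch (ours, not the authors' code) of Algorithm 5 (CSI) on S^2 with RK4.
import numpy as np

A_rk = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b_rk = [1/6, 1/3, 1/3, 1/6]

def csi_step(F, y, h):
    Ktil = []
    for i in range(4):
        theta = sum(A_rk[i][j] * Ktil[j] for j in range(i)) if i > 0 else np.zeros(3)
        phi = np.linalg.norm(theta)
        if phi == 0.0:
            Ktil.append(h * F(y))
            continue
        E = np.sin(phi) / phi * theta + np.cos(phi) * y      # Exp(theta_i)
        s = (E + y) / np.linalg.norm(E + y)                  # Exp(theta_i / 2)
        v = h * F(E)
        K = v - 2.0 * s * (s @ v)                            # Gamma^{-1} v
        Ktil.append(K + (phi / np.sin(phi) - 1.0) * (K - theta * (theta @ K) / phi**2))
    theta = sum(b_rk[j] * Ktil[j] for j in range(4))
    phi = np.linalg.norm(theta)
    return y if phi == 0.0 else np.sin(phi) / phi * theta + np.cos(phi) * y

# toy test: rigid rotation about the z-axis, exact solution is a great circle
a = np.array([0.0, 0.0, 1.0])
F = lambda y: np.cross(a, y)
y, h, nsteps = np.array([1.0, 0.0, 0.0]), 0.1, 50
for _ in range(nsteps):
    y = csi_step(F, y, h)
t = h * nsteps
print("error :", np.linalg.norm(y - np.array([np.cos(t), np.sin(t), 0.0])))
print("radius:", np.linalg.norm(y))   # stays equal to 1 by construction
```

We now return to the Lorentz-group construction.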
An other involutive automorphism important for the Lorentz group is \(\beta(A)=JAJ\). The two automorphisms commute, so \(\beta\alpha(A)=JA^{-T}J\) is also involutive. Note that \(O(n,1)=\{\,A\in\operatorname{GL}(n+1,\mathbb{R})\ |\ \beta\alpha(A)=A\}\). Footnote 2: We usually write orthogonal matrices ‘Q’, but here we need to distinguish from the quadratic representation. **Proposition 3.1**.: _The polar decomposition of \(A\in O(n,1)\) is given as \(A=US_{\underline{s}}\) where_ \[U=\left(\begin{array}{cc}\widetilde{U}&0\\ 0&u\end{array}\right) \tag{17}\] _for an orthogonal \(\widetilde{U}\in O(n)\), \(u=\pm 1\), and_ \[S_{\underline{s}}=\left(\begin{array}{cc}\widetilde{S}&\underline{s}\\ \underline{s}^{T}&s_{0}\end{array}\right) \tag{18}\] _for some \(\underline{s}\in\mathbb{R}^{n}\), \(\widetilde{S}=\sqrt{I_{n}+\underline{s}\underline{s}^{T}}\) (SPD square root) and \(s_{0}=\sqrt{1+\underline{s}^{T}\underline{s}}\), thus \((\underline{s};s_{0})\in H^{n}\)._ Proof.: An alternative and detailed proof is given in [6]. Here we sketch a structural argument, omitting some minor details. We seek \(A=US\) where \(\alpha(U)=U\) and \(\alpha(S)=S^{-1}\). We have \[US=A=\beta\alpha(A)=\beta\alpha(U)\beta\alpha(S)=\beta(U)\beta(S^{-1}),\] which yields \(\beta(U)=U\) and \(\beta(S)=S^{-1}\). From \(\beta(U)=U\) follows \[U=\left(\begin{array}{cc}\widetilde{U}&0\\ 0&u\end{array}\right),\] and \(\alpha(U)=U\) implies \(\widetilde{U}\) is orthogonal and \(u^{2}=1\). Writing \[S=\left(\begin{array}{cc}\widetilde{S}&\underline{s}\\ \underline{s}^{T}&s_{0}\end{array}\right)\] we find from \(\beta(S)=S^{-1}\) that \[S^{-1}=\left(\begin{array}{cc}\widetilde{S}&-\underline{s}\\ -\underline{s}^{T}&s_{0}\end{array}\right).\] Multiplying \(SS^{-1}=I_{n+1}\) yields \(\widetilde{S}^{2}-\underline{s}\underline{s}^{T}=I_{n}\), \(s_{0}^{2}-\underline{s}^{T}\underline{s}=1\) and \(\widetilde{S}_{\underline{s}}=s_{0}\underline{s}\). Let \(O_{0}(n,1)\) denote the matrices in \(O(n,1)\) where \(u=+1\). This is the subgroup mapping \(H^{+}\mapsto H^{+}\) and \(H^{-}\mapsto H^{-}\). The matrices where \(u=-1\) swap the two components of \(H^{\pm}\), but they do not form a subgroup. Note that \[\det(S^{2})=\det(SJS^{-1}J)=\det(SS^{-1})=\det(I)=1,\] and since \(S\) is SPD, we have \(\det(S)=1\). Hence \(\det(A)=\det(U)=u\det(\widetilde{U})=\pm 1\). The special Lorentz group \[\operatorname{SO}(n,1):=\{\,A\in O(n,1)\ |\ \det(A)=1\}\] has two connected components for \(u=\pm 1\). We denote \(\operatorname{SO}_{0}(n,1)\) the connected component containing the identity. \(\operatorname{SO}(n,1)\) consists of those \(A\) where \(u\det(\widetilde{U})=1\) and \(\operatorname{SO}_{0}(n,1)\) those where \(u=\det(\widetilde{U})=1\). We will present \(H^{n}\) as a homogeneous space. Let \(o=(\underline{0};1)\in H^{n}\) be the base point. Since \(So=(\underline{s};s_{0})\), it follows that both \(O_{0}(n,1)\) and \(\operatorname{SO}_{0}(n,1)\) act transitively on \(H^{n}\). The stabiliser subgroup at \(o\) consists of \(A=US\) such that \(Ao=o\). Multiplied out this implies \(S=I_{n+1}\) and \[U=\left(\begin{array}{cc}\widetilde{U}&\underline{0}\\ \underline{0}&1\end{array}\right),\quad\widetilde{U}\in O(n).\] Thus \[H^{n}=O_{0}(n,1)/O(n)\simeq\operatorname{SO}_{0}(n,1)/\operatorname{SO}(n).\] We seek the symmetric product on \(H^{n}\). Note that \(\sigma_{o}:=J\in O_{0}(n,1)\) is the isometric reflection around \(o\). 
Reflection around a general point \(\mathbf{s}=(\underline{s};s_{0})\in H^{n}\) is obtained by moving the point to \(o\), reflecting with \(J\) and moving back, \[\sigma_{\mathbf{s}}=S_{\underline{s}}\sigma_{o}S_{\underline{s}}^{-1}=\left(\begin{array}{cc}\widetilde{S}&\underline{s}\\ \underline{s}^{T}&s_{0}\end{array}\right)\left(\begin{array}{cc}-I&\mathbf{0}\\ \mathbf{0}^{T}&1\end{array}\right)\left(\begin{array}{cc}\widetilde{S}&-\underline{s}\\ -\underline{s}^{T}&s_{0}\end{array}\right)=\left(\begin{array}{cc}-I-2\underline{s}\,\underline{s}^{T}&2s_{0}\underline{s}\\ -2s_{0}\underline{s}^{T}&1+2\underline{s}^{T}\underline{s}\end{array}\right).\] From this we find the more 'obvious' expression for the isometric reflection \[\sigma_{\mathbf{s}}=2\mathbf{s}\langle\mathbf{s},\cdot\rangle-I. \tag{19}\] The quadratic representation is \[Q_{\mathbf{s}}=\sigma_{\mathbf{s}}\sigma_{o}=S_{\underline{s}}JS_{\underline{s}}^{-1}J=S_{\underline{s}}^{2}=S_{2s_{0}\underline{s}}=\left(\begin{array}{cc}I_{n}+2\underline{s}\,\underline{s}^{T}&2s_{0}\underline{s}\\ 2s_{0}\underline{s}^{T}&1+2\underline{s}^{T}\underline{s}\end{array}\right). \tag{20}\] The computation of the geodesic exponential \(\mathrm{Exp}\), the triple bracket and \(\mathrm{dExp}^{-1}\) is very similar to the spherical case, Section 3.2. We compute the geodesic exponential from (8)-(9). From the embedding \(H^{n}\subset\mathbb{R}^{n+1}\) we see that \(\mathfrak{m}=T_{o}H^{n}=\mathbb{R}^{n}\simeq\{\mathbf{v}=(v;0)\in\mathbb{R}^{n+1}\}\). Let \(\gamma(t)=o+\frac{1}{2}t\mathbf{v}+\mathcal{O}(t^{2})\). At \(\mathbf{y}\in H^{n}\) we have \[\xi_{v}(\mathbf{y})=\left.\frac{\partial}{\partial t}\right|_{t=0}Q_{\gamma(t)}\mathbf{y}=\left.\frac{\partial}{\partial t}\right|_{t=0}\left(\begin{array}{cc}I_{n}&tv\\ tv^{T}&1\end{array}\right)\mathbf{y}=\left(\begin{array}{cc}\mathbf{0}&v\\ v^{T}&0\end{array}\right)\mathbf{y}=\hat{\mathbf{v}}\mathbf{y}, \tag{21}\] where the hat-map in this case is \(\hat{\mathbf{v}}=\mathbf{v}\langle o,\cdot\rangle-o\langle\mathbf{v},\cdot\rangle\). _Remark_: The structure of this hat map can be explained from the involutive automorphisms \(\alpha,\beta\colon\mathit{GL}(n+1,\mathbb{R})\to\mathit{GL}(n+1,\mathbb{R})\). Differentiating these we get involutive automorphisms \(d\alpha,d\beta\colon\mathfrak{gl}(n+1,\mathbb{R})\to\mathfrak{gl}(n+1,\mathbb{R})\) \[d\alpha(A)=-A^{T},\qquad d\beta(A)=JAJ.\] Since \(\alpha(S)=S^{-1}\) and \(\beta(S)=S^{-1}\), we have \(S=\exp(V)\) for \(d\alpha(V)=-V\) and \(d\beta(V)=-V\). This implies that \(V\) is of the form \(V=\hat{\mathbf{v}}\). Let \(d\alpha^{-}\) and \(d\beta^{-}\) denote the \(-1\) eigenspaces of these involutive automorphisms on \(\mathfrak{gl}(n+1,\mathbb{R})\), then \(\hat{\mathfrak{m}}:=\mathfrak{gl}(n+1,\mathbb{R})\cap d\alpha^{-}\cap d\beta^{-}\) is a Lie triple system (LTS) of symmetric matrices and \(\hat{\cdot}\colon\mathfrak{m}\to\hat{\mathfrak{m}}\) is the unique LTS isomorphism such that \(\mathbf{v}=\hat{\mathbf{v}}o\). A matrix of the form of \(S\) is called a _Lorentz boost_ in special relativity, and is the exponential of a matrix in \(\hat{\mathfrak{m}}\). From the hat map isomorphism we compute the triple bracket and the double adjoint on \(\mathfrak{m}\): \[[u,v,w]=[[\hat{u},\hat{v}],\hat{w}]o=\big{(}v^{T}wI-vw^{T}\big{)}u\] \[\mathrm{ad}_{v}^{2}w=[w,v,v]=\big{(}v^{T}vI-vv^{T}\big{)}w.\] Note that this is the negative of the triple bracket in the spherical case, which is not strange since the triple bracket is the Riemannian curvature tensor.
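Before computing \(\mathrm{dExp}^{-1}\), a quick numerical check of the formulas above may be useful, since the bracket notation \(\langle\mathbf{s},\cdot\rangle\) is easy to misread. The script below (added for illustration, not from the paper) builds \(S_{\underline{s}}\) as in Proposition 3.1 and verifies (19)-(20) and the Lorentz property of \(Q_{\mathbf{s}}\).

```python
# Numerical check (ours) of the reflections and quadratic representation on H^n.
import numpy as np
from scipy.linalg import sqrtm

n = 3
rng = np.random.default_rng(2)
J = np.diag([-1.0] * n + [1.0])                       # Minkowski form, eq. (16)
o = np.zeros(n + 1); o[-1] = 1.0                      # base point

s_under = rng.normal(size=n)
s0 = np.sqrt(1.0 + s_under @ s_under)
Stil = np.real(sqrtm(np.eye(n) + np.outer(s_under, s_under)))
S = np.block([[Stil, s_under[:, None]],
              [s_under[None, :], np.array([[s0]])]])  # S_s from Proposition 3.1
s = S @ o                                             # the point (s_under; s0) on H^n

assert np.allclose(S.T @ J @ S, J)                    # S is a Lorentz boost
sigma_s = 2.0 * np.outer(s, J @ s) - np.eye(n + 1)    # eq. (19): 2 s <s, .> - I
assert np.allclose(sigma_s, S @ J @ np.linalg.inv(S)) # sigma_s = S sigma_o S^{-1}
assert np.allclose(sigma_s @ s, s)                    # s is a fixed point of sigma_s
Q_s = sigma_s @ J                                     # sigma_s sigma_o, sigma_o = J
assert np.allclose(Q_s, S @ S)                        # eq. (20): Q_s = S_s^2
assert np.allclose(Q_s.T @ J @ Q_s, J)                # Q_s preserves the Minkowski form
print("hyperboloid reflections and quadratic representation verified")
```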
For \(x=\mathrm{ad}^{2}v\) we find \(x^{2}=\varphi^{2}x\) for \(\varphi=\sqrt{-\langle\mathbf{v},\mathbf{v}\rangle}\). As in the spherical case, we find \[\mathrm{dExp}_{\mathbf{v}}^{-1}=\frac{\sqrt{x}}{\sinh(\sqrt{x})}=I+\left(\frac{\varphi}{\sinh(\varphi)}-1\right)\frac{x}{\varphi^{2}}=I+\left(\frac{\varphi}{\sinh(\varphi)}-1\right)\pi_{\mathbf{v}}^{\perp}, \tag{22}\] where \[\pi_{\mathbf{v}}^{\perp}=I-\frac{\mathbf{v}\langle\mathbf{v},\cdot\rangle}{\langle\mathbf{v},\mathbf{v}\rangle} \tag{23}\] is the Minkowski projection onto the orthogonal complement of \(\mathbf{v}\). From (21) we find the geodesic exponential \(\mathrm{Exp}\colon\mathfrak{m}\to H^{n}\) \[\mathrm{Exp}(\mathbf{v})=\exp\left(\hat{\mathbf{v}}\right)o. \tag{24}\] Similar to the spherical case, \(\hat{\mathbf{v}}^{3}=\varphi^{2}\hat{\mathbf{v}}\) yields a Rodrigues type formula \[\exp(\hat{\mathbf{v}})=I+\frac{\sinh(\varphi)}{\varphi}\hat{\mathbf{v}}+\frac{1}{2}\frac{\sinh^{2}(\varphi/2)}{(\varphi/2)^{2}}\hat{\mathbf{v}}^{2}. \tag{25}\] This gives the geodesic exponential \[\mathrm{Exp}(\mathbf{v})=\frac{\sinh(\varphi)}{\varphi}\mathbf{v}+\cosh(\varphi)o, \tag{26}\] for \(\langle\mathbf{v},o\rangle=0\). This is a hyperbolic rotation of the \(\mathbf{v}\)-\(o\) plane through the hyperbolic angle \(\varphi\). Similarly to CSI (Algorithm 5), we express our hyperbolic integrator using at each step the base point \(o=\mathbf{y}_{\ell}\), where each step is performed as in Algorithm 1. At an arbitrary point \(o=\mathbf{y}_{\ell}\in H^{n}\) the tangent space is \[\mathfrak{m}=T_{o}H^{n}=\left\{\,\mathbf{w}\in\mathbb{R}^{n+1}\,\,\big{|}\,\,\,\,\langle\mathbf{w},o\rangle=0\right\}.\] The geodesic exponential is still given by (26), with inverse differential (22)-(23). Finally, for \(\theta\in\mathfrak{m}\), we need \(\Gamma_{\theta}^{-1}\), parallel transport from \(T_{\mathrm{Exp}(\theta)}H^{n}\) to \(\mathfrak{m}\) along the joining geodesic. Let \(\mathbf{s}=\mathrm{Exp}(\theta/2)\) be the midpoint on the geodesic. From Definition 2.3 we find \[\Gamma_{\theta}^{-1}=TQ_{\mathbf{s}}^{-1}=T\left(\sigma_{\mathbf{s}}\sigma_{o}\right)^{-1}=T\left(\sigma_{o}\sigma_{\mathbf{s}}\right)=-T\sigma_{\mathbf{s}}\] since \(T\sigma_{o}=-I\). From (19) this yields \[\Gamma_{\theta}^{-1}=I-2\mathbf{s}\langle\mathbf{s},\cdot\rangle. \tag{27}\] ``` Result:Evolve \(\mathbf{y}^{\prime}(t)=F(\mathbf{y}(t))\), \(\mathbf{y}(t)\in H^{n}\), from \(\mathbf{y}(0)=\mathbf{y}_{0}\) to \(\mathbf{y}_{n}\approx\mathbf{y}(t\!=\!T)\), where \(\langle\mathbf{y}(t),\mathbf{y}(t)\rangle=1\) and \(\langle F(\mathbf{y}),\mathbf{y}\rangle=0\). Initialisation: Choose time step \(h=T/n\) and \(\{a_{i,j}\}_{i,j=1}^{r}\), \(\{b_{j}\}_{j=1}^{r}\) coefficients of RK.
for\(\ell=0,\ldots,n-1\)do for\(i=1,\ldots,r\)do \(\theta_{i}=\sum_{j=1}^{r}a_{i,j}\tilde{K}_{j}\in\mathfrak{m}=T_{\mathbf{y}_{\ell }}H^{n}\) \(\varphi=\sqrt{-\langle\theta_{i},\theta_{i}\rangle}\) if\(\varphi==0\)then \(K_{i}=hF(\mathbf{y}_{\ell})\) end if else \(\mathbf{u}=\mathrm{Exp}(\theta_{i})=\frac{\sinh(\varphi)}{\varphi}\theta_{i}+ \cosh(\varphi)\mathbf{y}_{\ell}\) \(\mathbf{s}=\mathrm{Exp}(\theta_{i}/2)=\frac{\sinh(\varphi/2)}{\varphi}\theta_ {i}+\cosh(\varphi/2)\mathbf{y}_{\ell}\) \(\mathbf{v}=h\)\(F\big{(}\mathbf{u}\big{)}\) \(K_{i}=\Gamma_{\theta_{i}}^{-1}\mathbf{v}=\mathbf{v}-2\mathbf{s}\langle\mathbf{ s},\mathbf{v}\rangle\) \(\tilde{K}_{i}=\mathrm{dE}\mathrm{xp}_{\theta_{i}}^{-1}K_{i}=K_{i}+\left(\frac{ \varphi}{\sinh(\varphi)}-1\right)\left(K_{i}-\frac{\langle\theta_{i},K_{i} \rangle}{\varphi^{2}}\theta_{i}\right)\) end if end for \(\theta=\sum_{j=1}^{r}b_{j}\tilde{K}_{j}\) \(\varphi=\sqrt{-\langle\theta,\theta\rangle}\) \(\mathbf{y}_{\ell+1}=\mathrm{Exp}(\theta)=\frac{\sinh\varphi}{\varphi}\theta+ \cosh(\varphi)\mathbf{y}_{\ell}\) end for ``` **Algorithm 6**CHI (Canonical Hyperbolic Integrator) #### 3.3.1. Other models of \(H^{n}\) There are other well known models of \(H^{n}\), most famous are the Poincare _half space_ and the _disc models_. These are obtained from the hyperboloid model by stereographic projections. The half space model is the stereographic projection from infinity along the edge of the light cone in the \(x_{n}\) - \(x_{0}\) plane, where the hyperbolic space is realised as the upper half space of \(\mathbb{R}^{n}\) given by \(x_{n}>0\), or upper half plane of \(\mathbb{C}\) in the case \(n=2\). The disc model is the stereographic projection of \(H^{n}\) from the point \((\underline{0};-1)\), realising the hyperbolic space as the unit ball in \(\mathbb{R}^{n}\), or unit disc \(\mathbb{C}\) in the case \(n=2\). These models are beautiful, in particular for \(n=2\) where the hyperbolic space becomes a homogeneous space under the action of the projective linear fractional transformations \(\mathrm{PSL}(2,\mathbb{R})\) on \(\mathbb{C}\). However, these models are mathematically equivalent to the model we have presented above, so we omit the details. Spaces of constant negative curvature are important in the geometric theory of dynamical systems. E.g. Anosov flows on compact surfaces \(H^{2}/\Gamma\), where \(\Gamma\) is a discrete subgroup of \(\mathrm{PSL}(2,\mathbb{R})\) (Fuchsian group) are generic examples of Axiom A type dynamical systems with chaotic solutions. Variants of the CHI algorithm should be interesting for numerical studies of Anosov flows. In numerical analysis, Riemannian spaces of negative curvature have recently appeared in geometric generalisations of classical stability theory to Riemannian manifolds [1]. The present algorithms should be of interest in the development of these ideas. ## 4. Final remarks We have in this paper developed the general concept of canonical geometric integration on symmetric spaces. The important issue of numerical qualities of these algorithms and their applications to practical computational problems are left to sequel papers. Here we briefly remark on interesting questions to be pursued, and possible applications. Differential equations evolving on spheres are ubiquitous, with important examples from robotics, computational mechanics, rigid body dynamics and flows on planetary surfaces. 
Infinite dimensional spheres are natural domains for partial differential equations preserving energy or \(L_{2}\) norms, such as the Schrödinger and Korteweg-de Vries equations. It is interesting that the CSI algorithm on the \(n\)-sphere is not significantly more expensive than classical RK methods on the embedding of the sphere in \(\mathbb{R}^{n+1}\). Hyperbolic geometry is the foundation of special relativity. Our CHI algorithm is formulated in terms of Lorentz transformations and should be interesting for geometric integration of Maxwell equations and dynamical systems in special relativity, as it preserves the Lorentz invariance of the equations. Another interesting class of equations evolving on symmetric spaces is Lie-Poisson systems. Hamiltonian mechanics is naturally formulated on cotangent bundles of Lie groups. By symmetry reduction, these can often be reduced to Lie-Poisson systems evolving on the dual of Lie algebras. The natural coadjoint action foliates the dual Lie algebras into symplectic leaves, which in many cases are symmetric spaces. For example, spheres are symplectic leaves for \(\mathfrak{so}(n)^{*}\) and hyperbolic spaces for \(\mathfrak{sl}(n)^{*}\). Questions of symplectic algorithms for such problems are interesting, hard and largely open. We remark that the CSI integrator based on the implicit midpoint RK and the geodesic midpoint as base point for each step is _not_ the same as the celebrated symplectic spherical midpoint method [13]. However, the difference between these two methods is closely related to the difference between using the exponential and the Cayley transform as coordinate mappings. For use of Cayley maps, we refer to the pioneering work on symplectic integration on spheres by Lewis and Simo [11]. Symplectic integration on hyperbolic symplectic leaves is to our knowledge largely uncharted territory. Differential equations evolving on the space of symmetric positive definite matrices occur in different contexts. One example is in the inverse eigenvalue problem for SPD Toeplitz matrices, formulated as isospectral flows [4]. A different example is the tracing of nerve fibres in diffusion tensor imaging of the brain, where the voxels are symmetric positive definite diffusion tensors. We have in this paper not written out the example of Grassmann manifolds, yet another example of symmetric spaces. Differential equations evolving on Grassmann manifolds have been considered by several authors; see [2] and references therein.
2302.13131
Long-range Velocity Correlations from Active Dopants
One of the most remarkable observations in dense active matter systems is the appearance of long-range velocity correlations without any explicit aligning interaction (of e.g. Vicsek type). Here we show that this kind of long range velocity correlation can also be generated in a dense athermal passive system by the inclusion of a very small fraction of active Brownian particles. We develop a continuum theory to explain the emergence of velocity correlations generated via such active dopants. We validate the predictions for the effects of magnitude and persistence time of the active force and the area fractions of active or passive particles using extensive Brownian dynamics simulation of a canonical active-passive mixture. Our work decouples the roles that density and activity play in generating long range velocity correlations in such exotic non-equilibrium steady states.
Leila Abbaspour, Rituparno Mandal, Peter Sollich, Stefan Klumpp
2023-02-25T18:14:27Z
http://arxiv.org/abs/2302.13131v1
# Long-range Velocity Correlations from Active Dopants ###### Abstract One of the most remarkable observations in dense active matter systems is the appearance of long-range velocity correlations without any explicit aligning interaction (of e.g. Vicsek type). Here we show that this kind of long range velocity correlation can also be generated in a dense athermal passive system by the inclusion of a very small fraction of active Brownian particles. We develop a continuum theory to explain the emergence of velocity correlations generated via such active dopants. We validate the predictions for the effects of magnitude and persistence time of the active force and the area fractions of active or passive particles using extensive Brownian dynamics simulation of a canonical active-passive mixture. Our work decouples the roles that density and activity play in generating long range velocity correlations in such exotic non-equilibrium steady states. Introduction:Active matter systems are one of the best-known examples of non-equilibrium systems and are famous for their fascinating collective behaviour across a diverse range of length and time scales [1; 2], from the cytoskeleton to bacterial colonies, tissues, flocks of birds to animal herds. Systems of active Brownian particles (ABP), i.e. particles exhibiting self-propulsion, are a canonical example of active matter [3; 4; 5; 6; 7]. These systems exhibit two types of non-equilibrium pattern formation: in the presence of aligning interactions between the directions of the self-propelled motion of the particles, they show flocking, i.e. collective directed motion of groups of such particles [8; 9]. In the absence of such aligning interactions, they exhibit motility induced phase separation (MIPS), crucially without the need for attractive interactions [3; 4; 5]. It is important to note that this phase separation is not associated with any macroscopic order in the orientation of the self-propulsion directions of the particles. With external driving, on the other hand, such order can appear, see e.g. Ref. [10]. Very recently it has been discovered that while systems without aligning interactions show no macroscopic orientational ordering, there are spectacularly large spatial structures in the instantaneous velocity field, especially within the dense phase created by motility-induced phase separation (MIPS) [11; 12; 13; 14; 15]. It has been shown using both analytical calculations and numerical models (and confirmed in experiments in dense tissues [14]) that in such a scenario, a dense assembly of active particles generates long range velocity correlations in the large persistence time limit; the corresponding correlation length grows as a power law \(\sim\sqrt{\tau_{p}}\) with increasing persistence time \(\tau_{p}\)[11; 12; 13; 14; 15]. The emergence of such non-equilibrium velocity correlation has always been attributed to a (highly persistent) dense active matter system and thus taken to require a high density of active particles. These results raise the question of whether the above two conditions, of high density and high activity, can be decoupled. In particular, could long-range velocity correlations be generated in a dense system of _passive_ particles, by introducing activity only through a small fraction (e.g. much lower than the percolation density) of active particles? A related question of interest is how ordered non-equilibrium states are affected by the inclusion of defect particles (e.g. 
static defects or motile non-aligning agents known as dissenters) that do not participate in the processes that induce the order. The role of both quenched [16; 17; 18] and annealed [16; 18; 19] disorder in the context of Vicsek-like models has been investigated very recently and it has been shown that presence of both types of disorder tends to destroy the ordered flocking state [16; 17; 18; 19]. Our work explores a similar line of questions, but in a system where long-range velocity correlations appear without any explicit alignment interaction. We ask in particular whether the long-range order in the instantaneous velocity field seen in such systems is stabilised or destabilized by the inclusion of a large fraction of passive particles. Motivated by the above questions, in this paper we study mixtures of active and passive particles to explore whether inclusion of passive particles enhances or suppresses local orientational order and to see whether we can decouple the roles of activity and density in generating long-range velocity correlations. Using extensive particle-based simulation of an active-passive mixture, we demonstrate that velocity ordering is enhanced by an increasing density of passive particles, and show that long-range velocity correlations can be generated in an athermal passive medium by a tiny fraction of active insertions (dopants) as long as the medium is dense enough. We also construct an analytical theory to explain the physics of velocity correlations in a dense passive medium with active dopants. Our hydrodynamic theory predicts that the amplitude of the velocity correlations is proportional to \(f^{2}\) where \(f\) is the magnitude of the propulsion force acting on each active particle, proportional to \(\tau_{p}^{-3/4}\) where \(\tau_{p}\) is the persistence time of the active particles, and proportional to the density of active particles \(\phi_{a}\). The hydrodynamic theory also predicts that the correlation length \(\xi_{L}\) only depends on \(\tau_{p}\), as \(\sqrt{\tau_{p}}\)[11; 12; 13; 14; 15], not on the active forcing magnitude \(f\). We verify these theoretical predictions by performing further targeted simulations. The explicit form of the correlation function that we derive theoretically decouples the roles that density and activity play in generating long range velocity correlations in a non-equilibrium steady state. This insight will be useful in understanding long range ordering in e.g. dense passive colloidal systems driven by a few self-propelled Janus colloids, or assembly of dead bacteria churned up by few living ones. Particle-based Model:We consider a binary mixture [20] of passive and active Brownian particles (ABP) [3; 4; 5; 6; 7] moving in two dimensions and occupying area fractions of \(\phi_{p}\) and \(\phi_{a}\), respectively. The dynamical evolution of the particle positions is described by the overdamped equations of motion: \[\gamma\dot{\mathbf{r}}_{i}=\mathbf{F}_{i}+f\mathbf{n}_{i}\,\Delta_{i}(\mathcal{ A}) \tag{1}\] where \(\mathbf{r}_{i}\) is the position vector of the \(i\)-th particle and \(\gamma\) is the constant drag coefficient governing the friction force acting on each particle. The factor \(\Delta_{i}(\mathcal{A})\) (\(\Delta_{i}=1\) for the active particles and \(\Delta_{i}=0\) for the passive particles) Figure 1: Schematic illustration of (a) a dense assembly of active particles, and (b) a dense mixture of active (blue) and passive (red) particles. 
Snapshots of the system with each particle coloured according to the orientation of its instantaneous velocity vector, for (c) \(\phi_{a}=0.7\) (\(\phi_{p}=0\)), (d) \(\phi_{a}=0.01\), \(\phi_{p}=0.69\); both systems show strong emergent velocity correlations. restricts the active forces \(f\mathbf{n}_{i}\) to the particles in the subset \(\mathcal{A}\) of active particles. The orientation vectors of the active forces are \(\mathbf{n}_{i}=(\cos\theta_{i},\sin\theta_{i})\), and the orientation angles \(\theta_{i}\) of the active forcing follow the dynamics \[\dot{\theta}_{i}=\sqrt{\frac{2}{\tau_{p}}}\zeta_{i}\quad\text{for}\quad i\in\mathcal{A} \tag{2}\] where the noise \(\zeta_{i}\) has zero mean and time correlations \(\langle\zeta_{i}(t)\zeta_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime}).\) In Eq. 2, \(\tau_{p}\) is the persistence time. All particles in the system interact only through steric interactions described by the forces \(\mathbf{F}_{i}=-\nabla_{i}U\) where \(U\) is a repulsive WCA (Weeks-Chandler-Andersen) interaction potential [21] (see supplementary information for the details of the potential). Velocity correlations in active-passive mixture:First we reproduce the long range velocity correlations in a completely active system (see Fig.1(a) for a schematic and Fig.1(c) for a snapshot of the system showing the long range ordering of the instantaneous velocities). This effect has been reported before in the context of different _dense active matter_ systems, both in simulations and in experiments [11; 12; 13; 14; 15]. Figure 2: Snapshot of the system (binary mixture of active and passive particles) for various combinations of values of the area fractions of active and passive particles, \(\phi_{a}\) and \(\phi_{p}\), respectively. The snapshots at the bottom right corner show that a small fraction of active inclusions can cause long-range velocity correlations in a dense athermal and almost entirely passive system. Here we want to explore a different scenario (see Fig.1(b) for a schematic) and test whether a dense _passive_ system that is driven by just a few active particles can show similar ordering. Indeed, Fig.1(d) shows remarkably similar order in such a dense passive system that is driven by a very small fraction (\(\phi_{a}=0.01\)) of active Brownian particles. To explore this further, we also ran simulations for combinations of different fractions of active and passive particles (see Fig. 2 for snapshots). Strong local velocity correlations are seen to emerge as long as the total density \(\phi_{\rm tot}=\phi_{a}+\phi_{p}\) is high enough and there exists a non-zero fraction of active particles. To get a better understanding of this velocity ordering in an active-passive mixture we developed a hydrodynamic theory that we describe in the next paragraph. Theory:To derive the correlation of the hydrodynamic velocity field of the system, we extend the approach used by Henkes _et al._ for a dense active system [14] to the case of a passive system with active dopants. We consider the entire collection of passive particles as a dense medium having a smooth velocity field, with the active particles as random point-like defects that are the sources of a force density of the form \[{\bf F}_{i}({\bf r},t)=a^{2}f{\bf n}_{i}(t)\delta({\bf r}-{\bf r}_{i}(t)) \tag{3}\] for the \(i\)-th active particle.
Here \(a\) is a microscopic length scale given by the particle size, \(f\) is the magnitude of the propulsion force acting on each active particle as before, and \({\bf n}_{i}\) is the unit vector associated with the orientation of this active force. Assuming now that in the large persistence time limit and at sufficiently high density, the active particles deform the elastic solid-like medium by pushing other particles without changing their positions significantly, we can evaluate the correlation between the forces from active particles \(i\) and \(j\) in Fourier space \(({\bf q},\omega)\) as \[\langle{\bf F}_{i}({\bf q},\omega)\cdot{\bf F}_{j}({\bf q^{\prime}},\omega^{ \prime})\rangle=\frac{4\pi\tau_{p}a^{4}f^{2}}{1+(\tau_{p}\omega)^{2}}e^{i{\bf q }\cdot{\bf r}_{i}}e^{i{\bf q^{\prime}}\cdot{\bf r}_{j}}\delta_{ij}\delta( \omega+\omega^{\prime}) \tag{4}\] where the frequency dependence arises from the dynamics of the active force orientation \({\bf n}_{i}(t)\)[14]. We then sum over all the particles (over the indices \(i\) and \(j\)) and also over the steady state probability distribution of the active particles' positions \({\bf r}_{i}\), which we take as uniform, to arrive at the total force correlator \[\langle{\bf F}({\bf q},\omega)\cdot{\bf F}({\bf q^{\prime}},\omega^{\prime}) \rangle=\frac{N_{a}a^{2}f^{2}(2\pi)^{3}}{N}\delta({\bf q}+{\bf q^{\prime}}) \frac{2\tau_{p}}{1+(\tau_{p}\omega)^{2}} \tag{5}\] Figure 3: (a) Scaled velocity-velocity spatial auto correlation as a function of spatial distance \(r\), for different active force magnitudes \(f\) as indicated in the legend, and \(\tau_{p}=3\) (b) The correlation length \(\xi_{L}\) as a function of the active force for different persistence times \(\tau_{p}\) as shown; the plot demonstrates that, at fixed persistence time, the correlation length is independent of the magnitude of the active force. where \(N_{a}\) is the number of active particles and \(N\) the total number of particles in the system. After taking an angular average, the form of the velocity correlation function can be derived [14] (see supplementary information for the full calculation and the final form of the correlation function). Its behaviour can be approximated at large distances as \[C(\mathbf{r})\approx\left(\frac{\phi_{a}}{\phi_{a}+\phi_{p}}\right)\left(\frac{a ^{2}f^{2}}{4\pi\zeta^{2}}\sqrt{\frac{\pi}{2r}}\right)\frac{1}{\xi_{L}}^{3/2}e^ {-r/\xi_{L}} \tag{6}\] where \(\xi_{L}=\left(\frac{(B+\mu)\tau_{p}}{\zeta}\right)^{1/2}\), \(r\) is the spatial distance, \(\zeta\) is the friction coefficient as before, and \(B\) and \(\mu\) are the bulk and shear moduli of the overall medium. These are expected to be dependent only on the total area fraction \(\phi_{\rm tot}=\phi_{a}+\phi_{p}\) of active and passive particles. Therefore we now have a testable prediction about the equal time velocity auto correlation function from our hydrodynamic theory in terms of the control parameters \(f,\tau_{p},\phi_{a},\phi_{p}\) etc. Comparison with Theory:To validate the predictions of our theory we ran further simulations to test explicitly the effects of varying active force magnitude \(f\), persistence time \(\tau_{p}\), and area fractions of active \(\phi_{a}\) and passive particles \(\phi_{p}\), and compared the correlation functions from those simulations with those calculated using the hydrodynamic theory. Eq. 
(6) predicts that the prefactor of the velocity-velocity spatial auto-correlation function will scale as \(f^{2}\) when we vary the active force magnitude \(f\), without any associated variation in the correlation length. Therefore the values of the scaled auto-correlation function \(C(r)/f^{2}\) are expected to collapse into a single curve for different values of the active forcing \(f\) as long as the other parameters are kept constant. Fig. 3(a) clearly shows that the scaled correlation functions for different magnitudes of the active force do indeed collapse on top of each other. In Fig. 3(b) we show the correlation length (\(\xi_{L}\)) as a function of the active force \(f\) for different persistence times (\(\tau_{p}\)), which provides further evidence that the correlation length is independent of the magnitude of the active force when the persistence time is kept constant. To shed light on the effects of the persistence time scale we vary \(\tau_{p}\) next, keeping the both area fractions \((\phi_{a},\phi_{p})\) and the active forcing magnitude \(f\) constant. Our hydrodynamic theory (see Eq. 6) suggests that the prefactor of the velocity-velocity spatial auto-correlation function scales as \(\tau_{p}^{-3/4}\) (due to the dependence of \(\xi_{L}\) on \(\tau_{p}\)), apart from the standard exponential dependence on \(r/\xi_{L}\). Indeed as Fig. 4(a) shows the simulation data points for different persistence times \(\tau_{p}\) for a given active force \(f\) fall nicely on the same curve once we scale \(C(r)\) appropriately, i.e. \(C\) by \(\tau_{p}^{-3/4}\) and \(r\) by \(\xi_{L}\). Fig. 4(b) further indicates that the correlation length \(\xi_{L}\) grows as the square root of Figure 4: (a) The scaled equal-time velocity-velocity correlation as a function of spatial distance \(r\) for different persistence times \(\tau_{p}\) as indicated, for active force magnitude \(f=0.25\). (b) The correlation length \(\xi_{L}\) as a function of the persistence time \(\tau_{p}\) for different values of the active force magnitude \(f\) shows that the correlation length grows as a power law (with exponent \(\frac{1}{2}\)) with the persistence time \(\tau_{p}\) and this behaviour is independent of the magnitude of the active force \(f\), which is shown in the legend. persistence time \(\tau_{p}\) regardless of the magnitude of the active force. This is also consistent with our theory and the earlier studies [11; 12; 13; 14; 15]. We finally explore the dependence on the area fractions of passive and active particles \(\phi_{p}\) and \(\phi_{a}\), respectively, while keeping the active forcing parameters \((f,\tau_{p})\) constant. The theory suggests that apart from the linear dependence on the fraction of active particles \(\phi_{a}\) there is no separate dependence on \(\phi_{a}\), i.e. all other density dependences of the correlation function \(C(r)\) appear only via the total area fraction \(\phi_{\rm tot}=\phi_{a}+\phi_{p}\) of the binary mixture. Our simulations, which involve different mixture compositions, confirm this prediction in Fig. 5 (a) where we scale the correlation function by the area fraction of active particles \(\phi_{a}\) for a fixed value of the total density \(\phi_{\rm tot}\). Fig. 5 (b) shows that the system has practically the same correlation length regardless of the fraction \(\phi_{a}\) of active particles (or the fraction \(\phi_{p}\) of passive particles) when the total area fraction \(\phi_{\rm tot}\) is sufficiently high and kept constant. 
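The quantities compared above are straightforward to reproduce numerically. The following sketch integrates the overdamped dynamics of Eqs. (1)-(2) for a dense active-passive mixture and then measures the equal-time velocity-velocity correlation \(C(r)\), extracting \(\xi_{L}\) from an exponential-tail fit of the form assumed in Eq. (6). All numerical values (box size, time step, densities, binning, fit window) are illustrative choices rather than the parameters used in the paper, and the \(O(N^{2})\) force evaluation is kept deliberately simple.

```python
import numpy as np

# Illustrative parameters (not the values used in the paper)
N, frac_active = 400, 0.05          # total particles and fraction of active dopants
L, a, eps = 20.0, 1.0, 1.0          # box size, particle diameter, WCA energy scale
gamma, f, tau_p = 1.0, 0.25, 3.0    # drag, active force magnitude, persistence time
dt, n_steps = 1e-3, 5000

rng = np.random.default_rng(1)
r = rng.uniform(0.0, L, (N, 2))
theta = rng.uniform(0.0, 2 * np.pi, N)
active = rng.random(N) < frac_active                 # Delta_i(A) in Eq. (1)

def wca_forces(r):
    """Pairwise WCA repulsion with the minimum-image convention (naive O(N^2))."""
    d = r[:, None, :] - r[None, :, :]
    d -= L * np.round(d / L)
    d2 = (d ** 2).sum(-1) + np.eye(N)                # avoid dividing by zero on the diagonal
    s6 = (a ** 2 / d2) ** 3
    mag = np.where(d2 < 2 ** (1 / 3) * a ** 2, 24 * eps * (2 * s6 ** 2 - s6) / d2, 0.0)
    return (mag[..., None] * d).sum(axis=1)

for _ in range(n_steps):                             # overdamped Euler update of Eq. (1)
    n_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    r_new = (r + dt / gamma * (wca_forces(r) + f * n_hat * active[:, None])) % L
    v = (r_new - r - L * np.round((r_new - r) / L)) / dt   # instantaneous velocities
    r = r_new
    theta += np.sqrt(2 * dt / tau_p) * rng.standard_normal(N) * active   # Eq. (2)

def velocity_correlation(r, v, n_bins=50):
    """Equal-time velocity-velocity correlation C(r), normalised by <|v|^2>."""
    d = r[:, None, :] - r[None, :, :]
    d -= L * np.round(d / L)
    dist = np.sqrt((d ** 2).sum(-1))
    dots = v @ v.T
    iu = np.triu_indices(N, k=1)
    edges = np.linspace(0.0, L / 2, n_bins + 1)
    idx = np.digitize(dist[iu], edges) - 1
    C = np.array([dots[iu][idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(n_bins)])
    return 0.5 * (edges[1:] + edges[:-1]), C / (v ** 2).sum(-1).mean()

r_mid, C = velocity_correlation(r, v)
# xi_L from the tail, assuming C ~ r^(-1/2) exp(-r / xi_L) as in Eq. (6)
m = (r_mid > 2.0) & (r_mid < 8.0) & (C > 0)
xi_L = -1.0 / np.polyfit(r_mid[m], np.log(C[m] * np.sqrt(r_mid[m])), 1)[0]
```

Running the same loop for several values of \(f\), \(\tau_{p}\), \(\phi_{a}\) and \(\phi_{p}\) and repeating the fit reproduces, qualitatively, the scaling collapses shown in Figs. 3-5.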
Conclusion:In this article we have demonstrated that long-range velocity correlations (which until now have only been observed in dense active matter systems [11; 12; 13; 14; 15]) can be generated in a dense athermal passive system by including a very small fraction of persistent active Brownian particles. This observation conceptually decouples the roles played by density and activity parameters in generating such non-equilibrium ordering effects. Our results also extend the discussion on whether the inclusion of disorder can increase order in a system or whether, conversely, it tends to destroy order [16; 17; 18; 19]. We started by providing evidence that with a very small amount of active inclusions or dopants, an otherwise passive, dense athermal system can exhibit long-range velocity correlations similar to a pure dense assembly of active particles. We explored the degree of velocity correlation for different numbers of active and passive particles in such a mixture. We then derived a hydrodynamic theory to calculate the equal-time velocity auto-correlation function in terms of the microscopic system parameters such as \(f\) and \(\tau_{p}\). This theory made testable predictions that we confirmed via further molecular dynamics simulations. We examined the impact of different parameters on the velocity correlations and found good agreement between the simulation results and the hydrodynamic theory. Specifically, we found that the correlation length depends on density only through the overall area fraction of active and passive particles and grows as \(\sqrt{\tau_{p}}\) with the persistence time of self-propulsion. The latter result is in agreement with previous findings on purely active systems [11; 12; 13; 14; 15]. Our predictions and results can be further tested both in simulations [20; 22; 23] and in experiments, e.g. on mixtures of microbes and passive colloidal particles [24], assemblies of active and passive colloids [25; 26], mixtures of mobile and immobile bacteria [27], or active granular mixtures [28; 29]. Understanding the decoupling of density and activity makes it possible not only to reproduce long-range velocity correlations in different active-passive mixtures or assemblies but also paves the way for designing and controlling active matter for practical purposes, e.g. in the context of transport and mixing.

Figure 5: (a) The scaled velocity-velocity spatial auto-correlation function as a function of spatial distance \(r\) for different combinations of \(\phi_{a}\) and \(\phi_{p}\) for a fixed value of \(\phi_{\rm tot}=0.95\). (b) Data for the correlation length are consistent with \(\xi_{L}\) being constant across the same combinations of \(\phi_{a}\) and \(\phi_{p}\); the color scheme is the same as in (a).

## Acknowledgement This research was conducted within the Max Planck School Matter to Life, supported by the German Federal Ministry of Education and Research (BMBF) in collaboration with the Max Planck Society. The simulations were run on the GoeGrid cluster at the University of Göttingen, which is supported by the DFG (grant INST 186/1353-1 FUGG) and MWK Niedersachsen (grant no. 45-10-19-F-02). This project has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement no. 893128.
2310.15989
Detecting the phase transition in a strongly-interacting Fermi gas by unsupervised machine learning
We study the critical temperature of the superfluid phase transition of strongly-interacting fermions in the crossover regime between a Bardeen-Cooper-Schrieffer (BCS) superconductor and a Bose-Einstein condensate (BEC) of dimers. To this end, we employ the technique of unsupervised machine learning using an autoencoder neural network which we directly apply to time-of-flight images of the fermions. We extract the critical temperature of the phase transition from trend changes in the data distribution revealed in the latent space of the autoencoder bottleneck.
D. Eberz, M. Link, A. Kell, M. Breyer, K. Gao, M. Köhl
2023-10-24T16:38:09Z
http://arxiv.org/abs/2310.15989v1
# Detecting the phase transition in a strongly-interacting Fermi gas by unsupervised machine learning ###### Abstract We study the critical temperature of the superfluid phase transition of strongly-interacting fermions in the crossover regime between a Bardeen-Cooper-Schrieffer (BCS) superconductor and a Bose-Einstein condensate (BEC) of dimers. To this end, we employ the technique of unsupervised machine learning using an autoencoder neural network which we directly apply to time-of-flight images of the fermions. We extract the critical temperature of the phase transition from trend changes in the data distribution revealed in the latent space of the autoencoder bottleneck. An ensemble of attractively-interacting fermions exhibits a phase transition to a superfluid state below a critical temperature \(T_{\mathrm{C}}\). The exact temperature at which the phase transition occurs depends on the microscopic details, such as inter-particle interactions and correlations. For weak attractive interactions, in the Bardeen-Cooper-Schrieffer (BCS) regime, the phase transition is governed by the opening of a gap due to Cooper instability near the Fermi level. The critical temperature in this regime decays exponentially with decreasing interaction strength. If the system supports a dimer bound state between two fermions, the ensemble can form a molecular Bose-Einstein condensate (BEC). The critical temperature of this state converges towards the value of a weakly repulsive BEC for decreasing interaction strength. These two regimes are known as the limits of the BEC-BCS crossover, connected by the unitarity regime around the point of diverging scattering length. In the regime of strong interaction strength around unitarity the determination of the critical temperature is a field of ongoing research [1; 2; 3; 4; 5; 6; 7; 8; 9]. Detecting the phase transition over a wide range of interactions has been difficult. Only on the BEC side of the crossover, the conventional technique of detecting the bimodal momentum distribution of the dimers directly reveals the condensate [10; 11; 12]. In contrast, at unitarity a measurement of the equation of state and thermodynamic quantities unveiled the critical temperature [13]. On the BCS side of the crossover, the Cooper pairs break upon release from the trap and, therefore, the so-called rapid-ramp technique has been developed to convert these pairs to tightly-bound dimers [14; 15; 16; 17]. Whether or not the rapid-ramp technique closely reflects the situation of the trapped gas depends crucially on the adiabaticity of the ramp and an accurate verification of this is very difficult. A direct detection of the superfluid signature in the momentum distribution of the fermions, on the other hand, is obscured by finite temperature, collisions during ballistic expansion and the shape of the trapping potential. Recently, we have demonstrated that using supervised learning of deep neural networks, the condensate fraction and hence the critical temperature over a wide range of interactions can be detected directly from the momentum distribution of the fermions [18]. However, this method still relies on the rapid-ramp technique for labelling the training data. In this work, we measure the critical temperature of the superfluid phase transition by employing unsupervised machine learning directly on time-of-flight images. Unsupervised machine learning is a technique which does not require labelling the data during training of the network and hence is unbiased. 
This is a significant advantage because the generation of labels is a potential source of error. To this end, we employ a deep neural network as an autoencoder as illustrated in Figure 1. The autoencoder comprises an encoder and decoder network as well as a bottleneck layer in the middle, with the input of the encoder being of the same dimension and shape as the output of the decoder. The bottleneck layer connects the input and output layers and, preferably, has much lower dimensionality than both input and output.

Figure 1: Architecture and training of an autoencoder network. By keeping the number of neurons in the bottleneck low and training the network to reproduce its input, a low-dimensional representation of the input data can be generated. The output of the bottleneck can then be accessed to search for features in the data structure.
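The architecture is not specified further in this excerpt, so the PyTorch sketch below only illustrates the generic structure just described: a convolutional encoder, a low-dimensional bottleneck (two latent units here, an assumed value), and a decoder trained to reproduce the input time-of-flight image. After training, the bottleneck activations are collected so that trend changes can be searched for as a function of temperature. This is a minimal illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Illustrative autoencoder: 64x64 single-channel images -> 2D latent space."""
    def __init__(self, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),                   # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(256, 1, 64, 64)        # placeholder for time-of-flight images
for epoch in range(10):
    recon, _ = model(images)
    loss = loss_fn(recon, images)          # train the network to reproduce its input
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    _, latent = model(images)              # inspect latent[:, 0], latent[:, 1] vs. temperature
```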
2306.06975
A language-inspired machine learning approach for solving strongly correlated problems with dynamical mean-field theory
We present SCALINN -- Strongly Correlated Approach with Language Inspired Neural Network -- as a method for solving the Anderson impurity model and reducing the computational cost of dynamical mean-field theory calculations. Inspired by the success of generative Transformer networks in natural language processing, SCALINN utilizes an in-house modified Transformer network in order to learn correlated Matsubara Green's functions, which act as solutions to the impurity model. This is achieved by providing the network with low-cost Matsubara Green's functions, thereby overcoming the computational cost of high accuracy solutions. Across different temperatures and interaction strengths, the performance of SCALINN is demonstrated in both physical observables (spectral function, Matsubara Green's functions, quasi-particle weight), and the mean squared error cost values of the neural network, showcasing the network's ability to accelerate Green's function based calculations of correlated materials.
Zelong Zhao, Hovan Lee, George Booth, Weifeng Ge, Cedric Weber
2023-06-12T09:17:07Z
http://arxiv.org/abs/2306.06975v2
A language-inspired machine learning approach for solving strongly correlated problems with dynamical mean-field theory ###### Abstract We present SCALINN -- Strongly Correlated Approach with Language Inspired Neural Network -- as a method for solving the Anderson impurity model and reducing the computational cost of dynamical mean-field theory calculations. Inspired by the success of generative Transformer networks in natural language processing, SCALINN utilizes an in-house modified Transformer network in order to learn correlated Matsubara Green's functions, which act as solutions to the impurity model. This is achieved by providing the network with low-cost Matsubara Green's functions, thereby overcoming the computational cost of high accuracy solutions. Across different temperatures and interaction strengths, the performance of SCALINN is demonstrated in both physical observables (spectral function, Matsubara Green's functions, quasi-particle weight), and the mean squared error cost values of the neural network, showcasing the network's ability to accelerate Green's function based calculations of correlated materials. ## I Introduction In condensed matter physics, the energy scales of electron itinerancy and electron-electron Coulomb interactions often define the level of theory required to adequately describe the phenomena of a system. These two energy scales overlap for strongly correlated systems, and therefore, the problem cannot be simplified as the perturbative expansion of one property against the static backdrop of another. The competing effects of itineracy and interaction-induced localization necessitates treating these properties on equal footing. This competition and corresponding rise of complexity is characterized by the Hubbard model [1], from which a rich collection of physical phenomena emerges. This simple model clearly articulates the aforementioned energy overlap, and retains an important position in condensed matter theory to this day, particularly in research into high temperature superconductivity [2; 3; 4], cold-atom optical trapping [5; 6; 7], and topologically ordered phases [8; 9; 10] among others. The Hubbard model, although simple in formulation, has so far only been analytically solved for the one-dimensional case. This necessitates numerical approaches to solve its important two- and three-dimensional analogues. Dynamical mean-field theory [11] (DMFT) is one such numerical approach that is designed to solve the Hubbard model through an exact mapping at infinite dimensions onto a self-consistent Anderson impurity model (AIM) [12; 13]. In this scheme, the object to be converged is the Green's function: the descriptor of creation, propagation, and subsequent annihilation of an electron or hole in a many-body interacting system. Various methods have been developed to calculate the Green's functions of the AIM within the self-consistent cycle of a DMFT calculation. However, each of these solvers have their own limitations. Examples of these solvers include: The Hubbard-I (HI) solver [1], which assumes no electron itinerancy, and is therefore an approximation that is only reasonable for highly localized systems. The iterative perturbation theory (IPT) solver [11] and its third order extension, which contains all first, second (and optionally third) order irreducible diagrams in the proper self energy. 
This is an accurate solver in the low correlation regime, but does not generalize well beyond systems that are half-filled and relatively low interaction strength. Continuous time quantum Monte Carlo (CTQMC) methods [14; 15; 16; 17] splits the Hamiltonian of the system into two parts (generally hybridization or interaction), and expands the full partition function as powers of one such part, where these powers are stochastically sampled. While formally exact, this method is nonetheless burdened with the sign problem, random errors, and the requirement to analytically continue the resulting Green's function. Finally, the exact diagonalization (ED) solver [18; 19; 20] which computes the eigenvalues of the AIM Hamiltonian directly, and without approximation in the physics. However, this approach becomes exponentially prohibitive as an increasing number of impurity or bath orbitals are taken into consideration. This therefore limits the number of orbitals in the calculation, and introduces a finite bath discretization error of the hybridization into the resulting Green's function. While this represents an incomplete list of all DMFT solvers that have been considered to date, it is clear that deficiencies remain, and research into improved AIM solvers is still a highly active area of research. A potential solution could well emerge from another discipline; machine learning. Due to advancements in hardware accelerators, improvements in memory capac ity, and access to increasingly large databases, deep learning has dominated the field of machine learning in the last decade [21; 22; 23]. These developments have given hope to materials scientists; methods to overcome long standing bottlenecks in condensed matter may be in reach [24; 25]. A far-from-complete list of these machine learning driven efforts include: predictions of protein structures [26], the learning of exchange-correlation functionals for density functional theory [27], the challenge of ill-conditioned analytic continuation of dynamical quantities [28], or previous efforts aimed at tackling the same problem of DMFT solvers [29; 30; 31]. In this work, we develop and demonstrate the capabilities of SCALINN -- Strongly Correlated Approach with Language Inspired Neural Network -- which is based on the Transformer architecture [23]. SCALINN predicts the Green's functions of strongly correlated systems as ordered sequences in the Matsubara domain, this is done in order to reliably solve single-impurity Anderson models (SIAM) within self-consistent DMFT calculations. Rather than encoding the SIAM Hamiltonian directly into the network as input parameters, the characteristics of the SIAM are instead encoded as (potentially multiple) computationally cheap input Green's functions. Optionally, additional characteristic system parameters are also supplemented to the network input, in order to generate more accurate estimates of the target Green's function from the decoder. The Transformer framework was chosen due to: 1) its potentially infinite memory span -- Green's functions as continuous sequential frequency data exhibits non-local dependence between different frequency points, and therefore requires the memory provided by the Transformer network, 2) the parallelism enabled by Transformer, and 3) in the absence of word embedding, the query, key and value method provides a rich representation between Green's function frequency sequence entries. 
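For readers less familiar with the query/key/value mechanism referred to in point 3), a single-head scaled dot-product self-attention over an embedded Matsubara sequence is sketched below. This illustrates the generic mechanism of Ref. [23], not the specific blocks used in SCALINN, and all dimensions are placeholder values.

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a sequence.

    x: (batch, N_w, d_model) embedded sequence; w_q, w_k, w_v: (d_model, d_k)
    projection matrices producing queries, keys and values.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v     # each position mixes information from all others

d_model, d_k, n_w = 32, 8, 64
x = torch.randn(4, n_w, d_model)                 # embedded Matsubara sequence (placeholder)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)           # shape (4, n_w, d_k)
```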
Through this method, we find that the Transformer network is able to learn the mapping between these cheap, low-level SIAM input Green's functions, and the fully correlated output Green's functions. This enables reliable predictions within DMFT iterations in a non-perturbative manner. The performance of SCALINN at different values of temperature and interaction strength is showcased in this work, along with predictions of spectral functions, quasi-particle weights and Matsubara Green's functions. More technical parameters, including training and testing errors of the network across different hyperparameters, are given in the appendix. ## II Theory ### DMFT Primer In this work, the local propagator of the Hubbard model is self-consistently mapped onto the propagator of an SIAM via the DMFT procedure [13]. That is, the local propagator \(G^{\text{local}}\) is related to the momentum \(\mathbf{k}\)-dependent lattice Green's functions of the Hubbard model via: \[G^{\text{local}}=\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}^{N_{\mathbf{k}}}G_{\mathbf{k}}^{\text{lattice}}\,, \tag{1}\] where \(N_{\mathbf{k}}\) is the number of \(\mathbf{k}\) points. \(G^{\text{local}}\) can then be calculated by equating it to the impurity propagator \(G_{\text{imp}}\) of the SIAM. Expanding on this, the SIAM Hamiltonian is given as: \[\begin{split}\hat{H}_{\text{SIAM}}=&\sum_{\sigma}\varepsilon_{d}\hat{d}_{\sigma}^{\dagger}\hat{d}_{\sigma}+U\hat{d}_{\uparrow}^{\dagger}\hat{d}_{\uparrow}\hat{d}_{\downarrow}^{\dagger}\hat{d}_{\downarrow}+\sum_{\mathbf{k},\sigma}\varepsilon_{\mathbf{k}}\hat{c}_{\mathbf{k}\sigma}^{\dagger}\hat{c}_{\mathbf{k}\sigma}\\ &+\sum_{\mathbf{k},\sigma}V_{\mathbf{k}}\left(\hat{d}_{\sigma}^{\dagger}\hat{c}_{\mathbf{k}\sigma}+\hat{c}_{\mathbf{k}\sigma}^{\dagger}\hat{d}_{\sigma}\right)\,,\end{split} \tag{2}\] where \(\varepsilon_{d}\) is the energy level of the impurity site, \(\hat{d}_{\sigma}^{(\dagger)}\) annihilates (creates) a spin \(\sigma\) electron at the impurity, and \(U\) determines the energy of the Coulomb interaction between opposite-spin electrons at the impurity. The energy levels of the momentum \(\mathbf{k}\)-dependent bath sites are represented as \(\varepsilon_{\mathbf{k}}\), and their corresponding annihilation (creation) operators are \(\hat{c}_{\mathbf{k}\sigma}^{(\dagger)}\). Lastly, electrons can hop between the impurity site and the bath sites with hybridization strength \(V_{\mathbf{k}}\). Starting and ending at the impurity orbital, the characteristic propagation of a single fermion across any, all, and none of the bath orbitals is described by: \[G_{\text{imp}}(i\omega_{n})=[i\omega_{n}+\mu-\varepsilon_{d}-\Sigma(i\omega_{n})-\Delta(i\omega_{n})]^{-1}\,, \tag{3}\] where \(i\omega_{n}\) are the Matsubara frequency points defined by the system temperature, and \(\mu\) is the chemical potential which controls the number of electrons within the SIAM. The hybridization \(\Delta(i\omega_{n})\) defines the single-particle coupling of the impurity to the bath sites. Lastly, the proper self energy \(\Sigma(i\omega_{n})\) is obtained by matching \(G_{\text{imp}}\) with \(G^{\text{local}}\), and contains all information about how the \(U\)-induced correlations influence the impurity Green's function within each DMFT cycle. Including these correlation effects in turn modifies the description of the itinerancy in the original system.
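As a concrete illustration of Eq. (3) and of the Dyson relation used below, the snippet evaluates the impurity Green's function on a Matsubara grid for a given self-energy, with the hybridization taken in the discretized-bath form that anticipates Eq. (4). All parameter values are placeholders, not fits to any physical system.

```python
import numpy as np

beta, n_w = 10.0, 64                               # inverse temperature, Matsubara points
iw = 1j * (2 * np.arange(n_w) + 1) * np.pi / beta  # i * omega_n

# Placeholder SIAM parameters (not taken from the paper)
eps_d, mu = -1.0, 0.0
V_p = np.array([0.4, 0.3, 0.3])                    # impurity-bath couplings
eps_p = np.array([-0.5, 0.0, 0.5])                 # bath energies

def delta(iw):
    """Discretized hybridization, anticipating Eq. (4)."""
    return (V_p[None, :] ** 2 / (iw[:, None] - eps_p[None, :])).sum(axis=1)

def g_imp(iw, sigma):
    """Impurity Green's function, Eq. (3)."""
    return 1.0 / (iw + mu - eps_d - sigma - delta(iw))

g0 = g_imp(iw, sigma=0.0)                          # non-interacting limit, Sigma = 0
sigma_test = 0.2 * np.ones_like(iw)                # toy static self-energy
# Dyson equation, Eq. (5): Sigma = 1/G^0 - 1/G_imp recovers sigma_test exactly
sigma_back = 1.0 / g0 - 1.0 / g_imp(iw, sigma_test)
assert np.allclose(sigma_back, sigma_test)
```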
As a result of this, the correlated Green's function of the SIAM, hybridization function \(\Delta(i\omega_{n})\) and self-energy \(\Sigma(i\omega_{n})\) all have to be updated multiple times until self-consistency of these quantities is achieved. As an example, in order to obtain \(G_{\text{imp}}\) from \(\hat{H}_{\text{SIAM}}\), ED is within a group of wave function approaches that can only be applied on systems with finite-dimensional Hilbert spaces. As such, it is necessary to truncate the continuous dispersion of \(\Delta(i\omega_{n})\), which is achieved with a finite bath discretization, as: \[\Delta^{\text{bath}}(i\omega_{n})\approx\sum_{p=1}^{N_{\mathbf{k}}}V_{p}^{2}/ (i\omega_{n}-\epsilon_{p})\,, \tag{4}\] with number of bath sites \(N_{b}\), coupling of impurity to each bath site \(V_{p}\), and bath energies \(\epsilon_{p}\). These parameters are typically fit via numerical techniques [32]. Since the only term in eq. (3) that describes electron-electron interactions is \(\Sigma(i\omega_{n})\), \(G_{\text{imp}}\) can be represented via the Dyson equation in terms of a free Green's function \(G^{0}(i\omega_{n})\) that does not contain electron-electron interactions: \[(G_{\text{imp}}^{\text{ED}}(i\omega_{n}))^{-1}=\underbrace{[i\omega_{n}+\mu- \varepsilon_{d}-\Delta(i\omega_{n})]^{-1}}_{G^{0}(i\omega_{n})}-\Sigma(i\omega _{n})\,. \tag{5}\] The DMFT loop can then be written in terms of the ED impurity Green's function \(G^{\text{ED}}(i\omega_{n})\) as: \[G_{\text{imp}}^{\text{ED}}(i\omega_{n}) =\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}G_{\mathbf{k}}^{\text{ lattice}}(i\omega_{n})\] \[=\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}[i\omega_{n}+\mu- \varepsilon_{\mathbf{k}}-\Sigma(i\omega_{n})]^{-1}\] \[=\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}[\varepsilon_{d}+ \Delta^{\text{bath}}(i\omega_{n})-\varepsilon_{\mathbf{k}}+G^{\text{ED}}(i \omega_{n})^{-1}]^{-1} \tag{6}\] where \(G^{\text{ED}}(i\omega_{n})\) is present on both sides of the equation, and is updated self-consistently at each iteration. ### SCALINN Overview SCALINN is a version of the original Transformer model by Vaswani et al. [23], modified such that the inputs and output represent series of Matsubara Green's function data. The target Green's function (illustrated in fig. 1.a) is the 7-bath ED solution to the SIAM, relating to a single DMFT iteration. While the input is one or more computationally cheaper Green's function solutions to the same SIAM model. In this way, we can demonstrate the potential of this approach in overcoming the computational expense of ED solvers in DMFT, as well as potentially the bath discretization error inherent in ED solvers. We consider two different modes of operation to test the viability and validity of our approach: In the first, we consider the potential to overcome bath discretization error in ED solvers. This is tested by discarding a number of discretized baths from the 7-bath ED system, and hence the bath orbitals are 'truncated' (shown in fig. 1.b). Given the exponential scaling in calculations with respect to bath size, this method would provide substantial benefit in computational cost. Three of these truncated systems were created, such that all of the 7 bath orbitals from the full system is present in at least one of truncated system. This is a similar motivation to the 'distributed exact diagonalization' approach [33]. 
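A sketch of how the three truncated-bath inputs could be assembled from a 7-bath parameter set is given below. The grouping follows the description given later in the Results section (each subset keeps the zero-energy orbital plus one particle-hole symmetric pair); the numerical parameter values are placeholders, and the small ED solves that would turn each subset into a truncated input Green's function are not shown.

```python
import numpy as np

# Placeholder 7-bath parameters: particle-hole symmetric, sorted, one orbital at zero energy
eps_full = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
V_full   = np.array([ 0.3,  0.4,  0.5, 0.6, 0.5, 0.4, 0.3])

def truncated_baths(eps_full, V_full):
    """Build 3-bath subsets: the zero-energy orbital plus one symmetric (-e, +e) pair.

    Assumes the bath energies are sorted with the zero-energy orbital in the middle,
    so every orbital of the full system appears in at least one subset.
    """
    zero = np.argmin(np.abs(eps_full))
    pairs = [(i, len(eps_full) - 1 - i) for i in range(zero)]
    return [(eps_full[[lo, zero, hi]], V_full[[lo, zero, hi]]) for lo, hi in pairs]

def delta_bath(iw, eps_p, V_p):
    """Eq. (4): discretized hybridization for a given bath subset."""
    return (V_p[None, :] ** 2 / (iw[:, None] - eps_p[None, :])).sum(axis=1)

iw = 1j * (2 * np.arange(64) + 1) * np.pi / 10.0
inputs = [delta_bath(iw, e, v) for e, v in truncated_baths(eps_full, V_full)]
```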
The mapping between the Green's functions of these truncated systems \(G_{l}^{\text{Trunc}}\) and the target Green's function \(G^{\text{ED}}\) is then learned by the network. The second approach is for the network to learn \(G^{ED}\) from input Green's functions of complementary characteristics. In our attempts, the HI and the IPT solvers were used. These two input models, while computationally cheap, allow for a description of different extremes in correlated physics. Specifically, HI is superior in the atomic limit, while IPT describes the low-\(U\) itinerant physics as a perturbative expansion in powers of \(U\). We will denote this approach the 'hybrid method'. The cost function to be minimized in the training of both models is defined as the mean squared error to the 7-bath ED solution at a set of training points, \[J(\hat{G}(i\omega_{n}),G^{\text{ED}}(i\omega_{n}))=\frac{1}{N}\sum_{n=0}^{N}( \hat{G}(i\omega_{n})-G^{\text{ED}}(i\omega_{n}))^{2}\,, \tag{7}\] where \(\hat{G}(i\omega_{n})\) is the predicted Green's function, \(G^{\text{ED}}(i\omega_{n})\) is the 7-bath ED solution of the SIAM, and the sum is performed over all Matsubara frequency points. Although the ED solver can calculate Green's functions directly on the real-frequency axis \(\omega\), the choice of working in Matsubara frequency \(i\omega_{n}\) is motivated by the power of the transformer architecture and similarities between \(G(i\omega_{n})\) as a series in the Matsubara domain and word sequences. Matsubara points are the poles of the Fermi-Dirac function and hence the representation is discrete, akin to words of a sentence. Moreover, \(G(i\omega_{n})\) is a well-defined analytic function for all Matsubara points at equilibrium, tending to a smooth and differentiable function as the temperature is lowered. Furthermore, entries within sequences of \(G(i\omega_{n})\) are ordered in terms of energy, as such these entries are not lone, randomly distributed points, but that there are correlations between entries within the sequence, akin to a sequence in a sentence or time-series. These properties, which \(G(i\omega_{n})\) shares with natural language processing, allows the problem of correlated materials to be transcribed and tackled with the Transformer network. Lastly, the condition of causality of \(G(i\omega_{n})\) can be tested in straightforward manner, ensuring predicted solutions remain physical. ### SCALINN Architecture In this section, we consider an overview of the SCALINN architecture in terms of its various neural network blocks, where differences to the standard transformer architecture of Ref. [23] are particularly highlighted. In natural language processing, traditional recurrent neural networks have a short memory, where the context gained from words at the start of a sentence could be lost when the model reaches the end of the sentence. The original Transformer network overcomes this difficulty by positional encoding, whereby the position of words in the sentence are supplemented upon the input word entries. This method of positional encoding however, requires the input words to be cast into a vector format via word embedding techniques; corresponding to the meanings of various words encoded in vectors. Since the Green's function \(G(i\omega_{n})\) are sequences of scalars, instead of sequences of vectors, the original method of positional encoding would not be a viable solution to deal with the aforementioned memory issue. 
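For completeness, the cost function of Eq. (7) above is a plain mean-squared error over the Matsubara points. One detail not spelled out in the text is that \(G(i\omega_{n})\) is complex-valued; storing the real and imaginary parts as two channels, as in the sketch below, is one common convention and an assumption here rather than a statement about the authors' implementation.

```python
import torch

def matsubara_mse(g_pred: torch.Tensor, g_ed: torch.Tensor) -> torch.Tensor:
    """Eq. (7): mean squared error over the Matsubara points.

    Both tensors are assumed to have shape (batch, N_w, 2), with the last axis
    stacking the real and imaginary parts of G(iw_n).
    """
    return ((g_pred - g_ed) ** 2).mean()

g_ed = torch.randn(8, 64, 2)                   # placeholder ED targets
g_pred = g_ed + 0.01 * torch.randn(8, 64, 2)   # placeholder network output
print(matsubara_mse(g_pred, g_ed).item())
```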
In SCALINN, we consider an explicit inclusion of the Matsubara frequency \(\omega_{n}=\frac{(2n+1)\pi}{\beta}\) of each Green's function sequence entry \(G_{l}(i\omega_{n})\) as positional encoding of the sequence. Moreover, to reintroduce the representability of information afforded by word embedding in the original Transformer network, a multilayer perceptron (MLP) block is employed. This then leads to the encoder block, which is identical to the encoder block of the original network. This encoder block allows the network to perform self-attention, a method of computing the relevance and context between different entries within the input sequence. In total, information passes through this encoder block \(n_{Encoder}\) number of times. In appendix A, we consider further the performance of the models with difference choices of \(n_{Encoder}\) (see table A1). Moving onto the decoder portion of the network, the input to the decoder is subject to the same Matsubara encoding, but is passed to a another MLP block. In an effort to enhance the overall performance of the model, we integrate a masked multi-head self-attention (MHSA) mechanism in the decoder. This ensures that more significance ('attention') is placed on the input Green's function values around the Matsubara point of interest, which is achieved via a masking operation implemented by the \(\mathbf{M}\) matrix. The operation of each head is defined as follows: Figure 1: **Illustrations of the target, inputs and structure of the Transformer model.** a) target output from the model, trained on ED Green’s functions. Quantum fluctuations between spin up and spin down electrons between the correlated impurity and bath are illustrated with green and orange arrows respectively, while hybridizations between impurity and bath are depicted as curved arrows. Exact diagonalization green’s functions \(G^{\mathrm{ED}}(i\omega_{n})\) are shown in red, and act as targets of the Transformer model. b) The truncated scheme input Green’s functions, where a number of bath sites are discarded from the full SIAM model for its solution. Multiple ED Green’s functions are calculated per one target \(G^{\mathrm{ED}}(i\omega_{n})\), each with a different combination of removed bath sites. c) The HI+IPT hybrid scheme, where a Hubbard-I Green’s function and IPT Green’s function is used as input. These Green’s functions are stacked to form the input to the model. d) The Transformer model architecture, where neural network blocks taken from [23] are used as the encoder block and hashed blocks. \[\begin{split}&\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{\mathbf{M} \mathbb{1}+QK^{T}}{\sqrt{d_{k}}}\right)\cdot V\\ &\mathbf{M}_{ij}^{\text{RF-TGT}}=\begin{cases}0&\text{if }i- \text{LB}<j<i+1\\ -\infty&\text{else}\end{cases}\\ &\mathbf{M}_{ij}^{\text{SRC}}=\begin{cases}0&\text{if }i-\text{LB}<j<i+1+ \text{LF}\\ -\infty&\text{else}\end{cases}\end{split} \tag{8}\] The query, keys, and values - represented as \(Q,K,V\), respectively - are the learned vector components in this MHSA. Here, \(d_{k}\) is defined as \(d_{\text{model}}/h\), where \(d_{\text{model}}\) refers to the dimension of the model and \(h\) denotes the number of attention heads. Moreover, LF and LB, which signify lookforward and lookbackward respectively, are positive integers. 
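The masks entering Eq. (8) are simple banded matrices: additive \(-\infty\) entries suppress the corresponding attention weights after the softmax. A minimal construction, with illustrative LB and LF values, is sketched below.

```python
import torch

def rf_tgt_mask(n_w: int, lb: int) -> torch.Tensor:
    """M^{RF-TGT} of Eq. (8): position i may attend to j with i - LB < j < i + 1."""
    i = torch.arange(n_w).unsqueeze(1)
    j = torch.arange(n_w).unsqueeze(0)
    allowed = (j > i - lb) & (j < i + 1)
    mask = torch.full((n_w, n_w), float("-inf"))
    mask[allowed] = 0.0
    return mask

def src_mask(n_w: int, lb: int, lf: int) -> torch.Tensor:
    """M^{SRC} of Eq. (8): i attends to j with i - LB < j < i + 1 + LF (N_w + 1 encoder columns)."""
    i = torch.arange(n_w).unsqueeze(1)
    j = torch.arange(n_w + 1).unsqueeze(0)
    allowed = (j > i - lb) & (j < i + 1 + lf)
    mask = torch.full((n_w, n_w + 1), float("-inf"))
    mask[allowed] = 0.0
    return mask

# Example: masked attention scores for an 8-point sequence
scores = torch.randn(8, 8) + rf_tgt_mask(n_w=8, lb=3)   # added to Q K^T / sqrt(d_k)
attn = torch.softmax(scores, dim=-1)                    # weights vanish outside the band
```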
The matrices \(\mathbf{M}^{\text{RF-TGT}}\) and \(\mathbf{M}^{\text{SRC}}\) are incorporated into the 'Right-Shifted Target Masked Multi-Head Self-Attention' (RF-TGT Masked MHSA) block and the 'Source Masked Multi-Head Self-Attention' (Source Masked MHSA) block respectively. Each attention head is then concatenated and projected to a dimension of \(d_{\text{model}}\), following the methodology described in [23]. Different Green's functions are inputted to the Transformer decoder during training and run-time. During training, the entire \(G^{\text{ED}}(i\omega_{n})\in\mathbb{R}^{N_{\omega}},n\in[1,N_{\omega}]\) is already known. Following the MHSA mechanism, the dimensions of the query and keys are represented as \(Q_{\text{RF-TGT}}\in\mathbb{R}^{N_{\omega}\times d_{k}}\) and \(K_{\text{RF-TGT}}\in\mathbb{R}^{N_{\omega}\times d_{k}}\) respectively. Accordingly, a square-shaped subsequent mask, \(\mathbf{M}^{\text{RF-TGT}}\in\mathbb{R}^{N_{\omega}\times N_{\omega}}\), is applied to make the model predictions of the next Matsubara point while masking out the information of the target solution. In contrast, the inputs to the Transformer encoder remain the same during both training and run-time, thus ensuring the output dimension of the Transformer encoder consistently remains \(\mathbb{R}^{(N_{\omega}+1)\times d_{\text{model}}}\). Within the source masked MHSA layer, K and V are derived from the Transformer Encoder, resulting in \(K_{\text{source}}\in\mathbb{R}^{(N_{\omega}+1)\times d_{k}}\) and \(V_{\text{source}}\in\mathbb{R}^{(N_{\omega}+1)\times d_{k}}\). Concurrently, Q is sourced from the right-shifted target, leading to \(Q_{\text{source}}\in\mathbb{R}^{N_{\omega}\times d_{k}}\). As a result, the source masked MHSA possesses the dimensions of \(\mathbf{M}^{\text{src}}\in\mathbb{R}^{N_{\omega}\times(N_{\omega}+1)}\). This configuration effectively adjusts the attention scores within a scope that ranges between lookforward and lookbackward, enabling the model to adapt its focus according to the complexities of the input data. The RF-TGT Masked MHSA blocks are responsible for performing self-attention with the decoder input, whereas the source MHSA block calculates self-attention between the encoder input and the decoder input. Generally, with a higher number of heads, the richness of information that can be learned in these multi-head self-attention blocks increases. This hyperparameter is varied, and the performances of models with different number of heads, and different number of decoder blocks (\(n_{Decoder}\)) are listed in table 1. Analogous to approximating Matsubara Green's function with a Laurent expansion in powers of \((i\omega_{n})^{-1}\), where Green's function points at large values of \(i\omega_{n}\) can be well understood through analytic commutator expansion for this tail, this Transformer model begins its sequence prediction at the _last_ Matsubara point, \(i\omega_{n=N_{\omega}-1}\). During prediction, the network decoder is ignorant to the \(G^{\text{ED}}(i\omega_{n})\) solution, and as such, an auxiliary Green's function \(G^{\text{aux}}(i\omega_{n})\in\mathbb{R}^{N_{\omega}},n\in[1,N_{\omega}]\) is iteratively constructed to fill the role of \(G^{\text{ED}}(i\omega_{n})\) as input to the decoder. 
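The run-time generation loop just described can be summarised as follows. Here `model` is a placeholder for the trained encoder-decoder, the seed value `g_tail` is the highest-frequency point of the auxiliary sequence (taken from the cheap input, as explained in the next paragraph), and the convention that the newest point is the last decoder output position is an assumption of this sketch.

```python
import torch

@torch.no_grad()
def predict_green(model, g_input, g_tail):
    """Tail-first autoregressive prediction of G(iw_n).

    `model(encoder_input, decoder_input)` stands for the trained Transformer
    (a placeholder here); `g_input` is the cheap input Green's function fed to
    the encoder, assumed to be shaped (batch, N_w + 1, channels), and `g_tail`
    (batch, channels) seeds G^aux at the highest Matsubara point.
    """
    n_w = g_input.shape[1] - 1
    g_aux = [g_tail]                                   # G^aux at the highest frequency
    for _ in range(n_w):
        decoder_in = torch.stack(g_aux, dim=1)         # points generated so far
        newest = model(g_input, decoder_in)[:, -1]     # next (lower-frequency) point
        g_aux.append(newest)
    # Drop the seed and reverse, so the output runs from iw_0 up to iw_{N_w - 1}
    return torch.stack(g_aux[1:][::-1], dim=1)
```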
We make the assumption that the highest frequency tail points of \(G^{\text{aux}}(i\omega_{n})\) are close to the input Green's function, so that the sequence is initialized with \(G^{\text{aux}}(i\omega_{n=N_{\omega}})=G^{\text{IPT}}(i\omega_{n=N_{\omega}})\) for the 'hybrid' scheme. This is justified as the asymptotic behavior of the Matsubara Green's function at large values is entirely defined by the low-order perturbative expansion with respect to the interaction, which is described by IPT theory. To begin, \(G^{\text{aux}}(i\omega_{n=N_{\omega}})\) are fed into the decoder, together with the full \(G^{input}(i\omega_{n})\) sequence at the encoder, the network generates the next highest frequency point of \(G^{\text{aux}}(i\omega_{n=N_{\omega}-1})\). Those high frequency \(G^{\text{aux}}(i\omega_{n})\) point are then kept and reintroduced to the decoder to approach low frequency points in \(G^{\text{aux}}\), such that \(G^{\text{aux}}(i\omega_{n=m+1})\) generates \(G^{\text{aux}}(i\omega_{n=m})\) until the lowest frequency \(G^{\text{aux}}(i\omega_{n=0})\) point is determined, resulting in the full predicted Green's function. We denote this \(\tilde{G}(i\omega_{n})\in\mathbb{R}^{N_{\omega}},n\in[0,N_{\omega}-1]\). A important difference compared to the original Transformer model concerns the nature of the final output layer. In a language model, the outputs are the probabilities that inform which word would best fit the output sequence next. However, the output to our network is a prediction of the Green's function -- a sequence of continuous scalars instead of a discrete word -- at the next Matsubara frequency point. As such, the softmax layer of the original model, which outputs normalized probabilities over a finite set (such as the case of word selection in the original work), is unsuited. The network therefore terminates with a linear layer without an activation function. ## III Results ### Computational Details To setup the training data of the model, three \(G^{\text{ED}}(i\omega_{n})\) data-sets at different inverse temperatures of \(\beta=10,50,100\) were generated, each consisting of 14,000 different 7-bath SIAM Hamiltonians. The ED Green's functions were generated using the ED-KCL package [34], with DMFT functionality where required enabled via interface to the TRIQS package [35]. Implementation of the Transformer model made use of the PyTorch package [36]. The training of our base model, detailed in Table 1, was conducted on a single NVIDIA A100 GPU, where each model was trained on 80% of the data for 350,000 steps, which took approximately 9 hours per model, with the rest of the data used for evaluation of the model accuracy. The bath parameters of the SIAM for these data-sets were calculated as approximations to a continuous semi-circular spectral hybridization, \[\Delta^{\mathrm{Bethe}}\left(i\omega_{n}\right)=\frac{1}{2\pi^{2}}\int_{-\infty} ^{\infty}d\omega\frac{\sqrt{W^{2}-\omega^{2}}}{i\omega_{n}-\omega}\Theta(W-| \omega|)\,, \tag{9}\] with bandwidth \(2W\). This hybridization represents the paradigmatic Bethe lattice, which represents an exact model for DMFT [12], with physical hybridizations of relevance to the applicability of DMFT expected to be close to this form. The different 7-bath Hamiltonian parameters were found by initializing the parameters randomly, and subsequently minimizing the error in the effective hybridization of the model (Eq. 
4) via changes in \(V_{p}\) and \(\epsilon_{p}\) according to \[\sum_{i\omega_{n}}\frac{1}{i\omega_{n}}(\Delta^{\mathrm{bath}}(i\omega_{n})- \Delta^{\mathrm{Bethe}}(i\omega_{n}))^{2}. \tag{10}\] This fit is constraint to ensure that particle-hole symmetry of the bath parameters is maintained (with one bath energy constrained to \(\omega=0\)), and is stopped early when the squared error in the fit reaches only \(1W\), in order to provide the different Hamiltonians. To complete the definition of the SIAM, \(U\) values are uniformly generated in the range 0-10\(W\). The impurity level \(\epsilon_{d}\) is kept at \(-UW/2\), and the chemical potential \(\mu\) is set to \(-\epsilon_{d}-UW/2\), to ensure consideration of the particle-hole symmetric point of these Hamiltonians. When demonstrating the use of SCALINN as a solver in a DMFT calculation, the self-energy of Eq. 3 is self-consistently updated, with the interactions able to induce significant changes to the approximately semi-circular initial spectrum, including metal-insulator phase transitions. ### Truncated Hamiltonian We first consider the 'truncated' mode of operation of the SCALINN model, whereby predictions of the 7-bath SIAM models are created from ED Green's functions with only 3 bath orbitals, mitigating the exponential increase in cost of ED with respect to bath size. The 3-bath SIAM Hamiltonians were created from their 7-bath counterparts by inclusion of the \(\epsilon=0W\) bath orbital, and then selection of one particle-hole symmetric pair of bath orbitals at finite frequency. This allows us to create three 3-bath SIAM approximations to each SIAM of interest. These are solved at ED to provide the input to SCALINN, as described in Sec. II.2. An example of the effect that this bath dropout has on Green's functions is shown in fig. 2.a, where these truncated bath models differ significantly from the desired 7-bath solution. SCALINN nevertheless predicts the 7-bath ED solution with high accuracy after training of the model. SCALINN models are trained at three different inverse temperatures, with the average training error reduced to below \(J(\hat{G}(i\omega_{n}),G^{\mathrm{ED}}(i\omega_{n}))<10^{-5}\) in all cases. We can analytically continue the predicted Matsubara Green's functions onto the real-frequency axis via Pade approximants, to consider their accuracy of the real-frequency spectrum, shown in fig. 2.b for a representative test-set prediction, with three values of inverse temperature \(\beta\) and three values of interaction \(U\). The SCALINN predictions agree very well with the ED ground truth for these systems, even on the real axis, with only small deviations are observed in fig. 2.b. As the interaction increases, classic hallmarks of correlated materials emerge, with low \(U\) describing metals with a single quasi-particle peak at zero energy. As temperature increases (or as \(\beta\) decreases), so does the rate of scattering between electrons, leading to a broadening of these peaks. At intermediate interaction strength \(U=3W\), lower and upper Hubbard bands at \(\pm U/2W\) emerge around the quasi-particle peaks at \(\omega=0\). Once again, due to increasing rates of scattering, these three sets of peaks are Figure 2: **Representative SCALINN predictions for a test-set SIAM with the truncated bath scheme.** In a), the desired (7-bath) ED solution \(G^{\mathrm{ED}}(i\omega_{n})\) is plotted as a blue line, and the three truncated (3-bath) \(G^{Tranc}(i\omega_{n})\) are plotted in dashed grey. 
These \(G^{Tranc}(i\omega_{n})\) act as inputs to the Transformer model to produce the SCALINN predictions \(\hat{G}(i\omega_{n})\), which are plotted as blue dots. In b), Pade aanalytic continuation obtains the predicted spectral function \(A(\omega)\), shown here for three different values of \(\beta\) and \(U\). Once again, the ED solution is plotted as blue lines, and the SCALINN predictions are plotted as blue dots. broadened as temperature increases. Lastly, at higher interaction strength still \(U=6W\), the magnitude of the quasi-particle peaks are greatly reduced in favor of the Hubbard bands. However, due to the lack of DMFT self-consistency in this case, the fully insulating Mott solution was not recovered. ### Hybrid Approximation Solver In addition to the truncated bath approach, as discussed in Sec. II.2, we also consider a Hubbard-I+IPT hybrid scheme. In this hybrid method, models were trained from a combination of Hubbard-I and IPT approximate solutions to the target SIAM, and used as inputs to the transformer model to predict target ED-quality outputs. Once again, three separate inverse temperature models were trained to reach errors of \(J(\hat{G}(i\omega_{n}),G^{\mathrm{ED}}(i\omega_{n}))<10^{-5}\) across the entire training set. In fig. 3, the various Matsubara Green's functions are presented, including the Hubbard-I and IPT inputs, the ED ground truth and SCALINN prediction. As expected, the Hubbard-I inputs are all insulating solutions, as observed with \(G^{\mathrm{HI}}(i\omega_{n})\) tending towards zero as \(i\omega_{n}\to 0\), consistent with its description of the atomic solution. This is increasingly erroneous for lower temperatures and interaction strengths, where delocalized solutions should be found. In contrast, the IPT input favors delocalized descriptions, where all solutions reach a maximum absolute value as \(i\omega_{n}\to 0\), which is in error particularly for higher interactions in the non-perturbative \(U/W\) limit. In contrast to these computationally cheap input models, the SCALINN predictions match the ED results to remarkably high accuracy from these inputs. Finally, we consider the utility of the scheme as a solver within a fully self-consistent DMFT calculation. In this, the IPT and Hubbard-I approximations can be found in the absence of bath discretization error of the hybridization. Therefore, while the training of the SCALINN model is performed in the presence of the finite bath approximation to the hybridization, its use within a DMFT scheme can aim to eliminate both Figure 4: **Converged self-consistent DMFT results on the Bethe Hubbard lattice for different solvers: IPT, Hubbard-I, SCALINN and CTQMC.** a) Quasi-particle weight b) Imaginary part of Matsubara Green’s functions. DMFT is converged at \(\beta=10\) for the continuous hybridization of Eq. 9 without bath discretization error, with the IPT and Hubbard-I solutions at each iteration used as input for the SCALINN solver approach. Figure 3: **Representative SCALINN predictions for a test-set SIAM with the HI+IPT hybrid scheme.** The Hubbard-I solution \(G^{\mathrm{HI}}(i\omega_{n})\) (dashed red), the IPT solver solution \(G^{\mathrm{IPT}}(i\omega_{n})\) (dashed yellow), and the ED target \(G^{\mathrm{ED}}(i\omega_{n})\) (solid blue lines) are plotted alongside the SCALINN predictions from these input Green’s functions (blue dots) at \(\beta=10,50,100\) and at \(U/W=1,3,6\). error, as well as approximations to the correlated effects of approximate solvers. 
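To make concrete how a solver enters such a self-consistent calculation, a minimal DMFT loop for the Bethe lattice of Eq. (9) is sketched below. The update \(\Delta(i\omega_{n})=(W/2)^{2}G(i\omega_{n})\) is the standard Bethe-lattice self-consistency condition for a semicircular spectrum of half-bandwidth \(W\) (it is not spelled out in the text), the Weiss field follows the particle-hole symmetric convention stated above (\(\varepsilon_{d}=-UW/2\), \(\mu=-\varepsilon_{d}-UW/2\)), and `solve_impurity` is a placeholder for any of the solvers discussed here (IPT, Hubbard-I, SCALINN, CTQMC); the stand-in only returns the Hartree shift so that the loop runs end to end.

```python
import numpy as np

beta, W, n_w = 10.0, 1.0, 256
iw = 1j * (2 * np.arange(n_w) + 1) * np.pi / beta

def solve_impurity(g0_iw, U):
    """Placeholder impurity solver interface: Weiss field in, self-energy out.

    Only the half-filling Hartree term U/2 is returned here; a real calculation
    would call IPT, Hubbard-I, SCALINN or CTQMC at this point.
    """
    return 0.5 * U * np.ones_like(g0_iw)

def dmft_bethe(U, mixing=0.5, n_iter=50):
    g_iw = 1.0 / iw                                    # initial guess
    sigma = np.zeros_like(iw)
    for _ in range(n_iter):
        delta = (W / 2) ** 2 * g_iw                    # Bethe-lattice self-consistency
        g0 = 1.0 / (iw + 0.5 * U * W - delta)          # Weiss field at the symmetric point
        sigma = solve_impurity(g0, U)
        g_new = 1.0 / (1.0 / g0 - sigma)               # Dyson equation, Eq. (5)
        g_iw = mixing * g_new + (1.0 - mixing) * g_iw  # linear mixing for stability
    return g_iw, sigma

g_iw, sigma = dmft_bethe(U=2.0)
```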
In order to benchmark the accuracy of the approach, we therefore turn to comparison with a CTQMC solver which can obtain correlated Green's functions in the limit of a continuous hybridization via Monte Carlo sampling, as long as the temperature is not too low such that the Fermion sign problem manifests. In fig. 4, we consider DMFT on the continuous hybridization of the Bethe lattice (Eq. 9) at \(\beta=10\) compared to CTQMC, for both the final self-consistent Matsubara Greens function, and the quasiparticle weight, \(Z\). This quasiparticle weight can be computed as \[Z=\left.\frac{\text{Im}\left\{\Sigma\left(i\omega_{n}\right)\right\}}{\omega_{ n}}\right|_{\omega_{n}\to 0}\,, \tag{11}\] from the converged DMFT self-energy. Values close to one indicate a metallic solution, with lower values describing the increased effective mass towards a Mott insulating solution. This is observed from the results, with DMFT+SCALINN agreeing almost perfectly with the DMFT+CTQMC renormalization factor, with the DMFT+IPT biasing towards the metallic phase, and DMFT+Hubbard-I the atomic Mott phase. Moreover, the self-consistent DMFT Matsubara Green's functions with these various solvers are plotted in fig. 4.b, where as interaction strength increases, the solutions become increasingly insulating, as can be observed from the \(i\omega_{n}\to 0\) trend of \(G(i\omega_{n})\), with once again the discrepancy between SCALINN and CTQMC solvers is indistinguishable on the scale of the plot. It should be noted that these insulating solutions differ from the single-shot SIAM solutions of fig. 3, due to the self-consistent update of the continuous hybridization in the full DMFT scheme. ## IV Conclusions With the insights gained from drawing comparisons to natural language processing problems, we developed a novel and promising approach to predict Green's function sequences in the Matsubara domain via modifications to a Transformer model. These predicted sequences exhibit levels of accuracy that were previously restricted to comparatively high computational costs of exact diagonalization. We considered approaches to both predict these Green's functions of general SIAM models from inputs based on results from computationally accessible lower levels of theory, as well as an approach to mitigate the bath discretization error in describing SIAM's with a continuous hybridization spectrum. Finally, we combined these developments in a fully self-consistent DMFT scheme to solve the Bethe lattice Hubbard model with results indistinguishable from exact CTQMC benchmarks. However, while the approach showcases much potential, there exist remaining challenges to overcome. Firstly, the approach was restricted to a relatively narrow class of SIAM models, with the extension to multi-impurity, matrix-valued Green's functions and a wider set of representative hybridizations required. Adaptations of the model to _enforce_ desirable features such as causality of the output Green's functions or symmetries would be beneficial, and help avoid convergence issues in the self-consistent DMFT loops which manifested at times for low temperatures or quantum phase transitions. Finally, alternative methods to provide training data of exact Green's functions would allow for an extension to overcome the bath discretization which is manifest in the training of the model. 
Nevertheless, our findings highlight the power and adaptability of the Transformer model within the field of correlated materials and its potential for pushing the frontiers of computational problem-solving in this domain. ## Acknowledgment G.H.B. gratefully acknowledges support from the Air Force Office of Scientific Research under award number FA8655-22-1-7011. We are also grateful to the King's Computational Research, Engineering and Technology Environment (CREATE) and UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/T022213/1, EP/W032260/1 and EP/P020194/1). ## Competing Interests The authors declare no competing financial or non-financial interests. ## Contributions C.W., G.H.B, W.G., and Z.Z. conceived the project. C.W. and Z.Z build the database. W.G., H.L and Z.Z developed the Machine Learning model and trained the Transformer model. H.L and Z.Z. performed data analysis. All wrote the manuscript. ## Corresponding Author Correspondence to Weifeng Ge and Cedric Weber ## Data Availability An exemplary dataset can be found at [https://dx.doi.org/10.6084/m9.figshare.23144474](https://dx.doi.org/10.6084/m9.figshare.23144474). ## Code availability The code for SCALINN is available at [https://github.com/zelong-zhao/SCALINN](https://github.com/zelong-zhao/SCALINN).
2305.01592
Learning hard distributions with quantum-enhanced Variational Autoencoders
An important task in quantum generative machine learning is to model the probability distribution of measurements of many-body quantum systems. Classical generative models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs), can model the distributions of product states with high fidelity, but fail or require an exponential number of parameters to model entangled states. In this paper, we introduce a quantum-enhanced VAE (QeVAE), a generative quantum-classical hybrid model that uses quantum correlations to improve the fidelity over classical VAEs, while requiring only a linear number of parameters. We provide a closed-form expression for the output distributions of the QeVAE. We also empirically show that the QeVAE outperforms classical models on several classes of quantum states, such as 4-qubit and 8-qubit quantum circuit states, haar random states, and quantum kicked rotor states, with a more than 2x increase in fidelity for some states. Finally, we find that the trained model outperforms the classical model when executed on the IBMq Manila quantum computer. Our work paves the way for new applications of quantum generative learning algorithms and characterizing measurement distributions of high-dimensional quantum states.
Anantha Rao, Dhiraj Madan, Anupama Ray, Dhinakaran Vinayagamurthy, M. S. Santhanam
2023-05-02T16:50:24Z
http://arxiv.org/abs/2305.01592v2
# Learning Hard Distributions with Quantum-enhanced Variational Autoencoders ###### Abstract An important task in quantum generative machine learning is to model the probability distribution of measurements of quantum mechanical systems. Classical generative models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs), can learn the distributions of product states with high fidelity, but fail or require an exponential number of parameters to model entangled states. In this paper, we introduce a quantum-enhanced VAE (QeVAE), a generative quantum-classical hybrid model that uses quantum correlations to improve the fidelity over classical VAEs, while requiring only a linear number of parameters. We provide a closed-form expression for the output distributions of the QeVAE. We also empirically show that the QeVAE outperforms classical models on several classes of quantum states, such as 4-qubit and 8-qubit quantum circuit states, Haar random states, and quantum kicked rotor states, with a more than 2x increase in fidelity for some states. Finally, we find that the trained model outperforms the classical model when executed on the IBMq Manila quantum computer. Our work paves the way for new applications of quantum generative learning algorithms and characterizing measurement distributions of high-dimensional quantum states. ## I Introduction Research in quantum information science promises to enable the development of a fault-tolerant quantum computer that can perform certain tasks faster and more efficiently than any classical computer. To this end, several scientists have demonstrated that quantum algorithms can, in theory, outperform the best-known conventional algorithms when tackling specific problems and, in some situations, deliver a 'quantum speedup' [1, 2]. For instance, certain quantum algorithms can take exponentially fewer resources for tasks such as factorization and eigenvalue decomposition, and quadratically fewer resources to search through unsorted databases [3, 4, 5]. This pursuit of 'quantum speedup' has motivated generations of physicists and engineers to discover novel algorithms that leverage the properties of superposition, entanglement and interference. Through advances in the processing power and algorithmic ability of computing devices, machine learning techniques have evolved into fundamental tools for detecting patterns in data. Theoretically, models like deep neural networks have the potential to learn some of the most complex patterns that exist in nature or human-made systems, and have been demonstrated to be highly competent at complex tasks like playing Go, identifying protein structures, and autonomous driving [6, 7, 8]. However, many tasks are still intractable or very expensive for these methods. Some learning tasks, for example, include sampling from complex distributions or estimating the average values of numerous parameters under a complicated distribution, both of which are typically intractable (requiring exponential time or space resources). Moreover, certain distributions derived from quantum-mechanical systems are fundamentally intractable to conventional approaches [9, 10, 11]. Enhancing and augmenting classical machine-learning methods using quantum correlations has been the focus of quantum-enhanced machine learning and of our work. One of the important tasks in machine learning is that of generative learning, which involves modeling or learning a distribution given independent samples drawn from it.
Previously, models such as Generative Adversarial Networks (GANs [12]) and Variational Autoencoders (VAEs [13]) have been used to learn distributions of classical data such as texts and images. In this paper, we study the problem of modeling the measurement distributions obtained from unknown quantum states. This problem is fundamental in quantum information science, as it can reveal useful information about the properties and dynamics of quantum systems. Moreover, it can enable applications such as quantum state reconstruction and entanglement quantification [14, 15, 16, 17]. These applications can help us understand and manipulate quantum systems in various fields, such as chemistry, materials science, and cybersecurity. For our problem setting, Variational Autoencoders have also been shown to learn and compress the measurement distribution obtained from product states, termed 'easy states', but require an exponential number of parameters to learn the distribution of Haar random states ('hard states') [18, 19]. In this work, we propose a quantum-enhanced VAE (QeVAE) which incorporates quantum correlations through quantum circuits to enhance the performance of classical VAEs. We show that a QeVAE constructed by substituting the decoder (generator) of a conventional VAE with a parameterized quantum circuit can learn these 'hard states' with only a linear number of parameters in the number of qubits and outperform conventional methods when the dataset contains quantum correlations. In this paper, we make three main contributions to the field of quantum generative learning. Firstly, we propose the Quantum-enhanced VAE (QeVAE), which can enhance the expressive power of classical VAEs and produce distributions that are classically intractable. Secondly, we provide a mathematical closed-form expression to theoretically analyze the class of distributions that the QeVAE can model better than classical models. Finally, we demonstrate experimentally that our QeVAE outperforms classical VAEs in modeling measurement samples of different classes of quantum states, such as Haar random states, quantum circuit states, and quantum kicked rotor states, and report an increase in fidelity by more than 2x for an 8-qubit quantum circuit state. Our work is organized as follows: In Section II, we discuss the background work on generative learning, variational autoencoders, and the problem of learning measurement distributions of quantum states. In Section III, we propose the QeVAE and mathematically characterize the output distribution of the model. We describe our experiments and results in Sections IV and V, respectively. Finally, in Section VI, we discuss our conclusions and future outlook in the greater context of developing quantum algorithms for generative learning. ## II Background and Related work ### _Generative learning and Variational Auto Encoders_ The task of generative modeling involves modeling a parameterized distribution \(p_{\theta}(\mathbf{x})\), given independent and identically distributed (iid) samples from the distribution \(\{\mathbf{x_{i}}\}_{i=1}^{m}\), where each \(\mathbf{x_{i}}\sim p(\mathbf{x})\). Here the goal is to maximize the log likelihood of the data, given by \(L(\theta)=\sum_{i}\log p_{\theta}(\mathbf{x_{i}})\). One of the common approaches to generative modeling is that of adversarial learning (for example, GANs [12]), which involves learning a parameterized generative network and a parameterized discriminative network.
While the generative network maps samples from a fixed distribution to the data distribution, the discriminative network distinguishes between data samples and the generated samples. On the other hand, variational learning involves modeling the distribution through a latent vector \(\mathbf{z}\). One can define a prior on the latent variables \(\mathbf{z}\sim p(\mathbf{z})\) and a parameterized likelihood function \(p_{\theta}(\mathbf{x}|\mathbf{z})\). The joint distribution over the variables \(\mathbf{x}\) and \(\mathbf{z}\) can then be written as \(p(\mathbf{x},\mathbf{z})=p(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})\). The log likelihood can then be expressed as \(L(\theta)=\log(p_{\theta}(\mathbf{x}))=\log(\sum_{\mathbf{z}}p(\mathbf{x},\mathbf{z}))\). Variational learning considers a lower bound for the above as: \[\log(p_{\theta}(\mathbf{x}))\geq\mathop{\mathbb{E}}_{\mathbf{z}\sim q(\mathbf{z})}\left[\log\frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z})}\right] \tag{1}\] which holds for any distribution \(q(\mathbf{z})\). The above lower bound, called the Evidence Lower Bound (ELBO), is tight when \(q(\mathbf{z})=p(\mathbf{z}|\mathbf{x})\). Variational Autoencoders [13] seek to maximize the ELBO, where \(q_{\phi}(\mathbf{z}|\mathbf{x})\) can be defined through a parameterized neural network, also called the encoder. Here we can split the ELBO as \[\text{ELBO}=\mathop{\mathbb{E}}_{q(\mathbf{z}|\mathbf{x})}\left(\log p(\mathbf{x}|\mathbf{z})\right)-KL(q(\mathbf{z}|\mathbf{x})\,||\,p(\mathbf{z})) \tag{2}\] The algorithm involves using the posterior network (encoder) to create the distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and provide samples \(\mathbf{z}\), which are then passed through the likelihood network (decoder) to produce a distribution over \(\mathbf{x}\) conditioned on the sampled vector \(\mathbf{z}\). One then seeks to maximize the difference between the log likelihood of the generated distribution \(p(\mathbf{x}|\mathbf{z})\) and the KL divergence between the posterior (\(q(\mathbf{z}|\mathbf{x})\)) and prior \(p(\mathbf{z})\) distributions. The first term (reconstruction term) tries to maximize the likelihood of recovering \(\mathbf{x}\) from the latent variable. The second term is a regularization term that tries to ensure that the posterior distribution is close to the prior. In order to be able to backpropagate through the sampling step, one uses the reparametrization trick. For example, when \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is a Gaussian with mean \(\mu(\mathbf{x};\theta)\) and a covariance matrix (typically diagonal) \(\Sigma(\mathbf{x};\theta)\), to sample \(\mathbf{z}\sim\mathcal{N}(\mu,\mathbf{\Sigma})\) one redefines \(\mathbf{z}=\mu+\Sigma^{1/2}\epsilon\), where \(\epsilon\sim\mathcal{N}(0,I)\). Separating out the noise \(\epsilon\) enables us to backpropagate through \(\mu\) and \(\Sigma\), which depend on the parameters of the network. Giving a higher weight to the second term leads to the posterior becoming independent of \(\mathbf{x}\) (known as the posterior collapse problem [20]). In practice one often weighs the KL divergence term by a hyperparameter \(\beta\) to control the effect of the regularization term [21], producing the overall cost function: \[\mathop{\mathbb{E}}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left(\log p_{\theta}(\mathbf{x}|\mathbf{z})\right)-\beta\,KL(q_{\phi}(\mathbf{z}|\mathbf{x})\,||\,p(\mathbf{z})) \tag{3}\] where \(\beta\) is a hyperparameter that indicates the relative importance of the regularization term with respect to the reconstruction term.
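To make the objective concrete, the following minimal PyTorch sketch (ours, for illustration only; the layer sizes and the per-bit Bernoulli likelihood are assumptions rather than the architectures used later in this paper) implements the reparametrization trick and the \(\beta\)-weighted cost of Eq. (3).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVAE(nn.Module):
    """Minimal classical VAE over n-bit strings (illustrative only)."""
    def __init__(self, n_bits=8, latent_dim=4, hidden=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bits, hidden), nn.LeakyReLU(0.01))
        self.mu = nn.Linear(hidden, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(hidden, latent_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.LeakyReLU(0.01),
                                 nn.Linear(hidden, n_bits))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)                    # reparametrization trick:
        z = mu + torch.exp(0.5 * logvar) * eps        # z = mu + Sigma^{1/2} * eps
        return self.dec(z), mu, logvar

def beta_vae_loss(logits, x, mu, logvar, beta=1.0):
    # Reconstruction term: -log p(x|z) under an independent Bernoulli likelihood per bit
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl                          # negative of the objective in Eq. (3)
```

Minimizing this quantity with a stochastic optimizer is equivalent to maximizing the \(\beta\)-weighted ELBO of Eq. (3).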
Fig. 1: **A schematic for a classical VAE.** A classical VAE consists of an encoder, a decoder, and a continuous latent dimension that can be modeled as a multivariate Gaussian with diagonal covariance. The dataset \(\mathbf{x}\) is mapped to a distribution p(z), and after training random samples \(\mathbf{x}\) can be generated by sampling from the decoder. ### _Quantum computation and quantum machine learning_ Quantum computing is a model of computation with the potential to provide a speedup over its classical counterpart. The fundamental units in the quantum computing framework are quantum bits, or **qubits**. A qubit takes values as a unit vector in the two-dimensional complex Hilbert space \(\mathbb{C}^{2}\). The basis states \(\{\ket{0},\ket{1}\}\) correspond to the classical bits {0,1}. An arbitrary state \(\ket{\psi}\) can be considered as a superposition of the basis states, \(\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\), where \(|\alpha|^{2}+|\beta|^{2}=1\). More generally, an \(n\)-qubit state lies in a Hilbert space spanned by basis states corresponding to the \(2^{n}\) classical bit strings, \(\ket{0...0}\) through \(\ket{1...1}\). An arbitrary state can be a unit vector spanned by the \(2^{n}\) basis strings as \(\ket{\psi}=\sum_{x\in\{0,1\}^{n}}\alpha_{x}\ket{x}\), where \(\sum_{x}|\alpha_{x}|^{2}=1\). If there are \(n\) qubits, each in state \(\ket{\psi_{i}}\), \(i=1,\dots,n\), then the overall state of the \(n\)-qubit system (product state) is \(\otimes_{i=1}^{n}\ket{\psi_{i}}\). However, there are \(n\)-qubit states that cannot be expressed in this form; these are referred to as entangled states. The measurement of these states yields correlated bit strings. The basic operations on qubits include quantum gates, which are unitary operators acting on one or more qubits. Common examples of single-qubit gates include the Hadamard, the Pauli gates (X, Y, Z), and the S gate. The exponentiated Pauli gates also provide a set of parameterized gates, \(R_{x}(\theta)=\exp(-i\frac{\theta}{2}X)\), \(R_{y}(\theta)=\exp(-i\frac{\theta}{2}Y)\) and \(R_{z}(\theta)=\exp(-i\frac{\theta}{2}Z)\). A 2-qubit gate is described by a 4-dimensional unitary. A common 2-qubit gate is \(CNOT\), which acts as \(CNOT\ket{x,y}=\ket{x,x\oplus y}\) on basis states. A quantum circuit consists of a sequence of single-qubit and two-qubit gates acting on an initial state (typically \(\ket{0...0}\)), followed by measurement of the final state. A measurement of a state \(\ket{\psi}=\sum_{x}\alpha_{x}\ket{x}\) yields one of the bit strings \(x\) with probability \(|\alpha_{x}|^{2}\). A parameterized quantum circuit (PQC) is a quantum circuit that has some gates that depend on parameters \(\theta\). The unitary operator of the PQC is denoted by \(\hat{U}(\theta)\), which can produce a state \(\ket{\psi(\theta)}=\hat{U}(\theta)\ket{0}^{\otimes n}\). The PQC can also be conditioned on an input \(\mathbf{x}\) as \(\hat{U}(\mathbf{x},\theta)\). This can be decomposed into a feature map \(\hat{U}_{\phi}(\mathbf{x})\) and a trainable ansatz \(\hat{V}(\theta)\). The feature map is a data encoding circuit that transforms input data \(\bar{x}\in\mathbb{R}^{n}\) into a quantum state using single-qubit and two-qubit gates, potentially parameterized by the input variables. The ansatz is a parameterized circuit that consists of alternating rotation layers and entanglement layers.
The rotation layers are single-qubit gates applied on all qubits. The entanglement layer uses two-qubit gates to entangle the qubits according to a predefined scheme. The parameters of the PQC can be optimized to achieve a certain goal, such as minimizing the energy of a quantum system or maximizing the accuracy of a machine learning task. The parameters can be updated iteratively using classical optimization methods such as gradient-based (ADAM [22], SPSA [23]) or gradient-free (COBYLA) algorithms. The measurement distribution of \(\ket{\psi(\mathbf{x},\theta)}=\hat{U}(\mathbf{x},\theta)\ket{0}\) in Z-basis defines the conditional distribution \(p(\mathbf{y}|\mathbf{x})=\left|\left\langle y|\psi(\mathbf{x},\theta)\right\rangle \right|^{2}\,\forall y\in\{0,1\}^{n}\). A variational quantum algorithm (VQA) is then commonly defined as a hybrid quantum-classical algorithm that utilizes a PQC to optimize a cost function [24]. The cost function can be based on the sampled measurement distribution of the state (QML, or our work), or computing the expectation value of a Hamiltonian (for quantum chemistry problems). In the context of Quantum ML, this model is also called a quantum neural network. Moreover, a quantum circuit born machine (QCBM) is a generative quantum neural network that uses a PQC to represent the probability distributions over bit strings as the measurement distribution of the quantum state \(\ket{\psi(\theta)}=\hat{U}(\theta)\ket{0}^{\otimes n}\). The parameters of the model can be trained by minimizing the cross entropy loss with the generated samples. After training, the model generates samples from the desired distribution. ### _Prior work in Learning distributions of quantum states_ In this section, we review prior work on learning the measurement distribution of quantum states, i.e., given samples from measuring an \(n\)-qubit quantum state \[\ket{\psi}=\sum_{x\in\{0,1\}^{n}}\alpha_{x}\ket{x}, \tag{4}\] with probabilities \(p(x)=|\alpha_{x}|^{2}\) for each bitstring \(x\), we want to find a model that approximates the distribution with parameters \(\theta\) and yields a distribution \(p_{\theta}(x)\). This problem can have potential applications in quantum state compression, quantum state transfer, and quantum state discrimination. Previous works have shown that classical generative models such as restricted Boltzmann machines (RBMs), variational autoencoders (VAEs), and autoregressive models can model such distributions but require an exponential number of parameters to learn and generate samples [18, 19, 25]. Moreover, a recent work based on the probably approximately correct (PAC) framework has shown that these distributions can be efficiently learned with quantum resources, but not with purely classical approaches [26]. However, the quantum learner proposed in that work assumes the availability of a fault-tolerant quantum computer. There have been some previous works on employing quantum algorithms and the variational autoencoder framework for different problems using classical data. The first proposed algorithm within the VAE framework utilized the annealing-based framework [27]. A recent work focused on improving the latent space representation of classical VAEs through parameterized quantum circuits (PQCs) [28]. Only recently, PQCs within the VAE framework were proposed for the problem of drug discovery [29]. To the best of our knowledge, this is the first work to consider quantum VAEs for data generation from quantum states. 
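As a concrete illustration of the circuits just described, the sketch below (ours, not taken from this paper) composes a feature map with a two-local ansatz and samples bitstrings from the resulting state; the specific Qiskit circuit-library classes and the chosen sizes are assumptions made for illustration.

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap, TwoLocal
from qiskit.quantum_info import Statevector

n_qubits = 3
rng = np.random.default_rng(0)

# Feature map U_phi(x): encodes a real input vector into the circuit
feature_map = ZZFeatureMap(feature_dimension=n_qubits, reps=1)
# Two-local ansatz V(theta): alternating Rx/Ry rotations with linear CX entanglement
ansatz = TwoLocal(n_qubits, ["rx", "ry"], "cx", entanglement="linear", reps=2)

x = rng.normal(size=feature_map.num_parameters)           # stand-in input vector
theta = rng.uniform(0, 2 * np.pi, ansatz.num_parameters)  # trainable rotation angles

circuit = feature_map.assign_parameters(x).compose(ansatz.assign_parameters(theta))

# Measurement distribution p(y|x) = |<y| V(theta) U_phi(x) |0...0>|^2, sampled with shots
counts = Statevector.from_instruction(circuit).sample_counts(shots=1024)
print(counts)
```

Training such a circuit as a generative model then amounts to updating `theta` with a classical optimizer so that the sampled distribution approaches the target distribution.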
We propose a quantum-enhanced variational autoencoder (QeVAE) that can learn the measurement distribution of an unknown quantum state using noisy quantum devices. Our QeVAE can reconstruct the measurement distribution through an iterative learning process that involves a parameterized quantum circuit (PQC) as the generative model and a classical neural network as the inference model. We also show that our QeVAE reduces to a quantum circuit born machine (QCBM) in the zero latent-size limit. We hypothesize that our QeVAE can leverage the quantum properties of superposition and entanglement to learn the complex and high-dimensional measurement distributions of quantum states. ## III Quantum-enhanced Variational Autoencoders In this section, we define a quantum-enhanced variational autoencoder (QeVAE) to model the measurement distribution of an unknown \(n\)-qubit quantum state. As a baseline, classical VAEs have already been used to model such a distribution [18]. The hybrid model that we propose consists of a feed-forward classical encoder, a continuous latent space, and a parametrized quantum circuit as a decoder. We model the approximate posterior (encoder network) \(Q_{\phi}(\mathbf{z}|\mathbf{x})\) through a classical feedforward neural network and the latent variable as \(\mathbf{z}\sim\mathcal{N}(0,I)\). The likelihood (generator) distribution \(p_{\theta}(\mathbf{x}|\mathbf{z})\) is defined via a quantum circuit, i.e. \(p_{\theta}(\mathbf{x}|\mathbf{z})=|\bra{x}U(\theta,\mathbf{z})\ket{0^{\otimes n}}|^{2}\). The Evidence Lower Bound (ELBO) loss is optimized through a classical optimizer such as ADAM. The model is trained to mimic the given measurement distribution of states. During training, the parameters of the encoder and the rotation gates in the decoder (with a pre-selected entanglement type) are iteratively varied and learned. Such a model has multiple applications: it will enable scientists to reproduce certain quantum states on different physical quantum computers just by knowing the set of rotation and entangling gates to apply. The algorithm also has applications in state compression and in transferring a state from one system to another up to a phase (the full phase information can be learnt with additional models [18]). We now present a theorem that allows us to mathematically characterize the class of distributions that can be obtained via the above model. **Theorem 1**.: _Consider a latent variable model with \(\mathbf{z}\sim p(\mathbf{z})\) and \(p(\mathbf{x}|\mathbf{z})=|\bra{\mathbf{x}}V_{\theta}U_{\phi}(\mathbf{z})\ket{0^{n}}|^{2}\), where \(U_{\phi}(\mathbf{z})\) is a feature map and \(V_{\theta}\) is a parameterized ansatz. Then_ 1. \(\exists\) _a density matrix_ \(\rho\)_, such that_ \(p(x)=\bra{x}V_{\theta}\rho V_{\theta}^{\dagger}\ket{x}\)_. In other words, the distribution of_ \(\mathbf{x}\) _can be obtained by evolving a density matrix_ \(\rho\) _under the unitary ansatz_ \(V_{\theta}\)_, followed by measurement in the standard basis._ 2. _Conversely, for each density matrix_ \(\rho\)_, there exists a prior_ \(p(\mathbf{z})\) _and a feature map_ \(U_{\phi}(\mathbf{z})\)_, such that_ \(p(x)=\bra{x}V_{\theta}\rho V_{\theta}^{\dagger}\ket{x}\)_._ Proof.: 1.
\[p(\mathbf{x})=\int p(\mathbf{x},\mathbf{z})\,d\mathbf{z}=\int p(\mathbf{z})\,|\bra{\mathbf{x}}V_{\theta}U_{\phi}(\mathbf{z})\ket{0^{n}}|^{2}\,d\mathbf{z}=\int p(\mathbf{z})\,\bra{\mathbf{x}}V_{\theta}U_{\phi}(\mathbf{z})\ket{0^{n}}\bra{0^{n}}U_{\phi}(\mathbf{z})^{\dagger}V_{\theta}^{\dagger}\ket{\mathbf{x}}\,d\mathbf{z}=\bra{\mathbf{x}}V_{\theta}\left(\int p(\mathbf{z})\,U_{\phi}(\mathbf{z})\ket{0^{n}}\bra{0^{n}}U_{\phi}(\mathbf{z})^{\dagger}\,d\mathbf{z}\right)V_{\theta}^{\dagger}\ket{\mathbf{x}}\] (by linearity of the inner product). We define \(\rho\coloneqq\int p(\mathbf{z})\,U_{\phi}(\mathbf{z})\ket{0^{n}}\bra{0^{n}}U_{\phi}(\mathbf{z})^{\dagger}\,d\mathbf{z}\). **Note** 1. \(\rho\) is a valid density matrix, i.e. \(\rho^{\dagger}=\rho\), \(\rho\geq 0\) and \(\operatorname{Tr}(\rho)=1\). 2. \(\rho\) is independent of both \(\mathbf{x}\) and \(\mathbf{z}\). Then, \[p(\mathbf{x})=\bra{\mathbf{x}}V_{\theta}\rho V_{\theta}^{\dagger}\ket{\mathbf{x}}\] Thus, such a \(p(x)\) is equivalent to preparing a density matrix \(\rho\), evolving it under the unitary \(V_{\theta}\), and performing a measurement in the standard basis. 2. For the second part, given a density matrix \(\rho\), consider the spectral decomposition of \(\rho\) as \(\rho=\sum_{z}\lambda_{z}\ket{\psi_{z}}\bra{\psi_{z}}\). Since \(\rho\) is a valid density matrix, the \(\lambda_{z}\) define a distribution \(p(\mathbf{z})=\lambda_{z}\). Now, for each \(\mathbf{z}\), choose a unitary such that \(U_{\phi(\mathbf{z})}\ket{0}=\ket{\psi_{z}}\). One can then verify that such a feature map satisfies the required equation. Moreover, such a distribution defined using PQCs can encode information about the entanglement structure of the states via the entangling ansatz \(V_{\theta}\) and the density matrix \(\rho\), which is inaccessible to purely classical models. Note that this fully includes the set of QCBM distributions, since setting \(\rho=\ket{0}\bra{0}\) recovers a QCBM. ## IV Methods In this section, we provide the methods followed to set up a hybrid quantum-classical neural network that follows the variational autoencoder framework. We provide details on the architecture of the model, the training algorithm, the metrics used, and the datasets utilized, and end the section with the hyper-parameters used for the various models. ### _Architecture Details_ The hybrid quantum-classical neural network consists of three components: an encoder, a continuous latent space, and a decoder. In this section we discuss the structure of each component. The _encoder_ is modeled through a classical network which defines the mean \(\mu(\mathbf{x},\theta)\) and diagonal covariance \(\Sigma(\mathbf{x},\theta)\) for the posterior distribution \(Q(\mathbf{z}|\mathbf{x})\), modeled as a multivariate Gaussian. In our experiments, we use feed-forward neural networks where the input bitstring is transformed through successive operations of linear layers, each followed by a non-linearity. Here, one starts with \(h^{0}=x\), which is the initial input. The \(l^{th}\) layer transforms this as \(h^{l}=f^{l}(h^{l-1})=\phi(W^{l}h^{l-1}+b^{l})\), where \(\phi\) is the Leaky ReLU activation function and \(\{W^{l},b^{l}\}\) are the parameters (weights, biases) of the \(l^{th}\) layer of the neural network. The final layer gives us the vectors corresponding to the mean \(\mu\) and the diagonal entries of the log-covariance matrix. Since we benchmark the performance of classical VAEs against QeVAEs, the architecture of the encoder is kept constant over all simulations.
The encoder consists of 2 hidden layers containing 8 and 7 neurons, respectively, with the leak of the activation function set to 0.01. The output of the encoder is propagated to the latent space, which is modeled as a continuous Gaussian with diagonal covariance. The encoded sample is obtained from the latent space through the reparametrization trick. The _decoder_ network in the QeVAE is modeled via a variational quantum circuit with trainable rotation gates. We use a Pauli feature map (such as the Z or ZZ feature map given in figure 2) to encode the sampled latent variable \(\mathbf{z}\) onto the circuit. The Pauli feature map is a data encoding circuit that transforms input data \(\bar{x}\in\mathbb{R}^{n}\) as \(U_{\Phi(\bar{x})}\left|0\right\rangle=\exp\left(i\sum_{S\subseteq[n]}\phi_{S}(\bar{x})\prod_{i\in S}P_{i}\right)\left|0\right\rangle\) [30]. The variable \(P_{i}\in\{X,Y,Z,I\}\) denotes the Pauli matrices. The index \(S\) describes connectivities between different qubits in a given circuit. If \(S=\{Z\}\), we obtain a \(Z\) feature map. For \(S=\{Z,ZZ\}\) we have a ZZ feature map. We then use a two-local ansatz with linear entanglement due to its compatibility with current-era hardware. The two-local circuit is a parameterized circuit consisting of alternating rotation layers and entanglement layers. The rotation layers are single-qubit gates applied on all qubits, while the entanglement layer uses two-qubit gates to entangle the qubits according to a predefined scheme. In addition, while training, we find it useful to add an additional linear feedforward layer before the quantum circuit, after sampling a latent vector. This provides two benefits: (a) flexibility in choosing a latent size, as the layer can linearly transform a latent vector of a different size to fit the input requirements of the quantum circuit; (b) it also adds power to the network by introducing additional parameters. To benchmark the performance of the QeVAE, we train classical VAEs on different 4-qubit and 8-qubit states and with different numbers of layers. The number of trainable parameters is kept fixed close to \(2^{\text{No. of qubits}}\). Table I shows the sizes of the classical decoder layers for circuits corresponding to 8-qubit measurements. ### _Training details_ We train both the classical and hybrid models using the ADAM optimizer. The training is iterative, where each iteration involves forward propagating a single bitstring ('x') (obtained from the distribution) through the encoder, obtaining a sample from the latent space ('z'), preprocessing the sample through a linear layer, embedding it onto the PQC, propagating through the quantum circuit, and computing the measurement distribution with multiple shots. The loss function utilizes the quasi-probability of \(p(x|z)\) and the KL divergence of the latent space. To avoid overfitting on the training data, we perform early stopping on the validation set loss with a patience factor \(\delta\). We use the Python packages PyTorch and the TorchConnector module in Qiskit to build these hybrid models [31, 32]. In Table II, we summarize the various hyperparameters used in the QeVAE model. ### _Metrics_ The metric we use to quantify the generated distribution is the fidelity between two discrete distributions. If \(\rho\) and \(\sigma\) are \(n\)-qubit states, we say that \(\sigma\) is a good representation of \(\rho\) if the fidelity \(F=\text{Tr}\big(\sqrt{\rho^{1/2}\sigma\rho^{1/2}}\big)>1-\epsilon\) for an \(\epsilon>0\). Through the result of [33], the fidelity can be expressed in terms of the probability distributions over a measurement that maximally distinguishes the two states.
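For reference, a minimal sketch (ours, for illustration only) of this classical fidelity, i.e. the squared Bhattacharyya coefficient written out in Eq. (5) below, computed from two measurement histograms:

```python
import numpy as np

def classical_fidelity(counts_p, counts_q):
    """F(X, Y) = (sum_i sqrt(p_i * q_i))^2 for two dictionaries of bitstring counts."""
    total_p = sum(counts_p.values())
    total_q = sum(counts_q.values())
    support = set(counts_p) | set(counts_q)
    # Bhattacharyya coefficient between the two empirical distributions
    bc = sum(np.sqrt((counts_p.get(x, 0) / total_p) * (counts_q.get(x, 0) / total_q))
             for x in support)
    return bc ** 2

# Toy usage: two 2-qubit measurement histograms
print(classical_fidelity({"00": 500, "11": 500}, {"00": 480, "11": 500, "01": 20}))
```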
Fig. 2: **Ansatz and Feature maps for learning quantum distributions.** (a) A two-local ansatz on three qubits with two repeating layers of \(R_{x}\) and \(R_{y}\) gates along with linear entanglement. (b) A Pauli-Z feature map that embeds a three-dimensional vector. Thus, given two random variables \(X\), \(Y\) with probabilities \(p=(p_{1},p_{2},\dots,p_{n})\) and \(q=(q_{1},q_{2},\dots,q_{n})\), the fidelity of \(X\) and \(Y\) is defined to be the quantity: \[F(X,Y)=\Big(\sum_{i}\sqrt{p_{i}q_{i}}\Big)^{2} \tag{5}\] where the measure \(\sum_{i}\sqrt{p_{i}q_{i}}\) is the Bhattacharyya coefficient between the two distributions. To compute the fidelity between the original and learnt distributions, we propagate 5000 random samples from the latent space through the decoder and construct the output distribution by computing the average distribution. ### _Datasets_ In order to evaluate our proposed approach, we try to model measurements from multiple families of states, as described below. For each family we generate multiple 4- and 8-qubit states with different random seeds. Multiple copies of a state are measured in the standard basis to yield a dataset consisting of samples in \(\{0,1\}^{n}\) for each \(n\)-qubit state. Here, each measurement dataset contains 1024 samples. We use 70% of the samples for training and 30% for validation. We benchmark the performance of the QeVAE on several datasets, as shown in figure 3. We train both classical and quantum models to reproduce these distributions. We are interested in discovering whether a classical learner can learn the same distribution and how the number of parameters required scales with the size of the system. In the following section, we describe the families of states that have been considered for creating our datasets. _Random product states_: Product states (i.e. tensor products of single-qubit states) are classically easy to simulate and are empirically found to be _classically easy_ to learn. We generate random product states by simulating quantum circuits with only single-qubit gates with arbitrary angles of rotation, generated according to a random seed (figure 3(a)). The state prepared is of the form \(|\psi\rangle=\otimes_{i=0}^{n-1}\left\{\alpha_{i}|0\rangle+\beta_{i}|1\rangle\right\}\), where \(n\) is the number of qubits, and projective Z-basis measurements generate the samples \(x\in\{0,1\}^{n}\). _Haar random states_ are quantum states that are uniformly distributed over the Hilbert space according to the Haar measure and represent _classically hard states_, i.e., they require an exponential number of parameters in the number of qubits to learn. These states can either be generated by first creating a Haar unitary \(U\) and then applying it on an initial state of dimension \(2^{n}\), or by normalizing a complex-valued vector of dimension \(2^{n}\). We use the latter method, where a complex-valued vector \(|\psi\rangle=\sum_{l=0}^{2^{n}-1}(c_{1l}+ic_{2l})|l\rangle\) is initialized, with \(|l\rangle\) corresponding to the orthonormal basis vectors of the \(2^{n}\)-dimensional Hilbert space, \(\mathbb{C}^{2^{n}}\), and \(c_{1l},c_{2l}\) real numbers chosen independently from a standard Gaussian distribution. This vector is normalized to yield a quantum state by using the constraint \(\langle\psi|\psi\rangle=1\). After normalization, the states are uniformly distributed on a unit hyper-sphere.
_Random quantum circuit states_ are obtained from random quantum circuits with a pre-defined entanglement structure and circuit depth, as shown in figure 4(b). These states are useful for circuit compression and circuit compilation. _Quantum kicked rotor states_ are obtained from the quantum kicked rotor (QKR), a well-known model in quantum chaos and quantum information that exhibits rich dynamics under time evolution. The QKR is defined by the Hamiltonian: \[\hat{H}=\frac{\hat{p}^{2}}{2I}+k\cos\hat{x}\sum_{n}\delta(t-nT) \tag{6}\] where \(\hat{x}\) and \(\hat{p}\) are the position and momentum operators, \(I\) is the moment of inertia, \(k\) is the kick strength, and \(T\) is the kick period. We set \(I=1\) and introduce dimensionless parameters \(\hat{h}_{s}=\hbar T/I\) and \(\kappa=k/\hbar\), where \(\hbar\) is the reduced Planck's constant and \([\hat{x},\hat{p}]=i\hbar_{s}\). The time-evolution operator for one period is then given by: \[\hat{U}=\hat{U}_{kick}\hat{U}_{free}=\exp\left(-i\kappa\cos\hat{x}\right)\exp \left(-\frac{i}{2\hbar_{s}}\hat{p}^{2}\right) \tag{7}\] To simulate the QKR, we apply \(\hat{U}\) repeatedly to an initial state \(|\psi_{p}(0)\rangle=0\) and perform forward and inverse Fourier transforms between each kick. The resulting wavefunction shows different behaviors depending on the value of \(\kappa\). For weak kicking (\(\kappa\lessapprox 5.95\)), the system exhibits quantum diffusion until a break time, where the variance of momentum \(\langle p^{2}\rangle\) grows linearly with time, and after the break time, \(\langle p^{2}\rangle\) saturates. For strong kicking (\(\kappa\gtrapprox 5.95\)), the system exhibits dynamical localization, where \(\langle p^{2}\rangle\) reaches a finite value. In contrast, the classical kicked rotor shows chaotic behavior, where the future trajectory is highly sensitive to initial conditions. For a comprehensive review of the QKR, we refer the reader to [34]. In this work, we investigate whether a generative model can learn the probability distribution of the wavefunction \(|\psi_{p}|^{2}\) after 1000 kicks for different values of \(\kappa\in\{0.5,6\}\) and \(\hbar_{s}=1\). ## V Results In the tables presented below, we summarize the best fidelity obtained across each type of measurement dataset. We compare the final fidelity between the target distribution and that produced by a random uniform guess, a classical variational autoencoder (CVAE), and a Quantum-enhanced variational autoencoder (QeVAE). For each type of state, we consider five different random seeds. QeVAE results include the best fidelity observed across different hyper-parameters like latent size, feature-map, prepossessing-layer, and relative KL-divergence term \(\beta\). From tables(I-V), we observe that across all entangled quantum states we considered (i.e. all classes of states other than product states) the final fidelity obtained from a QeVAE outperforms the classical VAE and a random guess. In addition, the number of learnable parameters in the classical VAE is typically of the order \(2^{(n)}\) (Table I) while those in a QeVAE is \(4n+\epsilon\) where \(n\) is the number of qubits and \(\epsilon\) is a constant (\(\epsilon<4\)). To further validate our findings, we run the best QeVAE models on real quantum devices and see that the obtained fidelity is higher than those achieved by
2310.05542
Harmful Conspiracies in Temporal Interaction Networks: Understanding the Dynamics of Digital Wildfires through Phase Transitions
Shortly after the first COVID-19 cases became apparent in December 2020, rumors spread on social media suggesting a connection between the virus and the 5G radiation emanating from the recently deployed telecommunications network. In the course of the following weeks, this idea gained increasing popularity, and various alleged explanations for how such a connection manifests emerged. Ultimately, after being amplified by prominent conspiracy theorists, a series of arson attacks on telecommunication equipment follows, concluding with the kidnapping of telecommunication technicians in Peru. In this paper, we study the spread of content related to a conspiracy theory with harmful consequences, a so-called digital wildfire. In particular, we investigate the 5G and COVID-19 misinformation event on Twitter before, during, and after its peak in April and May 2020. For this purpose, we examine the community dynamics in complex temporal interaction networks underlying Twitter user activity. We assess the evolution of such digital wildfires by appropriately defining the temporal dynamics of communication in communities within social networks. We show that, for this specific misinformation event, the number of interactions of the users participating in a digital wildfire, as well as the size of the engaged communities, both follow a power-law distribution. Moreover, our research elucidates the possibility of quantifying the phases of a digital wildfire, as per established literature. We identify one such phase as a critical transition, marked by a shift from sporadic tweets to a global spread event, highlighting the dramatic scaling of misinformation propagation.
Kaspara Skovli Gåsvær, Pedro G. Lind, Johannes Langguth, Morten Hjorth-Jensen, Michael Kreil, Daniel Thilo Schroeder
2023-10-09T09:08:11Z
http://arxiv.org/abs/2310.05542v1
# Harmful Conspiracies in Temporal Interaction Networks: Understanding the Dynamics of Digital Wildfires through Phase Transitions ###### Abstract Shortly after the first COVID-19 cases became apparent in December 2019, rumors spread on social media suggesting a connection between the virus and the 5G radiation emanating from the recently deployed telecommunications network. In the course of the following weeks, this idea gained increasing popularity, and various alleged explanations for how such a connection manifests emerged. Ultimately, after being amplified by prominent conspiracy theorists, a series of arson attacks on telecommunication equipment followed, concluding with the kidnapping of telecommunication technicians in Peru. In this paper, we study the spread of content related to a conspiracy theory with harmful consequences, a so-called digital wildfire. In particular, we investigate the 5G and COVID-19 misinformation event on Twitter before, during, and after its peak in April and May 2020. For this purpose, we examine the community dynamics in complex temporal interaction networks underlying Twitter user activity. We assess the evolution of such digital wildfires by appropriately defining the temporal dynamics of communication in communities within social networks. We show that, for this specific misinformation event, the number of interactions of the users participating in a digital wildfire, as well as the size of the engaged communities, both follow a power-law distribution. Moreover, our research elucidates the possibility of quantifying the phases of a digital wildfire, as per established literature. We identify one such phase as a critical transition, marked by a shift from sporadic tweets to a global spread event, highlighting the dramatic scaling of misinformation propagation. Additionally, we argue that the driving forces behind this observed transition are attributed to influential users, who act as catalysts, accelerating the spread of misinformation. Lastly, our data suggest that the characteristics of such events may be predictable, at least in some instances. From this data, we hypothesize that monitoring minor peaks in user interactions, which precede the critical phase culminating in real-world consequences, could serve as an early warning system, aiding in the prediction and potentially the mitigation of digital wildfires. ## Introduction and Background Before the advent of the internet, people primarily relied on print media and radio as their main sources of news. During that era, information dissemination was characterized by a clear separation between the source and the consumer. The flow of information was unidirectional and slow-paced, and it was common practice to trust journalists and newspapers as authoritative and reliable sources of information. The inception of the internet on January 1, 1983, marked a turning point in the way we exchange and receive information. Initially, the Internet was primarily populated by users with expertise in science, technology, engineering, and mathematics. At this stage, discussions were restricted to a small audience, and traditional broadcasting media outlets still held sway. However, a significant shift occurred in the late 1990s with the introduction of the first online social networks (OSNs) like Bolt.com or SixDegrees.com, which opened the gates for diverse individuals to participate in a broader online discourse.
With the ensuing absence of a clear separation between information sources and consumers, the issue of the trustworthiness of the sources started to become more pressing. The reliability of news agencies today can vary widely depending on the country and agency in question. Nevertheless, news agencies are generally held to a higher level of accountability than OSNs. While fact-checking of social media content may occur in rare cases, it typically only happens after posts potentially reach a large audience [4], and even then, the desired effect often fails to materialize [5]. Despite these efforts, a remaining challenge for fact-checking is the extreme amount of data available online. Although estimates for the amount of produced data vary depending on the source and the definition of 'data', it is estimated that the amount of data produced globally has been growing exponentially in recent years. According to a report by Seagate and IDC from 2020 [6], the global datasphere - which includes all the data created, captured, and replicated in a year - was projected to reach 175 zettabytes by 2025. Even though OSN data is only a smaller part of this, with this amount of information, it is impossible to manually fact-check; thus, misinformation often spreads unnoticed, especially on social media. We argue that the fact that (1) anyone, regardless of their qualifications, can post about anything online, (2) the resulting sheer amount of unchecked misinformation, and (3) the lack of accountability imposed on the providers of OSNs, turns the sea of online information into a maze; tricky to navigate even if aware of these challenges, and potentially dangerous if not. As a result, misinformation travels at a speed never previously seen, possibly resulting in severe real-world implications [7, 8, 9, 10]. According to the World Economic Forum [11], these phenomena are called Digital Wildfires (DWs) and defined as **the rapid spread of information or rumors, amplified by the power of social media, which can create significant societal and economic damage in a short amount of time. The term DW we use in this paper extends this definition by adding a temporal dimension. Here, DWs begin with the first social media post that addresses the potentially fast-spreading topic and end after real-world consequences occur**. Langguth et al. [12] have shown that the topic leading to real-world consequences may continue to be discussed even after these consequences occur. Even more, a new DW may emerge around the same complex of topics in a different context. However, for this article, we refrain from such an extended definition and stick to defining the lifecycle of a DW from the first social media post until after the real-world consequences. To effectively combat the rise of DWs, it is vital to establish automated systems capable of early misinformation detection, thereby allowing for prompt interventions. Despite the wealth of research dedicated to the automated detection of misinformation, categorizable into approaches such as linguistic-based [13], visual-based [14], user-based [15], post-based [16, 17], and network-based detection [18, 19], a comprehensive strategy targeting DWs specifically remains undeveloped. Thus, before devising effective automated systems, it is imperative to delve deeper into understanding the mechanisms and dynamics fostering the proliferation of DWs. Acquiring such knowledge lays a crucial foundation for crafting robust and precise algorithms vital for early detection and prevention. 
In this article, we aim for a generic approach exploiting not only the content but rather the underlying interactions of a particular DW, namely _the 5G and COVID-19 misinformation event_[20], within the OSN Twitter to gain knowledge about the properties and dynamics of the spread of DWs on a societal scale. Specifically, we investigate the evolution of the temporal networks induced by the interactions between Twitter users to uncover the emergence of DW. Previous research has shown that investigating only the diffusion pattern of this kind of misinformation on an individual, per social media post, -basis is not promising at all [21, 22]. Even more, it seems like we can only understand a DW when examining the entirety of information cascades associated with it [23]. In this paper, we address the following question: **Given the interaction data of an entire DW from the online social network Twitter, can we explain its dynamics and temporal evolution on a societal scale by using complex and temporal networks?** The specific temporal network we study originates from interactions between Twitter users connected to the 5G and COVID-19 misinformation event, a series of tweets claiming a link between the COVID-19 virus and 5G technology that lead to a DW. This DW reached its peak around April 2020 (see Figure 1), resulting, among other things, in the destruction of 5G-related telecommunication equipment and the harassment as well as the kidnapping of telecommunication workers. We undertake a examination of the temporal evolution of interaction networks -- precisely focusing on the dissemination of misinformation surrounding the 5G and COVID-19 event as it unfolded on Twitter. Individual information cascades on this platform predominantly manifest through threads of tweets and retweets. Understanding that DWs are essentially conglomerates of numerous such cascades, we venture to scrutinize the evolution of the DW through a lens encompassing a significantly comprehensive set of related cascades -- a holistic approach facilitated by our comprehensive dataset that, to the best of our knowledge, stands as the most expansive in network-based study of DWs. Utilizing community detection methodologies [24, 25, 26], we delve into discerning the dynamics. Furthermore, by evaluating both the centrality and activity of the vertices, we aim to pinpoint the roles and repercussions of group and individual activities in steering the temporal trajectory of the network. Our objective is to investigate dynamics to describe and predict the evolution of complex temporal networks related to the spread of DWs. The insights gained from this study can aid in the early detection and prevention of misinformation events before they escalate into DWs and end with real-world consequences. In the following, we list our contributions to the understanding of the 5G and Covid-19 DW. First, we demonstrate that the DW displays phase transition behavior, highlighting the need to approach this phenomenon from a complex systems and network science perspective. Second, our investigation of community dynamics reveals a synchronization of communities towards the peak of the DW, which corresponds to the time when real-world consequences occur. Third, we identify a small group of influential users who are crucial in driving the conversation on a large scale, drawing in a significant number of new users. 
Finally, our analysis of the largest cluster shows that it is unstable and characterized by oscillations of partly contradictory narratives. Previous work addresses similar questions and problems. Vosoughi et al. [23] investigate the dissemination of true and false news on Twitter, as detailed in their 2018 study. The authors analyze a dataset encompassing approximately 126 thousand news stories tweeted by around three million users, assembled into rumor cascades. Their findings underscore that false news stories propagate significantly faster, reach deeper, and spread more broadly than true stories. Furthermore, false information was found to be retweeted more frequently than true news. Intriguingly, true stories take approximately six times longer to reach a similar audience size of 1,500 users compared to their false counterparts. Starbird [27] analyzes Twitter data related to eight mass shooting events from 2013 to 2016, identifying alternative narratives that emerged on the platform. These alternative narratives often contradict mainstream media reports and suggest that the shootings were false flag operations or hoaxes. The study uncovers that alternative media sources play a central role in the production and dissemination of these narratives, acting as key amplifiers of misinformation. The research also highlights the interconnected nature of the alternative media ecosystem, with multiple alternative media sources cross-promoting each other's content and reinforcing the alternative narratives. This interconnectedness contributes to the spread of misinformation and fosters distrust in mainstream media sources. Del Vicario et al. [28] conducted a study investigating the spreading of misinformation on social media, with a specific focus on Facebook. Their research identifies similar consumption patterns among users who prefer scientific news and conspiracy theories. However, the patterns of information spread, or "cascade dynamics," shows differences. The authors discover that users tend to form "echo chambers," polarized, homogenous clusters where they share content that aligns with their beliefs. Furthermore, the authors introduce a data-driven model that successfully mimicked these dynamics, reinforcing that homogeneity and polarization are key determinants of content spread. In their study, Friggeri et al. [29] examine the dynamics of rumor propagation on Facebook. They find that rumors, irrespective of their veracity, spread deeply through social networks, with true rumors generating larger cascades. Their propagation continues even after debunking, indicating that users might overlook or ignore debunking comments. Furthermore, they observe that the popularity of rumors is bursty, with humor sometimes serving as an antidote to rumor propagation. However, despite these findings, the authors acknowledge potential biases in their sample collection and analysis. Langguth et al.'s study [12] expanded on the concept of DWs, focusing on the event we aim to research in this study, the misinformation linking 5G technology with the COVID-19 pandemic. They trace the origin of this rumor and reveal how it grew across social media platforms. The study encounters that even contradictory narratives could strengthen DWs, and that the role of commercially-influenced videos is often underestimated in Twitter-only analyses. 
The authors suggest several countermeasures, including focusing on the financial motivations behind the spread of misinformation in general and DWs in particular, and promoting international cooperation in research on DWs. However, the analyses in this study are more qualitative in nature and do not include a structural analysis of the underlying communication in networks. Figure 1: Most significant events that occurred during the course of the COVID-19 and 5G misinformation event that transpired online in 2020. ## COVID-19 and 5G Conspiracy Theories as a Specific Digital Wildfire Case As the COVID-19 pandemic swept across the globe in early 2020, a proliferation of tweets emerged linking the virus's origins to 5G wireless technology. Initially confined to a small and insignificant number, the volume of such tweets surged exponentially throughout April 2020, culminating in a series of arson attacks on 5G towers in multiple countries, including the United Kingdom [30], Nigeria [31], and Canada [32]. As mentioned in the previous section, formally, such fast-growing dissemination of online misinformation leading to real-world implications is known as a Digital Wildfire (DW) and is ranked as a top global risk by the World Economic Forum [11]. In the following, we introduce the chronology of the 5G and COVID-19 misinformation event delineated into three distinct phases, a classification grounded in a qualitative evaluation of the DW: pre-real world events, during-real world events, and post-real world events. The demarcation of these phases serves as a framework for unraveling the intricate dynamics at play. Notably, the event persists to this day, warranting continued scrutiny. However, this study is confined to exploring the events until May 2020. For a more detailed overview of the DW under investigation, including developments stretching to late 2022, we refer to the adept assessment provided by Langguth et al. [33]. Pre-real world event: With the first tweet collected in early January, we observed a slow growth in daily tweets insinuating a connection between COVID-19 and 5G throughout January and February 2020. In addition, we note the gradual uptick in traction for such content on platforms beyond Twitter, including notable activity on YouTube. Pinpointing the exact inception point, however, presents a challenge due to the presence of multiple sub-narratives that arguably led to the DW. Furthermore, we use Twitter data only, leaving open the possibility that discussions regarding the event were initiated on a platform other than Twitter. However, when investigating the early tweets, we discover an entire spectrum of conspiracy narratives claiming a causality between 5G radiation and the coronavirus. Even though these narratives seem to be as diverse as the individuals spreading them, they share the idea that the 5G technology is dangerous, can hurt people, and thus should not be implemented. For detailed descriptions and tweet samples for subnarratives, we point to the datasets published by Konstantin et al. [34] and Schroeder et al. [21]. At this point, we would like to draw attention to the fact that before the end of January 2020, only 685 tweets and 1,081 retweets containing keywords referencing both COVID-19 and 5G appeared on Twitter. This comparatively small number led to the decision to limit ourselves to the period from the first of February onward for the purposes of this study.
During-real world event: In March, when the pandemic began to gain a foothold in Europe, the spread of tweets also picked up its pace, resulting in four times as many tweets from late March to early April. Consequently, the first series of arson attacks happened in the UK, the Netherlands, and New Zealand during the weekend of April 3, 2020. Multiple more followed in the week after, and later some occurred in Canada as well. By July 2, 2020, there were reports of 273 cases of clashes between people who believed in some version of the conspiracy, as well as 121 reports of arson and other types of destruction [35], including the detainment of 8 telecommunication workers in Peru. Post-real world event: In late April of 2020, Twitter banned material and users promoting attacks on 5G infrastructure, and the spreading of content related to the connection seemed to halt. However, even as late as the first quarter of 2021, suspected cases of arson in Africa and Canada [36, 37] started to occur. Figure 2: (Left) Illustration of the concept of temporal slices and (Right) accumulative slices. Temporal slices do not contain the vertices and edges of previous slices, while accumulative slices contain all vertices and edges from the previous slices. In this paper, we recognize the phase characterized as "during-real world events" as indicative of a phase transition phenomenon, a concept borrowed from the field of statistical physics [38, 39]. This classification draws parallels with transitions seen in critical phenomena such as percolation, a well-studied concept in physics. As we delve deeper, it will be evident that the evolution of the DW during the COVID-19 and 5G misinformation event exhibits characteristics akin to the onset of a percolation threshold, signifying a critical phase in the information dissemination process. ## Data Collection and Preprocessing of Massive Twitter Datasets Since Twitter's Terms of Service prohibit storing large datasets, we chose a streaming-based approach, first introduced in Schroeder et al. [40]. We keep only in-stream-anonymized user IDs, the corresponding timestamps, and texts to create the tweet-retweet-user mapping. Moreover, we neither store nor process any other information. The data collection took place using a custom-built framework for Twitter graph analysis [41] and a custom scraping strategy [42]. Since Twitter's search API, at that time, only returned tweets that were not older than two weeks, it was necessary to collect data preemptively, in the hope that this collection would then, at the time of a DW, contain the relevant tweets. In the following, we describe exactly this procedure. Between December 2019 and May 2020, we amassed a total of 6,286,886,977 COVID-19-related tweets, retweets, replies, and quotes (referred to as statuses), leveraging the keywords outlined in Appendix A and using Twitter's search API. It is pertinent to note that querying the Twitter search API frequently yields duplicate entries. To ensure the robustness of our dataset, we meticulously identified and removed these duplicates in the initial phase of our data processing. This filtration process resulted in a refined dataset comprising 2,570,581,178 unique statuses, which formed the basis for our subsequent analysis. Next, we filter for those tweets that mention " 5G " or " 5g ". We do not remove the whitespaces because doing so produces too many false positives completely unrelated to 5G.
Moreover, we include alternative spellings such as " 5-G ", although the number of tweets containing these is negligible. All keywords are listed in Appendix B. After applying the second filter, \(364,325\) COVID-19-related and 5G-related tweets remain. As a concluding procedure, we exclude any statuses not originating from the specific timeframe of February 1, 2020, up to and including May 11, 2020. **Hence, our study spans a precise duration of 100 days, beginning on February 1, 2020, and ending on May 11, 2020.**

The enrichment phase commences with the curated dataset derived from the preceding filtering phase. Central to this phase is the process of Twitter thread completion. To elucidate, a Twitter thread is a cohesive series of interconnected tweets stemming from an inaugural tweet and encapsulating all ensuing replies and quoted tweets to foster a consolidated conversation. Such threads offer a structured vantage point, enabling a comprehensive insight into the contextual dynamics enveloping the primary tweet. During this phase, individual threads pertaining to a specified tweet are queried to facilitate the incorporation of statuses surpassing the two-week retrieval constraint imposed by the Twitter search API. Consequently, this method permits the inclusion of tweets devoid of the keywords delineated in Appendices A and B. It warrants mention that, despite these efforts to augment the dataset by recovering more dialogues linked to the DW, the potential for thread incompletion persists, attributed to the inherent limitations of the Twitter API, which confines queries to parent statuses within a thread exclusively. Notwithstanding this limitation, we contend that the augmentation process, albeit yielding incomplete threads, substantially enhances the dataset by infusing it with valuable context.

Figure 3: Three-part depiction of Network \(G_{\downarrow}\). The left subfigure presents the global degree distribution, illustrating the range of degree centralities. The central subfigure displays the distribution of cluster sizes across all accumulative slices, with clusters identified via the Leiden algorithm [43]. The rightmost subfigure explores the cluster size distribution across all temporal slices (\(\Delta t=4h\)). Both the middle and rightmost figures are created by counting the occurrence of clusters with size \(C\) in every slice before averaging the number of occurrences by the number of slices in the experiment. Notably, both degree and cluster sizes exhibit a power law distribution, mirroring patterns found in other social networks.
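For illustration, the two-stage keyword filtering and deduplication described above can be sketched in a few lines of Python. The keyword lists below are placeholders for the full lists in Appendices A and B, and the status format is an assumption; this is not the collection framework actually used in the study.

```python
# Illustrative sketch of the two-stage keyword filtering and deduplication
# described above. The keyword lists are placeholders for the full lists in
# Appendices A and B, and the status format is an assumption.
COVID_KEYWORDS = ["covid", "corona", "sars-cov-2"]      # stand-in for Appendix A
FIVE_G_KEYWORDS = [" 5G ", " 5g ", " 5-G "]             # stand-in for Appendix B

def mentions_any(text, keywords):
    """Plain substring match; surrounding whitespace is kept deliberately for
    the 5G keywords to avoid false positives unrelated to 5G."""
    return any(k in text for k in keywords)

def filter_statuses(statuses):
    """statuses: iterable of dicts with 'id', 'timestamp' and 'text' fields."""
    seen = set()
    for s in statuses:
        if s["id"] in seen:          # drop duplicates returned by the search API
            continue
        seen.add(s["id"])
        if mentions_any(s["text"].lower(), COVID_KEYWORDS) and \
           mentions_any(s["text"], FIVE_G_KEYWORDS):
            yield s
```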
Following the enrichment phase, we engaged in a detailed analysis of the resulting dataset, a process that fostered the foundation for multiple scientific publications. One significant step in this process was the meticulous labeling of over 9,688 tweets to determine their association with the 5G COVID-19 misinformation event. This effort yielded two distinct datasets: one archiving the tweets and another detailing 3,492 individual tweet-retweet cascades, each labeled to indicate association with the misinformation event. These datasets, denominated as WICO-Text and WICO-Graph, are discussed in depth in works by Pogorelov et al. [44] and Schroeder et al. [34], respectively. To further leverage these datasets, we organised a MediaEval Benchmark Challenge task, wherein both datasets were subdivided into testing and training sets, and distributed to an initial pool of 15 groups. These groups embarked on developing distinct classifiers capable of differentiating between tweets and cascades genuinely associated with the misinformation event. The entire endeavor is documented in detail in [18]. ## From Temporal Interactions to Interaction Networks Given the filtered and enriched dataset, we now extract user interaction by counting contacts between each pair of users. We define \(Z_{u}=\) set of users and \(Z_{s}=\) set of statuses. A contact between two users is defined as \[\text{any user }j\text{ interacting with any user }i\text{ through user }j\text{ either retweeting, replying, or quoting user }i. \tag{1}\] The set of contacts induces a symmetric adjacency matrix \(A\) with \(A_{ij}=1\) to label an existing contact between users \(i\) and \(j\) and \(0\) if such contact does not exist. By keeping track of the number of retweets, replies, and quotes between users, we are able to build the directed and weighted network of interactions. Since in this paper, we focus on assessing the size of connected users, we consider, for simplicity, the contacts as unweighted and undirected edges, forming the interaction network and its communities. Furthermore, we call such a network temporal interaction network when contacts have timestamps allowing for only considering excerpts of an interaction network within an arbitrary time window. More precisely, we build temporal interaction networks based on the adjacency matrix \(A\), defining the _underlying graph_\(G_{\downarrow}\) as the temporal graph containing the entirety of vertices and edges, i.e. \[G_{\downarrow}=(V_{\downarrow},E_{\downarrow}), \tag{2}\] where \(V_{\downarrow}=\{v_{i}\}_{i=0}^{N}\) is the total set of vertices labeling the users and \(E_{\downarrow}=\{(u,v,t)|u,v\in V_{\downarrow},t_{0}<t\leq t_{0}+nT\}\), with \(T\) the size of the time window during which connections composing the graph occur, \(t_{0}\) is the initial time and \(n\) labels the time-window. Notice that both \(G_{\downarrow}\) and \(E_{\downarrow}\) are functions of the time-window \(n\). ## Assessing the Evolution of Interaction Networks To examine the network dynamics in a temporal way, we slice \(G_{\downarrow}\). Thus, a slice is a subgraph of \(G_{\downarrow}\) and the set including all slices for the observing period is \(S=\left\{G(V_{\downarrow},E_{i}),i\in L,E_{i}\in E_{\downarrow}\right\}\) where \(L\) is the number of slices. Although a set of slices is temporal, each slice is a static "snap-shot" of a time period in the interaction network. In the following, we present two distinct types of slices: accumulative slices and contact slices (see Figure 2). 
The rationale behind developing multiple slice types is the potential to extract diverse information from each one. In the context of network analysis, pure temporal slices offer valuable insights into the overall temporal evolution of the system. However, their limitation lies in their inability to trace clusters across multiple slices because of the removal of nonactive vertices in subsequent time periods. Alternatively, accumulative slices provide the advantage of cluster tracking but come with the trade-off of rapidly increasing in size, posing a challenge in handling them effectively. We define temporal slices (see Figure 2 left) and divide the interaction network into slices of sub-graphs based only on the timestamps, and in a non-accumulative manner. Moreover, we remind the reader that edges are contacts, e.g., retweets or comments, and thus associated with a timestamp. We divide our set of edges into intervals, e.g., a day, a week, which we call \(\Delta t=(t^{s},t^{e})\) so that we get \(L\) slices in total and the entire time interval where we collected our dataset \(T=t_{L}^{e}-t_{1}^{s}\). This results in a temporal graph \(\mathcal{G}=(V_{\downarrow},E_{1},...,E_{L})\). Now each slice \(s_{i}\in S\) contains all edges added to the network in the time period \(\Delta t_{i}\) with \[S=\left\{G(V_{\downarrow},E_{i}),i\in L\right\}, \tag{3}\] and \(\bigcup_{i=1}^{L}s_{i}=G_{\downarrow}\). We define accumulative slices (see Figure 2 right) as the set of all contacts made in the time interval \([0,t_{i}],\quad 0<t_{i}<t_{i+1}\leq T\), where \(T\) is the data acquisition time window. This means that the slice \(s_{i+1}\) contains all contacts in \(s_{i}\) plus all other contacts made in the time interval \([t_{i},t_{i+1}]\). \[s_{i+1}=\big{\{}u,v\subset V_{\downarrow}:\exists(u,v,t)\in E_{\downarrow} \wedge t\in[0,t_{i+1}]\big{\}}=s_{i}\cup\big{\{}u,v\subset V_{\downarrow}: \exists(u,v,t)\in E_{\downarrow}\wedge t\in[t_{i},t_{i+1}]\big{\}}.\] Furthermore, we define the distance between two subsequent timestamps, \(t_{i+1}-t_{i}\), as \(\Delta t\), which is equal for all \(i\). The last slice of the experiment is, by definition, the entire underlying graph, \(G_{\downarrow}\). ### Degree Centrality as a Proxy for User Activity Centrality measures give insight into which users or vertices contribute the most to the flow of information. As we are dealing with an undirected network, we cannot determine whether a vertex is highly active, e.g., comments on other statuses with a high frequency or is made highly active by others, e.g., many other statuses are responses to a status. Thus, we define vertex activity as _the number of contacts a user experiences_. In other words, this is the number of edges connected to a given vertex and thus equal to the degree centrality defined as \[\mathcal{C}_{\text{deg}}(i)=\sum_{i,j\in\mathcal{N}(i)}e_{j}, \tag{4}\] where \(\mathcal{N}(i)\) is the neighborhood of the vertex \(i\) defined as \[N(v)=\big{\{}u\subset V_{\downarrow},u\neq v:\exists(u,v,t)\in E_{\downarrow} \big{\}}. \tag{5}\] Degree centrality can be calculated for each slice \(s_{i}\) or the entire network \(G\downarrow\). Later, we use this measure to explore the correlation between the overall vertex activity of a group and other properties of the system, such as the size of the largest clusters and the number of overall contacts. 
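To make the slice definitions and the degree centrality above concrete, the following sketch builds temporal and accumulative slices from a list of timestamped contacts and reads off a vertex's degree per slice. It assumes a simple `(user_i, user_j, timestamp)` edge list and the `networkx` library; the names are illustrative, and multiple contacts between the same pair collapse into one undirected edge, matching the unweighted, undirected interaction networks used here.

```python
# Sketch: temporal and accumulative slices from timestamped contacts,
# plus per-slice degree centrality. Names are illustrative.
import networkx as nx

def temporal_slices(contacts, t0, delta_t, num_slices):
    """Slice i contains only the edges whose timestamp falls in window i."""
    slices = [nx.Graph() for _ in range(num_slices)]
    for u, v, t in contacts:
        idx = int((t - t0) // delta_t)
        if 0 <= idx < num_slices:
            slices[idx].add_edge(u, v)
    return slices

def accumulative_slices(contacts, t0, delta_t, num_slices):
    """Slice i contains every edge observed up to the end of window i."""
    slices, g = [], nx.Graph()
    ordered = sorted(contacts, key=lambda c: c[2])
    pos = 0
    for i in range(1, num_slices + 1):
        t_end = t0 + i * delta_t
        while pos < len(ordered) and ordered[pos][2] <= t_end:
            u, v, _ = ordered[pos]
            g.add_edge(u, v)
            pos += 1
        slices.append(g.copy())
    return slices

def degree_centrality(graph, user):
    """Number of contacts of `user` in the given slice (cf. Eq. 4)."""
    return graph.degree(user) if user in graph else 0
```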
### Power Law Distribution of User Contacts and Community Sizes Many empirical studies have shown that social networks exhibit power-law degree- and community-size distributions. The phenomenon occurs across complex networks ranging from social [45, 46] to informational [47, 48] and biological networks [49, 50]. Here, power law distributions arise for various reasons, such as preferential attachment, i.e., new nodes are more likely to connect to already well-connected nodes; growth, i.e., networks expand over time; and homophily, i.e., similar nodes tend to connect with each other. Figure 4: (Left) Total number of users and established contacts in each time “slice” and (Right) the corresponding accumulative slices. Contacts encompass retweets, quotes, and comments, along with the count of distinct users per time segment. The 100-day investigation span, from February 1, 2020, to May 11, 2020, separates into \(\Delta t=24h\) accumulative slices (shown on the right) and \(\Delta t=4h\) temporal slices (shown on the left). Both charts clearly demonstrate a phase transition between slices 360 to 390 and slices 61 to 66, respectively. Additionally, potential predictors emerge between slices 270 and 280 on the right chart. For the internet itself [15] as well as for social networks [51], including Twitter [40], it has been shown that both connectivity in general and the number of communication contacts in particular follow power-law distributions. Figure 3 shows that this also applies to communication within the DW under investigation. As with general communication in social networks, there are many users with few contacts to others as well as few users with many contacts to others. Figure 3 depicts \(G_{\downarrow}\)'s global degree distribution, illustrating the range of degree centralities, the community sizes across all accumulative slices, and the community size distribution across all temporal slices. For this, we investigate the community structure underlying our interaction network using the Leiden algorithm [43] with standard configurations for the resolution parameter. The power-law degree distributions denote the existence of few hubs, i.e., Twitter users with numerous connections or interactions. Simultaneously, numerous nodes exhibit fewer connections. In the context of DWs, this distribution suggests that these hubs play a critical role in driving the dynamics of interactions within the DW. A minor proportion of users can significantly impact the course of interactions, substantiating their role in the progression of DWs. Community size distributions following a power law indicate a composition of multiple small communities alongside a few large ones within the DW. While clusters of users exhibit intensive interaction amongst themselves, the network's major interaction activity concentrates within a handful of large communities, for which we later show that conversation within these communities is mostly driven by influential users. Power law distributions, in degree and community size, bring forth numerous implications for DWs. The resilience to random node failures associated with power-law degree distributions implies that deactivating or removing a random user might not disrupt the spread of the wildfire. The highly connected hubs maintain the continuum of interaction. 
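The empirical distributions in Figure 3 can be reproduced from the slice graphs along the following lines. The `igraph`/`leidenalg` toolchain is one possible implementation of the Leiden algorithm [43] and is shown here only as an assumed, illustrative choice, not necessarily the study's own code.

```python
# Sketch: empirical degree and community-size distributions, as in Figure 3.
# The igraph/leidenalg toolchain is an assumed, illustrative choice.
from collections import Counter
import igraph as ig
import leidenalg

def degree_distribution(g: ig.Graph) -> Counter:
    """{degree: number of vertices with that degree} for the whole graph."""
    return Counter(g.degree())

def community_size_distribution(g: ig.Graph) -> Counter:
    """Leiden partition (modularity quality function), then count how often
    each community size occurs."""
    part = leidenalg.find_partition(g, leidenalg.ModularityVertexPartition)
    return Counter(len(community) for community in part)

def averaged_size_distribution(slice_graphs):
    """Counts of clusters of size C summed over all slices, divided by the
    number of slices -- mirroring the middle/right panels of Figure 3."""
    total = Counter()
    for g in slice_graphs:
        total += community_size_distribution(g)
    return {size: count / len(slice_graphs) for size, count in total.items()}
```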
Furthermore, we argue that hubs and large communities significantly influence the conversation direction, narrative shape, and information spread within the DW, while the network structure facilitates the rapid and broad diffusion of information, ideas, or behaviors, especially if instigated or promoted by the hubs or large communities.

### Defining the Life Cycle via Phase Transition

A phase transition is a well-established concept in physics [38, 39], with examples such as the transition from a solid to a liquid state (melting), from liquid to gas (evaporation), or the transition from a set of small disconnected groups of individuals to a large connected set of individuals spanning the entire society. This latter phase transition is usually called the transition to percolation [39] and occurs often within the realm of complex systems and network science, usually associated with the emergence of new structures, functionalities, or patterns. Classic examples of phase transitions in complex systems are the emergence of a giant connected component in a random network [52], or the abrupt transition from free-flowing traffic to a traffic jam, a scenario often referred to as a 'phantom traffic jam' [53]. Moreover, phase transitions also serve as effective metaphors for sudden changes in collective human behaviors, particularly in the digital realm [54]. A specific idea or movement might transition from being recognized by a limited number of individuals to achieving widespread recognition or even reaching viral status. Qualitatively, a phase transition can be investigated by identifying a parameter whose changes can drive the system from one phase to another and by keeping track of some observable which characterizes the phase. In the case of the liquid-to-gas transition, the parameter is, of course, temperature, and the observable is the density of water. In the case of the transition to percolation, the parameter is the probability that a pair of individuals establishes a contact, and the observable is the size of the largest group of individuals (cluster).

In this section, we argue that the concept of phase transitions is integral to our understanding of DWs. In particular, we use it to describe the critical moment when the propagation of misinformation abruptly accelerates, shifting from a slow and steady pace to a rapid, wildfire-like spread. At the same time, the phase transition quantitatively delineates the three phases of the DW under investigation; see our discussion of Figure 4.

Figure 5: The average nearest neighbour degree (ANND) function at three different stages: (Left) before the phase transition (before slice 360), (Middle) during the phase transition (between slices 360 and 390), and (Right) after the phase transition (after slice 390).

The 5G COVID-19 DW we analyze in this paper exhibits phase transition characteristics in its spread and contact pattern, suggesting that this phenomenon might apply to other DWs as well. Figure 4 (left) presents temporal slices, i.e., time slices containing only the temporary interaction networks for the period from February 1, 2020, to May 11, 2020. These slices capture both the number of contacts, as defined in Equation 1, and the number of distinct users. Each slice (left) encompasses the interaction network composed of tweets, retweets, comments, and quotes in a time interval of four hours.
The number of contacts and different users reveals phase transition characteristics between slices 360 and 390, corresponding to the period from April 1 to April 6, 2020. This period precisely precedes and follows the first arson attacks. These observations make it plausible to argue that the categorization of DWs into three phases, namely before, during, and after real-world consequences, as qualitatively proposed by Langguth et al. [12], can also be assessed quantitatively. Figure 5 shows further evidence that the three stages of a phase transition in time occur during the 5G COVID-19 DW. While overall, all three phases appear dominantly disassortative, the average nearest neighbor degree, as well as its variance, significantly vary among these phases. Moreover, Figure 5 shows the average nearest neighbor degree dependence on the node degree in a qualitatively different way. Before and after the transition, there is a clear degree-range separation: for \(k\lesssim 400\) the nearest neighbor degree is typically larger than for \(k\gtrsim 400\). At the transition, the nearest neighbor degree seems to distribute more closely to a power-law. This variance could represent an exciting area for future research and a potential unique feature of DWs. ### Confluence of Coalescing Narratives In the study of community dynamics over time, it is crucial to recognize that a community, in this sense, represents more than just a group of users interconnected through high volumes of interaction. In our network, an edge represents an undirected contact, indicating a larger discourse occurring within a particular timeframe. We refer to this discourse as a narrative. This terminology draws from the findings of Langguth et al. [12], who suggest that DWs, particularly the 5G COVID-19 DW, initially combine followers from various conspiracy narratives. There wasn't one specific source for the DW. Instead, various conspiracy theorist groups had already been discussing anti-vaccination and anti-radiation theories prior to the COVID-19 pandemic, which then provided an opportunity for these narratives to merge. Figure 6 portrays the evolution of user communities in the analyzed DW from February 1, 2020, to May 10, 2020. This is represented as time slices on the \(x\)-axis. The left subplot, indicating accumulative slices and a \(\Delta t=24h\), shows a clear and linear increase in the relative size of the largest community. However, the proportion of users belonging to the top 10% of all communities does not increase as significantly. The subplot on the right, which represents temporal slices with a \(\Delta t=4h\), shows a similar trend. Importantly, the composition of the largest community changes over time, meaning the largest community at a given moment may not include the same users as the largest community at a later moment. Figure 6: Evolution of user clustering in the analyzed DW from February 1, 2020, to May 10, 2020, represented in time slices on the x-axis. The left subplot illustrates the situation with accumulative slices and a \(\Delta t=24h\). It shows a clear and linear increase in the relative size of the largest cluster. This implies a growing participation in the 5G-COVID conspiracy narrative, with more users becoming part of the largest cluster over time. However, the fraction of users who are part of the top 10% of all clusters does not exhibit such strong growth. The subplot on the right, displaying temporal slices with a \(\Delta t=4h\), shows a similar trend. 
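A sketch of the per-slice quantities shown in Figure 6, namely the relative size of the largest community and the fraction of users in the top 10% of communities, is given below. It assumes `igraph` slice graphs and the Leiden implementation from `leidenalg`; this is an illustrative choice, not necessarily the study's code.

```python
# Sketch of the per-slice quantities plotted in Figure 6. `g` is an
# igraph.Graph for one (temporal or accumulative) slice; library choice
# and names are illustrative.
import leidenalg

def community_fractions(g):
    part = leidenalg.find_partition(g, leidenalg.ModularityVertexPartition)
    sizes = sorted((len(c) for c in part), reverse=True)
    n_users = g.vcount()
    if n_users == 0 or not sizes:
        return 0.0, 0.0
    largest_fraction = sizes[0] / n_users
    top_k = max(1, int(0.10 * len(sizes)))        # top 10% of all communities
    top_fraction = sum(sizes[:top_k]) / n_users
    return largest_fraction, top_fraction

def track_over_slices(slice_graphs):
    """Returns [(largest_fraction, top10_fraction), ...], one pair per slice."""
    return [community_fractions(g) for g in slice_graphs]
```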
It is noteworthy that the identity of the largest cluster changes over time, i.e., the largest cluster at a given time does not necessarily contain the same group of users as the largest cluster at a later time. When comparing the growth in relative size of the largest community with that of the top 10% of all communities, we observe that the latter grows at a slower rate. This suggests increasing participation in the 5G-COVID conspiracy narrative, with a growing number of users becoming part of the largest community over time. Therefore, a trend emerges toward the merging of different narratives. ### Influential Users are Steering the Course of Large-Scale Conversations A critical aspect of the dynamic evolution of DWs is the role of influential users. These users, often characterized by a high degree centrality, have the potential to significantly shape the course of large-scale conversations. Given the network structure of social media platforms like Twitter, a message from an influential user can quickly reach a vast audience, potentially altering the trajectory of an ongoing narrative or sparking a new one. Figure 7 illustrates the correlation between total contacts and the fraction attributable to the most active users, gauged by degree centrality. The left subplot reveals the percentage of vertices in relation to the total number in an accumulative slice with a \(\Delta t=24h\) interval, marking the transition around slice 63. The changing ratio of connections during this transition suggests that highly active users interact with a larger set of distinct users compared to those with lower activity levels. Furthermore, the figure indicates a potential predictor around slice 45, which appears to be primarily noticeable to the active users. The right-hand plot presents the same relationship for the temporal slices with \(\Delta t=4h\). Here, we observe similar patterns as on the left. Additionally, the gap becomes apparent between the most active 2% of users and the less active 5%, 10%, or 20% between slices 350 and 470, underlining the influential role of a small proportion of highly active users in shaping the network dynamics. Our analysis of the 5G-COVID-19 DW reveals that these influential users contribute significantly to the formation and growth of the largest community, as well as to the overall narrative evolution within the DW. Notably, we find that users with a high degree centrality have a disproportionate impact on the narrative dynamics, reaching more users compared to those with lower activity levels. Furthermore, our data shows that influential users can act as connectors between different narratives, further fueling the growth of the DW. This finding is consistent with the idea that the confluence of coalescing narratives is a fundamental characteristic of DWs. In the context of mitigating the impact of harmful DWs, our findings underscore the importance of monitoring the activity of influential users. Effective strategies could include promoting accurate information through these users or mitigating their influence if they are spreading harmful narratives. Our analysis, however, also highlights the complexity of this task. As the identity of the largest community is not constant over time, so too are the influential users within it. This fluidity necessitates dynamic strategies for monitoring and intervention, tuned to the temporal evolution of the DW. 
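The statistic behind Figure 7, i.e., the share of contacts attributable to the most active users, can be computed per slice as sketched below; the thresholds and names are illustrative assumptions, and `g` is a `networkx`-style slice graph.

```python
# Sketch of the per-slice statistic behind Figure 7: the share of all contacts
# in a slice that involve the top x% most active users (by degree centrality).
def top_user_contact_share(g, fraction=0.02):
    degrees = dict(g.degree())
    if not degrees or g.number_of_edges() == 0:
        return 0.0
    k = max(1, int(fraction * len(degrees)))
    top_users = set(sorted(degrees, key=degrees.get, reverse=True)[:k])
    touched = sum(1 for u, v in g.edges() if u in top_users or v in top_users)
    return touched / g.number_of_edges()

# e.g. shares for the activity levels shown in Figure 7:
# [top_user_contact_share(g, f) for f in (0.02, 0.05, 0.10, 0.20)]
```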
Figure 7: Correlation between total contacts and the fraction attributable to the most active users, gauged by degree centrality. The left subplot reveals the percentage of vertices in relation to the total number in an accumulative slice with a \(\Delta t=24h\) interval, marking the transition around slice 63. The changing ratio of connections during this transition suggests that highly active users interact with a larger set of distinct users compared to those with lower activity levels. Furthermore, the figure indicates a potential predictor around slice 45, which appears to be primarily noticeable to the active users. The right hand plot presents the same relationship for the temporal slices with \(\Delta t=4h\). Here, we observe similar patterns as on the left. Additionally, the gap becomes apparent between the most active 2% of users and the less active 5%, 10%, or 20% between slices 350 and 470, underlining the influential role of a small proportion of highly active users in shaping the network dynamics. ### Examination of Pre-transition Phenomena and Exploration of Potential Predictors In the realm of DWs, understanding the dynamics prior to a transition phase can yield critical insights into the potential predictors of such large-scale shifts. Our research illuminates several pre-transition phenomena that suggest impending changes in the narrative or the community structure. We observe a small spike visible around slice 260 in Figure 4 (left), which portrays the time or time slice on the x-axis and the number of users and user contacts on the y-axis. This peak, preceding the actual transition, could indicate that a topic is gaining potential to generate wider reach, serving as a visible spark that signals an upcoming phase shift. Before the transition, we also observe modifications in user behavior patterns. Specifically, the most active users begin to expand their range of contacts. These users, who typically have a high degree centrality, start interacting with a more extensive set of unique users compared to less active ones. This observation reinforces the central role of influential users in guiding the DW into a new phase, a notion we discussed in the previous section. We also see changes in the structural properties of the network before the transition. For instance, the ratio of connections, i.e., the number of communications with different actors, shifts during the transition. This suggests the possibility of using network structure changes as predictive indicators of a DW phase transition. These observations suggest that, with further study and refinement, we might be able to establish a predictive model for DW transitions. Such a model would not only enhance our understanding of DW dynamics but could also provide actionable insights to contain or guide these narratives. Consequently, our future research will focus on refining these potential predictors and testing their predictive power across different contexts. ## Discussion and Conclusion In this paper, we address the dynamics and temporal evolution of a specific DW, namely the COVID-19 and 5G misinformation event from early 2020. By processing large amounts of Twitter data, we trace the course of this DW beyond the investigation of individual information spreading cascades. Rather, we examine the entirety of the underlying communication in interaction networks and thus provide a more holistic view of the dynamics that underlie a DW. 
We show that both the average count of user contacts and the average community sizes over the total duration of the phenomenon adhere to a power law distribution, thus underlining the intrinsic structure and similarity to other communication networks. Based on the study of these dynamics, we propose a framework that not only allows for assessing the three stages of a DW quantitatively through the concept of phase transitions but also allows for the application of well-established methods to understand DWs through the lens of physics. Physicists are able to predict phase transitions by identifying drivers. Temperature, for example, is the driver for the transition water undergoes between gas, liquid, and solid phases. In this work, we identify potential candidates for drivers in the phase transition of a DW based on the underlying communication, thus paving the way toward a DW prediction model. Specifically, we observe a confluence of coalescing narratives by studying the community dynamics over the course of the DW. We ask the reader to recall that a community in a time slice of a temporal interaction network is a group of people, in our case Twitter users, talking to each other (about a topic) within a certain time window. In future work, we plan not only to extend this research to other DWs, but also to understand the mechanism underlying this confluence, model it, and possibly comprehend it as a building block of a driver. Furthermore, we identify a subset of all users involved in the DW that could be a major driving force for the transition to a global event. This result suggests that potential drivers can be found in the underlying characteristics of the communication of this user group. As with coalescing narratives, further studies need to show that this result also manifests in other DWs.

Our study's findings have considerable real-world implications. Unraveling these dynamics can be a potent tool for entities wishing to manage or control information propagation. For democratic societies, these tools can be invaluable in identifying and mitigating nascent extremist groups or misinformation campaigns. On the other hand, they could potentially be exploited to suppress democratic dissent in totalitarian regimes. Given the high stakes, an effective response to DWs requires an approach that harmonizes technological tools with education, media literacy, and an informed public. Moreover, it is critical to ensure democratic accountability for those who wield these powerful tools.

To conclude, this study represents an effort to model and understand the dynamics of a DW within complex temporal interaction networks, with results shedding new light on the societal-scale understanding of these phenomena. Our ultimate objective is to devise methods that can predict and mitigate the spread of harmful misinformation. We encourage future research to validate and expand upon the identified patterns and hypotheses, thus ensuring steady progress toward this paramount objective.
2303.02315
Optimizing Fuel-Constrained UAV-UGV Routes for Large Scale Coverage: Bilevel Planning in Heterogeneous Multi-Agent Systems
Fast moving unmanned aerial vehicles (UAVs) are well suited for aerial surveillance, but are limited by their battery capacity. To increase their endurance UAVs can be refueled on slow moving unmanned ground vehicles (UGVs). The cooperative routing of UAV-UGV multi-agent system to survey vast regions within their speed and fuel constraints is a computationally challenging problem, but can be simplified with heuristics. Here we present multiple heuristics to enable feasible and sufficiently optimal solutions to the problem. Using the UAV fuel limits and the minimum set cover algorithm, the UGV refueling stops are determined. These refueling stops enable the allocation of mission points to the UAV and UGV. A standard traveling salesman formulation and a vehicle routing formulation with time windows, dropped visits, and capacity constraints is used to solve for the UGV and UAV route, respectively. Experimental validation on a small-scale testbed (http://tiny.cc/8or8vz) underscores the effectiveness of our multi-agent approach.
Md Safwan Mondal, Subramanian Ramasamy, Pranav Bhounsule
2023-03-04T04:19:48Z
http://arxiv.org/abs/2303.02315v2
A Bilevel Optimization Framework for Fuel-Constrained UAV-UGV Cooperative Routing: Planning and Experimental Validation

###### Abstract

Fast-moving unmanned aerial vehicles (UAVs) are well suited for aerial surveillance but are limited by their battery capacity. To increase their endurance, UAVs can be refueled on slow-moving unmanned ground vehicles (UGVs). The cooperative routing of a UAV-UGV team to survey vast regions within their speed and fuel constraints is a computationally challenging problem, but it can be simplified with heuristics. Here we present multiple heuristics to enable feasible and sufficiently optimal solutions to the problem. Using the UAV fuel limits and the minimum set cover algorithm, the UGV refueling stops are determined. These refueling stops enable the allocation of mission points to the UAV and UGV. A standard traveling salesman formulation and a vehicle routing formulation with time windows, dropped visits, and capacity constraints are used to solve for the UGV and UAV routes, respectively. Experimental validation on a small-scale testbed shows the efficacy of the approach.

## 1 Introduction

The integration of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) has been increasingly utilized in various applications, such as search and rescue, surveillance and reconnaissance missions, and transportation [1, 2, 3, 4]. One of the most significant challenges in such cooperative routing is the limited fuel capacity of UAVs, which restricts their operational time and range. However, effective cooperation between UAVs and UGVs can enhance mission efficiency and extend the coverage range of UAVs, thereby enabling them to achieve longer-range and persistent operations. The complexity of such cooperative routing for UAV-UGV systems lies in its combinatorial nature, making it computationally challenging to solve the formulation with exact methods. Therefore, suitable heuristics should be used to achieve high-quality solutions quickly. In this paper, we propose a bi-level optimization framework for solving the fuel-constrained UAV-UGV cooperative routing problem that optimizes the operational time and fuel consumption of both vehicles. To validate our proposed algorithm, we conducted hardware testing that provides practical insights into the real-world application and feasibility of our proposed approach. By maximizing the efficiency of the cooperative system, our algorithm has the potential to overcome the limitations posed by fuel capacity and speed constraints, enabling successful implementation of UAV-UGV cooperative routing in a variety of applications.

### _Related works_

The fuel-constrained vehicle routing problem for UAVs has been an area of considerable research. Several studies have investigated the routing of multiple fuel-constrained UAVs with recharging at fixed depots. Levy et al. [5] investigated the routing of multiple fuel-constrained UAVs with fixed recharging depots, using variable neighborhood descent (VND) and variable neighborhood search (VNS) heuristics to find feasible solutions for large instances. Similarly, Sundar et al. [6] developed a mixed-integer linear programming (MILP) model that can be solved using an off-the-shelf MILP solver. Instead of a fixed depot, Maini et al. [7] addressed cooperative routing of a single UAV-UGV system, with the UGV serving as a mobile charging point for the UAV on a road.
Unlike previous studies, the authors developed a greedy heuristic method to find the rendezvous locations of recharging along the UGV route. Manyam et al. [8] investigated the cooperative routing problem of a team comprising a UAV and UGV subject to communication constraints. They formulated the problem as a mixed-integer linear program (MILP) and developed a branch-and-cut algorithm to solve the problem optimally. Researchers extended the UAV-UGV cooperative vehicle routing problem by solving it in a tiered two-echelon approach. To solve the two-echelon cooperative routing problem, Luo et al. [9] proposed a binary integer programming model with two heuristics. Liu et al. [10] developed a two-stage route based framework to optimize both the truck's main route and the drone's adjoint flying routes for a truck and drone based parcel delivery system. They created a hybrid heuristic that combined nearest neighbor and cost-cutting strategies to quickly build a viable solution. In our previous research [11, 12], we explored a hierarchical bi-level optimization framework for cooperative routing of multiple fuel-constrained UAVs and a single UGV. The framework employs K-means clustering at the outer level to generate UGV visit points, which can be connected using the traveling salesman problem (TSP) to obtain the UGV route. At the inner level, a vehicle routing problem with capacity constraints, time windows, and dropped visits for the UAV is formulated and solved using the UGV path. In an extension to our work [13], we demonstrated that optimizing the heuristics parameters with Genetic Algorithm (GA) and Bayesian Optimization (BO) methods can lead to a significant improvement in the quality of the solutions. But the past framework was scenario specific making it hard to generalize for any unknown scenario, which was driving force for this study to come up with a robust optimization framework that can be generalized and implemented experimentally on hardware. On the experimental front, few significant works have been done to demonstrate localization and mapping of UAVs in indoor environments. Nigam et al. [14, 15, 16] investigated for high-level scalable control techniques for unmanned aerial vehicles (UAVs) for performing persistent surveillance in an uncertain stochastic environment in a hardware testbed. Two UAVs were used by Frew et al. [17] to demonstrate road following, obstacle avoidance, and convoy protection in a flight testing, while Jodeh et al. [18] provided an overview of cooperative control algorithms of heterogeneous UAVs by the Air Force Research Laboratories (AFRL). Leahy et al. [19, 20] experimentally validated their proposed method for automating persistent surveillance missions involving multiple vehicles. Automata-based techniques were used to generate collision-free motion plans and vector fields were created for use with a differential flatness-based controller, allowing vehicle flight and deployment to be fully automated according to the motion plans. They used charging platforms for the UAVs for truly persistent missions. Boeing's Vehicle Swarm Technology Laboratory (VSTL) [16, 21, 22, 23] and MIT's RAVEN laboratory [24] testbed have conducted significant UAV flight testing demonstrations in indoor lab scale setups. The novelty of our work lies in the development of a robust bi-level optimization framework that uses minimum set cover algorithm and a task allocation technique to solve UAV-UGV cooperative routing in two echelon fashion. 
We have validated our proposed methodology with hardware flight testing in a lab-scale experimental setup. To this end, we present the following novel contributions:
1) The overall framework uses bilevel optimization with a task allocation technique for mission allocation and constraint programming-based routing solvers.
2) The task allocation technique, based on the minimum set cover algorithm, divides the entire problem into decoupled subproblems, which radically simplifies the overall problem.
3) A constraint programming-based formulation for the vehicle routing problem with time windows, dropped visits, and fuel constraints enables quick solutions of each subproblem.
4) Hardware validation of our work demonstrates the practical feasibility and real-world applicability of our proposed algorithms and methods.

### _Problem Description_

The aim is to perform a cooperative mission that involves visiting a set of designated mission points (see Fig. 1a), \(x_{n_{i}}\in X\equiv(x_{i},t_{i})\), using either a UGV road-based visit or a UAV aerial flyover, where \(x_{i}\in R^{2}\) is a position on the ground plane and \(t_{i}\) is the timestamp of the last visit to the node. The cost of travel between any two mission points is defined as the time required to travel from one point to another, \(c_{ij}=t_{j}-t_{i}\). The UGV \(g_{i}\in G\equiv(\tau_{i},f_{i},x_{i})\) and UAV \(a_{i}\in A\equiv(\tau_{i},f_{i},x_{i})\) are heterogeneous in nature, with the UAV having a higher velocity but a lower fuel capacity compared to the UGV. During the mission, the vehicles are represented by their current task or state \(\tau_{i}\), fuel level \(f_{i}\), and position \(x_{i}\). Unlike the UAV, the UGV is restricted to traveling along the road network, and the fuel consumption rate of both vehicles is a function of the vehicle's velocity. The UAV can be recharged by the UGV at any refueling stop or at the starting depot, with the recharging time dependent on the fuel level of the UAV. The UGV is assumed to have infinite fuel capacity, since its fuel capacity is much larger than the UAV's. With the described assumptions, the objective is to find the quickest route for the UAV and UGV to visit all mission points together, with the starting depot being both the starting and ending point, while ensuring that the UAV never runs out of fuel. To find the time-optimized route, the following goals must be achieved:
1. Identification of suitable stop locations where the UAV will synchronize with the UGV to get recharged, covering all the mission points, i.e., \(x_{r}\equiv x_{i}=x_{j}\), where \(x_{i}\in a_{i},x_{j}\in g_{i}\).
2. Determination of the optimal times during the mission when the UAV and UGV will meet at those refuel stops, i.e., \(t_{r}\equiv t_{i}=t_{j}\), where \(t_{i}\in A,t_{j}\in G\).
3. Determination of the optimal routes \(\tau_{i}\in A,\ \tau_{j}\in G\) for both the UGV and UAV based on the refueling locations \(x_{r}\) and times \(t_{r}\).

Fig. 1: Minimum set cover algorithm and task allocation technique. a) Given mission scenario with the minimum set cover algorithm; the blue circle indicates the radial coverage of the UAV. Here, three refuel stops (including the starting depot) can cover the entire mission. b) First subproblem, where the UGV moves from the starting depot to the first refuel stop and the UAV mission points within radial coverage are assigned. c) Second subproblem, where the UGV moves from the first refuel stop to the second refuel stop and the UAV mission points within radial coverage are assigned.
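To make the notation above concrete, the mission points and vehicle states can be represented with simple data structures, for example as below; the field names are illustrative and not taken from the authors' implementation.

```python
# Minimal data-structure sketch for the problem description above.
# Field names are illustrative and not taken from the authors' implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MissionPoint:
    position: Tuple[float, float]     # x_i, a point on the ground plane
    last_visit: float = 0.0           # t_i, timestamp of the last visit

@dataclass
class Vehicle:
    task: str                         # tau_i, current task/state
    fuel: float                       # f_i, remaining fuel
    position: Tuple[float, float]     # x_i, current position

@dataclass
class Scenario:
    mission_points: List[MissionPoint]
    ugv: Vehicle
    uav: Vehicle
    uav_fuel_capacity: float          # F, drives the coverage radius in the set cover step
```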
## 2 Optimization Framework

For the UAV-UGV cooperative routing, we proposed a bilevel optimization framework, which is shown in Fig. 2. The framework is designed as a two-level hierarchical structure, where at the higher level we determine the UGV route using the "UGV First, UAV Second" heuristic method, which involves prioritizing the UGV route and then constructing the UAV route based on it. To ensure the feasibility of the cooperative route, it is critical to locate suitable refueling sites along the UGV route. We employed a minimum set cover algorithm to identify the best locations for refueling. The inner-level UAV route was built based on the UGV route by dividing the entire scenario into subproblems that could be solved by modeling each as an energy-constrained vehicle routing problem (E-VRP).

### _Outer level: UGV route_

Maini et al. [7] demonstrated that, in order to establish a viable cooperative route, it is necessary to ensure that at least one refueling stop is located within the UAV fuel coverage radius for each data point. Thus, to determine the minimum number of refueling stops required to cover the entire mission scenario, we can adopt the minimum set cover (MSC) algorithm. This is a well-established problem that can be solved using a variety of methods, including greedy heuristics [7]. However, in this study, we proposed a constraint programming formulation for the minimum set cover problem. For the same scenario, we employed both the greedy method and the constraint programming approach individually to solve the minimum set cover problem, and the constraint programming method outperformed the greedy heuristic.

#### 2.1.1 Greedy heuristics method

Using a greedy heuristic approach can help to reduce the complexity of the minimum set cover problem. This method takes the mission scenario points that require coverage \(x_{n}\) and the UAV's fuel capacity \(F\) as inputs; as an output, we aim to determine the minimum number of mission points that can serve as refueling stops \(x_{r}\) to cover all mission points \(x_{n}\). In other words, given a set of mission points \(X=\{x_{1},x_{2},...,x_{n}\}\), the goal is to identify a subset \(X_{r}=\{x_{1},x_{2},...,x_{r}\}\) that has the least number of elements to act as refueling stops covering the entire scenario. The greedy algorithm selects the initial point as the first refueling stop and then sequentially adds the mission points that cover the greatest number of other mission points to the refueling stop set until all points are covered. Although the greedy heuristic can produce an optimal result for a minimum set cover problem quickly, there is a possibility of multiple optimal solutions for a given scenario. Since we are implementing a bilevel optimization framework, it is essential to consider all the other optimal solutions of the outer-level algorithm. As it is not possible to acquire all optimal solutions using the greedy heuristic, we employed the constraint programming method. This approach can rapidly generate multiple optimal results, if any are present. Also, the greedy method may result in a locally optimal solution, which can be overcome through an alternate constraint programming formulation of the minimum set cover problem.

#### 2.1.2 Constraint programming method

The minimum set cover problem can be transformed into a linear integer programming model using the constraint programming method (CP method). The objective function in Eq. 1 determines the minimum number of refueling stops, and the constraint in Eq.
2 ensures that each mission point \(x_{i}\) has at least one refueling stop \(x_{r}\) assigned to it. The decision variable \(y_{i}\) determines whether a mission point will be selected as a refueling stop or not.

Objective,
\[\min\sum_{i=1}^{n}y_{i} \tag{1}\]
s.t.,
\[\sum_{j:\,x_{j}\ \text{covers}\ x_{i}}y_{j}\geq 1,\quad\forall i=1,\ldots,n \tag{2}\]
\[y_{i}\in\{0,1\},\quad\forall i=1,\ldots,n \tag{3}\]

We utilized Google's OR-Tools(tm) CP-SAT solver [25] to solve this linear integer formulation, and it can record multiple optimal solutions if they exist. After identifying the refueling stop locations, a UGV route can be created by connecting these stops on the road network. We can solve a simple travelling salesman problem (TSP) over the refueling stops to determine an optimal UGV route. Once the optimal UGV route is established, we can proceed to the inner-loop UAV routing.

Figure 2: Optimization framework.

### _Inner level: UAV route_

At the inner level of our framework, we employed a task allocation technique to divide the entire mission scenario into independent subproblems, which were solved individually as energy-constrained vehicle routing problems (E-VRPs).

#### II-B1 Task allocation technique

Given the scenario and the UGV route obtained from the outer-loop MSC algorithm, we can divide the entire problem into \(n-1\) subproblems (where \(n\) is the number of refuel stops, including the starting depot), under the assumption that the UGV travels only between two refuel stops in each subproblem. Before the subproblem division, each mission point is assigned to its nearest refuel stop (including the starting depot) that covers it. In the subproblems, the first refuel stop is the origin node and the second refuel stop is the destination node of the UGV route. The subproblems are decoupled from each other by allocating separate mission points to them. The UAV mission points assigned to the destination refuel stop of each subproblem are allocated to that subproblem. Only for the first subproblem are the mission points assigned to both the origin and destination nodes allocated to it. Figure 1 demonstrates the process of subproblem division and mission allocation. Figure 1a shows the refuel stop locations along the UGV route obtained from the outer-level MSC algorithm for a given scenario; the first subproblem (Fig. 1b) is created by taking the starting depot as the origin node and refuel stop 1 as the destination node. The UAV mission points covered by the origin node (starting depot) and the destination node (refuel stop 1) are assigned to subproblem 1. Similarly, the second subproblem (Fig. 1c) is created by taking refuel stop 1 as the origin node and refuel stop 2 as the destination node, and the mission points covered by the destination node (refuel stop 2) are assigned to this subproblem. Once we obtain an independent set of subproblems through task allocation, we solve each subproblem by modeling it as an energy-constrained vehicle routing problem (E-VRP).

#### II-B2 E-VRP model

The formulation of the E-VRP can be described using graph theory. Consider an undirected graph \(G=(V,E)\), where \(V\) is the set of vertices \(V=\{0,1,2,...,k\}\) and \(E\) is the set of edges between vertices \(i\) and \(j\), \(E=\{(i,j)\ |\ i,j\in V,i\neq j\}\). The non-negative arc cost between vertices \(i\) and \(j\) is expressed as \(c_{ij}\), and \(x_{ij}\) is a binary decision variable whose value is 1 if a vehicle travels from \(i\) to \(j\), and 0 otherwise. The objective function of the E-VRP is given in Eq.
4, which minimizes the total travel time; the formulation of the other constraints, such as the fuel constraint, time window constraints, and the generic routing constraints of the E-VRP, can be found in [13].

\[\min\sum_{i}\sum_{j}c_{ij}x_{ij}\quad\forall i,j\in V \tag{4}\]

Again, we used Google OR-Tools as our heuristic implementation for solving the E-VRP model with constraint programming (CP). OR-Tools uses a search tree, local search, and meta-heuristics to identify feasible and optimal solutions. The heart of OR-Tools is a CP-SAT solver that employs a _DecisionBuilder_ to find an initial feasible solution using the _Path Cheapest Arc Strategy_ [26]. OR-Tools then performs a local search to find the best solution in the neighborhood of the current solution. This local search uses move operators to rewire nodes and check for feasibility and cost, and the moves are repeated until a termination criterion is met [12].

## III Results

Figure 3 provides an illustrative example of the input and output of the problem at hand. The input, depicted in Figure 3, consists of mission points denoted by black crosses. The UAV and UGV must both initiate and terminate at the starting depot, while ensuring that all mission points are covered either by the UAV or the UGV. The UAV can recharge at the depot or at designated refueling sites on the UGV. The UAV and UGV have fixed velocities of 10 m/s and 4 m/s, respectively, and their fuel capacities are 287.7 kJ and 25.01 MJ, respectively. To carry out the optimizations, we employed Python 3 and the OR-Tools library, running on a 3.7 GHz Intel Core i9 processor with 48 GB RAM on a 64-bit operating system. For the scenario, two types of cooperative routes (if different) were generated by implementing the greedy method and the CP method at the outer loop of the suggested framework. The UGV-only route, where a UGV alone completes the whole mission, was also determined for the scenario. Based on the metrics of total mission completion time and total energy consumption, the impact of collaboration between the UAV and UGV on mission execution was assessed by comparing the cooperative routes with the UGV-only route, which served as the upper limit. Table I shows the improvement that was achieved through cooperative routing of the UAV-UGV team on the mission scenario. Cooperative routing is extremely energy efficient: both the CP method and the greedy method at the outer loop showed positive improvement, reducing total energy consumption in the mission by 36-39%. However, for total mission time, the greedy method at the outer loop had a negative impact. This is due to the position of the refuel stops (see the trajectory in Fig. 3b), which made the UAV take frequent detours (6 times) for recharging at the refuel sites, elongating the total mission time. In contrast, the appropriate refuel stop locations obtained with the CP method (see the trajectory in Fig. 3a) helped the UAV complete its route with fewer recharging detours (4 times), which effectively reduced the total mission time. Further insights about the trajectory of the cooperative route can be drawn from Table II. As discussed earlier, the greedy method results in longer UAV travel time, which ultimately costs a higher mission time. The energy consumption of the UGV is lower in the greedy method, as the UAV visits the majority of mission points compared to the CP method. In sum, both the CP method and the greedy heuristic are capable of providing a feasible cooperative route for a constrained, complex mission scenario; however, the CP method outperforms the greedy method at the cost of some computational efficiency.
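For reference, the outer-level minimum set cover formulation (Eqs. 1-3) can be prototyped with the CP-SAT solver mentioned above along the following lines. The Euclidean coverage test and all names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the outer-level minimum set cover (Eqs. 1-3) with OR-Tools CP-SAT.
# The Euclidean coverage test and all names are illustrative assumptions.
import math
from ortools.sat.python import cp_model

def min_set_cover_stops(points, uav_radius):
    """points: list of (x, y) mission points; returns indices selected as
    refueling stops so that every point is within `uav_radius` of some stop."""
    n = len(points)
    covers = [[j for j in range(n)
               if math.dist(points[i], points[j]) <= uav_radius]
              for i in range(n)]                     # candidate stops covering point i

    model = cp_model.CpModel()
    y = [model.NewBoolVar(f"y_{i}") for i in range(n)]   # Eq. 3: binary selection
    for i in range(n):
        model.Add(sum(y[j] for j in covers[i]) >= 1)     # Eq. 2: coverage
    model.Minimize(sum(y))                               # Eq. 1: fewest stops

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [i for i in range(n) if solver.Value(y[i])]
    return []
```

The sketch returns a single solution; enumerating the multiple optimal stop sets that the framework exploits would additionally require a solution callback or repeated solves with added constraints.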
TABLE I: Impact of the optimal solution of the cooperative routing.

| Metrics | Cooperative (CP method) | Cooperative (Greedy method) | UGV | Improvement, CP (%) | Improvement, Greedy (%) |
| --- | --- | --- | --- | --- | --- |
| Time consumption (min.) | 200 | 272 | 233 | 14.16 | -16.74 |
| Energy consumption (MJ) | 21.98 | 21.14 | 34.69 | 36.62 | 39.06 |

TABLE II: Comparison between trajectories of the CP method and the greedy heuristic.

| Metrics | CP method | Greedy method |
| --- | --- | --- |
| Total time (min) | 200 | 272 |
| Computational time (min) | 9 | 4 |
| **UGV results** | | |
| Travel time (minutes) | 200 | 272 |
| Energy consumed (MJ) | 20.79 | 19.52 |
| Missions visited | 22 | 18 |
| **UAV results** | | |
| Travel time (minutes) | 100 | 136.203 |
| Energy consumed (kJ) | 1186.464 | 1618.092 |
| Recharging stops on UGV | 3 | 6 |
| Recharging stops on depot | 1 | 0 |
| Missions visited | 22 | 26 |

Fig. 3: UAV & UGV trajectory obtained from bilevel optimization with the greedy and CP methods at the outer loop. Numerical and alphabetical order shows the UAV and UGV motion, respectively. a) CP method based trajectory; b) greedy method based trajectory.

## 4 Experiment Design

The most stringent way of validating a framework is by hardware demonstration, but it often has limitations in terms of scope and variety. Therefore, simulation results are often utilized to assess method performance, while experiments are used to verify some of those outcomes. The validation of a surveillance planning framework through experiments is especially significant, as it is a highly challenging problem to solve. We devised a small laboratory-scale scenario to test the proposed framework methodology with just one UAV and UGV. In order to fully achieve autonomy during the experiment, each robot must be capable of independently locating, planning, and executing its desired route to each location without relying on external input. Our localization, planning, and control strategies are built on the hardware's sensing, processing, and communication capabilities. However, developing an appropriate experimental system poses various challenges, as we need to consider the integration of software, hardware, and communication to accomplish the task. We have separately described the individual blocks of the hardware architecture (see Fig. 4) as follows:
1. **Hardware:** For our UAV, we chose the DJI Tello quadcopter due to its compact size and affordability. It is a small quadrotor measuring 9.8 x 9.2 x 4.1 cm and weighing 80 g, including the battery (Li-Po, 1100 mAh, 3.8 V) and 3-inch propellers. The drone comes with a built-in flight controller that offers basic features such as flight stabilization and attitude regulation, translational velocity control, and simple trajectory execution. This flight controller is closed hardware and operated using a dedicated command protocol. However, a higher level of autonomy can be achieved by using an external ground-based computer that uses received telemetry and video feed to control the drone with the same communication protocol.
For the UGV, we built a small Raspberry Pi-based omnidirectional car with a landing pad for charging the UAV. The UGV can be controlled by commanding its four wheels through the Raspberry Pi.
2. **Control & communication:** A wireless 2.4 GHz 802.11n WiFi connection was used to communicate with the drone. The approach makes use of the official Tello SDK 2.0. The UDP port is used to send text messages to the drone programming interface. To create the application, we used the SDK and the low-level Python library DJItelloPy. Wireless 5 GHz WiFi communication was also established with the Raspberry Pi for controlling the UGV.
3. **Central manager:** The final element of the system design is the centralized system manager. Based on the input scenario, this manager solves a fuel-constrained vehicle routing problem using the proposed bi-level optimization framework to generate separate routes for the UAV and UGV. Then the manager communicates with the UAV and the UGV and assigns their respective tasks to begin the surveillance. The manager continuously monitors their progress by collecting the positional data of the vehicles from the motion capture system. The UGV runs on open-loop control and stops at its assigned stop locations on the route, while the central system enforces feedback control on the UAV to navigate it correctly and ensure a successful landing on the UGV during recharging.
4. **Experiment scenario:** The experiments were conducted in the Robotics and Motion Lab at the University of Illinois Chicago. The lab has a designated flight area equipped with a motion capture system that serves as a reference for the position of reflective markers placed on the quadrotor and the ground vehicle. This enables real-time localization of the robots during the experiment. The positional data of the vehicles can be obtained at a rate of 100 Hz, with a latency of less than 9 ms. A mission scenario was created by selecting 12 different points over an area of \(4m\times 4m\) for the UAV, a road network was designed for the UGV, and a fuel constraint was introduced by limiting the UAV's flight time in a single recharge (endurance limit). For this experimental setup, the endurance limit was set to 50 seconds, and the UAV and UGV speeds were 0.20 m/s and 0.15 m/s, respectively. This required the UAV to visit the UGV at regular intervals for recharging in order to complete the mission. However, no real recharging took place; it was only assumed that the UAV is recharged instantly when it lands on the UGV. The UGV road network was also designed to be challenging, with the farthest points on the mission requiring the UAV to operate near its maximum endurance limit, thus testing the robustness of the proposed framework.

Fig. 4: Hardware architecture.

Fig. 5: Comparison between simulation and experimental results. In the trajectory, the alphabetical order represents the direction of motion of the UAV.

### _Flight test results_

Multiple trials of the experiment were carried out on the scenario. The algorithm was fed with the locations of the UAV mission points and the road network points as input. The outer loop of the algorithm determined the UGV traversal path with refueling stops in space and time, while the inner loop of the framework generated the UAV route. Both routes were provided to the individual agents by the central manager, and the agents started performing their missions.
The purpose of the experiment was to verify the feasibility of the algorithm's output and to determine whether the multi-agent experiment could be successfully carried out with our experimental architecture. The motion capture system was used to track the positional data of the UAV and UGV, which was processed to produce the experimental route. Figure 5(a) shows a comparison between the simulation route and the experimental route. During the experiment, the UAV drifted in some places due to its dynamics, but, thanks to the feedback control, it successfully visited the mission points and recharged by landing on the UGV. The endurance limit constraint was also tested, and it was observed that the maximum flight time in a single recharge always remained below the maximum limit, as shown in figure 5(b). The dynamics of the UAV played an important role in the experiment; this was compensated for by including a buffer time period in the modeling of UAV take-off and landing in the simulation counterpart. Instances of the flight test can be seen in figure 6. ## 5 Conclusions We conclude that a bilevel optimization framework with suitably designed heuristics is an effective method for solving cooperative routing problems. Our heuristics first solve for the UGV route using the minimum set cover algorithm and a traveling salesman formulation, followed by the UAV route based on mission allocation and a vehicle routing formulation. We found that constraint programming outperforms the greedy heuristic for solving the minimum set cover problem, although the former takes more computational time than the latter. Experimental validation of the framework on a small testbed shows a close match between simulation and hardware, confirming the proposed approach. ## Acknowledgment The authors would like to thank Jean-Paul F. Reddinger, James M. Dotterweich, and Marshal A. Childers from DEVCOM Army Research Laboratory, Aberdeen Proving Grounds, Aberdeen, MD 21005 USA, and James D. Humann from DEVCOM Army Research Laboratory, Los Angeles, CA, 90094 USA, for providing insights that helped improve the formulation and solutions.
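As a companion to the conclusions above, the outer-loop step of choosing UGV stops can be posed directly as a minimum set cover in a constraint programming solver. The sketch below uses Google OR-Tools CP-SAT on toy data; the stop names, coverage sets, and objective are illustrative assumptions only, and the actual outer loop additionally orders the selected stops with a traveling salesman formulation.

```python
from ortools.sat.python import cp_model

# Toy instance: which UAV mission points each candidate UGV stop can "cover"
# within the UAV endurance limit (placeholder data, not the experimental scenario).
coverage = {
    "stop_A": {1, 2, 3},
    "stop_B": {3, 4},
    "stop_C": {4, 5, 6},
    "stop_D": {1, 6},
}
missions = set().union(*coverage.values())

model = cp_model.CpModel()
use = {s: model.NewBoolVar(f"use_{s}") for s in coverage}

# Every mission point must be covered by at least one selected stop.
for m in missions:
    model.Add(sum(use[s] for s, pts in coverage.items() if m in pts) >= 1)

# Minimize the number of UGV stops (the set cover objective).
model.Minimize(sum(use.values()))

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("Selected stops:", [s for s in coverage if solver.Value(use[s])])
```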
2301.03672
Physical Layer Security in Satellite Communication: State-of-the-art and Open Problems
Satellite communications emerged as a promising extension to terrestrial networks in future 6G network research due to their extensive coverage in remote areas and ability to support the increasing traffic rate and heterogeneous networks. Like other wireless communication technologies, satellite signals are transmitted in a shared medium, making them vulnerable to attacks, such as eavesdropping, jamming, and spoofing. A good candidate to overcome these issues is physical layer security (PLS), which utilizes physical layer characteristics to provide security, especially due to its suitability for resource-limited devices such as satellites and IoT devices. In this paper, we provide a thorough and up-to-date review of PLS solutions for securing satellite communication. We classify main satellite applications into five domains, namely: Satellite-terrestrial, satellite-based IoT, Satellite navigation systems, FSO-based, and inter-satellite. In each domain, we discuss and investigate how PLS can be used to improve the system's overall security, preserve some desirable security properties and resist popular attacks. Finally, we highlight a few gaps in the related literature and discuss open research problems and opportunities for leveraging PLS in satellite communication.
Nora Abdelsalam, Saif Al-Kuwari, Aiman Erbad
2023-01-09T20:47:56Z
http://arxiv.org/abs/2301.03672v1
# Physical Layer Security in Satellite Communication: State-of-the-art and Open Problems ###### Abstract Satellite communications emerged as a promising extension to terrestrial networks in future 6G network research due to their extensive coverage in remote areas and ability to support the increasing traffic rate and heterogeneous networks. Like other wireless communication technologies, satellite signals are transmitted in a shared medium, making them vulnerable to attacks, such as eavesdropping, jamming, and spoofing. A good candidate to overcome these issues is physical layer security (PLS), which utilizes physical layer characteristics to provide security, especially due to its suitability for resource-limited devices such as satellites and IoT devices. In this paper, we provide a thorough and up-to-date review of PLS solutions for securing satellite communication. We classify main satellite applications into five domains, namely: Satellite-terrestrial, satellite-based IoT, Satellite navigation systems, FSO-based, and inter-satellite. In each domain, we discuss and investigate how PLS can be used to improve the system's overall security, preserve some desirable security properties and resist popular attacks. Finally, we highlight a few gaps in the related literature and discuss open research problems and opportunities for leveraging PLS in satellite communication. Physical layer security, Satellite communication, terrestrial networks, Internet of things (IoT), LEO, inter-satellite. ## I Introduction As more devices are inter-connected, and the demand for high bandwidth and reliable communication is increasing. The current terrestrial networks have limited resources and spectrum, which cannot meet the requirements of next-generation applications and the increasing number of internet of things (IoT) devices. In addition, current cellular networks cannot cover all geographical areas, especially remote locations with low populations, because network installation in these areas is both expensive and challenging [1]. As a result, satellite networks have gained tremendous interest in recent years as a promising solution for the limitations of terrestrial networks. Satellite communication systems cover large areas due to their broadcast nature, providing a large-scale footprint service and flexible connectivity [1, 2]. However, satellite links suffer from a significant round trip time that increases latency and decreases service quality. Therefore, using satellite links alone will not fulfill the expected network requirements. Leveraging the benefits of terrestrial and satellite networks, the researchers start designing new network architecture to achieve the quality of service required with ubiquitous connectivity. Furthermore, there is a significant interest in satellite constellation projects such as Starlink, which SpaceX initiated in 2015 to provide high-speed internet connection worldwide using LEO satellites[3]. Signals sent by satellites are transmitted in a shared medium and can be received by all surrounding nodes, both legitimate and malicious. Hence, satellite communications are vulnerable to both passive and active attacks. In passive attacks, such as eavesdropping, the adversary observes the data transmitted through the channel, attempts to retrieve confidential information, and causes illegal leakage [4]. On the other hand, in active attacks, such as spoofing, jamming, and data manipulation, the adversary tampers with the traffic during its transmission. 
For example, the attacker can pretend to be a legitimate satellite or station and obtain illegal access to sensitive information. These attacks can cause ripple effects to the satellite networks and enormous damage, such as distraction, denial of service, and economic losses. There are two approaches to secure such communication networks, the upper layer approach, and the physical layer approach. The upper layer approach is based on classical cryptographic techniques, including symmetric cryptography (using a secret key shared by legitimate users) and public key cryptography, and it proved stable against many widespread attacks. However, encryption and decryption are expensive processes requiring high computational power, which is not usually available for satellites and IoT devices [5]. Furthermore, key management and distribution are usually problematic for satellites because they require more complex architectures and protocols. Moreover, cryptographic techniques assume that the adversary does not have enough computational capabilities to break its schemes, such as RSA, making it vulnerable once fault-tolerant quantum computers are available [6]. On the other hand, physical layer security (PLS) techniques are based on the wiretap model introduced by Wyner [7], in which information-theoretic security can be achieved without pre-shared keys. This model has two channels: the main channel between the legitimate users (transmitter and receiver) and the wiretap channel between the legitimate transmitter and the eavesdropper. PLS aims to lower the quality of the wiretap channel compared to the main channel to maximize the mutual information difference between the two channels. PLS utilizes wireless channel characteristics such as reciprocity and spatial and temporal variations to encode the information exchanged. ### _Existing Surveys_ PLS is a lightweight approach applicable for securing many wireless communication systems, such as satellites and IoT devices. Many works in the literature have recently focused on designing PLS schemes to secure terrestrial and satellite networks with different architectures. Some existing surveys mainly discuss satellite evolution from multiple perspectives. Surveys such as [8] and [1] review the convergence of terrestrial and satellite network integration. The authors of [8] identify the critical building blocks and the applicability of the current networks for convergence from architecture and performance aspects. Similarly, [1] is a comprehensive survey that discusses the motivation and requirements for the convergence of satellite and terrestrial networks and the key enablers. Another type of surveys focuses on the physical layer techniques and optimization approaches. The authors in [9] explain PLS fundamentals, including attack models and wiretap channel models. Similarly, the authors in [10] divide PLS types against passive attacks into the signal-to-interference-plus-noise ratio-based approach and complexity-based approach (key-based). Likewise, the authors in [6] focus on the SNIR approaches and reviews works related to secure beamforming and antenna node selection. The authors in [5] provided a broader view of PLS techniques and reviewed the three main security properties: node authentication, message integrity, and confidentiality. Some surveys cover the security aspect of satellite communication [2],[11],[12]. 
Authors in [12] discuss the vulnerabilities and security challenges of LEO satellite systems, including passive eavesdropping, active attacks, and interference with possible countermeasures. On the other hand, authors in [11] focus on the security enhancement techniques dividing it into physical layer security mechanisms and cryptography. They consider the PLS schemes that ensure confidentiality, authentication, and availability of satellite communications. Lastly, [2] studies the state-of-art PLS implementations in three satellite network architectures: land mobile satellite networks, hybrid satellite and terrestrial relay networks, and integrated networks. A comparison of existing surveys is available in table 1. In our comparison, we focus specifically on PLS in satellite domains, which multiple surveys in the literature do not cover. ### _Contribution_ Extending terrestrial coverage is generally the main application of satellites. However, Satellites can help to realize many other important and practical applications. For example, satellite systems are vital in navigation applications, which continuously provide positions and time information for any location on earth. Global navigation satellite systems are a growing development field from the global positioning system (GPS) project in 1973 to regional systems projects planned to be launched in 2023 [13]. Similarly, satellite-based IoT is a promising domain leveraging satellite coverage to connect internet of things devices distributed over large geographical locations and harsh environments, where mission-critical IoT applications such as rescue operations or disaster recovery can finally be efficiently realized [14]. Furthermore, satellites are prominent in optical communication, where free-space optical (FSO) networks are extended to cover increasingly larger areas via satellites. Eventually, satellite networks will further grow in size and maturity, and that will entail providing and maintaining communication between the satellites (inter-satellite communication), and that will inevitably lead to even more interesting applications. Existing surveys provide reasonable coverage of the use of PLS at several satellite networks. However, the literature lacks a comprehensive survey covering multiple satellite-based domains, and this is what we attempt to address in this survey. In this paper, we classify satellite applications into five main application domains and analyze the PLS in each domain. These domains are: terrestrial and satellite networks, navigation satellite systems, satellite-based IoT, FSO-based systems, and inter-satellite communications. There are many other applications for satellites, such as those related to military and research, but to the best of our knowledge and based on our literature analysis, we found that these five areas are the main domains where satellites play a major role. Hence it is critical to analyze the security threats for each of these domains and study how PLS can provide the necessary protection, utilizing the unique characteristics of each domain. The contributions of this survey can be summarized as follow. * We classify satellite networks as follows: Satellite-terrestrial, satellite-based IoT, Navigation, FSO-based, and Inter-satellite, as shown in Figure 1. We highlight the primary usage for each domain and discuss their security concerns and vulnerabilities. * We review the state-of-art works of PLS techniques for each satellite domain, considering their design and security analysis. 
We provide an intensive comparison between all available solutions for each area and discuss their goals and results. * We identify future trends and directions for PLS development in satellite networks. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Survey & Year & IOT networks & GNSS satellites & satellite-terrestrial networks & FSO satellites & inter-satellites \\ \hline [8] & 2016 & ✗ & ✗ & ✗ & ✗ \\ \hline [9] & 2016 & ✗ & ✗ & ✗ & ✗ \\ \hline [10] & 2018 & ✓ & ✗ & ✗ & ✗ \\ \hline [6] & 2018 & ✗ & ✗ & ✗ & ✗ \\ \hline [1] & 2019 & ✗ & ✗ & ✗ & ✗ \\ \hline [2] & 2019 & ✓ & ✗ & ✓ & ✗ \\ \hline [11] & 2021 & ✗ & ✓ & ✓ & ✗ \\ \hline [12] & 2022 & ✗ & ✗ & ✗ & ✗ \\ \hline Our survey & 2023 & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} TABLE I: Comparison between existing surveys and this survey ### _Organization_ The rest of this paper is organized as follows: Section II provides the necessary background for satellite and space networks while section III provides background for PLS. In section IV, we discuss the five domains of satellite networks. For each domain, we describe the architecture of the satellite network, its importance, and review recent PLS schemes and compare them. Finally, we conclude this paper in section V by highlighting open challenges and future directions of PLS in satellite networks. Figure 1 provides a visualization of the structure of this survey. ## II Space Networks Researchers propose several integration architectures to leverage the benefits of both satellites and terrestrial networks. A generic architecture illustrated in figure 2 extends the current system to achieve global coverage by dividing terrestrial-satellite networks into three main parts: terrestrial networks, air-based networks, and satellites network [15, 1] and [16]. Terrestrial networks include conventional wired and wireless communication networks, satellite ground stations, and all user terminals on the earth's surface. The next level is the air-based networks, including High-altitude platform stations (HAPS), Unmanned Autonomous Vehicles (UAVs), and planes. The last level is the satellite networks, which include three types of satellites: low earth orbit (LEO), medium earth orbit (MEO), and geostationary earth orbit (GEO). Different satellite networks operate in different orbital heights resulting in different delays and coverage areas. GEO satellites operate at an orbit height of 36000 km, covering a large area of the earth, which increase their availability but introduces latency due to longer round trip. Hence, GEO satellites work as the space backbone network and are responsible for network management. MEO satellites operate in lower orbits at the height of 2000-20000 km, which decreases the latency, but increases the number of satellites needed to cover the earth. LEO satellites are the closest to the earth, operating on 500-2000 km, with decreased latency, path loss, and attenuation. Hence, LEO satellites are considered access points to space networks with inter-satellite links. All these types of satellites with their communication links are vulnerable to several attacks. Like conventional adversaries, space adversaries can either be passive (attempting to compromise the confidentiality of the communication) or active (attempting to spoof the communication by tampering with the traffic). Furthermore, adversaries can target the service availability of the satellite systems by jamming or compromising them. 
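The orbital altitudes quoted above translate directly into propagation delay, which is one reason LEO satellites are preferred as access points. A rough back-of-the-envelope sketch is given below, assuming nadir line-of-sight at the upper end of each altitude range and ignoring processing, queuing, and atmospheric effects.

```python
# Rough one-way and round-trip propagation delays for the orbit altitudes above
# (line-of-sight at nadir; processing, queuing, and atmospheric effects ignored).
C = 299_792_458  # speed of light in m/s

altitudes_km = {"LEO": 2000, "MEO": 20000, "GEO": 36000}

for orbit, alt_km in altitudes_km.items():
    one_way_ms = alt_km * 1e3 / C * 1e3
    print(f"{orbit}: one-way ~{one_way_ms:.1f} ms, round trip ~{2 * one_way_ms:.1f} ms")
```

Whatever the orbit, these links remain exposed to the passive and active attacks just described.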
Therefore, various security concerns should be considered for each satellite network design and architecture. ## III Physical layer security (PLS) The physical layer security concept was first introduced by Wyner [7], who proposed a wiretap channel model. This model secures legitimate users' channels from unauthorized eavesdropping or signal interception. It aims to utilize the wireless channel noise and randomness to outperform the quality of the eavesdropper's channel (wiretap channel). When the wiretap channel is degraded, the mutual information difference between the two channels is maximized, preventing the eavesdropper from being able to decode the transmitted signals. Mutual information is the amount of information the receiver obtains from the original message from the received one. Mutual information measures the reliability of the link because it quantifies the amount of information sent correctly to the receiver. Hence, PLS aims to maximize the mutual information of the legitimate link (increase reliability) while minimizing it on the wiretap link, which increases randomness and reduces the amount of information received by the eavesdropper [17]. PLS techniques are unconditionally secure (assuming a threat model where adversaries have access to unlimited resources), which makes them quantum-resistant by nature. Furthermore, PLS does not require pre-shared keys between legitimate users, which solves the current key management and distribution difficulties. Finally, PLS is lightweight, making it an ideal choice for IoT applications. In the following section, we discuss some PLS approaches dominant in the literature. ### _Signal processing approach_ PLS signal processing approach increases the main channel's SINR (signal-to-interference and noise ratio) compared Fig. 1: Organization of this survey paper to the wiretap channel. As a result, the eavesdropper will have a more noisy channel (degraded), which prevents her from accessing the transmitted data. This is a keyless approach and can be based on channel adaptation or inserting artificial signals to degrade the wiretap channel. An example of an enabling technology is multiantenna. Multiple input multiple-output (MIMO) is multiantenna technology where the transmitter and receiver have several antennas for communication. These antennas help in increasing the transmission rate and enhancing signal strength. Furthermore, MIMO techniques vary signal strength between legitimate users and eavesdroppers to increase secrecy capacity. MIMO technologies have recently attracted increasing attention in satellite communications [18, 19]. Examples of PLS techniques based on MIMO are listed below: * Beamforming: a technique that focuses the signal strength in one direction creating a beam of signals using multiple antennas. This beam is used to direct the signal towards the legitimate user to maximize his SINR while downgrading the adversary's (as the latter only receives a negligible amount of leaked signal). Figure (a)a illustrates how beamforming varies the signal strength between the two channels. * Zero forcing: a method for canceling the interference for multiple users in wireless communication. Zero forcing (ZF) precoding can be used to increase secrecy capacity by sending the message to cancel interference orthogonal to the eavesdropper's channel resulting in a null signal on the eavesdropper's side. Hence, this technique creates the null signal based on the eavesdropper's channel state information (CSI). 
Figure 3(b) illustrates how the ZF signal is made orthogonal to the eavesdropper's channel. * Artificial noise: an interference signal that increases the noise in the eavesdropper's channel to protect the transmitted data. The transmitter divides its transmission power between transmitting the message and sending artificial noise in the direction of the eavesdropper, and it needs to know the CSI of the eavesdropper to do so. The noise is added orthogonally to the data vector, i.e., in the null space of the legitimate receiver's channel. Hence, it is designed to be canceled on the legitimate user's side and only affects the illegitimate user. Figure 3(c) illustrates how artificial noise can be designed. Alternatively, the transmitter can use relay devices to amplify the signal on its way to the receiver [20]. Relays can be used to increase the secrecy capacity through different cooperative strategies with the transmitter. ### _Key-agreement approach_ Key-agreement techniques leverage the randomness of the channel between legitimate users to generate secret keys. In this method, there is no assumption that eavesdroppers will have a degraded channel or lower SINR. It works in systems where the transmitter and receiver experience theoretically identical wireless channel states, while the eavesdropper, located at least half a wavelength away, observes a different channel [10]. In these systems, the users do not exchange explicit messages about the channel; they exchange only pilot messages, which the eavesdropper cannot use to estimate the legitimate channel and recover the key. This method utilizes the channel state information (CSI), received signal strength (RSS), or phase shift between the legitimate users to extract randomness, which is then quantized into random bits. Next, a reconciliation process is used to remove erroneous bits. Finally, the generated key's randomness is improved using privacy amplification, which removes any correlation between bits. This method is used to achieve confidentiality and authentication via the physical layer. ### _PLS Evaluation_ Several security metrics for measuring the effectiveness of PLS techniques exist. In this section, we summarize the most commonly used security metrics with their definitions and expressions. Fig. 2: Satellite and terrestrial integration architecture Secrecy Capacity: Secrecy capacity is the highest communication rate that guarantees reliability for the legitimate link with perfect security, which means that the mutual information for the eavesdropper channel is zero. Hence, it is the difference between the main channel capacity and the wiretap channel capacity. Secrecy capacity can be expressed as: \[C_{s}=\max[I(X,Y_{B})-I(X,Y_{E})] \tag{1}\] where \(I\) is the mutual information, \(X\) is the message sent, and \(Y_{B}\) and \(Y_{E}\) are the messages received by the legitimate user and the eavesdropper, respectively. Achievable Secrecy Rate: The achievable secrecy rate is the difference between the data rates achieved by the main channel and the wiretap channel with a Gaussian codebook. It can be expressed as: \[R_{s}=[R_{B}-R_{E}]^{+} \tag{2}\] where \([x]^{+}\) denotes \(\max(x,0)\) and \(R_{B}\), \(R_{E}\) are the legitimate and eavesdropper link rates, respectively. The achievable secrecy rate is a lower bound on the secrecy capacity and is used for more computationally affordable calculations, since maximizing the secrecy capacity is a non-convex optimization problem [21].
Secrecy Outage Probability: Due to variations in channel state and quality, the secrecy rate can change over time. If the secrecy rate drops below a specific target rate, a secrecy outage occurs, which indicates that the security goal cannot be achieved. The secrecy outage probability can be measured as \[SOP=Pr(C_{R}<R_{0}) \tag{3}\] where \(C_{R}\) is the channel capacity and \(R_{0}\) is the target rate [22]. For some systems, it is practical to know the likelihood of a secrecy outage, as it gives more insight into the achieved security level. Secrecy Energy Efficiency: In power-constrained systems, it is essential to consider energy consumption in the security scheme design. Secrecy energy efficiency jointly measures the energy consumed by all nodes and the security performance [23]. Specifically, it can be measured as \[SEE=R_{s}/E_{total} \tag{4}\] where \(R_{s}\) is the number of secure bits and \(E_{total}\) is the total energy the system uses. Therefore, SEE measures the number of secure bits transmitted per unit of energy. Security Gap: The security gap is a practical measure of security performance that was introduced in [24]. It is based on the bit error rate at the legitimate receiver and the eavesdropper and can be expressed as: \[SG=SNR(B)_{min}-SNR(E)_{max} \tag{5}\] This equation denotes the difference between the minimum signal-to-noise ratio needed by the receiver to decode the signal correctly and the maximum signal-to-noise ratio for the eavesdropper that ensures a specific level of error rate. The gap between these two ratios indicates the quality advantage that legitimate users should have to guarantee the security requirement [17]. ## IV PLS on Satellite Communication In this survey, we classify satellite networks into five domains: satellite-terrestrial, satellite-based IoT, navigation, FSO-based, and inter-satellite. We review each of these domains in the next subsections. ### _Domain 1: Satellite-terrestrial Networks_ Extending terrestrial networks is the primary goal of satellite network integration to overcome current network limitations of coverage and spectrum. Specifically, leveraging the large footprint of satellites, the integration of satellite and terrestrial networks supports connecting rural and offshore areas that the current network infrastructure does not cover. Furthermore, the current congested spectrum poses serious challenges to applications requiring high bandwidth. Hence, spectrum sharing between satellite and terrestrial networks allows efficient spectrum use and provides reliable communication. However, several challenges must be solved to achieve this integration, including performance, resources, and security. Integration designs can be categorized into land mobile systems, hybrid networks, and cognitive networks. In the following subsections, we will review each design architecture with its main security concerns and PLS works. Fig. 3: Techniques for PLS based on signal processing (a) Beamforming technique that directs the signal towards the legitimate user (b) Zero-forcing technique sends messages to cancel interference orthogonal to the eavesdropper's (c) Artificial noise technique sends interference signal to the eavesdropper side. Land Mobile Satellite Networks: Land mobile satellite (LMS) networks consist of satellites directly connected to ground stations using radio frequency signals. Satellites can be spot-beam or multibeam, where the difference is the area of coverage and spectrum reuse.
Specifically, multibeam allows coverage of multiple users using a single beam and sends the information to multiple spots on the ground with frequency reuse for each beam. Figure 4 shows the architecture of LMS with spot beam and multibeam satellites. As the figure shows, coverage of the satellite signals allows illegitimate users to receive the signal, which threatens link confidentiality and authenticity. Hence, several works focus on the security issues of LMS and study PLS performance in such systems. [25] consider the security performance of Non-Geostationary satellites with different orbit models. Non-Geostationary satellites are not in a fixed position with respect to the earth, but it is moving around it in multiple orbits, such as LEO and MEO. They utilize satellite movement to evaluate the system's security in different positions and atmospheric conditions, including rain attenuation. They drive the closed-form expression of secrecy capacity and secrecy outage probability and provide Monte Carlo simulation. Authors in [26] propose a wiretap channel model with a finite length regime. Their model includes a coding method that creates wiretap code using linear error-correcting codes such as polar codes and randomized hash functions. They consider RF satellite uplink and measure the secrecy capacity for the proposed scheme. Furthermore, their method identifies the spital regions where secrecy can be granted. Their analysis assumes the receiver satellite is on GEO or MEO orbits and the eavesdropper is in LEO, MEO orbits, or UAV in different heights. Subsequently, authors in [27] and [28] consider an LMS system with multiple users cooperating to receive the message, and multiple eavesdroppers are trying to intercept data sent to legitimate users. They obtain the closed-form expressions for the non-zero probability of secrecy capacity, the secrecy outage probability (SOP), and the average secrecy capacity (ASC). [27] extends the eavesdropping scenario to two scenes: colliding and non-colliding, where eavesdroppers collaborate together to wiretap the channel in the colliding case, and the best eavesdropper will be selected to wiretap the channel in the non-colliding scenario. Analysis in [28] considers imperfect channel estimation to study the effect of various fading scenarios on the secrecy performance. A recent attempt to leverage the advancements in MIMO technology in satellite PLS was conducted in [29], where they used a multibeam satellite with a single feed-per-beam (SFPB) antenna and full-frequency-reuse (FFR). This architecture allows the satellite to send multiple beams to the earth's surface with the same spectrum and polarization for all beams. They consider two designs for the antenna reflector: one parabolic reflector and multiple reflectors. They aim to maximize the secrecy capacity of each user against its eavesdropper, constrained by the power consumption for each beam. Hence they formulate the optimization problem and solve it using convex approximation. Furthermore, the authors introduce artificial noise to the multiple reflector case to increase secrecy when the number of eavesdroppers exceeds the number of beams. Giving more focus to authenticating satellite-to-ground links, authors in [30] proposed a physical layer authentication mechanism to authenticate Iridium satellites from ground stations. 
Iridium is a constellation of 66 LEO satellites for voice and data transmission, and its signal can be received by dedicated mobile satellite devices or integrated transceivers. The proposed scheme depends on hardware impairments of the satellites making their IQ samples different and can be used as a fingerprint. They collect a dataset of IQ samples for each satellite in the formation and process it to create grayscale images. They authenticate satellites in two scenarios: 1) multiclass classification, where each satellite represents one class, and in this case, they use deep CNN for classification; 2) binary classification where they authenticate only one satellite against others, and here autoencoders are used. Their results showed that artificial intelligence algorithms can authenticate satellites based on their IQ samples. Similarly, [31] proposes a physical layer authentication method using the doppler frequency shift of the satellite. Specifically, their method focuses on authenticating the system information signaling (SIS) messages sent by satellite to allow the user equipment to request random access. Since illegitimate users also receive these messages, they can spoof the SIS messages by sending similar messages to the users, which will cause connection failure. The authentication scheme allows the user terminal to estimate the doppler shift of the received Fig. 4: Land mobile satellite with multiple users and eavesdroppers (a) One spot beam satellite (b) Multi-beam satellite message (current channel) and compare it to the doppler shift calculated using satellite ephemeris and position. If both calculations are equivalent, then the satellite signal is authenticated. Their scheme outperforms the CSI-based authentication scheme due to the channel correlation drop. Table II summarizes all discussed PLS works related to the LMS system. We compare all works against their security goal, satellite and link types, and their used PLS technique and measurement. The comparison shows that most works study confidentiality requirements more than other security goals and assume passive eavesdroppers that are not actively affecting the main channel. Moreover, all techniques focus on downlink security from satellite to ground without considering the uplink security despite their importance to the satellite networks. Hybrid Satellite-Terrestrial Networks _:_ In hybrid satellite-terrestrial networks (HSTNs), the satellite transmits the data to the ground destination with the help of a middle relay, as shown in figure 5. This relay can be terrestrial stations or air space devices such as UAVs or HAPS. Relays can help to improve the security rates using several techniques. * Amplify and forward (AF): This method can be used when the transmitter has limited power to send the message. The relay cooperates with the transmitter by amplifying the transmitter's signal (message) and sending it to the receiver without decoding it. As a result, the transmission rate increases, and the receiver's noise and interference decrease. * Decode and forward (DF): In this type, the relay cooperates with the transmitter by decoding the received message, then re-encodes it and sends it to the receiver. DF relay location affects the secrecy of the link; the closer the transmitter and the relay are, the higher the secrecy of the link [32]. * Noise and forward (NF): In this type, the relay has two channels. One sends the message to the receiver, and the other sends artificial noise to the eavesdropper's channel. 
In some cases, this type is used only to send AN to confuse eavesdroppers and increase secrecy capacity. HSTN is considered a practical design for terrestrial and satellite integration, encouraging researchers to study its performance and security since it is vulnerable to several attacks. Authors in [33] proposed a relay selection and user scheduling joint scheme and studied its security capacity with multiple users, relays, and eavesdroppers. Their scheme depends on measuring the signal-to-noise ratio (SNR) on both links from the satellite to the relays and from the relay to the users in order to choose the best link. They assume that eavesdroppers can cooperate to access legitimate information in the first scenario. In contrast, in the second scenario, the eavesdropper with the best signal-to-noise ratio will wiretap the channel. Furthermore, the authors in [34] propose two models of relay selection: single relay and multi-relay, and compare them to the baseline round-robin scheduling. The single relay scheme chooses the best relay based on the link's instantaneous capacity between the relay and the destination since they assume a passive eavesdropper with unknown channel information. However, in the multi-relay scheme, multiple relays transmit the data simultaneously with a weighted version of the decoded message. Their schemes outperform the round-robin method. Similarly,[35] investigates the security and reliability of two relay selection schemes in an HSTN system with artificial noise and compares them to the round-robin baseline under different shadowing conditions. They drive the closed-form expressions of the outage probability and intercept probability and present a numerical analysis of the \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Work & Security Goal & Satellite Type & Link Type & PLS Techniques & PLS feature used & Security Measurement \\ \hline [25] (2018) & Confidentiality & Non-geostationy orbit & Downlink & Information theory & Orbiting models & Security Capacity \\ \hline [26] (2019) & Confidentiality & RF satellite & Uplink & Wiretap coding & Eavesdropper’s noise power & Security Capacity \\ \hline [27] (2019) & Confidentiality & Spot beam Single Antenna & Downlink & Information theory & User cooperation & Security Capacity \\ \hline [28] (2019) & Confidentiality & Spot beam Single Antenna & Downlink & Information theory & Channel Estimation errors & Security Capacity \\ \hline [30] (2020) & Authentication & LEO satellite & Downlink & Hardware Fingerprinting & IQ samples & Classification accuracy \\ \hline [31] (2020) & Authentication & LEO satellite & Downlink & Physical layer authentication & Doppler frequency shift & False alarm rate miss detection rate(FAR) \\ \hline [29] (2021) & Confidentiality & Multibeam & Downlink & MIMO precoding Artificial noise & Antenna spatial degree of freedom & Minimum Secrecy capacity \\ \hline \end{tabular} \end{table} TABLE II: Comparison between existing PLS works for land mobile satellite communication Fig. 5: Relay-based satellite-terrestrial system two schemes. Their results showed that increasing the number of relays will increase the reliability of the two schemes while reducing the security where intercept probability increases. [36] propose a PLS framework for downlink HSTN using two relay techniques: amplify and forward,or decode and forward with multiple relays and users. 
Here the satellite has multiple antennas where it transmits its signal to all relays, and only the selected relay will amplify or decode the signal and forward it to the user. They assume that eavesdroppers can only wiretap the communication between the relay and users (terrestrial link) with two scenarios of eavesdropping, colluding, and non-colluding. The relay selection mechanism depends on the signal-to-noise ratio between the user and eavesdropper. Their analysis showed that secrecy outage probability is worse in the colluding scenario due to the cooperation between eavesdroppers. Moreover, increasing the number of users enhances the security performance due to the increased diversity gain. Authors in [37] add power allocation as a constraint to the relay selection scheme to minimize secrecy outage probability in HSTN using instantaneous and statistical CSI. In this paper, the relay used is near ground relays, such as airplanes or HAPS, and it transmits an interference signal to downgrade the channel for the eavesdropper with no effect on the quality of the legitimate user channel. Since instantaneous CSI can not be obtained in satellite communication, they propose a relay selection scheme based on statistical CSI to decrease the signal overhead and delay caused by continuous relay handover. Besides, statistical CSI is used for satellite and relay communications power allocation to minimize the system's energy consumption while enhancing the secrecy level. Similarly, authors in [38] use the known and unknown CSI conditions for the wiretap channel to propose two relay-user pairing schemes. The optimal relay user pairing scheme assumes complete knowledge of eavesdropper channel state information, hence the relay that can maximize the secrecy rate will be selected. However, the suboptimal method assumes that the full CSI of the eavesdropper is unknown since its a passive receiver. Hence it depends only on the main's channel information to choose the best relay. Authors in [39] consider a different perspective where they study the effect of the hardware impairments on the secrecy performance of an HSTN. They propose an optimal relay selection scheme specifically for the case where the hardware of nodes is not ideal. Their results show the superiority of the proposed scheme over the traditional round-robin scheduling for both ideal and impaired hardware cases. Moreover, they showed that in a high SNR situation, the secrecy capacity becomes lower in case of impaired hardware, increasing the secrecy outage probability. However, by increasing the number of relays, the secrecy performance enhances. In [40], the authors study the secrecy performance of NOMA-based users in the HSTN system. They consider the secrecy outage probability and define it as when a user fails to achieve the required level of secrecy. Their results showed that the channel condition of the week user does not affect the secrecy performance. Utilizing a different type of relay, [41] proposed a 3D UAV relay HSTN system with an aerial eavesdropper around the UAV. The eavesdropper keeps track of the UAV relay to place himself at a suitable distance to get the transmitted data. They consider two scenarios where the eavesdropper is in a fixed place and a random place. They propose three strategies for UAV relay selection. The optimal method is based on the maximum-signal-to-noise ratio for the destination since the eavesdropper CSI is assumed to be unknown. 
However, they propose two less complex selection schemes because of the delay in channel estimation in satellite communications. One approach is based on the distance between the UAV relay and the destination and selecting the closest relay for transmission. The other approach chooses the relay randomly to assist the communication. Their results showed that selection based on SNR has the best performance and the closest distance-based selection method outperforms the random selection. Furthermore, they found that the secrecy performance of the mobile UAV relaying outperforms the static conventional relaying. All discussed works assume no direct link between the satellite and the users due to shadowing and masking effects. Table III summarizes discussed work for PLS related to hybrid satellite-terrestrial networks. We compare the adversary model of all works and show that they all consider passive adversary models using single or multiple eavesdroppers with a single antenna. Furthermore, all works use the wiretap channel's signal-to-noise ratio and channel state information to ensure security. However, these features can be challenging to measure in the case of hidden adversaries. Similar to the previous section, all works focus on the confidentiality requirements only without studying other security aspects, such as authentication of the relay or satellite. Cognitive Satellite Terrestrial NetworksIncreased demand for bandwidth due to new applications and the advancement of communication systems cause spectrum congestion in terrestrial and satellite networks. Currently, each of these networks is working on their respective spectrums, and congestion is challenging for both networks. Cognitive satellite-terrestrial networks (CSTN) are based on the spectrum sharing between the two networks to solve the spectrum congestion issue by effectively using current frequency bands [42]. Cognitive radio is a promising paradigm and uses three primary schemes: underlay, overlay, and interweave. For satellite-terrestrial integration, underlay is widely considered to keep the interference of the terrestrial network to the satellite user under an acceptable level [43]. Although interference is a challenge in the cognitive network, it can be beneficial to enhance secure transmission if appropriately designed to degrade the wiretap channel more than the main channel. In this section, we review the current state of art PLS works in CSTN. Authors in [44] proposed two beamforming techniques that utilize the interference from the terrestrial network to improve the PLS security of the satellite network with joint optimization of the transmit power and the quality of service. Their network architecture is software-defined based cognitive satellite-terrestrial where a gateway is the control center to manage the network with a database server that handles the system's CSI and spectral data. They consider the single and multiple eavesdropper scenarios and use the penalty function approach to achieve the required beamforming weight vectors for the base station according to the data in the database server. Authors in [45] consider NOMA-based CSTN with decode and forward relay as a secondary network with hardware impairments and a passive eavesdropper. Specifically, they drive the closed-form expression of outage and inception probability with multiple antenna designs. 
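Many of the works above report closed-form secrecy outage or intercept probabilities; when closed forms are hard to obtain, the same quantities defined in Section III (Eqs. (1) and (3)) can be estimated numerically. The following Monte Carlo sketch assumes independent Rayleigh fading on the main and wiretap links; the fading model, SNR values, and target rate are illustrative choices, not those of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def secrecy_outage_probability(snr_main_db, snr_eve_db, target_rate, n=200_000):
    """Monte Carlo estimate of SOP = Pr(Cs < R0) under Rayleigh fading."""
    snr_b = 10 ** (snr_main_db / 10) * rng.exponential(1.0, n)  # |h_B|^2 ~ Exp(1)
    snr_e = 10 ** (snr_eve_db / 10) * rng.exponential(1.0, n)   # |h_E|^2 ~ Exp(1)
    c_b = np.log2(1 + snr_b)          # main channel capacity (bits/s/Hz)
    c_e = np.log2(1 + snr_e)          # wiretap channel capacity
    c_s = np.maximum(c_b - c_e, 0.0)  # instantaneous secrecy capacity, [x]^+
    return float(np.mean(c_s < target_rate))

# Illustrative numbers: 15 dB legitimate link, 5 dB wiretap link, R0 = 1 bit/s/Hz.
print("Estimated SOP:", secrecy_outage_probability(15, 5, 1.0))
```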
Both works [46] and [47] propose a robust beamforming framework for CSTN with probabilistic quality of service constraints related to the user, eavesdropper, interference limit, and transmission power consumption. Both works use a terrestrial base station for beamforming to enhance the PLS. However, [46] used the terrestrial network as a source of friendly interference to secure the satellite user link in the presence of an eavesdropper. While in [47], the eavesdroppers wiretap terrestrial links instead of satellite links in simultaneous wireless information and power transfer (SWIPT) enabled network. The eavesdroppers are passive in both cases, with imperfect channel state information and angle based in the second case. The primary goal of the beamforming technique in [46] is to minimize the transmit power while achieving at least minimum SINR to the satellite user, maximize the tolerance to SINR leakage to the eavesdropper, and constrain the interference power to the satellite user. Similarly, [47] paid more attention to maximizing the achievable secrecy rate in the worst case for all terrestrial users while achieving the required SINR and Energy harvesting requirements for each user. Besides, they add a constraint on base station power consumption and interference limit for satellite stations. Due to these constraints, the problem becomes non-convex, so the authors use mathematical models to approximate the problem and then evaluate the computational complexity of their scheme and show its validity through numerical results. Different from the above techniques, [48] add the assumption of an imperfect channel state for users and earth stations, not only the eavesdroppers, and propose a beamforming technique to exploit the base station as a source of green interference. Specifically, they designed a cooperative jamming scheme using a base station and cooperative terminal to degrade possible wiretap channels in the beamforming region, constraining the total power required for both terminals and SNIR and the interference limit for legitimate users. [49] use a jamming-based technique to enhance the security of CSTN where a satellite receives the signal from a terrestrial user through RF links and then forwards it to another ground station using an optical link in the presence of an active eavesdropper for each link. They assume that one base station acts as a friendly jammer that uses pseudo-random sequences known only to legitimate users to generate artificial noise that affects illegitimate inception since they cannot decode it. The authors drive the closed form and asymptotic expressions of inception probability considering the presence and absence of the jammer. Authors in [50] and [51] consider imperfect departure angles for the wiretap channel in a multibeam satellite system that shares a portion of the MMwave band with the cellular network to propose a secure beamforming scheme. Since beamforming techniques affect the system's power consumption, both works consider energy optimization in their design. In [50], they consider a wireless information-powered network (WIPT) and focus on the power minimization problem with constraints on SINR level for users and secrecy limit to energy receivers as potential eavesdroppers. However, in [51], they use a hybrid digital and analog beamforming design to achieve a cost-performance trade-off and aim to maximize secrecy energy efficiency, which is the ratio between secrecy rate and consumed power. 
Both works use a cloud processing center for resource allocation and control of the integrated network CSTN. Sequential convex approximation (SCA) was adopted to convert the non-convex problems into convex ones, and \begin{table} \begin{tabular}{p{42.7pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}} \hline Work [33] (2018) & Continuality & \begin{tabular}{c} Ground based \\ DF \\ \end{tabular} & \begin{tabular}{c} Multiple eavesdropper \\ Colluding and Non-colluding \\ \end{tabular} & \begin{tabular}{c} Relay \\ technology \\ \end{tabular} & \begin{tabular}{c} Signal to Noise ratio \\ Security Capacity \\ \end{tabular} \\ \hline [34] & Confidentiality & \begin{tabular}{c} Ground based \\ DF \\ \end{tabular} & \begin{tabular}{c} Single passive eavesdropper \\ Wretap both links \\ \end{tabular} & \begin{tabular}{c} Relay \\ technology \\ \end{tabular} & \begin{tabular}{c} Channel state information \\ \end{tabular} & Secrecy outage probability \\ \hline [36] & Confidentiality & \begin{tabular}{c} Ground based \\ DF, AF \\ \end{tabular} & \begin{tabular}{c} Multiple eavesdropper \\ Colluding and Non-colluding \\ \end{tabular} & - & \begin{tabular}{c} Relay \\ Multi-antenna \\ \end{tabular} & \begin{tabular}{c} Sensor to Noise ratio \\ Secrecy outage probability \\ \end{tabular} \\ \hline [39] & Confidentiality & \begin{tabular}{c} Ground based \\ DF \\ \end{tabular} & \begin{tabular}{c} Single antenna eavesdropper \\ Wretap relay link \\ \end{tabular} & \begin{tabular}{c} Relay \\ technology \\ \end{tabular} & \begin{tabular}{c} Signal to Noise ratio \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ Secrecy outage probability \\ \\ \end{tabular} \\ \hline [35] & \begin{tabular}{c} Reliability \\ and confidentiality \\ \end{tabular} & \begin{tabular}{c} Ground based \\ DF \\ \end{tabular} & \begin{tabular}{c} Single eavesdropper \\ Wretap Satellite link \\ \end{tabular} & \begin{tabular}{c} Relay \\ Artificial Noise \\ \end{tabular} & - & \begin{tabular}{c} Signal to inter- \\ reference plus-noise ratio \\ \end{tabular} & \begin{tabular}{c} Outage probability \\ Intercept probability \\ \end{tabular} \\ \hline [38] & Confidentiality & \begin{tabular}{c} Ground based \\ DF \\ \end{tabular} & \begin{tabular}{c} Single eavesdropper \\ Wretap Both link \\ \end{tabular} & \begin{tabular}{c} Relay \\ technology \\ \end{tabular} & \begin{tabular}{c} Channel state information \\ (known \& unknown) \\ \end{tabular} & Secrecy outage probability \\ \hline \end{tabular} \end{table} TABLE III: Comparison between existing PLS works for Hybrid satellite-terrestrial communication the results showed the advantage of the proposed schemes in different scenarios. Table IV summarize the discussed PLS works related to CTSN. Similar to other satellite-terrestrial networks, the cognitive network works focus on the system's confidentiality and study the cases of single and multiple eavesdroppers. 
However, the energy efficiency constraint is vital for cognitive networks since most works use beamforming and artificial noise techniques that require additional energy and can affect system performance. ### _Domain 2: Satellite-based IoT Networks_ IoT communications are expected to provide massive connectivity in several services, including transportation, smart homes, manufacturing, health, agriculture, and maritime. These applications require extensive coverage in remote areas that terrestrial networks cannot support. In this case, the satellite network is a promising solution to provide IoT systems with broad coverage, including in harsh and rural areas. Specifically, LEO satellites can provide a trade-off between coverage and round trip latency around the earth, making them more favorable than MEO and GEO[52]. Moreover, hybrid satellite and terrestrial networks are proposed for maritime IoT to increase transmission efficiency and provide maritime service [53]. However, several challenges exist to achieve this connection, such as physical layer design and medium access mechanism. Security attacks are a severe concern in satellite IoT systems where IoT devices are considered vulnerable points exposed to different types of attacks such as interruption, spoofing, eavesdropping, and denial of service attacks [54]. For example, IoT devices can be used to maliciously overwhelm one satellite by sending lots of packets, causing a denial of service attack and interruption in the service. Applying intensive cryptographic techniques to satellite IoT devices is inapplicable because of the power limitation of these devices and the short characteristics of their packets. In the following section, we discuss how physical layer security is used to secure satellite-IoT communication through recent works. In [55], the authors studied the physical layer secrecy analysis for satellite downlink signals with multiple mobile users and eavesdroppers in areas not covered by terrestrial networks. They proposed the FD-NOMA scheme that uses frequency division of downlink spectrum to provide efficient and secure multiple access for multi-users using partial spectrum sharing between users. This scheme increases the inter-user interference, which is leveraged in PLS to downgrade the eavesdropper's channel, while they propose a cooperative scheme that preserves the signal-to-noise ratio for legitimate users by interference cancellation. They drive the closed form of SINR for legitimate users and define a lower bound of secrecy rate for the system. In [56], they proposed a similar scheme to introduce artificial interference and leverage it to affect the eavesdropper channel, using overlapping pulse shapes. They analyze the mutual information and secrecy capacity if the eavesdropper cannot resolve the time packing interference and in case he has an estimation module, and in both cases, security capacity is achieved. [57] consider cognitive satellite-terrestrial network for IoT services with a UAV malicious eavesdropper. They use terrestrial base stations as beamformers and friendly jammers utilizing inter-segment interference to reduce the quality of the eavesdropper channel in a 3D wiretap environment. The proposed beamforming schema depends on spatial correlation with energy consumption and secrecy rate constraints to maximize secrecy energy efficiency. 
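Several of the beamforming designs discussed in this and the previous subsection build on the zero-forcing idea of Figure 3(b): steer the transmit beam into the null space of the estimated eavesdropper channel. The toy numpy sketch below makes the strong assumption of perfect CSI for a single-antenna eavesdropper, unlike the imperfect or angle-based CSI models used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt = 4                                                   # transmit antennas
h_b = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)     # legitimate user channel
h_e = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)     # eavesdropper channel (assumed known)

# Zero-forcing: project the maximum-ratio beam onto the null space of the eavesdropper channel.
proj_e = np.outer(h_e, h_e.conj()) / np.vdot(h_e, h_e)   # projector onto span(h_e)
w = (np.eye(Nt) - proj_e) @ h_b                          # beam with (ideally) zero leakage
w = w / np.linalg.norm(w)

P, noise = 1.0, 0.1
snr_b = P * abs(np.vdot(h_b, w)) ** 2 / noise
snr_e = P * abs(np.vdot(h_e, w)) ** 2 / noise            # ~0 by construction
secrecy_rate = max(np.log2(1 + snr_b) - np.log2(1 + snr_e), 0.0)
print(f"SNR_B={snr_b:.2f}, SNR_E={snr_e:.2e}, secrecy rate={secrecy_rate:.2f} bits/s/Hz")
```

The cited schemes replace this exact projection with robust designs that account for CSI uncertainty, interference limits, and power constraints.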
On the other hand, authors in [58] assume satellite-UAV cognitive network architecture where each network has its users, and they share the spectrum \begin{table} \begin{tabular}{|p{42.7pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Work & Security Goal & Network Type & Adversary Model & PLS Techniques & PLS feature used & Security Measurement \\ \hline [44] & Confidentiality (2018) & Software Defined Network & Single and Multiple eavesdropper & Beamforming & Channel state information & Secrecy rate \\ \hline [46] & Confidentiality (2018) & Downlink CSTN & Single antenna eavesdropper Wiretap satellite link & Beamforming & Probabilistic channel state information & SNIR constraints on user and eavesdropper \\ \hline [45] & Confidentiality (2020) & NOMA based CSTN & Multi-antenna eavesdropper Wiretap terrestrial link & Relay technology and Multi-antenna and Wireless communication & Hardware Imperfect probability \\ \hline [47] & Confidentiality (2020) & mmWave CSNT with SWIPT & Multiple eavesdropper Wiretap terrestrial link & Beamforming and Artificial Noise & Angle-based imperfect channel state information & Achievable secrecy rate \\ \hline [48] & Confidentiality (2021) & mmWave CSNTNT & Multiple eavesdropper Wiretap terrestrial link & Beamforming and Artificial Noise & Imperfect channel state information & Signal to noise ratio \\ \hline [49] & Confidentiality (2021) & Underlay CSTN & Two eavesdropper Wiretap satellite links & Artificial Noise & Pseudo-random sequences & Inception probability \\ \hline [50] & Confidentiality & WIPT Cloud processing architecture & Energy receivers & Beamforming & Angle-of-departure based channel state information & Achievable secrecy rate \\ \hline [51] & Confidentiality & mmWave CSNT Cloud processing architecture & Multiple eavesdropper Wiretap terrestrial link & Hybrid Beamforming & Angle-of-departure based channel state information & Secrecy energy efficiency \\ \hline \end{tabular} \end{table} TABLE IV: Comparison between existing PLS works for Cognitive satellite-terrestrial communication divided into different channels, each channel serving the satellite user and UAV user. They consider terrestrial eavesdroppers wiretap the signals from UAVs and aim to maximize the secrecy rate of UAV users by optimizing power, resource allocation and interference constraints. They compare their scheme with the other two power allocation schemes and show the superiority of the proposed scheme in providing a higher secrecy rate. [59] proposed a user scheduling scheme for multiuser satellite and wireless sensors network to enhance secrecy performance. They drive the closed-form expression of secrecy outage probability and average secrecy capacity when the CSI of the eavesdroppers is unknown. They assume a single antenna for all users and eavesdroppers and utilize the time division multiple access to ensure one user is using the service at each time. Similarly, authors in [60] consider multi-beam satellite that enables vehicle communications with UAVs as amplify and forward relays to enhance the main channel between satellite and vehicles where rain attenuation is assumed for the satellite channel. Moreover, UAVs act as a jammer to produce artificial noise that affects the eavesdropper channel. They jointly optimize the power for satellite beamforming and power allocation for UAV relays to maximize the secrecy rate of satellite-to-vehicle links without affecting the QoS requirements. 
They solve the resulting non-convex optimization problem using several methods and compare the secrecy rate of the proposed technique with a case without artificial noise and a benchmark case without UAV cooperation, showing the improvement in security performance resulting from UAV assistance. Table V summarizes the discussed works related to PLS in satellite-based IoT networks, which mainly focus on system confidentiality with a single eavesdropper and consider the security of downlinks only. Authenticating satellites and IoT devices using PLS to prevent spoofing and impersonation attacks has not yet been studied. Moreover, protection against multiple malicious IoT devices and denial of service attacks (which are common in the case of IoT devices) is not available in the literature. Furthermore, more attention should be paid to studying the security of uplink communication, which is critical for IoT systems since IoT devices transfer a massive amount of data to the satellite, and this data can be eavesdropped or altered if no proper uplink security mechanism is applied. ### _Domain 3: Navigation Satellite Systems_ Global Navigation Satellite Systems (GNSS) are localization systems that use satellite signals to share information about location, speed, and time with users on earth. GPS and GALILEO are examples of GNSS widely used in several applications, such as intelligently connected vehicle networks, mobile applications, and maritime systems. However, GNSS is vulnerable to spoofing attacks due to the lack of an authentication mechanism in its signals. Specifically, the publicly known spreading codes allow attackers to forge messages and send them to users as legitimate messages, causing wrong positioning, which can lead to several consequences [61]. The attacker can also jam legitimate messages to prevent their reception by legitimate receivers, giving the spoofed messages a better chance of being accepted. Over the years, several mechanisms have been proposed to prevent GNSS spoofing attacks. In this section, we focus on physical layer schemes used to detect and mitigate GNSS spoofing. Since changing the algorithms of already deployed GNSS is infeasible, the proposed methods are superposition based. In [62], the proposed scheme uses an authentication message and artificial noise superimposed on the GNSS message to authenticate it. The authors consider three channels in the system: 1) the navigation channel, which carries the broadcast signal from the satellite to all users, 2) the attack channel, over which the eavesdropper transmits the spoofing signal to the legitimate user, and 3) the authenticated channel, through which a ground segment can communicate with all users with effectively unlimited bandwidth and over which the eavesdropper has no control (it is authenticated through a higher layer protocol). The legitimate user learns the signature and artificial noise over the authenticated channel once the system reveals them, and checks whether they are embedded in the received navigation signal. It is assumed that the attacker receives the signal from the authenticated channel but does not have control over it. Two attack models were considered in the analysis: the generation attack, where the attacker generates a plain navigation signal and uses it, and the replay attack, where the adversary combines the signal with a previously used authentication signal. Although this method utilizes PLS, it depends on upper layer algorithms to secure the authenticated channel, which is essential for the system design. 
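To illustrate the delayed-disclosure superposition idea behind [62] (a low-power secret signature hidden under the navigation signal, which the receiver verifies by correlation once it is revealed), the following is a minimal, heavily simplified sketch. The power split, sequence length, noise level, and decision threshold are illustrative assumptions and do not reproduce the actual parameters or signal structure of that scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096                                   # samples in the buffered observation window

# Transmitter side (simplified): the publicly known spreading sequence plus a
# secret, low-power authentication sequence that is revealed only later.
nav = rng.choice([-1.0, 1.0], size=N)      # public navigation chips
auth = rng.choice([-1.0, 1.0], size=N)     # secret signature, disclosed after a delay
p_auth = 0.1                               # assumed power fraction of the hidden signature
genuine_rx = nav + np.sqrt(p_auth) * auth + rng.normal(scale=1.0, size=N)

# A generation attacker only knows the public part, so the signature is absent.
spoofed_rx = nav + rng.normal(scale=1.0, size=N)

def verify(buffered_samples, revealed_auth, threshold=3.0):
    """After the signature is disclosed, correlate the buffered raw samples with it;
    a genuine signal yields a clear correlation peak, a spoofed one does not."""
    corr = buffered_samples @ revealed_auth / np.sqrt(N)   # noise-normalized correlation
    return bool(corr > threshold), float(corr)

print(verify(genuine_rx, auth))   # (True, correlation well above the threshold)
print(verify(spoofed_rx, auth))   # (False, correlation close to zero)
```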
On the other hand, the authors in [63] utilize the radio frequency spatial characteristics of satellite signals to identify spoofing signals. Using the K-means clustering algorithm, they extract radio features from constellation diagrams and fingerprint satellite signals. Specifically, the clustering algorithm generates multiple cluster centers according to the signal features, which helps detect spoofing by analyzing changes in the centers. They verify the validity of the proposed scheme through experimental analysis using real satellite signals and spoofed signals generated with a Universal Software Radio Peripheral (USRP). Similarly, the authors in [64] use device fingerprinting to create signatures for GPS satellite transmissions, allowing spoofed and authentic signals to be distinguished. The attack model considered in this paper is similar to [62], where spoofing attackers can generate genuine-looking GPS messages or replay recorded messages to spoof legitimate receivers and induce wrong position, velocity, and time estimates. They propose a feature extraction method for fingerprinting based on the internal hardware architecture of GPS receivers. Specifically, they use early-late phase (ELP) correlator outputs, code-lock-detector (CN0), and carrier-lock-detector (CLT) measurements for feature extraction, and a Multivariate Normal (MVN) distribution as a scoring metric to measure similarity. To find thresholds for genuine GPS signals, they first train the model on a spoof-free dataset and then test it on spoofed signals, calculating the MVN score for each observation; a low MVN score indicates a spoofed signal. After a cross-validation phase, the thresholds determined offline are used for real-time spoofing detection. They collect live datasets and experimentally analyze the proposed algorithm under different attack scenarios, including location, time, and multipath attacks. Also relying on feature analysis, the authors in [65] propose a spoofing detection method that uses a support vector machine (SVM) to analyze signal quality monitoring (SQM), moving variance (MV), improved SQM moving average (MA), early-late phase, carrier-to-noise ratio MV, and clock offset rate features. After feature extraction, the data is preprocessed using bias and normalization techniques and divided into training and test datasets. They perform offline training and testing of the SVM on the Texas Spoofing Test Battery (TEXBAT) dataset under different kernel and activation functions to obtain the best performance. They reach 92.31% accuracy, which compares favorably with other detection methods, and report additional evaluation metrics such as recall, precision, and F1 score. Another detection approach is proposed in [66], which utilizes the Iridium satellite constellation to detect GNSS spoofing using unencrypted Iridium Ring Alert (IRA) messages that can be received worldwide from all Iridium satellites. In particular, using thousands of messages from a data collection campaign, they reverse engineer satellite parameters such as speed, packet interval time, and satellite coverage. They assume a system model suited to remote areas such as deserts and oceans, where the receiver cannot cross-check its location with other sources. Spoofing detection relies on collecting IRA messages at the receiver side and compensating for satellite movement. The receiver then estimates its location using the longitude and latitude of the collected beams. Finally, the estimated position is compared to the position obtained from the GNSS signal, and if the difference exceeds a specific threshold, the signal is considered malicious. 
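To make the scoring step of the fingerprinting detector in [64] more concrete (and, loosely, the idea behind the feature-based detectors in [63] and [65]), below is a minimal sketch that fits a multivariate normal model to spoof-free receiver features and flags low-likelihood observations. The feature names, values, and the percentile rule used to set the alarm threshold are illustrative assumptions rather than the actual configuration of those works.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Placeholder per-epoch receiver features (think early-late phase, C/N0, a
# carrier-lock statistic); a real detector extracts these from the tracking loops.
clean = rng.normal(loc=[0.0, 45.0, 0.9], scale=[0.05, 1.0, 0.02], size=(500, 3))

# Fit the genuine-signal model on a training split and set the alarm threshold
# from a low percentile of held-out clean scores (cross-validation style).
train, val = clean[:400], clean[400:]
mvn = multivariate_normal(mean=train.mean(axis=0), cov=np.cov(train, rowvar=False))
threshold = np.percentile(mvn.logpdf(val), 1)        # assumed 1% false-alarm target

def is_spoofed(features):
    """Low log-likelihood under the genuine-signal model raises a spoofing alarm."""
    return mvn.logpdf(features) < threshold

genuine_obs = rng.normal(loc=[0.0, 45.0, 0.9], scale=[0.05, 1.0, 0.02], size=3)
attack_obs = np.array([0.4, 38.0, 0.7])              # distorted correlator / C/N0 values
print(is_spoofed(genuine_obs), is_spoofed(attack_obs))   # expected: False True
```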
The authors in [67] consider another application scenario, proposing a spoofing detection method for electric substations, which rely on GNSS services for time transfer. The method utilizes multiple GPS receiver antennas placed in close proximity, so the spoofer cannot send a separate spoofing signal to each antenna and only a limited number of antennas will report an incorrect location. Each antenna is connected to a GNSS clock of the kind used in a conventional substation, which estimates the position using the received signal and then sends it to the detector. The detector compares all received positions, decides the expected location, and raises an alarm if a spoofing attack is detected. They validate that their method can detect a spoofing attack within 4 to 10 seconds, which eliminates the attack's effect on the substation. Table VI summarizes the discussed works related to PLS in navigation satellite systems, which clearly shows the focus on PLS authentication and spoofing detection mechanisms, since navigation satellite systems are vulnerable to spoofing and impersonation attacks. The confidentiality of these systems is not considered critical since the messages are not secret (they only provide location and time information). However, the integrity of these messages should be preserved to avoid malicious alteration during transmission. Moreover, navigation satellite services are vital to many applications; hence, more attention should be paid to studying system availability. ### _Domain 4: Free Space Optical Satellite Communication_ Optical Wireless Communication (OWC) uses optical carriers such as visible light and infrared to transmit information. Free Space Optical (FSO) communication is a type of OWC that uses directive laser beams to transmit data wirelessly through unguided free-space media, providing high-speed communication between two points. The optical medium is an unlicensed spectrum that overcomes the limitations and congestion of the current radio frequency band and suffers less from electromagnetic interference [68]. Furthermore, FSO can provide a high data rate with low latency over several kilometers without expensive infrastructure, reducing installation and maintenance costs. Satellite communications, including uplinks, downlinks, and inter-satellite links, have started incorporating FSO links to leverage these advanced features. Specifically, FSO allows data rates reaching gigabits per second (Gbps), significantly higher than the RF band. Moreover, FSO has inherent security features, as it cannot penetrate walls, and it can support quantum cryptography when fiber optic infrastructure is unavailable [69]. On the other hand, FSO suffers from fading and attenuation induced by atmospheric turbulence and weather conditions such as fog, rain, and snow [70]. 
In addition, beam divergence caused by beam wanders causes misalignment between the transmitter and the \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Work & Security Goal & Network & link Type & Adversary Model & PLS Techniques & PLS feature used & Security Measurement \\ \hline [55] & Confidentiality & Multi-user Satellite & Downlink & Single eavesdropper & Information theory & Inter-user interference & Secrecy rate \\ \hline [56] & Confidentiality & \begin{tabular}{c} Cognitive \\ satellite \\ terrestrial \\ network \\ \end{tabular} & Downlink & \begin{tabular}{c} Malicious \\ UAV \\ \end{tabular} & Beamforming & Spatial correlations & Secrecy energy efficiency \\ \hline [59] & Confidentiality & Multi-user satellite & Downlink & \begin{tabular}{c} Colluded \\ Passive \\ eavesdropper \\ \end{tabular} & Information theory & \begin{tabular}{c} Time division multi- \\ple access \\ \end{tabular} & \begin{tabular}{c} Secrecy Outage Probability \\ Average secrecy capacity \\ \end{tabular} \\ \hline [58] & Confidentiality & \begin{tabular}{c} Cognitive \\ satellite and UAV \\ \end{tabular} & Downlink & \begin{tabular}{c} Terrestrial \\ eavesdropper \\ \end{tabular} & Information theory & Large-scale CSI & Secrecy rate \\ \hline [60] & Confidentiality & \begin{tabular}{c} Multi-beam \\ Satellite \\ \end{tabular} & Downlink & \begin{tabular}{c} Terrestrial \\ eavesdropper \\ for each beam \\ \end{tabular} & \begin{tabular}{c} Relaying \& Artificial noise \\ \end{tabular} & Perfect CSI & Secrecy rate \\ \hline \end{tabular} \end{table} TABLE V: Comparison between existing PLS works for IoT networks receiver. These conditions can affect the link's reliability and availability and limit its capacity to short-distance ranges. To tackle these shortcomings in FSO and leverage its features simultaneously, a hybrid Satcom system between RF and FSO is designed. RF links are more robust towards weather conditions and mature than FSO development. Several hybrid Satcom designs are proposed in the literature, such as hard switching [71], adaptive switching [72], and HAPS relay-based design. In this section, we discuss the physical layer security analysis available in the literature for FSO and hybrid FSO RF systems. Authors in [73][74] design an FSO satellite system with ground optical stations. In [73], the ground stations send \(N\) users information by optical feeder link to the satellite, which acts as a relay that decodes the optical wave received and sends it as beams to multiple earth users with earth eavesdropper for both links. However, authors in [74] consider the uplink connection between the ground station and a GEO satellite with a nano-satellite eavesdropper. They propose two PLS secrecy coding protocols: one-way and two-way protocols with two types of photodetectors: threshold and PNR. Their two-way scheme ensures secrecy even when the eavesdropper's channel is better than the legitimate channel. Both works analyze the secrecy capacity using the SNR between the legitimate user and eavesdropper. In [73], the authors consider two scenarios, ZF coding and without ZF coding, for the analysis. Without ZF coding, the satellite decodes the data received and directly forwards it to the users, while with ZF coding, the satellite code the beams using the ZF technique before sending. The results showed that the ZF technique improves the security capacity with specific positions. 
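The hybrid RF/FSO analyses discussed next ([75]-[77]) report the secrecy outage probability (SOP), i.e., the probability that the instantaneous secrecy capacity falls below a target secrecy rate. Purely as an illustration of that definition, the sketch below estimates the SOP by Monte-Carlo simulation; for readability it uses simple exponentially distributed (Rayleigh-fading) SNRs rather than the Shadowed-Rician and Gamma-Gamma channel models adopted in those works, and the operating point is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

def secrecy_outage_probability(mean_snr_b, mean_snr_e, target_rate, n=1_000_000):
    """Estimate SOP = Pr( [log2(1+SNR_B) - log2(1+SNR_E)]^+ < R_s ) by sampling
    the legitimate (B) and eavesdropper (E) instantaneous SNRs."""
    snr_b = rng.exponential(mean_snr_b, n)   # Rayleigh fading => exponentially distributed SNR
    snr_e = rng.exponential(mean_snr_e, n)
    cs = np.maximum(0.0, np.log2(1.0 + snr_b) - np.log2(1.0 + snr_e))
    return float(np.mean(cs < target_rate))

# Placeholder operating point: 15 dB average legitimate SNR, 5 dB average
# eavesdropper SNR, and a 1 bit/s/Hz target secrecy rate.
print(secrecy_outage_probability(10 ** 1.5, 10 ** 0.5, 1.0))
```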
In [75], the authors consider a hybrid system with an RF link between the satellite and the ground station (the long-distance segment) and an FSO link between the station and the destination (the short-distance segment). The station acts as a decode-and-forward relay between source and destination, and the eavesdropper is assumed to be on the earth, trying to access the data sent by the satellite. They analyze the average secrecy capacity and secrecy outage probability under different satellite and FSO channel conditions, relaying schemes, and FSO detection methods. The RF link conditions have more impact on the system secrecy than the FSO conditions, while the secrecy diversity depends on the FSO conditions when amplify-and-forward relaying is used. Both [76] and [77] use a high-altitude platform station (HAPS) as a relay between an LEO satellite and a destination on earth. The HAPS has an FSO downlink with the satellite and an RF downlink with the earth station. The eavesdropper in this system is passive and located on the earth's surface. They derive the secrecy outage probability (SOP) and the probability of positive secrecy capacity of the system and analyze them under different channel conditions. The results in [76] show that the RF link has more impact on the system performance, while both analyses emphasize the criticality of pointing errors and channel shadowing for the system's security. Table VII summarizes the works related to PLS in FSO satellite communication, which shows that the use of PLS in optical satellite communication is still limited, as most works consider a hybrid system of optical and radio frequencies with relaying techniques between them. Therefore, PLS techniques need to be investigated further for FSO links, with a focus on how distance and weather conditions affect them. ### _Domain 5: Inter-satellite Communication_ Inter-satellite communication links connect satellites to each other. Typically, satellites are deployed in formations of many satellites, where communication between them is essential for service continuity. Satellites must share their navigational and mobility data to keep the formation moving correctly [78]. The connections between satellites increase network throughput, as messages are routed through satellites until they reach their destination without depending on ground stations. Specifically, this communication creates a network in space that improves transmission and supports service handover. 
On the other hand, Inter-satellite links are complex since they connect satellites from various levels (GEO, MEO, and LEO), with different speeds and \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Work & Security Goal & Satellite Type & Adversary Model & PLS Techniques & PLS feature used & Security Measurement \\ \hline [62] (2018) & Authentication & Gallio & Generation attack - Relay attack & Artificial Noise & Random generated codes & Channel gain - Channel estimation error \\ \hline [63] & Spoofing detection & GPS & - & Hardware fingerprinting & Spatial distribution characteristics & Euclidean distance \\ \hline [64] & Spoofing detection & GPS & Generation attack - Relay attack & Hardware fingerprinting & Early-late phase code-lock-detector carrier-lock-detector & Equal error rates \\ \hline [66] & Spoofing detection & GNSS & Generate Fake messages using SDR & Parameters Reserve engineering & IRA Iridium messages & False positive rate \\ \hline [65] & Spoofing detection & GPS & Generate spoofing signal using user tracking signal & Feature extraction & Quality Monitoring (SQM) moving variance (MV) Improved SQM Moving average (MA) Early-late phase & Detection accuracy \\ \hline [67] & Spoofing detection & GNSS & Generate spoofing signals for electric substation & Multiple antenna & spatial area & Detection speed \\ \hline \end{tabular} \end{table} TABLE VI: Comparison between existing PLS works for Navigation Satellite System altitudes that can affect the link availability and quality. These links can be based on radio frequency or optical communication. The complexity of inter-satellite links increases its exposure to adversaries from multiple levels, such as satellites from other constellations, spacecraft, and adversaries on earth. In addition, inter-satellite links have different channel states than ground links because of the high mobility of the satellites and fewer fading sources. Despite this complexity and the implications of the inter-satellite links, security mechanisms to protect the space networks are still limited. Inter-satellite links are vulnerable to attacks against confidentiality and authentication where the identity of satellites is not identified before initiating a connection, and data are not protected against any adversary in middle communication. Furthermore, adversaries can target the availability of inter-satellite links to cause service interruption by overwhelming the link using a compromised satellite or malicious spacecraft. Some authors apply the PLS framework to inter-satellite links such as [79][80]. In [79], the authors use Doppler frequency shift to generate keys for securing the inter-spacecraft links. While in [80] they use the Doppler frequency shift as a physical layer authentication method for satellite identification, utilizing satellite voting. Doppler frequency shift is the change of frequency relative to the motion. Both works use radio frequency links and satellites' mobility information, such as speed and locations, to estimate the Doppler frequency shift of each satellite. Since the mobility information of each satellite is already shared on the network, this method does not introduce overhead or additional channels. In [79], the authors assume communication between two spacecrafts, a transmitter and a receiver, in deep space using a time division duplexing (TDD) system. Both legitimate users will experience identical Doppler frequency shifts using their high mobility. 
At the same time, the eavesdropper observes a different shift, since its relative velocity and position differ from those of the transmitter and receiver, which allows the legitimate parties to use the Doppler frequency shift as a source of secret keys. The secret key generation process starts with pilot transmission and reception between the transmitter and receiver, where it is assumed that they have the same velocity at this step. Each spacecraft then estimates the nominal power spectral density for each communication block separately, which measures the power of the message content versus its frequency. After that, they quantize the obtained value to get the raw secret sequence. The authors perform a Monte-Carlo simulation of the proposed scheme and analyze several aspects, such as the key disagreement rate and the maximum achievable key rate, to prove the practicality and robustness of the technique. In [80], the authors consider a set of legitimate LEO satellites and one satellite that needs to be authenticated. When the satellites receive a message from the suspect satellite, they calculate the Doppler frequency using nominal power spectral density estimation, which measures the power of the message content versus its frequency. All satellites compare the resulting Doppler frequency with the one expected for the claimed transmitter, which can be calculated by any satellite since the transmitter's location and speed are known to them. Each satellite then decides whether the suspect satellite is the claimed transmitter or an eavesdropper according to this comparison and a specific threshold, and sends its decision to the fusion center. The fusion center collects all decisions and applies a fusion rule to make the final judgment on the satellite's identity. The authors analyze the algorithm's performance with different numbers of satellites and time slot lengths. They also compare three fusion decision methods, OR, AND, and majority rule, and measure their spoofing detection and false alarm probabilities. The results showed that the majority rule outperforms the others, achieving a high spoofing detection rate with a low false alarm probability. PLS for optical inter-satellite links is still an uncovered area in the literature, partly because of the intrinsic security afforded by the inability of optical beams to penetrate walls. However, optical links are still vulnerable to attacks such as eavesdropping; for example, the authors in [81] propose eavesdropping techniques for space networks in which a HAPS eavesdrops on LEO satellite communication. 
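To make the Doppler-based identification in [80] (and, by extension, the shared randomness exploited for key generation in [79]) more concrete, the sketch below computes the expected Doppler shift from known orbital position and velocity vectors, lets several verifier satellites compare it with their observed estimates, and fuses the local decisions with a majority rule. The carrier frequency, geometry, noise level, and tolerance are illustrative assumptions, not parameters taken from those works.

```python
import numpy as np

C = 299_792_458.0      # speed of light (m/s)
F_C = 2.2e9            # assumed carrier frequency (Hz)
rng = np.random.default_rng(3)

def expected_doppler(tx_pos, tx_vel, rx_pos, rx_vel):
    """Doppler shift f_d = -(radial separation velocity / c) * f_c, computed from
    the publicly shared position/velocity vectors of transmitter and receiver."""
    los = (rx_pos - tx_pos) / np.linalg.norm(rx_pos - tx_pos)   # line-of-sight unit vector
    radial_velocity = np.dot(rx_vel - tx_vel, los)              # >0 means moving apart
    return -radial_velocity / C * F_C

def local_decision(observed_fd, tx_state, rx_state, tol_hz=200.0):
    """A verifier accepts the claimed identity only if the observed Doppler matches
    the one predicted from the claimed transmitter's orbit."""
    return abs(observed_fd - expected_doppler(*tx_state, *rx_state)) < tol_hz

def majority_fusion(decisions):
    """Fusion center: majority rule over the verifiers' local decisions."""
    return sum(decisions) > len(decisions) / 2

# Toy geometry: one claimed transmitter and three verifier satellites.
tx = (np.array([7.0e6, 0.0, 0.0]), np.array([0.0, 7.5e3, 0.0]))
verifiers = [(np.array([7.0e6, 4.0e5 * i, 1.0e5]), np.array([0.0, 7.5e3, 50.0 * i]))
             for i in (1, 2, 3)]

genuine = [expected_doppler(*tx, *v) + rng.normal(scale=20.0) for v in verifiers]
spoofed = [fd + 1500.0 for fd in genuine]   # a spoofer on a different orbit shows an offset

print(majority_fusion([local_decision(o, tx, v) for o, v in zip(genuine, verifiers)]))  # True
print(majority_fusion([local_decision(o, tx, v) for o, v in zip(spoofed, verifiers)]))  # False
```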
Table VIII summarizes recent work related to PLS for inter \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Work & Link Type & Security Goal & PLS Feature Used & Security Measurement \\ \hline [79] (2021) & Radio frequency & \begin{tabular}{c} Sector Key generation \\ \end{tabular} & \begin{tabular}{c} Doppler frequency shift \\ \end{tabular} & \begin{tabular}{c} Maximum Achievable Key Rate \\ Key Disagreement rate link \\ \end{tabular} \\ \hline [80] (2022) & \begin{tabular}{c} Radio frequency \\ inter-satellite link \\ \end{tabular} & Authentication & Doppler frequency shift & \begin{tabular}{c} Spoofing Detection Rate \\ False Alarm Probability \\ \end{tabular} \\ \hline \end{tabular} \end{table} TABLE VIII: Comparison between existing PLS works for Inter-satellite communication \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Work & Security Goal & Link Type & Relaying Scheme & Channels Types & Security Measurement \\ \hline [75] & Confidentiality & Downlink & \begin{tabular}{c} Amplify and forward \\ Decode and forward \\ \end{tabular} & \begin{tabular}{c} Shadowed-Rician For RF \\ GammaGamma for optical \\ \end{tabular} & \begin{tabular}{c} Average Secrecy Capacity Capacity \\ Secrecy Outage Probability \\ \end{tabular} \\ \hline [73] & Confidentiality & \begin{tabular}{c} Downlink \\ and uplink \\ \end{tabular} & Decode and forward & GammaGamma for optical & Secrecy Capacity \\ \hline [74] & Confidentiality & Uplink & - & Poisson channel & Secrecy Capacity \\ \hline [76] & Confidentiality & Downlink & Decode and forward & \begin{tabular}{c} Shadowed-Rician For RF \\ GammaGamma for optical \\ \end{tabular} & \begin{tabular}{c} Secrecy outage probability \\ Probability of positive secrecy capacity \\ \end{tabular} \\ \hline [77] & Confidentiality & Downlink & Amplify and forward & \begin{tabular}{c} Shadowed-Rician For RF \\ GammaGamma for optical \\ \end{tabular} & \begin{tabular}{c} Secrecy outage probability \\ Probability of positive secrecy capacity \\ \end{tabular} \\ \hline [77] & Confidentiality & Downlink & Amplify and forward & \begin{tabular}{c} Shadowed-Rician For RF \\ GammaGamma for optical \\ \end{tabular} & \begin{tabular}{c} Secrecy outage probability \\ Probability of positive secrecy capacity \\ \end{tabular} \\ \hline \end{tabular} \end{table} TABLE VII: Comparison between existing PLS works for FSO satellite communication satellite communication. Although inter-satellite communication is critical to the satellite system, its security is not yet well studied in the literature. Both radio and optical inter-satellite links are vulnerable to multiple attacks, such as eavesdropping, spoofing, and jamming. In LEO satellite constellations, the situation is exacerbated, where attacking these inter-satellite links will often affect major services, especially those requiring the transfer of satellite-to-satellite messages. Moreover, inter-satellite links exhibit unique features that differ from satellite-to-earth links, which prevent the usage of the available PLS techniques in the satellite-to-earth setting. Hence, more investigation is required to study how these features can be leveraged using PLS to provide security against possible attacks. ## V Research gaps and future directions Physical layer security in satellite systems still has many open issues that need to be investigated. In this section, we highlight open research opportunities and future directions that we identify through our survey study. 
### _Availability schemes_ Due to the importance of satellite systems in emerging next-generation network architectures, ensuring their availability is essential to avoid system outages or interruptions. Satellite systems can suffer denial of service attacks if the satellite or link is overwhelmed with malicious packets that the satellite cannot handle, causing service outages for legitimate users. Moreover, anti-jamming techniques for satellite communications need to be investigated, since satellite links are wireless signals that adversaries can target with multiple types of jamming, distorting the signals so that they cannot be correctly decoded and processed. PLS techniques are promising for anti-jamming and cooperative jamming to prevent various attacks in 6G networks [82]. Therefore, more works should consider PLS availability techniques in satellite scenarios to study their feasibility given the unique physical characteristics of satellite communication. ### _Uplink secrecy techniques_ The current literature mainly focuses on securing the downlink communication from the satellite to other receivers. However, uplink communication from any terrestrial or aerial device to the satellite is vulnerable to several attacks, such as spoofing and alteration by malicious adversaries. Especially in broadband satellite communication, the uplink handles a huge amount of data sent by users to the satellite, and this data must be secured through schemes suitable for heterogeneous devices and scenarios. Moreover, uplink communication can also be used to overwhelm the satellite through multiple types of denial of service attacks. Hence, researchers should give more attention to securing uplink communication using PLS, considering performance and power constraints. ### _Smart threat models_ Most of the current literature assumes that the eavesdropper is passive, has limited resources and a single antenna, and does not actively affect the main channel. It does not consider the case where eavesdroppers can extract the CSI of the main channel by active interception, which reduces the effectiveness of current schemes. These assumptions are becoming increasingly unrealistic with advancements in adversaries' resources, such as the number of antennas and signal processing techniques. Moreover, the widespread assumption of static, non-moving eavesdroppers should be relaxed, since dynamic eavesdroppers with unknown locations, such as UAVs, are possible. Hence, more complex adversary models should be considered in satellite systems to keep pace with advancing eavesdropping capabilities and to design more robust solutions. ### _Authentication and Anti-spoofing techniques_ Through analysis of the available schemes for authenticating satellite signals, we notice that most techniques are focused on GNSS satellites due to their significance and vulnerability to spoofing. However, other types of satellites used in other services, such as broadband and IoT connection, are vulnerable to spoofing or impersonation attacks that affect their service and give access to unauthorized devices. In particular, failing to authenticate the messages sent to the satellite can cause several consequences due to processing malicious messages or code received. In wireless systems such as satellites, the adversary sends signals with higher power than the legitimate user to deceive the receiver into considering the spoofed message as the legitimate one [83]. Hence, the adversary will get access to the service and information intended for the legitimate user. 
As a result, anti-spoofing PLS strategies need to be investigated in satellite scenarios across different services to prevent unauthorized access and information leakage. ### _Integrity techniques_ Message manipulation detection techniques based on PLS have not yet been considered in the literature for any type of wireless communication. Integrity violation attacks such as man-in-the-middle attacks intercept a message during transmission, modify or manipulate it, and then re-transmit it. Such changes need to be detected by the receiver to prevent wrong information from being accepted and processed. Satellite systems are vulnerable to these attacks, where adversaries can intercept the signal sent by the satellite to the terrestrial user, or vice versa, and manipulate it either to disturb the service or to gain access by exploiting these messages. PLS techniques for achieving integrity need to be investigated for different wireless signals, including satellite systems. ### _PLS for Inter-satellite link communication_ Inter-satellite communication is essential for providing high-speed communication and increasing network throughput by connecting satellites to each other. With the increasing interest in LEO satellites, which are relatively near to the earth, and advancements in adversary capabilities, attacks on inter-satellite communication have become feasible. There is limited focus in the literature on securing inter-satellite communications for both RF and optical links. In particular, inter-satellite links have unique physical layer characteristics in terms of fading, Doppler shift, and weather condition effects that can be exploited to secure communication. Moreover, optical inter-satellite links provide high speed and bandwidth, but they are still vulnerable to attacks such as eavesdropping, and no solutions have been proposed in the literature yet. PLS strategies are promising techniques that need to be studied extensively in inter-satellite communication scenarios by investigating their physical features and possible attacks. ### _Machine learning based PLS_ Wireless networks are becoming more complex, with heterogeneous devices and high mobility requirements. Recently, machine learning techniques have emerged to support PLS in handling network complexity by intelligently performing signal processing and feature exploitation. Several supervised and unsupervised ML techniques are used for intelligent PLS in different categories, such as physical layer authentication, antenna selection, and relay node selection [84]. Proposed ML-based PLS schemes consider various wireless communication settings, including device-to-device communication, cognitive networks, and non-orthogonal multiple access (NOMA). However, ML-based algorithms have not been explored in satellite system scenarios, and there is no analysis of the applicability of the proposed methods to the satellite case. Therefore, more investigation is required to benefit from ML techniques in simplifying PLS for satellite systems while considering their unique features. ## VI Conclusion Satellite communication systems are vulnerable to various attacks due to their open nature and the heterogeneity of the connected devices. PLS schemes are lightweight security approaches that exploit physical layer characteristics to provide the required security while ensuring minimal overhead, making them suitable for several communication types, including satellite systems. 
In this paper, we first classify modern satellite communication networks into five domains and then analyze the use of PLS in each domain; these domains are: hybrid satellite-terrestrial networks, satellite-based IoT, navigation satellite systems, FSO-based satellites, and Inter-satellite communications. After providing the necessary background about satellite networks and PLS techniques, we review the state-of-the-art of PLS solutions in each domain and compare their approaches, adversary models, and results. Finally, we highlight a few open research gaps for PLS in satellite systems and propose a few potential future directions, which we believe the community must focus on. ## Acknowledgments This publication was made possible by GSRA grant # GSRA8-L-2-0521-21046 from the Qatar National Research Fund (a member of Qatar Foundation) and a TUBITAK-QNRF Joint funded grant # AICC03-0530-200033. The findings herein reflect the work, and are solely the responsibility, of the authors. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
2306.08780
Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability
Explainability plays a crucial role in providing a more comprehensive understanding of deep learning models' behaviour. This allows for thorough validation of the model's performance, ensuring that its decisions are based on relevant visual indicators and not biased toward irrelevant patterns existing in training data. However, existing methods provide only instance-level explainability, which requires manual analysis of each sample. Such manual review is time-consuming and prone to human biases. To address this issue, the concept of second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level. SOXAI automates the analysis of the connections between quantitative explanations and dataset biases by identifying prevalent concepts. In this work, we explore the use of this higher-level interpretation of a deep neural network's behaviour to allows us to "explain the explainability" for actionable insights. Specifically, we demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
E. Zhixuan Zeng, Hayden Gunraj, Sheldon Fernandez, Alexander Wong
2023-06-14T23:24:01Z
http://arxiv.org/abs/2306.08780v1
Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability ###### Abstract Explainability plays a crucial role in providing a more comprehensive understanding of deep learning models' behaviour. This allows for thorough validation of the model's performance, ensuring that its decisions are based on relevant visual indicators and not biased toward irrelevant patterns existing in training data. However, existing methods provide only instance-level explainability, which requires manual analysis of each sample. Such manual review is time-consuming and prone to human biases. To address this issue, the concept of second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level. SOXAI automates the analysis of the connections between quantitative explanations and dataset biases by identifying prevalent concepts. In this work, we explore the use of this higher-level interpretation of a deep neural network's behaviour to allow us to "explain the explainability" for actionable insights. Specifically, we demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance. ## 1 Introduction Although quantitative performance metrics such as accuracy are essential indicators of a deep neural network's performance, they do not offer insights into the decision-making process. To fill this gap in the performance analysis, explainable AI (XAI) can facilitate the auditing of model behaviour. This auditing helps ensure that the decisions are based on relevant visual indicators. Additionally, it can uncover potential biases in the training data, which may then be used to guide improvements to the training framework. First-order explainability techniques such as Grad-CAM [9], integrated/expected gradients [10; 3], LIME [8], GSInquire [6], and SHAP [7] yield per-instance visualizations of explanations. However, reviewing these visualizations can be time-consuming, particularly for large-scale datasets with multiple classes or high intra-class variability. In addition, human biases can impact manual review. In this work, we explore the concept of second-order explainable AI (SOXAI) [4] for obtaining actionable insights and demonstrate, for the first time, that such insights can be used to enhance model performance. SOXAI extends XAI from the instance level to the dataset level to enable the auditing of the model and dataset during development. Rather than relying on manual reviews of visual explanations to explore patterns in a model's decision-making behavior, SOXAI seeks to automatically unveil these patterns through the analysis of the relationships between quantitative explanations. Figure 1: SOXAI visualizations of a classification model on chainsaws (1a) and a segmentation model on hand drills (1b). Different regions show groupings of related quantitative explanations via first-order XAI, with significance discussed in Section 3. This expedites the identification of the shared visual concepts utilized by a model during inference and can uncover apparent model and dataset biases. Furthermore, this improves transparency by uncovering problematic patterns that exist among groupings of examples in the dataset, which can adversely impact the model's decision-making process. 
In essence, SOXAI enables us to "explain the explainability" by providing higher-level interpretations of model behaviour for actionable insights. ## 2 Methods The concept of SOXAI takes first-order instance-level quantitative explanations of samples in a dataset and groups similar embeddings of these explanations to generate a user-friendly visualization that enables the uncovering of patterns among different groupings of data to unveil trends. Here, we employ GSInquire [6] to generate first-order quantitative explanations of a neural network's decision-making process across a dataset. GSInquire examines the network's activation signals in response to the input image and employs them to identify critical features within the sample that quantitatively led to the network's decision. ### Second-order explainability Second-order explainability is treated as an embedding problem: given an image \(I\) and the corresponding quantitative explanation \(\alpha\) for the trained model \(M\), we define the \(n^{\text{th}}\) element of the embedding \(f:(I,\alpha)\rightarrow\mathbb{R}^{N}\) as: \[f(I,\alpha)_{n}=\frac{\sum_{i=1}^{H}\sum_{j=1}^{W}M(I)_{ijn}\alpha_{ij}}{\sum_{i=1}^{H}\sum_{j=1}^{W}\alpha_{ij}}, \tag{1}\] producing an \(N\)-dimensional vector embedding from the regions of \(I\) weighted by \(\alpha\). Notably, \(M\) is truncated such that its output is a convolutional feature map of size \(H\times W\times N\), and \(\alpha\) is resized to \(H\times W\) to match. Equation 1 ignores regions not identified as critical and only considers regions with a higher weighting score provided by \(\alpha\) - in essence, \(f\) performs a weighted average of \(M(I)\) with weights \(\alpha\). Here, we use t-distributed stochastic neighbour embedding (t-SNE) [11] to group the resulting embeddings across a full dataset [4]. In addition, embeddings are reduced to 50 dimensions via principal component analysis before applying t-SNE to map them to a 2D space for visualization. ## 3 Experimental Results and Discussion We present two example cases of SOXAI visualization: image classification and foreground instance segmentation, discuss the actionable insights gained from each, and demonstrate how such actionable insights can be used to enhance model performance. **Chainsaw classification:** To explore SOXAI for classification, we apply it to a ResNet-50 trained on ImageNet 1k [2]. An example result for the chainsaw class can be seen in Figure 1a, which also highlights four groupings of interest. Groupings 1 and 2 show the frontal part of chainsaws (_i.e._, the cutting chain and guide bar) and the handle, respectively, demonstrating that the model has learned important features representing the target class. However, smaller groupings highlighted in areas 3 and 4 also reveal biases that the model has learned over time. In grouping 3, we see that the model has learned a relationship between earmuffs commonly worn when using chainsaws and the actual class prediction. Grouping 4 shows images of logs and even wooden sculptures instead of chainsaws directly. Through the use of SOXAI, we were able to quickly identify recurring biases learned by the model towards objects that commonly appear in the same frame as the target class. This was accomplished without the need to manually inspect each example in the validation set, as would be necessary for first-order XAI algorithms. 
Based on the identified biases, enhanced model performance may be achieved by better-targeted elimination of biases in future training and data collection or cleaning. **Drill segmentation:** Here, we apply SOXAI to a Mask R-CNN model [5] trained on the MetaGraspNet dataset [1] to detect foreground objects. As an example, we analyze the segmentation of drills, an object category not seen in the training set, chosen for its geometric and textural complexity. Figure 1b presents the SOXAI result, highlighting two groupings representing different faces of the drill. The face shown in grouping 1 exhibits a high level of focus on the large logo. Since the model was not explicitly trained to recognize drills, some other foreground object must have biased it towards recognizing letters. We observe that the large logo is over-represented in the grouping, while the frontal black head of the drill is underrepresented. To investigate further, we evaluate the prevalence of incomplete segmentations of the drill when each face is visible, such as the incomplete segmentation shown in Figure 2a. We find that 37% of predictions for drills with the large logo facing up are incomplete segmentations, with much of the frontal black segment missing, while only 14% of segmentation predictions on the other face are incomplete. To confirm the model's bias towards text, we mask out the logo (see Figure 2b) and evaluate the mAP score. We observe an increase from 0.592 to 0.618, suggesting that allowing the model to ignore its learned bias and focus on a fuller representation of the object improves its performance. These example cases demonstrate the usefulness of SOXAI for unveiling actionable insights into model biases that can be used to enhance a model's performance.
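To make the SOXAI embedding of Equation 1 and the subsequent dataset-level grouping concrete, the following is a minimal sketch using random placeholder arrays in place of a truncated backbone's feature maps and a first-order explainer's attribution maps (e.g., from GSInquire); the array shapes, the 50-dimensional PCA step, and the 2-D t-SNE mapping follow the description above, while everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)

def soxai_embedding(feature_map, attribution):
    """Equation 1: attribution-weighted average of the truncated model's H x W x N
    feature map, yielding one N-dimensional embedding per explained sample."""
    w = attribution / (attribution.sum() + 1e-12)             # normalized weights
    return np.tensordot(w, feature_map, axes=([0, 1], [0, 1]))

# Placeholder inputs: 200 samples, 7x7 feature maps with N=512 channels, and
# attribution maps already resized to 7x7 as the method prescribes.
feature_maps = rng.normal(size=(200, 7, 7, 512))
attributions = rng.uniform(size=(200, 7, 7))
embeddings = np.stack([soxai_embedding(f, a) for f, a in zip(feature_maps, attributions)])

# Second-order step: PCA to 50 dimensions, then t-SNE to 2-D for the
# dataset-level visualization in which groupings are inspected.
reduced = PCA(n_components=50).fit_transform(embeddings)
coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(reduced)
print(coords.shape)   # (200, 2)
```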
2303.15669
Unsupervised Pre-Training For Data-Efficient Text-to-Speech On Low Resource Languages
Neural text-to-speech (TTS) models can synthesize natural human speech when trained on large amounts of transcribed speech. However, collecting such large-scale transcribed data is expensive. This paper proposes an unsupervised pre-training method for a sequence-to-sequence TTS model by leveraging large untranscribed speech data. With our pre-training, we can remarkably reduce the amount of paired transcribed data required to train the model for the target downstream TTS task. The main idea is to pre-train the model to reconstruct de-warped mel-spectrograms from warped ones, which may allow the model to learn proper temporal assignment relation between input and output sequences. In addition, we propose a data augmentation method that further improves the data efficiency in fine-tuning. We empirically demonstrate the effectiveness of our proposed method in low-resource language scenarios, achieving outstanding performance compared to competing methods. The code and audio samples are available at: https://github.com/cnaigithub/SpeechDewarping
Seongyeon Park, Myungseo Song, Bohyung Kim, Tae-Hyun Oh
2023-03-28T01:26:00Z
http://arxiv.org/abs/2303.15669v1
# Unsupervised Pre-Training for Data-Efficient Text-to-Speech ###### Abstract Neural text-to-speech (TTS) models can synthesize natural human speech when trained on large amounts of transcribed speech. However, collecting such large-scale transcribed data is expensive. This paper proposes an unsupervised pre-training method for a sequence-to-sequence TTS model by leveraging large untranscribed speech data. With our pre-training, we can remarkably reduce the amount of paired transcribed data required to train the model for the target downstream TTS task. The main idea is to pre-train the model to reconstruct de-warped mel-spectrograms from warped ones, which may allow the model to learn proper temporal assignment relation between input and output sequences. In addition, we propose a data augmentation method that further improves the data efficiency in fine-tuning. We empirically demonstrate the effectiveness of our proposed method in low-resource language scenarios, achieving outstanding performance compared to competing methods. The code and audio samples are available at: [https://github.com/cnaigithu](https://github.com/cnaigithu) b/SpeechDewarping Seongyeon Park\({}^{1*}\), Myungseo Song\({}^{1*}\), Bohyung Kim\({}^{1}\) and Tae-Hyun Oh\({}^{2,3}\)\({}^{1}\)CNAI, Seoul, Korea \({}^{2}\)Dept. of EE and GSAI, POSTECH, Pohang, Korea \({}^{3}\)Institute for Convergence Research and Education in Advanced Technology, Yonsei University, Seoul, Korea Text-to-speech, data-efficiency, pre-training, unsupervised learning, data augmentation ## 1 Introduction Recent advance in deep neural networks enables us to build end-to-end text-to-speech (TTS) models [1, 2] to synthesize plausible speech. Recent research [3, 4] attributes natural and plausible speech generation of TTS models to the following capabilities to be learned: 1) _attention alignment_ between the input and output sequences, and 2) _autoregressive prediction_ of acoustic features. The supervised learning or pre-training methods [5, 6] directly inject the necessary capabilities for TTS through supervision using large-scale transcribed speech. However, such models require a large amount of transcribed speech data for training, which is not annotation efficient. Constructing such large-scale text-annotated speech is time-consuming, costly, and even infeasible for low-resource languages. To mitigate the labeled data deficiency, pre-training methods for TTS systems have been investigated [3, 4, 5, 6]. Among them, Chung et al.and Zhang et al. [3, 4] specifically designed to induce either of such capabilities in unsupervised ways by leveraging large-scale untranscribed speech data. In [3], the decoder of Tacotron [1] is pre-trained as an autoregressive speech generator. In [4], the whole model of Tacotron 2 [2] is pre-trained to predict speech from unsupervised linguistic units extracted by an external Vector-quantization Variational-Autoencoder (VQ-VAE) [7]. It would be desirable to pre-train the full TTS model without any external model. The goal of this paper is to further reduce the amount of transcribed speech required for TTS training. To this end, we propose an unsupervised pre-training method for Tacotron 2, _Speech De-warping_. By utilizing large-scale _untranscribed_ speech, our key idea is to make the TTS model learn to reconstruct original spectrograms from warped ones, _i.e._, learn to _de-warp_. This method does not require annotation, as we synthesize the warped spectrograms by a simple random temporal warping technique. 
We sample random segment boundaries and resize each segment along the temporal axis to a fixed size. Learning to de-warp as a pre-training step encourages the model to acquire both preliminary knowledge of attention alignment and autoregressive prediction. After the pre-training, we fine-tune the model using small-scale transcribed speech data of a target speaker, possibly in a low-resource language. In addition, we extend our simple random warping technique to a data augmentation method for the fine-tuning step, which further improves performance. Compared to previous studies, our pre-training method does not suffer from the model mismatch problem between pre-training and fine-tuning [3] and does not require training an external model for data preparation [4]. It is also worth noting that our data augmentation does not require any external data or pre-trained models, unlike other data augmentation approaches for TTS [8, 9, 10, 11, 12, 13]; they typically leverage a large amount of transcribed speech to generate synthetic data with pre-trained TTS models or voice conversion models. Our main contributions are summarized as follows: 1) proposing an unsupervised pre-training method for TTS models, _Speech De-warping_, 2) proposing a simple yet effective data augmentation method, _SegAug_, 3) demonstrating improved data efficiency, and 4) showing the cross-language effectiveness of our methods. ## 2 Proposed Method ### Segment-based Speech Warping Our pre-training and data augmentation methods include the procedure of warping speech. To warp the speech, we segment the mel-spectrograms of the speech along the time axis and apply a transformation per segment. To clarify the procedure, we describe the general form of the segment-based speech warping \(f\). Given a mel-spectrogram \(\mathbf{m}\) of timesteps \(N\), we warp \(\mathbf{m}\) to generate a warped mel-spectrogram \(\hat{\mathbf{m}}\), which is given by \[\hat{\mathbf{m}}=f(\mathbf{m};S,T), \tag{1}\] where \(S\) is a segmentation method, and \(T\) is a transformation. The segmentation method \(S\) segments \(\mathbf{m}\) into \(k\) different spectrogram segments \(\mathbf{m}_{1},\mathbf{m}_{2},...,\mathbf{m}_{k}\) such that \(N=\sum_{i=1}^{k}N_{i}\), where \(N_{i}\) is the number of timesteps of \(\mathbf{m}_{i}\). Then, for each segment \(\mathbf{m}_{i}\), the transformation \(T\) transforms \(\mathbf{m}_{i}\) to a warped segment \(\hat{\mathbf{m}}_{i}=T(\mathbf{m}_{i})\). We concatenate the warped segments along the time axis to generate the warped spectrogram \(\hat{\mathbf{m}}=concat(\hat{\mathbf{m}}_{1},\hat{\mathbf{m}}_{2},...,\hat{\mathbf{m}}_{k})\). Theoretically, any segmentation method and transformation can be used as \(S\) and \(T\), _e.g._, phoneme segmentation for \(S\). We present our specific configuration for \(S\) and \(T\) in the following subsections. ### Pre-training: Unsupervised Speech De-warping We aim to reduce the amount of transcribed speech required for TTS training. To this end, we propose an unsupervised pre-training method, _Speech De-warping_, which leverages large-scale _untranscribed_ speech, which is much cheaper to obtain than transcribed speech. The main idea is to pre-train a TTS model to recover original spectrograms from warped ones, which is illustrated in Figure 1. 
To generate pairs of input and expected output for unsupervised learning, we first generate warped spectrograms from the original spectrograms converted from the untranscribed speech by the segment-based speech warping (see Equation 1). Specifically, we use random segmentation as \(S\), which randomly selects \(k-1\) number of boundary timesteps; thus, \(k\) segments are obtained. The segment boundaries are independently sampled for each training step. We set \(k=\lfloor\frac{N}{6}\rfloor\) for each spectrogram. For the transformation \(T\), we use linear interpolation to make each segment have an equal unit timestep (_i.e._, length 1). We adopt Tacotron 2 [2] as our backbone TTS model and denote it as Tacotron for simplicity. Tacotron has the input format of text embedding; thus, the spectrogram inputs are not directly applicable. To feed the warped spectrograms to the model's encoder as input, we replace the text embedding look-up table of Tacotron with a simple 1D convolutional layer. It maps the mel-dimension to the embedding dimension of the Tacotron encoder during unsupervised training. Other segmentation methods can be used as \(S\) to generate segments instead of our proposed random segmentation, _e.g._, more semantically aligned segments like exact phonemes. For example, one can adopt the Montreal Forced Alignment (MFA) [14] tool to extract phoneme segments using text annotations. However, it is not applicable in our unsupervised pre-training setting, where no text annotation is available. Instead, one can use unsupervised pseudo phoneme segmentation [15]. We empirically show that these semantic segmentation methods can improve the performance of _Speech De-warping_, but the simple random segmentation is powerful enough to outperform other baselines. ### Fine-tuning: Transferring Knowledge to TTS After pre-training Tacotron with the pretext task, we fine-tune the model with the downstream TTS task of a target speaker. We use a few transcribed speeches, _i.e._, text-audio pairs, of the target speaker to fine-tune the model. Before starting fine-tuning, to feed texts to the model as in the original Tacotron, the 1D convolutional layer preceding the encoder is discarded, and a learnable text embedding look-up table for the target speaker's language is randomly initialized. With this reconfiguration, we fine-tune the TTS model for the small target speaker data. Data Augmentation.To further improve data efficiency during fine-tuning, we propose a simple data augmentation method called _SegAug_. During fine-tuning, we augment the training data by applying the segment-based speech warping (Equation 1) to the target spectrograms. Specifically, for \(S\), we use random segmentation as in the pre-training stage. For the transformation \(T\), we use linear interpolation to resize each segment of the input spectrogram along the time axis by a factor uniformly sampled from \([\frac{1}{3},\frac{5}{3}]\). The resulting warped spectrograms are used as the target spectrograms for training loss. After training the model with this augmentation, we additionally train the model for a few steps without the augmentation to adapt the model to the ground truth prosody of the target speaker, _i.e._, a cool-down step. Note that this augmentation in the fine-tuning stage is optional. While our pre-training alone empirically demonstrates favorable performance, we can further improve the performance with this augmentation during fine-tuning. 
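A minimal NumPy sketch of the warping operation described above (random segmentation followed by per-segment linear interpolation) is given below. The toy spectrogram and the interpolation routine are illustrative stand-ins, while the segment count \(k=\lfloor N/6\rfloor\) used for pre-training and the per-segment scale range \([\frac{1}{3},\frac{5}{3}]\) used for SegAug follow the text.

```python
import numpy as np

rng = np.random.default_rng(5)

def resize_segment(segment, new_len):
    """Linearly interpolate a (timesteps, n_mels) segment to new_len timesteps."""
    old_len = segment.shape[0]
    src = np.clip((np.arange(new_len) + 0.5) * old_len / new_len - 0.5, 0, old_len - 1)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, old_len - 1)
    frac = (src - lo)[:, None]
    return (1 - frac) * segment[lo] + frac * segment[hi]

def random_segments(n_timesteps, k):
    """Split [0, N) into k segments at k-1 randomly chosen boundary timesteps."""
    boundaries = np.sort(rng.choice(np.arange(1, n_timesteps), size=k - 1, replace=False))
    return np.split(np.arange(n_timesteps), boundaries)

def warp_for_pretraining(mel):
    """Speech De-warping input: every random segment is squeezed to a single frame."""
    k = mel.shape[0] // 6
    return np.concatenate([resize_segment(mel[idx], 1)
                           for idx in random_segments(mel.shape[0], k)])

def segaug_target(mel, k):
    """SegAug: each random segment is rescaled by a factor drawn from [1/3, 5/3]."""
    segments = random_segments(mel.shape[0], k)
    scales = rng.uniform(1 / 3, 5 / 3, size=len(segments))
    return np.concatenate([resize_segment(mel[idx], max(1, round(s * len(idx))))
                           for idx, s in zip(segments, scales)])

mel = rng.normal(size=(240, 80))         # toy mel-spectrogram: 240 frames, 80 mel bins
print(warp_for_pretraining(mel).shape)   # (40, 80): one frame per segment
print(segaug_target(mel, k=20).shape)    # length rescaled segment-by-segment
```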
## 3 Experiments ### Experiment Setup Dataset and Evaluation. We use the _train-clean-100_ subset of the LibriTTS [16] dataset as the untranscribed pre-training set, which consists of 47.6 hours of speech from 247 English speakers. We set Korean as a low-resource language and use the Korean Single speaker Speech (KSS) [17] dataset as our transcribed fine-tuning set. Following [3, 4], we define 24 minutes of speech as 1 shard of data. Then, we construct fine-tuning datasets by randomly sampling 0.5, 1, 2, 3, 5, 8 shards from the KSS dataset. For evaluation, we conduct both objective and subjective tests. For the objective evaluation, we use Mel-cepstral Distortion with Dynamic Time Warping (MCD-DTW) [18], simply denoted as MCD. The objective results are reported as an average over the test set containing 571 utterances (about 22.7 minutes in total). For the subjective evaluation, we conduct AB preference tests on 20 utterances randomly sampled from the test set. We ask 15 native Korean raters to choose the preferred one of two synthesized audio clips given the text, in terms of pronunciation, recognizability, and naturalness. The Griffin-Lim [20] algorithm is used as a vocoder for fast experiment cycles. **Compared methods.** We use Tacotron 2 [2] as the TTS model in our experiments. Following the naming conventions of Zhang et al. [4] with T(acotron), we denote the model trained only with the fine-tuning data, without pre-training, by Tac. We denote two recent unsupervised pre-training methods, decoder pre-training [3] and VQVAE-based pre-training [4], by T-Dec and T-VQ, respectively. The model pre-trained with our _Speech De-warping_ is denoted by T-SD, _i.e.,_ ours. As an upper bound of performance for the unsupervised pre-training methods, we employ the model pre-trained in a supervised manner with text annotations and denote it by T-Pho, as suggested by Zhang et al. [4]. In addition to the pre-training methods, we compare our data augmentation with other data augmentation methods in the fine-tuning stage. We denote additive Gaussian noise-based augmentation [21] by Gaussian, mixup-based augmentation [22] by Mixup, and SpecAugment-based augmentation by SpecAug. 
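Since MCD-DTW is the main objective metric here, a minimal sketch of one common formulation is given below; it operates on placeholder mel-cepstral sequences rather than features extracted from real audio, excludes the 0th (energy) coefficient, and uses a plain dynamic-time-warping alignment, so the constants and conventions should be treated as assumptions rather than the exact recipe of [18].

```python
import numpy as np

def dtw_path(cost):
    """Classic O(T1*T2) dynamic time warping over a frame-pair cost matrix;
    returns the list of aligned (i, j) frame index pairs."""
    t1, t2 = cost.shape
    acc = np.full((t1 + 1, t2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    path, (i, j) = [], (t1, t2)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: acc[s])
    return path[::-1]

def mcd_dtw(cep_ref, cep_syn):
    """Mel-cepstral distortion (dB) averaged over a DTW-aligned path, with the
    0th coefficient excluded (one common convention)."""
    a, b = cep_ref[:, 1:], cep_syn[:, 1:]
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    pairs = dtw_path(cost)
    k = 10.0 * np.sqrt(2.0) / np.log(10.0)
    return k * float(np.mean([cost[i, j] for i, j in pairs]))

rng = np.random.default_rng(6)
ref = rng.normal(size=(180, 25))                     # placeholder mel-cepstra (frames x coeffs)
syn = ref + rng.normal(scale=0.3, size=ref.shape)    # a slightly distorted "synthesis"
print(mcd_dtw(ref, syn))
```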
### Effects of Different Amounts of Fine-tuning Data Figure 2 presents the MCD evaluation results of the competing methods according to different amounts of fine-tuning data. Our method shows the best performance overall and is particularly better on small amounts of data. As the amount of fine-tuning data increases, the models tend to show better performance, and the performance gaps between the methods gradually decrease. ### Additional Results **Comparison to an upsampling pre-text task.** In our _Speech De-warping_, we resize segments of different lengths into the same timestep 1 to warp the input spectrograms. As a result, the alignment between the warped spectrogram and the original one becomes non-linear, which is analogous to the alignment characteristics between text and speech. We argue that _learning this monotonic yet non-linear alignment in pre-training is one of the critical factors_ of our method. To validate this argument, we introduce a control experiment with simple upsampling pre-training as a pre-text task, called _Naive_, and compare it with our _Speech De-warping_ in Table 3. Specifically, in _Naive_, instead of using the segment-wise warping to warp the spectrogram, we downsample the whole spectrogram by a single scale factor of \(\frac{1}{6}\) using linear interpolation along the time axis. Thereby, the model with _Naive_ learns a linear alignment between the uniformly downsampled spectrograms and the original spectrograms. Figure 3 \begin{table} \begin{tabular}{c c c c} \hline \multirow{2}{*}{Model pair} & \multicolumn{3}{c}{Preference (\%)} \\ \cline{2-4} & Former & Latter & Neutral \\ \hline T-VQ vs. T-SD & 15.7 & **54.7** & 29.6 \\ T-VQ vs. T-SD + SegAug & 3.7 & **76.0** & 20.3 \\ T-Pho vs. T-SD + SegAug & 23.3 & **44.0** & 32.7 \\ \hline \end{tabular} \end{table} Table 2: AB test results of our method over competitive baselines. All methods use 0.5 shards (12 minutes) of fine-tuning data. \begin{table} \begin{tabular}{c c c c c} \hline \multirow{2}{*}{ \begin{tabular}{c} Augmentation \\ in fine-tuning \\ \end{tabular} } & \multirow{2}{*}{Model} & \multicolumn{2}{c}{Supervised} & \multicolumn{2}{c}{Paired data (in shards)} \\ \cline{3-5} & & pre-training & 0.5 & 1 \\ \hline \multirow{5}{*}{No aug.} & Tac & \(\times\) & 11.98 & 12.41 \\ & T-Dec & \(\times\) & 12.07 & 12.18 \\ & T-VQ & \(\times\) & 11.11 & 10.41 \\ & T-SD (Ours) & \(\times\) & **10.79** & **10.40** \\ & T-Pho & \(\bigcirc\) & 10.40 & 10.28 \\ \hline \multirow{5}{*}{With aug.} & Tac + Gaussian & 12.59 & 12.33 \\ & Tac + Mixup & 12.06 & 12.04 \\ & Tac + SpecAug & 12.29 & 10.60 \\ & Tac + SegAug & 12.19 & 10.68 \\ \cline{1-1} \cline{2-5} & T-VQ + Gaussian & 10.63 & 10.40 \\ \cline{1-1} & T-VQ + Mixup & 11.12 & 10.48 \\ \cline{1-1} & T-VQ + SpecAug & 10.46 & 10.33 \\ \cline{1-1} & T-VQ + SegAug & 10.41 & 10.27 \\ \cline{1-1} & T-SD + SegAug (Ours) & **10.28** & **10.24** \\ \hline \end{tabular} \end{table} Table 1: MCD results of several pre-training and data augmentation methods when being fine-tuned on 0.5 or 1 shard (12 or 24 minutes) of paired speech of the target speaker. Note that T-Pho leverages text annotations in pre-training. Figure 2: MCD results according to varying amounts of paired data. The dashed line denotes the MCD of T-Pho, which is a supervised method; thus, it can be considered a near-upper-bound performance of unsupervised pre-training methods. presents examples of attention alignments learned during pre-training. 
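For contrast with the segment-wise warping sketched earlier, the _Naive_ control described above amounts to a single uniform resize along time; a minimal sketch, again assuming a `(T, n_mels)` layout and illustrative names:

```python
import numpy as np

def naive_downsample(spec: np.ndarray, factor: int = 6) -> np.ndarray:
    """Uniformly shrink the whole spectrogram along time by `factor` (here 6),
    so the alignment the model must learn back to the original is strictly linear."""
    new_len = max(1, spec.shape[0] // factor)
    t_old = np.linspace(0.0, 1.0, num=spec.shape[0])
    t_new = np.linspace(0.0, 1.0, num=new_len)
    return np.stack([np.interp(t_new, t_old, spec[:, d])
                     for d in range(spec.shape[1])], axis=1)
```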
The superior performance of T-SD compared to _Naive_ in Table 3 verifies that learning a _monotonic and non-linear alignment_ benefits our _Speech De-warping_. Note that _Naive_ performs better than Tac and T-Dec in Table 1, which demonstrates the effectiveness of learning _monotonic alignment_ through the upsampling task itself. **Effect of Heterogeneous Languages in Fine-tuning.** We investigate the effect of using different or the same languages between pre-training and fine-tuning steps, which is the main scope of this work. As described, we use English as pre-training data. For the same language scenario, called _Same_, we use the LJSpeech [24] dataset (English) for fine-tuning data. The different language scenario follows the same setup described in Sec. 3.1, called _Different_. We compare our T-SD with T-VQ to show the algorithmic behavioral differences. Table 4 shows the performance of T-SD is overall similar to T-VQ when the language between pre-training and fine-tuning is unchanged, _i.e._, _Same_. However, as shown in the _Different_ columns, T-SD is more robust against overfitting to the pre-training language than T-VQ. We conjecture this is because the burden to memorize acoustic features of the pre-training language is less for our method since some language-specific acoustic information is already given as input for de-warping. **Effect of Segmentation Methods.** We investigate the effect of different segmentation methods for the segment-based speech warping in _Speech De-warping_. In addition to the random segmentation used in our T-SD, we compare with the phoneme segmentation by using the MFA tool [14], which requires text supervision, and the pseudo phoneme segmentation by using the unsupervised phoneme segmentation model [15]. As shown in Table 4, the performance of _Speech De-warping_ can be boosted by using semantically meaningful segmentation obtained from external models. The phoneme segmentation shows the best performance when the fine-tuning language is the same as the pre-training language and the worst when the fine-tuning language is unseen during pre-training. The phoneme segmentation of a specific language induces the alignment of the warped spectrograms and original spectrograms to be very similar to the alignment between text and speech of that language in pre-training. This behavior can lead to overfitting to the specific language used in pre-training. ## 4 Conclusion We propose an unsupervised pre-training method and a data augmentation method for training TTS models with limited amounts of text-annotated speech data. Our pre-training method enables us to build a TTS system for a low-resource language by leveraging a large-scale and untranscribed speech dataset that can be easily collected. The proposed data augmentation technique can be used to further improve such data efficiency. Our comprehensive experiments show the superior performance of the proposed methods compared to various competing pre-training and data augmentation methods. We empirically demonstrate that learning a non-linear alignment during pre-training of the model is beneficial in TTS compared to learning a linear alignment. We show that our pre-training method can achieve better performance by using external models for segmentation. **Acknowledgments.** T.-H. 
Oh was partially supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub; No.2022-0-00124, Development of Artificial Intelligence Technology for Self-Improving Competency-Aware Learning Capabilities; No. 2019-0-01906, Artificial Intelligence Graduate School Program(POSTECH)). \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Paired data (in shards)} \\ \cline{2-5} & \multicolumn{2}{c}{Different} & \multicolumn{2}{c}{Same} \\ \cline{2-5} & 0.5 & 1 & 0.5 & 1 \\ \hline T-VQ & 11.11 & 10.41 & 11.85 & 10.51 \\ T-SD (Random segment) & 10.79 & 10.40 & 11.57 & 10.63 \\ Pseudo phoneme segment & 10.56 & 10.38 & 11.71 & 10.69 \\ Phoneme segment & 11.35 & 10.48 & 11.19 & 10.46 \\ \hline \hline \end{tabular} \end{table} Table 4: MCD results of T-VQ and our _Speech De-warping_ according to different segmentation methods on two fine-tuning languages. Different and Same denote that the fine-tuning language is different or the same as the pre-training language. Note that phoneme segmentation requires text supervision. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Paired data (in shards)} \\ \cline{2-3} & 0.5 & 1 \\ \hline Naive & 11.37 & 10.88 \\ T-SD & **10.79** & **10.40** \\ \hline \hline \end{tabular} \end{table} Table 3: MCD results for the ablation study comparing our de-warping to the up-sampling pre-training. _Naive_ indicates the model pre-trained with the up-sampling task. Figure 3: Examples of the learned attention alignments between input and output timesteps of the decoders of the models during pre-training. While _Naive_ induces the model to learn a linear alignment, our T-SD encourages the model to learn a non-linear alignment whose form is similar to the alignment between text and speech as in T-Pho.
2305.11781
Lifting Network Protocol Implementation to Precise Format Specification with Security Applications
Inferring protocol formats is critical for many security applications. However, existing format-inference techniques often miss many formats, because almost all of them are in a fashion of dynamic analysis and rely on a limited number of network packets to drive their analysis. If a feature is not present in the input packets, the feature will be missed in the resulting formats. We develop a novel static program analysis for format inference. It is well-known that static analysis does not rely on any input packets and can achieve high coverage by scanning every piece of code. However, for efficiency and precision, we have to address two challenges, namely path explosion and disordered path constraints. To this end, our approach uses abstract interpretation to produce a novel data structure called the abstract format graph. It delimits precise but costly operations to only small regions, thus ensuring precision and efficiency at the same time. Our inferred formats are of high coverage and precisely specify both field boundaries and semantic constraints among packet fields. Our evaluation shows that we can infer formats for a protocol in one minute with >95% precision and recall, much better than four baseline techniques. Our inferred formats can substantially enhance existing protocol fuzzers, improving the coverage by 20% to 260% and discovering 53 zero-days with 47 assigned CVEs. We also provide case studies of adopting our inferred formats in other security applications including traffic auditing and intrusion detection.
Qingkai Shi, Junyang Shao, Yapeng Ye, Mingwei Zheng, Xiangyu Zhang
2023-05-19T16:18:55Z
http://arxiv.org/abs/2305.11781v1
# Lifting Network Protocol Implementation to Precise Format Specification with Security Applications ###### Abstract. Inferring protocol formats is critical for many security applications. However, existing format-inference techniques often miss many formats, because almost all of them are in a fashion of dynamic analysis and rely on a limited number of network packets to drive their analysis. If a feature is not present in the input packets, the feature will be missed in the resulting formats. We develop a novel static program analysis for format inference. It is well-known that static analysis does not rely on any input packets and can achieve high coverage by scanning every piece of code. However, for efficiency and precision, we have to address two challenges, namely path explosion and disordered path constraints. To this end, our approach uses abstract interpretation to produce a novel data structure called the abstract format graph. It delimits precise but costly operations to only small regions, thus ensuring precision and efficiency at the same time. Our inferred formats are of high coverage and precisely specify both field boundaries and semantic constraints among packet fields. Our evaluation shows that we can infer formats for a protocol in one minute with \(>\)95% precision and recall, much better than four baseline techniques. Our inferred formats can substantially enhance existing protocol fuzzers, improving the coverage by 20% to 260% and discovering 53 zero-days with 47 assigned CVEs. We also provide case studies of adopting our inferred formats in other security applications including traffic auditing and intrusion detection.
In this paper, we focus on the third scenario and develop a static program analysis to produce formal protocol formats, including both syntax and semantics, from the source code. We call it a protocol lifting technique, belonging to _category-three_. We resort to static analysis in order to address the coverage problem in dynamic analysis. Meanwhile, high accuracy can be achieved as it adopts a path-sensitive analysis. We produce BNF-like protocol formats. While BNF (Shi et al., 2019) is a common language to describe syntax, we enhance it to include first-order-logic semantic constraints across protocol fields. As we will show in §4, lifting source code to protocol formats is highly challenging. First, the traditional data-flow analysis that aggregates analysis results of multiple program paths at their joint point yields very poor results, whereas path-sensitive analysis that considers individual paths separately is prohibitively expensive due to path explosion.
Second, the inferred formats are mostly out of order for human interpretation, which is highly undesirable as humans are important consumers of the formats in security applications. To address the challenges, we develop a novel static analysis. In particular, we develop abstract interpretation rules that can derive an abstract format graph (AFG) from the source code. AFG can be considered as a transformed control flow graph. It precludes statements that are irrelevant to packet formats. It further merges program subpaths that are irrelevant to formats so that path-sensitive analysis is not performed on the merged places. Meanwhile, it retains sufficient information such that a localized but precise path-sensitive analysis can be performed on the unmerged parts of the graph. Therefore, it mitigates the path-explosion problem without losing analysis accuracy. The AFG is further unfolded and reordered to generate BNF-style production rules and first-order-logic formulas that describe semantic constraints across protocol fields. In summary, we make the following four contributions: * We develop an abstract interpretation method that produces a novel representation, namely the abstract format graph, to facilitate format inference. * We propose a localized graph unfolding algorithm that can perform precise path-sensitive analysis in small AFG regions to significantly mitigate path explosion. * We devise a graph reordering algorithm that translates an unfolded AFG to the commonly-used BNF so that our inferred formats can be widely applied in practice. * We implement our approach as a tool, namely Netlifier, to infer packet formats from protocol parsers written in C. We evaluate it on a number of protocols from different domains. Netlifier is highly efficient as it can infer formats in one minute. Netlifier is highly precise with a high recall as its inferred formats uncover \(\geq\) 95% formats with \(\leq 5\%\) false ones. In contrast, the baselines, often miss \(>\)50% of formats and, sometimes, produce \(>\)50% false ones. We use the inferred formats to enhance grammar-based protocol fuzzers, which are improved by 20%-260% in terms of coverage and detect 53 zero-day vulnerabilities with 47 assigned CVEs. Without our formats, only 12 can be found. We also provide case studies of adopting our approach in traffic analysis and intrusion detection. Netlifier is publicly available (Kang et al., 2019). ## 2. Motivation We use an open-source protocol, namely Open Supervised Device Protocol (OSDP), to illustrate the limitations of existing methods and how our technique can facilitate various security applications. OSDP is an access control communications standard developed by the Security Industry Association to improve interoperability among access control and security products. Although it is an open-source protocol, its full specification is not publicly available. The only available document (Bustin et al., 2019) lacks many details. For instance, it includes the formats for only 7 out of the 27 supported commands. The implementation of OSDP is vulnerable. Figure 1(a) shows a code snippet related to a zero-day bug found by a protocol fuzzer enhanced by our approach. The code shows part of the packet parsing function. The variable buf is a byte array representing the OSDP packet, and we use \(B[i]\) to represent the \((i+1)\)th byte in the packet. The bug is at Line 14. It may invoke an invalid function pointer \(f\)\(>\)_ops.write_, which could lead to a crash or be exploited for DoS or ROP attacks. 
There are multiple ways to avoid such attacks. The first one is to use fuzzing techniques to find bugs in its implementation and have them fixed before exploitation. The second is to provide OSDP support in network traffic analysis and attack detection tools such that the attack can be analyzed and further prevented. However, existing methods fall short as discussed below. **Standard Network Fuzzing Can Hardly Find the Zero-day.** Different from stand-alone application fuzzers such as AFL (Kang et al., 2019), network fuzzers, such as BooFuzz (Kang et al., 2019), often operate in a client-server architecture. The server runs the target protocol implementation. The client leverages grammar-based fuzzing to generate packets as per the formats, send the packets to the server, receive responses, and generate new packets to fuzz the target. However, the effectiveness of these fuzzers hinges on the protocol formats. When the formats are not available like in our OSDP case, they quickly degenerate into traditional greybox fuzzers that arbitrarily mutate bits or bytes. Such mutated packets can hardly pass many input validity checks in the code. For example, in Figure 1(a), to expose the bug, a fuzzer has to get through the check at Line 11, which is a complex relation across multiple fields as shown in the comment. As a result, standard network fuzzers fail to find the bug when an imprecise or incomplete format of OSDP is provided. Figure 1. (a) Simplified code that parses the file-transfer command. (b) The typical workflow of protocol fuzzers with a snippet of the format inferred by Netlifier, in which the first row is a BNF production rule denoting syntax (e.g., field partitioning) and the remaining denote semantic constraints. **Lack of Support for OSDP in Wireshark and Snort.** We can also rely on network traffic analyzers, e.g., Wireshark (Wireshark, 2018), and attack detection tools, e.g., Snort (Wireshark, 2018), to ensure security. However, both Wireshark and Snort do not support OSDP. Assume Wireshark is deployed at the gateway. It detects abnormal traffic as highlighted in the red box in Figure 2(a). Note that in the diagram the x-axis is time and the y-axis denotes the amount of traffic per second. However, the traffic is not interpretable for Wireshark as OSDP is not supported. Instead, the OSDP packets are treated as raw data bytes as shown in Figure 2(b). Thus, it is hard to analyze the packet details and determine which device launches the attack. ### Limitations of Existing Techniques A way to address the aforestated defense insufficiency is to infer the protocol formats. As discussed in \(\lx@sectionsign 1\), existing techniques fall into categories one and two. Category-one infers formats from a set of network packets. For example, a recent method NetPlier (Wang et al., 2017) leverages a probabilistic analysis on network packets to determine a keyword field, i.e., the field identifying the packet type, by computing the probabilities of each byte offset. Once the keyword field is determined, it clusters packets according to the value of the field and applies multi-sequence alignment to derive message format. However, real-world network packets suffer from all sorts of distribution biases, e.g., lacking some kinds of messages due to their rare uses in practice, leading to sub-optimal results. For instance, NetPlier partitions the first four bytes of an OSDP packet as \(\mid\) 0x53 0xff \(\mid\) 0x29 \(\mid\) 0x00 \(\mid\)... 
\(\mid\), which mistakenly places the first two bytes into the same field and splits \(B[2]\) and \(B[3]\) into two different fields while \(B[2..3]\) (with the value of \(B[3]B[2]\) that represents a two-byte integer with \(B[3]\) the most significant byte and \(B[2]\) the least.) should be a single field representing the packet length. However, since most input packets are shorter than 255, \(B[3]\) is always zero while \(B[2]\) has different values in different packets. Thus, these two bytes follow different distributions in the packet samples, and NetPlier incorrectly regards them as separate fields. Moreover, NetPlier does not infer semantic constraints such as the condition at Line 11 in Figure 1(a). Such imprecise formats prevent a grammar-based protocol fuzzer from finding the zero-day (see \(\lx@sectionsign\)6) and fail to enhance Wireshark and Snort (see Appendix B). Category-two methods dynamically analyze protocol execution using a set of input packets. AutoFormat (Wang et al., 2017) is a representative. It leverages the observation that most packet parsers utilize top-down parsing such that they invoke a function to parse a substructure. Therefore, the dynamic call graph in parsing a packet discloses its structure. However, the function call hierarchy may not be sufficiently fine-grained to disclose detailed packet formats. Similar to NetPlier, it does not infer semantic constraints across fields, such as the one at Line 11 in Figure 1(a). As dynamic analysis, the inferred format may be incomplete, depending on the coverage of the input packets that drive the dynamic analysis. For instance, in our evaluation, AutoFormat misses 15 out of the 27 packet types because these types of packets do not appear in regular workloads. Some category-two techniques, e.g., Tupni (Tupni, 2018), can precisely infer semantic constraints among packet fields. However, as dynamic analyses, they suffer from the innate coverage problem. As per our results, the inference results of Tupni may miss >50% of possible formats. The problem is that if the program executions analyzed by Tupni do not cover the file-transfer command, i.e., Lines 5-15 in Figure 1(a), Tupni will not generate formats for the command. Without the formats, it is hard for a fuzzer to generate packets that can pass the validity check at Line 11 and expose the bug at Line 14. ### Our Solution and Security Applications Observing that the source code discloses substantial information about packet formats, we propose a category-three method that lifts the source code of OSDP to the protocol formats. For instance, Line 5 of the code in Figure 1(a) indicates that the command code for file transfer is 0x7c. Line 7 indicates that \(B[6]\) is a field representing the file type to transfer. Lines 8-9 load two four-byte integers to variables file_size and file_offset and thus indicate that there are two four-byte fields, one from \(B[7]\) to \(B[10]\) and the other from \(B[11]\) to \(B[14]\), meaning the size and the offset of the file to transfer, respectively. In addition to the syntactic information (e.g., field partitioning), the code also discloses the semantic relations across fields. For instance, the if-statement at Line 11 implies a cross-field constraint dictating that if \(B[4]\)\(\&\)\(4=0\), a valid packet must satisfy the constraint \(B[3]B[2]-7\geq 12\) or, otherwise, \(B[3]B[2]-8\geq 12\). We extract the above syntactic and semantic information via static analysis and produce a BNF-style production rule in Figure 1(b). 
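To see how such a lifted rule can be consumed downstream (as in the fuzzing application discussed next), here is an illustrative z3py sketch that encodes the constraints above and asks the solver for concrete byte values. It is not Netlifter's or a fuzzer's implementation; the 16-byte packet length and the position of the command byte (taken here as \(B[5]\), right after the five header bytes) are assumptions made for illustration.

```python
from z3 import BitVec, Concat, If, Solver, UGE, sat

B = [BitVec(f"B_{i}", 8) for i in range(16)]   # first 16 packet bytes (assumed size)
length = Concat(B[3], B[2])                    # two-byte length, B[3] is the most significant byte

s = Solver()
s.add(B[0] == 0x53)                            # som ("start of message"), 0x53 in the sampled packets
s.add(B[5] == 0x7c)                            # assumed command-byte position: file transfer
# cross-field constraint lifted from Line 11 of Figure 1(a)
s.add(If((B[4] & 4) == 0, UGE(length - 7, 12), UGE(length - 8, 12)))

if s.check() == sat:
    m = s.model()
    seed = bytes(m.eval(b, model_completion=True).as_long() for b in B)
    print(seed.hex())                          # a well-formed seed packet for a fuzzer
```

Any satisfying model is a packet that passes the validity check at Line 11, which is what allows a grammar-based fuzzer to reach the vulnerable code at Line 14.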
Our lifted formats are both precise and of high coverage. In terms of precision, we precisely identify each field and its name as shown in Figure 1(b) and, meanwhile, also specify the field constraints as first-order-logic formulas. In terms of coverage, since we do not rely on any input packets like category-one and category-two techniques, any format included in the source code will be inferred. We can use the lifted formats to support many applications. **Application 1: Finding Zero-days by Network Fuzzing.** We leverage a theorem prover such as Z3 (Zhou et al., 2017) to produce valuations for individual packet fields, such as the \(B[i]\)'s in Figure 1(b), which satisfy the semantic constraints. The generated packets can pass the check at Line 11 in Figure 1(a), thereby enabling the discovery of the CVE at Line 14. In addition, our inferred packet format is of high coverage and allows the fuzzer to generate diverse packets to Figure 2. Network traffic auditing via Wireshark. improve test coverage. In particular, the vulnerable code can only be reached when the packet is a file-transfer command, i.e., the 0x7c branch of the switch statement (Line 5). If the format is not covered, the chance that a fuzzer can mutate a packet of different types to a valid file-transfer command is very slim. Our evaluation shows that the lifted formats can improve the coverage of fuzzing by 20-260% and allow us to detect 41 more zero-days, compared to using the inferred specifications by category-one and category-two methods, which can only detect 12 zero-days. Note that we do not claim direct contributions to fuzzing. Instead, our approach is orthogonal to existing fuzzing methods that rely on packet formats. **Application 2: Network Traffic Auditing.** Wireshark is the foremost protocol analyzer to ensure network security for hundreds of protocols (Han et al., 2017). Supporting a new protocol in Wireshark can be achieved by providing an extension, which is usually a library to parse protocol packets. We develop an extension generator that takes a lifted format as input and generates the corresponding Wireshark extension. Figure 2(c) shows that with the generated extension, Wireshark can look inside an OSDP packet sampled from the abnormal traffic. Our lifted format provides not only precise packet syntax but also informative field names extracted from variable names. With Wireshark, we observe that all packets during the abnormal traffic have the field osdp.address=35 and osdp.cmd=0x7c, indicating they are all from a device with the id #35 via the file-transfer commands. As will be shown in our evaluation, category-two approaches miss over 50% of possible fields. Extensions built from these incomplete formats would render Wireshark failing to process many received packets. In addition, they can hardly provide field names that are as informative as the ones we can provide. **Application 3: Network Intrusion Detection.** Due to the space limit, we put discussions of this application in Appendix B. **Remark.** While our inferred formats have high precision and recall close to 100%, like all previous works, there may still be missing or wrong formats, due to the inherent limitations of static program analysis (see SS9). However, the inferred format still matters in practice, because many downstream applications do not require perfect formats. 
For example, although format inaccuracies cause degraded efficacy improvement in fuzzing, the performance may still be far better than without any formats or having low-quality formats. This is similar for network traffic auditing and intrusion detection. ## 3. Background and Overview This section provides some background knowledge of our approach and overviews the lifting procedure, in order to facilitate the later discussion with more complexity. **Protocol Format vs. Protocol Specification.** Generally, the specification of a protocol consists of protocol formats and protocol state machines (Zhu et al., 2017). Protocol formats are often specified using a grammar in BNF, which specifies how a network packet, i.e., a bit or byte stream, can be dissected into multiple segments, i.e., fields, and specifies the semantic constraints the fields need to satisfy. For example, Figure 3(b) shows a typical BNF-style format of OSDP packets. The productions specify how an OSDP packet can be divided into multiple fields such as _som_, _address_, and so on. Like many previous works (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2019; Wang et al. production rule for each non-terminal symbol encountered (Han et al., 2017). Given the parsing function of a protocol, e.g., _parse(char\({}^{*}\) buf, int len) {...}_, the user annotates the parameters, i.e., the buffer variable, _buf_, that contains the network packet to parse, and the integer variable, _len_, which stands for the packet length. Except for the two annotations, Netlifter is fully automated. The output of Netlifter is the protocol format defined below. The format is similar to common BNF so that it aligns well with existing standards in formally describing protocol formats. Definition 1 (Protocol Format).: The format includes syntax and semantics. The syntax is denoted by production rules in BNF, where each rule is a sequence of consecutive bytes. Semantics is described by non-recursive first-order-logic (FOL) constraints with two special functions, name(...) and repeat(...), which are explained in the example below. The format satisfies three properties: 1. Each terminal symbol in the grammar is either \(B[i]\) or \(B[i..j]\), which is a bit-vector standing for the \((i+1)\)th byte or a range of bytes from \(B[i]\) to \(B[j]\); 2. Each production rule is associated with a set of assertions that assert FOL constraints over the terminals in this rule. The constraints must not conflict with each other. 3. Each assertion contains only a single atomic constraint that does not contain any connectives \(\wedge\) or \(\vee\). Example (Input of Netlifter).Figure 3 extends the example in Figure 1. It shows a simplified OSDP parser starting from Line 15. The packet is a byte array stored in _buf_ and the array length is _blen_. The user needs to annotate the two variables. The parser with the annotations is the input of Netlifter. In Lines 16-21, the parser loads the first five bytes into the variables _som_, _address_, _len_, and _ctrl_, where _som_ stands for "start of message" and is used to identify OSDP packets. The remaining code invokes the function _decode_command_ to parse an OSDP command as explained in Figure 1. Example (Output of Netlifter).Figure 3(b) shows a typical BNF-style format of OSDP, which is often manually constructed. The output format of Netlifter is shown in Figure 3(c), which closely resembles the manually constructed BNF in (b). 
The first rule in (c) resembles the first rule in (b), where we correctly determine that the first two bytes, \(B[0]\) and \(B[1]\), are two separate fields, corresponding to the fields _som_ and _address_ in (b). Similarly, the second and third rules in (c) resemble the two CMD rules in (b), where, besides single-byte fields, we also correctly determine multi-byte fields including \(B[2..3]\), \(B[7..10]\), and \(B[11..14]\), corresponding to the fields _length_, _filesize_, and _fileoffset_ in (b). The output format also associates each rule with two kinds of assertions. One kind, such as Line 25 and Lines 32-33, specifies the semantic constraints among packet fields. They are inferred from branching conditions in the code. When we infer a constraint including a value like \(B[3]B[2]\), it indicates a two-byte field with \(B[3]\) the most significant byte and \(B[2]\) the least. In other words, in addition to _field boundaries_, our format also expresses the _endinannes_, whereas the standard BNF cannot. Netlifter also describes semantic constraints not expressible in standard BNF such as the one in Line 32. All constraints have the _bit-level precision_. For instance, the expression \(B[4]\) & 4 in Line 32 computes the third bit of \(B[4]\). The other kind, such as Lines 26-27 and Lines 34-39, specifies the field names, which provide _high-level field semantics_ for us to understand the format. In addition to those in the example, we also produce many other names such as name(\(B[i..j]\)) = 'times-tamp'/'checksum' to indicate a timestamp/checksum field. As explained later, we infer such high-level semantics using the names of program variables or library APIs. In addition to the example above, we elaborate on several places where our format is more expressive than the standard BNF. Direction and Variable-Length Fields.A direction field locates another field and is often a length field, whose value encodes the variable length of a target field (Stein **Challenge 1:** Insufficiency of Traditional Static Analysis. Traditional static analysis is path-insensitive and merges analysis results from different paths at their joint point to achieve scalability. As introduced before, such merging yields over-approximation and incurs low precision. For example, the abstract values of _ctrl_ from the two branches at Lines 4 and 5, respectively, are merged at Line 6, yielding \(ctrl=B[4]+1\lor\textit{ctrl}=B[4]-1\). As such, we lose the correlation between \(B[4]\) and \(B[5]\) as the precise value of _ctrl_ should depend on the value of \(B[5]\) due to the if-statement at Line 3. In consequence, the resulting format will lose the correlations between \(B[4]\) and \(B[5]\), while in the ideal format shown in Figure 4(e), the production rules \(L_{4}\) and \(L_{5}\) include such correlation, i.e., \(B[4]+1=0\Leftrightarrow B[5]=0\) and \(B[4]-1=0\Leftrightarrow B[5]\neq 0\). A typical solution is to use a path-sensitive static analysis that separately analyzes individual paths and does not merge results from multiple branches. Lifting is thus reduced to enumerating paths, each constituting a production rule. In our example, there are four paths that denote valid packets, i.e., (P1)... \(\to 4\to...\to 14\to...\), (P2)... \(\to 5\to...\to 14\to...\), (P3)... \(\to 4\to...\to 15\to...\), and (P4)... \(\to 5\to...\to 15\to...\). Thus, the lifted format has four rules, each of which corresponds to a path constraint. For example, the format for the path P1 is shown below. 
\[\begin{array}{|c Generally, a vertex of AFG is an atomic constraint that does not contain any connectives \(\wedge\) or \(\vee\), and an edge means logical conjunction. In the definition, the first rule returns a single vertex for any atomic constraint. The second creates a graph for conjunction by connecting all _exit vertices_ (vertices without outgoing edges) of \(\operatorname{AFG}(\rho_{1})\) to all _entry vertices_ (vertices without incoming edges) of \(\operatorname{AFG}(\rho_{2})\). The third creates a graph for disjunction by simply creating a union of the two graphs, which contains the vertices and edges from both. The following lemma states the equivalence relation between the graph \(\operatorname{AFG}(\rho)\) and the constraint \(\rho\). In other words, \(\operatorname{AFG}(\rho)\) is an equivalent graphic representation of the constraint \(\rho\). We put the proofs of all our lemmas in Appendix C. Lemma 5.1 ().: _Given \(\operatorname{AFG}(\rho)\) with \(n\) paths, we have \(\rho\equiv\bigvee_{i=1}^{n}\rho_{i}\) where each \(\rho_{i}\) equals the conjunction of all constraints in an AFG path._ **Example**.: Consider the constraint \(\rho\equiv(a\lor b)\wedge c\wedge(d\lor e)\). By definition, \(\operatorname{AFG}(\rho)\) is a directed graph with five nodes, which respectively correspond to the five atomic constraints \(a\), \(b\), \(c\), \(d\), and \(e\). The AFG also contains four edges respectively from \(a\) to \(c\), from \(b\) to \(c\), from \(c\) to \(d\), and from \(c\) to \(e\). The AFG has four paths, respectively representing four constraints, \(\rho_{1}=a\wedge c\wedge d\), \(\rho_{2}=a\wedge c\wedge e\), \(\rho_{3}=b\wedge c\wedge d\), and \(\rho_{4}=b\wedge c\wedge e\). Apparently, we have \(\rho_{1}\lor\rho_{2}\lor\rho_{3}\lor\rho_{4}\equiv\rho\). Thus, we say the \(\operatorname{AFG}(\rho)\) is an equivalent graphic representation of the constraint \(\rho\). ### Abstract Interpretation The static analysis derives an AFG denoting path constraints related to the packet format. It features a new selection operator at the joint point of branches, which enables localized path-sensitive analysis. **Abstract Language.** For clarity, we use a C-like language in Figure 5 to model our target programs. A program in the language has an entry function that parses an input network packet, _pkt_, which is a byte array. The parsing function often has a parameter specifying the packet length, _len_, to avoid out-of-bounds access during parsing. The language contains assignments, binary operations, statements that read bytes from the packet, assertions, branching, and sequencing. Each branching statement is labeled by a unique identifier \(\kappa\). Although we do not include function calls or returns for discussion simplicity, our system is inter-procedural as a call statement is equivalent to a list of assignments from the actual parameters to the formals, and a return statement is an assignment from the return value to its receiver. The language includes statements reading bytes from the packet but does not include statements that store values into the packet. This is because, for parsing purposes, the input packet is often read-only. Note that the abstract language serves for demonstrating how we address the challenges discussed in SS4. Thus, for simplicity, we abstract away some common program structures, e.g., pointers and loops, from the language. Dealing with these structures is not our technical contribution. 
In SS5.5, we discuss how we handle them in our implementation. **Abstract Domain.** An abstract value of a variable represents all possible concrete values that may be assigned to the variable during program execution. The abstract domain specifies the limited forms of an abstract value. In our analysis, the abstract value of a variable \(v\) is denoted as \(\tilde{v}\) and defined in Figure 6. An abstract value could be a constant or a special value _length_ that represents the packet length. The \((\tilde{v}+1)\)th byte of the input packet is \(B[\tilde{v}]\). We introduce a new selection operator \(\Theta_{\kappa}\) such that \(v=\Theta_{\kappa}(v_{1},v_{2})\), which means that when the if-statement at \(\kappa\) takes the true branch, we have \(v=v_{1}\), \(v=v_{2}\) otherwise. One may find that the operator \(\Theta_{\kappa}\) is similar to the operator \(\phi\) in the classic SSA code form (Sang and Ghahramani, 2015) because both of them merge values from multiple branches. We note that \(\Theta_{\kappa}\) differs from \(\phi\) in two aspects. First, in the SSA form, \(v=\phi(v_{1},v_{2})\) is always placed at the end of a branching statement, whereas in our analysis \(v=\Theta_{\kappa_{i}}(v_{1},v_{2})\) represents an abstract value of the variable \(v\) and is propagated to many other places where the variable \(v\) is referenced. Second, since \(v=\Theta_{\kappa}(v_{1},v_{2})\) may be used at any place in the code, we use the subscript \(\kappa\) to record the branching statement where it originates. This is a critical design for the next step, i.e., the localized graph unfolding, as illustrated later. An abstract value can also be a first-order logic formula over other abstract values. To ease the explanation, we only support binary formulas. Figure 6 lists the rules that normalize expressions over abstract values. Rule (1) states that we do not need a \(\Theta_{\kappa}\) operator if we merge two equivalent values. Rules (2-3) state that any operation with a \(\Theta_{\kappa}\)-merged value is equivalent to operating on each value merged by the \(\Theta_{\kappa}\) operator. Rules (4-5) simplify nested \(\Theta_{\kappa}\) operators. **Abstract Semantics.** The abstract semantics describe how we analyze a given protocol parser. They are described as transfer functions of program statements. Each transfer function updates the program's abstract state, which is a pair \((\mathbb{E},\mathbb{G})\). Given the set \(V\) of program variables and the set \(\tilde{V}\) of abstract values, \(\mathbb{E}:V\mapsto\tilde{V}\) maps a variable to its abstract value. We use \(\mathbb{E}[v\mapsto\tilde{v}]\) to denote updating the abstract value of the variable \(v\) to \(\tilde{v}\). \(\mathbb{G}\) is the output AFG. Since AFG is an equivalent form of path constraint, we directly create AFG without computing the path constraint first. Figure 7 lists the transfer functions as inference rules. In each rule, the part above the horizontal line includes a set of assumptions and, under these assumptions, the bottom part describes the abstract states before and after a statement \(S\), in the form of \(\mathbb{E}\), \(\mathbb{G}\vdash S:\mathbb{B}^{\prime},\mathbb{G}^{\prime}\). Initially, we assign the special abstract value _length_ to the variable Figure 5. Language of target programs. Figure 6. Abstract values. len_, which represents the length of input network packet. The rules for assignment, binary operation, read operation, and assertion are straightforward. 
For instance, in the rule for assertions, the abstract value \(\tilde{v}_{1}\) represents a constraint that must be satisfied. Therefore, we append the graph AFG\((\tilde{v}_{1})\) to the graph \(\mathbb{G}\). This is equivalent to appending the constraint \(\tilde{v}_{1}\) to the current path constraint. The sequencing rule states that, for two consecutive statements, we analyze them in order, using the postcondition of the first statement as the precondition for the second. In the branching rule, \(\mathbb{G}\) denotes the path constraint before the branching statement. \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\) represent the branching condition and its negation. Thus, \(\mathbb{G}\bowtie\mathbb{G}_{1}\) and \(\mathbb{G}\bowtie\mathbb{G}_{2}\) represent the initial path constraints before the two branches. After analyzing the two branches, the resulting AFGs are assumed to be \(\mathbb{G}\bowtie\mathbb{G}_{\kappa}\) and \(\mathbb{G}\bowtie\mathbb{G}_{\neg\kappa}\). The branching rule states that, under these assumptions and after an \(\mathbf{if}_{\kappa}\)-statement, we merge the abstract states from both branches. The procedure merge merges abstract values of the same variable via the \(\Theta_{\kappa}\) operator. Graph merging is straightforward based on the definition of AFG, which is equivalent to merging path constraints of the two branches with the common prefix pulled out. Our merging is different from the value merging in traditional analyses due to the use of the selection operator. On one hand, merging allows achieving scalability as the number of values is no longer exponential of the number of statements. On the other hand, the selectors in abstract values can be unfolded to support path-sensitive analysis if needed. **Packet Fields.** The abstract interpretation builds the AFG to represent the path constraints. As discussed in SS3, from these constraints, it is direct to infer the endianness, field boundaries, and direction fields. For instance, if multiple consecutive bytes, e.g., \(B[0]\) and \(B[1]\) in Figure 4, belong to a single field, the field value, e.g., \(B[0]B[1]\), will be computed and occur in the path constraint. **High-Level Field Semantics.** We also extend our analysis to infer high-level field semantics, i.e., field names, using rich source code information. Such high-level semantics can help better understand a format, e.g., identifying checksum fields and distinguishing keywords and delimiters (both of which are constant fields). As illustrated in Figure 4, we can name a field (via some variable name) by adding extra path constraints. Formally, given the AFG \(\mathbb{G}\) and a formula over a field \(\bar{B}[i..j]\), denoted as \(\mathfrak{f}(\bar{B}[i..j])\), we name the field by \(\mathbb{G}\bowtie\text{AFG}(\text{name}(\bar{B}[i..j])=\text{'var'})\) if there is a statement assigning \(\mathfrak{f}(\bar{B}[i..j])\) to the variable var. In addition to variable names, we also leverage system APIs used in the code. For instance, if a field \(B[i..j]\) is used in the system API, diftitime(), it is likely to be a timestamp field. In our experience, this method helps us identify many special fields via names such as 'length','version', 'checksum', 'timestamp', etc. In our current implementation, we handle all standard C APIs. If there are multiple options for naming a field, we prefer the names inferred by system APIs because software developers may not be careful to name program variables. If there are still multiple options, we simply keep the first. 
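Before moving to the example, a toy Python sketch of the AFG construction operators used throughout (vertex = atomic constraint; \(\bowtie\) for conjunction connects all exit vertices of the first graph to all entry vertices of the second; \(\cup\) for disjunction takes the union) may make the graph manipulations above more concrete. It is an illustration only, not Netlifter's data structure.

```python
from dataclasses import dataclass, field

@dataclass
class AFG:
    vertices: set = field(default_factory=set)   # atomic constraints (strings here)
    edges: set = field(default_factory=set)      # (u, v) pairs meaning "u, then v"

    @staticmethod
    def atom(constraint: str) -> "AFG":
        return AFG({constraint}, set())

    def entries(self):
        return {v for v in self.vertices if not any(e[1] == v for e in self.edges)}

    def exits(self):
        return {v for v in self.vertices if not any(e[0] == v for e in self.edges)}

    def join(self, other: "AFG") -> "AFG":       # conjunction: G1 join G2
        new_edges = {(u, v) for u in self.exits() for v in other.entries()}
        return AFG(self.vertices | other.vertices,
                   self.edges | other.edges | new_edges)

    def union(self, other: "AFG") -> "AFG":      # disjunction: G1 union G2
        return AFG(self.vertices | other.vertices, self.edges | other.edges)

# the example from the text: rho = (a or b) and c and (d or e)
g = (AFG.atom("a").union(AFG.atom("b"))
         .join(AFG.atom("c"))
         .join(AFG.atom("d").union(AFG.atom("e"))))
print(len(g.vertices), len(g.edges))             # 5 vertices, 4 edges
```

Enumerating the root-to-exit paths of `g` and conjoining the constraints along each path yields the four disjuncts from the example above, as stated by Lemma 5.1.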
**Example.** Given the code in Figure 4(a), the abstract interpretation yields the AFG in (b) from top to bottom. After Line 5, we merge the two paths forked from Line 3 and get the path constraint: \(\rho\equiv\text{name}(B[4])=\text{'ctrl'}\wedge(B[5]=0\lor B[5]\neq 0)\). By the branching rule, we do not compute the path constraint but directly create the equivalent AFG, i.e., the first two rows in Figure 4(b). We name the byte \(B[4]\)'ctrl' because the arithmetic results of \(B[4]\) are assigned to the variable ctrl in both branches. The constraint \(B[5]=0\lor B[5]\neq 0\) merges the branching constraints. Meanwhile, the abstract store is updated such that \(\text{ctrl}=\Theta_{\kappa_{3}}(B[4]+1,B[4]-1)\). At Line 7, since the false branch aborts, we only consider the true branch, for which we add \(B[6]>0\land\Theta_{\kappa_{3}}(B[4]+1,B[4]-1)=0\) to the constraint \(\rho\). This is equivalent to adding the third and fourth rows in Figure 4(b). At Lines 10-11, we add the constraint \(B[0]B[1]=10\land\text{name}(B[0..1])=\text{'code'}\). This is equivalent to adding the fifth and sixth rows in Figure 4(b). We regard \(B[0]\) and \(B[1]\) as a single field as they are used in a single value \(B[0]B[1]\). Similarly, after Line 15, we merge the paths forked from Line 11 as the constraint \((B[3]=0\lor B[3]\neq 0)\land\text{name}(B[2])=\text{'state'}\) and append it to the path constraint \(\rho\). This is equivalent to adding the last row in Figure 4(b). After Line 15, we update the value state \(=\Theta_{\kappa_{3}}(B[2]+1,B[2]-1)\). In this example, the variable state is simply printed at Line 16 and never used in any if-statements or assertions. Hence, the merged value of state is abstracted away from the final constraint. Observe that the size of AFG is linear size with the number of statements. This is critical to scalability. **Lemma 5.2**.: _Given a program in the language defined in Figure 5, the AFG produced by the abstract interpretation is sound and complete._ ### Localized Graph Unfolding Recall that path sensitivity is needed in localized regions during lifting (Challenge 1 in SS4). Specifically, a code region that requires path sensitivity is identified as follows. _If a \(\Theta_{\kappa}\)-merged value is later used in some path condition \(\kappa^{\prime}\), the individual combinations of branch outcomes of \(\kappa\) and \(\kappa^{\prime}\) need to be analyzed separately._ That is, path sensitivity is needed within the code regions of \(\kappa\) and \(\kappa^{\prime}\). On the other hand, many \(\Theta_{\kappa}\)-merged values are not used in any later conditionals, the paths within the code region of \(\kappa\) do not need to be enumerated. That is, path sensitivity is not necessary. Figure 7. Inference rules and auxiliary procedure. Specifically, given an AFG created by the abstract interpretation, we eliminate all \(\Theta_{\kappa}\)-merged values by a localized graph unfolding algorithm shown in Algorithm 1. Assume that the AFG to unfold contains a list of \(\Theta_{\kappa}\) operators, e.g., \(\Theta_{\kappa_{0}}\), \(\Theta_{\kappa_{1}}\), and \(\Theta_{\kappa_{2}}\). The algorithm eliminates \(\Theta_{\kappa_{i}}\) one by one. For each \(\Theta_{\kappa_{i}}\), it works in two steps \(-\) slicing (Lines 3-7) and unfolding (Lines 8-11). To ease the explanation, we use Figure 8 for illustration. 
In Figure 8(a), without loss of generality, assume that we are unfolding \(\Theta_{\kappa_{i}}\) in the AFG and that only the constraints \(\rho_{1}\) and \(\rho_{2}\) contain \(\Theta_{\kappa_{i}}\)-merged values. The exiting vertices of \(\mathbb{G}_{\kappa_{i}}\) and \(\mathbb{G}_{\neg\kappa_{i}}\) are shown in the figure. **Slicing**. This step delimits the next unfolding step to a local region in AFG. First, we find all exiting vertices of \(\mathbb{G}_{\kappa_{i}}\) and \(\mathbb{G}_{\neg\kappa_{i}}\). We then perform a forward graph traversal (e.g., depth-first search) from the exiting vertices. Denote the subgraph visited during the traversal as \(\mathbb{G}_{\text{forward}}\). Second, we identify all vertices containing \(\Theta_{\kappa_{i}}\)-merged values, e.g., \(\rho_{1}\) and \(\rho_{2}\) in Figure 8(a). A backward graph traversal from them yields a subgraph denoted as \(\mathbb{G}_{\text{backward}}\). The overlapping part of \(\mathbb{G}_{\text{forward}}\) and \(\mathbb{G}_{\text{backward}}\), e.g., the yellow part in Figure 8(a), is the graph slice we will perform unfolding, denoted as \(\mathbb{G}_{\text{slice}}\). **Unfolding.** As illustrated in Figure 8(b), we copy the subgraph to unfold, obtaining \(\mathbb{G}_{\text{slice}}\) and \(\mathbb{G}_{\text{slice}}^{\prime}\). The copy \(\mathbb{G}_{\text{slice}}\) is connected to \(\mathbb{G}_{\kappa_{i}}\), and by the definition of the merging operator, all the \(\Theta_{\kappa_{i}}\)-merged values are replaced by its first operand. Similarly, the other copy \(\mathbb{G}_{\text{slice}}^{\prime}\) is connected to \(\mathbb{G}_{\neg\kappa_{i}}\), and all the \(\Theta_{\kappa_{i}}\)-merged values are replaced by its second operand. Since the subgraphs to unfold are limited in small local regions in practice, we significantly mitigate the path-explosion problem, which is sufficient to make our approach scalable. Note that we do not claim to have a theoretical bound on the size of subgraphs that need to be unfolded, as path explosion is still an open problem and cannot be completely addressed in theory, similar to all previous path-sensitive analyzers. ``` 1Procedure\(\text{unfolding}(\mathbb{G})\) 2foreach operator \(\Theta_{\kappa_{i}}\) in \(\mathbb{G}\)do 3\(\mathbb{G}_{\text{forward}}\leftarrow\) subgraph reachable from but excluding \(\mathbb{G}_{\kappa_{i}}\) and \(\mathbb{G}_{\neg\kappa_{i}}\); 4\(\mathbb{V}\leftarrow\) all vertices including \(\Theta_{\kappa_{i}}\) expressions; 5 6\(\mathbb{G}_{\text{backward}}\leftarrow\) subgraph that can reach any vertex in \(\mathbb{V}\), including \(\mathbb{V}\); 7\(\mathbb{G}_{\text{slice}}\leftarrow\) overlapping subgraph of \(\mathbb{G}_{\text{backward}}\) and \(\mathbb{G}_{\text{backward}}\); 8\(\mathbb{G}_{\text{slice}}\leftarrow\) a copy of \(\mathbb{G}_{\text{slice}}\), including all its incoming/outgoing edges; 9\(\mathbb{G}_{\text{slice}}\leftarrow\) for\(\mathbb{G}_{\neg\kappa_{i}}\); 10 replace all \(\Theta_{\kappa_{i}}\) expressions in \(\mathbb{G}_{\text{slice}}\) with their first operands; 11 12\(\mathbb{G}_{\text{slice}}\leftarrow\) from \(\mathbb{G}_{\kappa_{i}}\); 13 replace all \(\Theta_{\kappa_{i}}\) expressions in \(\mathbb{G}_{\text{slice}}^{\prime}\) with their second operands; ``` **Algorithm 1**Unfolding. **Lemma 5.3**.: _The unfolded AFG does not contain \(\Theta_{\kappa}\)-merged values and represents an equivalent constraint as the original AFG._ **Example (continued)**. 
**Example (continued)**. In Figure 4(b), the value merged by \(\Theta_{\kappa_{3}}\) indicates that the branches forked at Line 3 need a path-sensitive analysis and delimits the analysis to the local region colored gray. To distinguish the two branches, the gray region in (b) is unfolded to two disjoint paths in (c), which eliminates the \(\Theta\)-merged values and makes the two semantic relations among \(B[4]\), \(B[5]\), and \(B[6]\) explicit: \(B[5]=0\wedge B[6]>0\Leftrightarrow B[4]+1=0\); and \(B[5]\neq 0\wedge B[6]>0\Leftrightarrow B[4]-1=0\). In contrast, the \(\Theta_{\kappa_{3}}\) value in the variable _state_ is never used in any conditional, suggesting that we do not need to unfold the region headed by Line 13. ### Localized Graph Reordering As illustrated in Figure 4, bytes in a packet may not appear in order along a program path, e.g., \(B[5]\) may precede \(B[2]\). To produce legitimate BNF productions, we need to reorder them to obtain an ordered AFG. Transforming an ordered AFG into BNF productions is then straightforward. We first define the concepts of _vertical decomposition_ (VD) and _horizontal decomposition_ (HD). **Definition 2** (VD).: Given an unfolded AFG \(\mathbb{G}=\mathbb{G}_{1}\bowtie\mathbb{G}_{2}\bowtie\ldots\bowtie\mathbb{G}_{n}\), namely, the exit vertices in \(\mathbb{G}_{i}\) are fully connected to the entry vertices in \(\mathbb{G}_{i+1}\), its vertical decomposition is the sequence of subgraphs, denoted as \(\text{VD}(\mathbb{G})=\mathbb{G}_{1}\mathbb{G}_{2}\ldots\mathbb{G}_{n}\). **Definition 3** (HD).: Given an unfolded AFG \(\mathbb{G}\), its horizontal decomposition is a set of subgraphs, each of which is rooted at a single entry vertex in the AFG and includes the subgraph reachable from the entry vertex, denoted as \(\text{HD}(\mathbb{G})=\mathbb{G}_{1}|\mathbb{G}_{2}|\ldots|\mathbb{G}_{n}\). Figure 10(a) shows an example of vertical decomposition, where the graph is decomposed into two parts, one containing the vertices \(\rho_{1}\) and \(\rho_{2}\), and the other containing the vertices \(\rho_{3}\), \(\rho_{4}\), and \(\rho_{5}\). The graph in Figure 10(b) cannot be vertically decomposed because the upper two vertices are not fully connected to the other three. Instead, it can be horizontally decomposed into two parts, one containing the vertices \(\rho_{1}\), \(\rho_{3}\), \(\rho_{4}\), and \(\rho_{5}\), and the other containing the vertices \(\rho_{2}\), \(\rho_{3}^{\prime}\), and \(\rho_{5}^{\prime}\). Here, \(\rho_{3}^{\prime}\) and \(\rho_{5}^{\prime}\) are copies of \(\rho_{3}\) and \(\rho_{5}\), respectively. As illustrated in the example and stated in Lemma 5.4, the AFGs before and after decomposition contain the same number of paths, and the constraint represented by each path is not changed. **Lemma 5.4**.: _AFGs before and after decomposition are equivalent in representing the path constraint._ The decomposition has three properties. First, the horizontal decomposition is more expensive than the vertical one as it may copy vertices. Hence, Algorithm 2 always tries the vertical decomposition first. Second, as stated in Lemma 5.5, the decomposition can be recursively performed on a graph and its subgraphs. For instance, after the horizontal decomposition in Figure 10(b), we can further apply vertical decomposition to each subgraph. This property allows us to describe our reordering approach as a recursive process in Algorithm 2. Third, the vertical decomposition follows the commutative law stated in Lemma 5.6.

Figure 8. Example of unfolding an AFG.

For instance,
after switching \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\) in Figure 10(a), we get the graph in Figure 10(c), which is equivalent to the original graph because they represent equivalent path constraints: \((\rho_{1}\vee\rho_{2})\wedge(\rho_{3}\vee\rho_{4}\vee\rho_{5})\) and \((\rho_{3}\vee\rho_{4}\vee\rho_{5})\wedge(\rho_{1}\vee\rho_{2})\). Such a commutative property allows us to reorder vertices in Algorithm 2. **Lemma 5.5**.: _If an AFG with multiple vertices cannot be vertically decomposed, each subgraph after horizontal decomposition contains a single vertex or can be vertically decomposed._ **Lemma 5.6**.: _Switching the position of subgraphs in VD yields an AFG that represents a constraint equivalent to that of the original AFG._ Algorithm 2 first tries to vertically decompose the input AFG (Line 2). If this fails, Lemma 5.5 allows us to horizontally decompose it into subgraphs and recursively order each subgraph (Lines 14-15). If VD succeeds in splitting the AFG into a list of subgraphs, these subgraphs are reordered by byte indices (Lines 3-5). Figure 9(a) and Figure 9(b) illustrate this step. In Figure 9(a), the AFG is vertically decomposed into five subgraphs, \(\mathbb{G}_{a}\), \(\mathbb{G}_{b}\), \(\mathbb{G}_{c}\), \(\mathbb{G}_{d}\), and \(\mathbb{G}_{e}\), which are respectively put in five dashed boxes. The minimum byte indices of the subgraphs are 5, 4, 3, 2, and 0. Figure 9(b) shows the AFG after reordering the subgraphs based on the minimum byte indices. After reordering, since \(\mathbb{G}_{a}\) and \(\mathbb{G}_{b}\) contain overlapping byte indices1, they are merged into a single subgraph, i.e., \(\mathbb{G}_{4}\) in Figure 9(b). In this example, the subgraphs after reordering and merging are put in the array \(\mathcal{A}=[\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{3},\mathbb{G}_{4}]\). These subgraphs are ordered and contain mutually exclusive byte indices. Footnote 1: The range of byte indices in \(\mathbb{G}_{a}\) is \([5,5]\), and the range in \(\mathbb{G}_{b}\) is \([4,6]\). The former is a subset of the latter. Thus, they overlap each other. We then recursively reorder the subgraphs in \(\mathcal{A}\) (Lines 6-13). In particular, for a merged subgraph, e.g., \(\mathbb{G}_{4}=\mathbb{G}_{a}\bowtie\mathbb{G}_{b}\) in the example, vertical decomposition has already been tried and does not work, as neither \(\mathbb{G}_{a}\,\mathbb{G}_{b}\) nor \(\mathbb{G}_{b}\,\mathbb{G}_{a}\) respects the stream order, so we turn to horizontal decomposition (Lines 8-11). Lines 8-9 ensure the feasibility of horizontal decomposition and Line 10 performs the decomposition. Figure 9(c) illustrates this step, where the subgraph \(\mathbb{G}_{4}\) is horizontally decomposed into the white and the gray parts. Each part is then recursively reordered (Line 11). Figure 9(d) shows that the white and the gray parts are recursively split by vertical decomposition and reordered as indicated by the arrows, yielding the ordered AFG in Figure 9(e). Lemma 5.7 states the correctness of Algorithm 2.

Figure 10. Decomposition for graph reordering.

Figure 9. Example of Algorithm 2.

**Lemma 5.7**.: _Algorithm 2 yields an ordered AFG, which represents a constraint equivalent to that of the input AFG._ **From Ordered AFG to BNF-like Format.** It is straightforward to translate an ordered AFG to packet formats in BNF. Due to its simplicity, the detailed discussion is elided and the formal algorithm is put in Algorithm 3.
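To make the sort-and-merge step of the reordering concrete, the following minimal C sketch, which is illustrative rather than Netlifter's actual code, orders vertically decomposed subgraphs by their minimum byte index and merges those whose byte ranges overlap, mirroring Figure 9(a)-(b); the struct layout and the concrete byte ranges are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

/* Each vertically decomposed subgraph is summarized here only by the
 * range of packet-byte indices it constrains (an illustrative simplification). */
typedef struct { int min_byte; int max_byte; } Sub;

static int by_min_byte(const void *a, const void *b) {
    return ((const Sub *)a)->min_byte - ((const Sub *)b)->min_byte;
}

/* Sort subgraphs by minimum byte index, then merge neighbors whose byte
 * ranges overlap (they must stay together, like Ga and Gb in Figure 9). */
static size_t reorder_and_merge(Sub *subs, size_t n) {
    qsort(subs, n, sizeof(Sub), by_min_byte);
    size_t m = 0;                                /* number of merged groups */
    for (size_t i = 0; i < n; i++) {
        if (m > 0 && subs[i].min_byte <= subs[m - 1].max_byte) {
            if (subs[i].max_byte > subs[m - 1].max_byte)
                subs[m - 1].max_byte = subs[i].max_byte;   /* extend the group */
        } else {
            subs[m++] = subs[i];                 /* start a new group */
        }
    }
    return m;
}

int main(void) {
    /* Byte ranges loosely modeled on Ga..Ge in Figure 9(a); exact ranges assumed. */
    Sub subs[] = { {5, 5}, {4, 6}, {3, 3}, {2, 2}, {0, 1} };
    size_t m = reorder_and_merge(subs, 5);
    for (size_t i = 0; i < m; i++)
        printf("group %zu: bytes [%d, %d]\n", i, subs[i].min_byte, subs[i].max_byte);
    return 0;
}
```

Running this on the five assumed ranges yields four ordered groups with mutually exclusive byte ranges, matching \(\mathcal{A}=[\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{3},\mathbb{G}_{4}]\); each merged group would then be handled recursively via horizontal decomposition.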
As an example, Figure 4(e) shows the inferred packet format, where \(S\) is the start symbol that represents the whole graph and each non-terminal \(L_{t}\) represents a subgraph: \(L_{1}\) represents the path prefix containing \(B[0]\), \(B[1]\), and \(B[2]\); \(L_{2}\) and \(L_{3}\) represent two possible constraints of \(B[3]\); and \(L_{4}\) and \(L_{5}\) stand for the two path suffixes containing \(B[4]\), \(B[5]\), and \(B[6]\). ### Soundness and Completeness in Practice As proved in Appendix C, Lemmas 5.1-5.7 together guarantee the theoretical soundness and completeness of our approach for a program written in our abstract language. In practice, we need to handle common program structures not included in the abstract language, such as function calls, pointers, and loops. This section discusses how we handle them in our implementation and their effects on soundness or completeness. **Pointers.** In the previous discussion, we focused on building an AFG for format inference. Pointer operations are not directly related to the AFG. In the implementation, we follow existing works (Kumar et al., 2017) to resolve pointer relations, which helps us identify what values may be loaded from a memory location. For instance, when visiting an assertion in the program such as assert(*(p + 1) > 1), where p is a pointer, if the pointer analysis tells us that p+1 points to a memory location storing the value \(B[5]\) under the condition \(\rho\), we then compute and include the constraint \(\rho\Rightarrow B[5]>1\) (which equals \(\neg\rho\lor B[5]>1\)) in the AFG. Pointer operations such as p+1 are not a part of path constraints and, thus, are not included in the AFG. That is, according to the assertion rule in Figure 7 and assuming the AFG before the assertion is \(\mathbb{G}\), the AFG after the assertion is \(\mathbb{G}\bowtie\text{AFG}(\neg\rho\lor B[5]>1)\). Since the pointer analysis we use is sound and path-sensitive, it allows Netlifter to be sound and highly precise. **Function Calls.** Although we do not include function calls in our abstract language for simplicity, our system is inter-procedural, as a call statement is equivalent to a list of assignments from the actual parameters to the formal parameters, and a return statement is an assignment from the return value to its receiver. Thus, in our analysis, function calls and returns are treated as assignments. This treatment does not degrade soundness or completeness. In particular, recursive function calls are converted to loops, which are discussed below. **Loops and Repetitive Fields.** Loops in a protocol parser are often used to parse repetitive fields (Kumar et al., 2017). We follow existing techniques to analyze loops (Kumar et al., 2017; Kumar et al., 2017), which are good at inferring repetitive fields and how many times a field repeats. For example, the code below parses a packet where \(B[0]\) represents the packet length, with a positional constraint that all bytes after \(B[0]\) are less than five. For this example, we produce the production \(S\to B[0]B[1]\) with two semantic constraints: \(B[1]<5\) and repeat(\(B[1]\)) = \(B[0]\).
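Here is a minimal C sketch, written to match the description above, of such a loop-based parser; the function name, buffer handling, and error handling are illustrative assumptions rather than the paper's original listing.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative parser for a packet B[0..n-1]: B[0] is the number of payload
 * bytes that follow, and every byte after B[0] must be less than five.
 * From code of this shape, the analysis would infer S -> B[0] B[1] with the
 * semantic constraints B[1] < 5 and repeat(B[1]) = B[0]. */
int parse(const unsigned char *B, int n) {
    if (n < 1)
        abort();                 /* too short to contain the length field */
    int len = B[0];              /* B[0]: packet length field */
    if (n < 1 + len)
        abort();                 /* truncated packet */
    for (int i = 1; i <= len; i++) {
        if (B[i] >= 5)
            abort();             /* positional constraint: B[i] < 5 */
        printf("payload byte %d = %u\n", i, B[i]);
    }
    return len;
}
```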
## 6. Evaluation We implement our method as a tool, namely Netlifter, to lift packet formats from source code in C. It is implemented on top of the LLVM (12.0.0) compiler infrastructure (Wang et al., 2019) and the Z3 (4.8.12) SMT solver (Wang et al., 2019). The source code of a protocol is compiled into the LLVM bitcode, where we perform our static analysis. In the analysis, Z3 is used to represent abstract values as symbolic expressions and compute/solve path constraints. All experiments are run on a MacBook Pro (16-inch, 2019) equipped with an 8-core 16-thread Intel Core i9 CPU running at 2.30GHz and 32GB of memory. As shown in Table 1, we have run Netlifter over a number of protocols. They are from different codebases (e.g., Linux and LWIP) and domains (e.g., IoT and routing). They include widely-used ones such as TCP/IP and niche ones like APDU, which is used in smart cards. As shown in the table, the size of the code involved in a protocol parser ranges from 3KLoC to 59KLoC, and it takes Netlifter \(<\)1min to infer the format of each protocol. Determining the precision and recall of the inferred formats requires manually comparing them with their official documents. We cannot afford to manually inspect all protocols because we have to learn a lot of domain-specific knowledge to understand a protocol, which is time-consuming and not closely related to our core contribution, the static analysis. In the remaining experiments, we focus on the first ten, which are from different codebases. We believe that other protocols in the same codebases are implemented in similar manners and, thus, do not introduce extra challenges.
We use these protocols/codebases because of two reasons. First, their repositories in GitHub are relatively active, which makes it easy to get feedback from developers when we report bugs. Second, they have their own fuzzing drivers, meaning that they have been extensively fuzzed by the developers themselves. Thus, their code is expected to be of high quality and an approach that can find vulnerabilities in their codebase is highly effective. ### Effectiveness of the Three-Step Design For technical contributions, we explained in SS4 that our static analysis avoids individually exploring program paths to address two challenges. To show the importance of our solution, we implement a baseline that employs a well-known symbolic executor, KLEE (Kleiner et al., 2017), to infer packet formats. Similar to our solution, it infers packets by computing path constraints. Different from our solution, it has to analyze individual program paths. We then compare their time cost of format inference. The results are shown in Figure 12(a) in log scale. The line chart shows that the KLEE-based approach runs out of time (\(\geq 3\) hours) for almost all protocols. We use a three-hour time budget here as it is sufficient to show the advantage of our approach over symbolic execution. As plotted in Figure 12(a), Netlifter can finish in one minute. Figure 12(b) shows the decomposition of Netlifter's time cost, indicating that the three steps of Netlifter respectively take 14%, 44%, and 42% of the total time. ### Precision and Recall of Packet Formats As discussed in SS1, existing techniques focus on network trace analysis (category one) or dynamic program analysis (category two). We refer to both of them as dynamic analyses as they rely on dynamically captured network packets as their inputs. We cannot find any static program analysis that infers formats from a protocol parser. Thus, while the dynamic analyses have a different assumption from our static analysis, not for a comparative purpose but to show the value of our approach, we evaluate Netlifter with two network trace analyses, i.e., NemeSys (Wang et al., 2019; Wang et al., 2019) and NetPlier (Wang et al., 2019), and two dynamic program analyses, i.e., AutoFormat (Wang et al., 2019) and Tupni (Wang et al., 2019). NemeSys and NetPlier are open-source software and we directly use their implementation. AutoFormat and Tupni are not publicly available. We implement them on top of LLVM based on their papers. We cannot find other open-source dynamic program analyses for evaluation. We evaluate them in terms of precision and recall. Given a set of packets, the precision is the ratio of correctly inferred fields in the packets to all inferred fields. The recall is the ratio of correctly inferred fields to all fields in the ground truth. To compute the precision and recall, we manually build the formats based on the protocols' official documents or source code. We then write scripts to compare the inferred and the manually-built formats. **Dynamic Analysis.** To use dynamic analyses, we follow their original works to collect 1000 network packets for each protocol from publicly available datasets (Kleiner et al., 2017; Wang et al., 2019; Wang et al., 2019). Table 2 shows the precision and recall of the inferred field boundaries. 
Network trace analyses often exhibit low precision (\(\text{$\prec$50\%$}\)) and recall (\(\text{$\prec$50\%$}\)), because \begin{table} \begin{tabular}{l l l l l} \hline \hline **Name** & **Codebase** & **Size** & **Time** & **Description** \\ & & (slice) & (sec) & \\ \hline L2CAP & linux/blateoth (Kleiner et al., 2017) & 38 & 12 & logical link ctrl and adaptation proto. \\ SMP & linux/blateoth (Kleiner et al., 2017) & 12 & 2 & low energy security manage proto. \\ APDU & opens (Kleiner et al., 2017) & 3 & 3 & application proto. data unit \\ GSD & lins (Kleiner et al., 2017) & 14 & 27 & open supervised device proto. \\ SSD & lins (Kleiner et al., 2017) & 8 & 1 & source server query proto. \\ TCP/IP & lwp (Kleiner et al., 2017) & 41 & 53 & transport control 4.1 internet proto. \\ RQMF & lwp (Kleiner et al., 2017) & 17 & 16 & internet graph (MPI, \& internet proto. \\ OUC & npc/2 & 59 & 11 & general-purpose transport layout proto. \\ BAHEL & frotrotting (Horn et al., 2017) & 7 & 9 & a distance-vector routing proto. \\ I-S & I-S & frrotting (Horn et al., 2017) & 22 & 6 & intermediate system (S) to I-S proto. \\ \hline A2MP & linux/blateoth (Kleiner et al., 2017) & 16 & 2 & amp manager proto. \\ RNEP & linux/blateoth (Kleiner et al., 2017) & 15 & 3 & BT network encapsulation proto. \\ CMTP & linux/blateoth (Kleiner et al., 2017) & 20 & 1 & e-ap message transport proto. \\ HDP & linux/blateoth (Kleiner et al., 2017) & 17 & 4 & human interface device proto. \\ UDP & lwp (Kleiner et al., 2017) & 37 & 33 & user diagram proto. \\ ICMP & lwp (Kleiner et al., 2017) & 22 & 12 & internet control message proto. \\ DHCP & lwp (Kleiner et al., 2017) & 25 & 43 & dynamic host configuration proto. \\ DHCP & lwp (Kleiner et al., 2017) & 30 & 54 & internet control message proto. \\ DHCP & lwp (Kleiner et al., 2017) & 35 & 51 & dynamic host configuration proto. \\ BOP & frotting (Horn et al., 2017) & 13 & 2 & border gateway proto. \\ LDFT & frrotting (Horn et al., 2017) & 20 & 5 & label distribution proto. \\ BFD & frrotting (Horn et al., 2017) & 10 & 17 & bidirectional forwarding detection \\ VNFP & frrotting (Horn et al., 2017) & 8 & 12 & virtual router redundancy proto. \\ EIGER & frrotting (Horn et al., 2017) & 14 & 21 & interior gateway routing proto. \\ NURBP & frrotting (Horn et al., 2017) & 11 & 11 & next hop resolution proto. \\ OSPF2 & frrotting (Horn et al., 2017) & 9 & 14 & open shortest path first v2 \\ OSPF3 & frrotting (Horn et al., 2017) & 7 & 16 & open shortest path first v3 \\ RIP & frrotting (Horn et al., 2017) & 11 & 13 & routing information proto. v1 \\ RIP2 & frrotting (Horn et al., 2017) & 11 & 15 & routing information proto. v2 \\ RIP2 & frrotting (Horn et al., 2017) & 7 & 41 & routing information proto. for lpg \\ \hline \hline \end{tabular} \end{table} Table 1. Protocols and Their Codebases for Evaluation Figure 12. Time cost (seconds) and its decomposition. they use statistical approaches to align message fields while statistical approaches are known to have inherent uncertainty and their effectiveness heavily hinges on the quality of input packets. The two dynamic program analyses, especially Tupni, significantly improve the precision due to the analysis of control/data flows in the code. AutoFormat has a relatively low precision because it tracks coarse-grained control/data flows. For instance, AutoFormat regards consecutive bytes of a packet processed in the same calling context as a single field. 
However, it is common for a parser to process multiple fields in the same calling context. Tupni tracks more fine-grained control/data flows, such as predicates in the code, and, thus, exhibits a higher precision. As acknowledged by Tupni itself, it may also produce false fields in many cases. For instance, when the value of a multi-byte field is computed by multiple instructions over every single byte in the field, it will incorrectly split the field into multiple fields. Despite the high precision achieved by Tupni, the key problem of these dynamic analyses is their coverage (i.e., recall), which is often lower than 50% and may compromise downstream security analyses as discussed in the next subsection. Note that simply combining the results of multiple tools does not help improve the quality of the inferred formats. This is because, when combining the formats inferred by multiple tools, with the increase of correctly inferred fields, the number of incorrect fields also increases. For instance, after combining the results of the four dynamic tools, the precision for OSDP is 0.43, which is even worse than the result when using Tupni independently. The combined results are shown in the last column of Table 2. **Static Analysis.** Table 2 shows that, in terms of field boundaries, our inferred formats cover >96% formats and produce <4% false ones. For many of them, we can produce absolutely correct formats. We also miss some fields and report some false ones due to the inherent limitations of static analysis (see SS9). These limitations, e.g., the incapability of handling inline-assembly in the source code, will let us lose information during the static analysis, thereby leading to false formats. Table 3 also shows the quality of the inferred field names. A name is considered to be correct if it is the same as the official documents or a reasonable abbreviation, e.g., 'len' vs. 'length'. Overall, we can infer >94% field names with a precision >96%. The names provide high-level semantics and help us identify special fields to facilitate security applications as discussed next. ### Security Applications **Protocol Fuzzing.** To show the value of our approach, we respectively input the formats inferred by Netlfilter, NetPlier, and Tupni to a typical grammar-based (i.e., format-based) protocol fuzzer, namely BooFuzz (14; 16). Particularly, since we can locate checksum fields by names such as 'checksum' and 'crc', in the fuzzing experiments, we can skip the checksum checks in the code. This is critical for fuzzing as random mutations in fuzzing can easily invalidate the checksum values (73). The experiments are performed on a three-hour budget and repeated twenty times to avoid random factors. We use a three-hour budget because we observe that the baseline fuzzers rarely achieve new coverage after three hours. The results are shown in Figure 13. Since Netlfilter can provide formats with precise field boundaries and semantic constraints, Netlfilter-enhanced BooFuzz achieves 1.2\(\times\) to 3.6\(\times\) coverage compared to others. Netlfilter-enhanced BooFuzz also detected 53 zero-day vulnerabilities while the others detect only 12. All detected vulnerabilities are exploitable as they can be triggered via crafted network packets. To date, 47 of them have been assigned CVEs. We can detect more bugs as our inferred formats are of both high precision and high coverage. In Appendix A, we provide more details about the fuzzing experiments and the detected bugs. 
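To illustrate why locating checksum fields matters for format-aware fuzzing, the following minimal C sketch (illustrative only; it is not BooFuzz or Netlifter code, and the one-byte additive checksum and packet layout are assumptions) mutates a random non-checksum byte and then re-computes the checksum, so that mutated packets are not immediately rejected by the parser's integrity check.

```c
#include <stdlib.h>
#include <stddef.h>

/* Assumed toy layout: buf[0..len-2] are format fields and buf[len-1] is a
 * one-byte additive checksum over the preceding bytes. A format-aware fuzzer
 * mutates only non-checksum fields and then fixes up the checksum. */
static unsigned char additive_checksum(const unsigned char *buf, size_t n) {
    unsigned char sum = 0;
    for (size_t i = 0; i < n; i++)
        sum = (unsigned char)(sum + buf[i]);
    return sum;
}

void mutate_packet(unsigned char *buf, size_t len) {
    if (len < 2)
        return;
    size_t field = (size_t)rand() % (len - 1);            /* skip the checksum byte */
    buf[field] ^= (unsigned char)(1u << (rand() % 8));     /* flip one bit */
    buf[len - 1] = additive_checksum(buf, len - 1);        /* re-fix the checksum */
}
```

Without the re-fix step, almost every mutated packet would fail the checksum check and the fuzzer would never reach the deeper parsing logic.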
**Traffic Auditing and Intrusion Detection.** Appendix B provides an extended study, where we use the formats inferred by Netlfilter and the best baseline, Tupni, to enhance Wireshark and Snort. We conclude that the precise and high-coverage formats inferred by us are critical for auditing traffic and detecting intrusions. ## 7. Related Work Existing techniques that infer packet formats are mainly based on dynamic analysis. We discuss some typical ones in what follows. For a broader overview, we refer readers to four surveys (44; 56; 65; 70). **Network Trace Analysis (NTA).** NTA uses statistical methods to identify field boundaries based on runtime network packets. Discoverer (36) relies on a recursive clustering approach to recognize packets of the same type. Biprominer (74) uses the variable length pattern to locate protocol keywords and is enhanced by ProDecoder (75). AutoReEngine (61) uses data mining to identify protocol keywords, based on which packets are classified into different types. ReverX (23) uses a speech recognition algorithm to identify delimiters in packets. NemeSys (50; 51) interprets binary packets as feature vectors and applies an alignment and clustering algorithm to determine the packet format. NetPlier (80) leverages a probabilistic analysis to determine the keyword field, clusters \begin{table} \begin{tabular}{l|c c c|c c|c} \hline **Protocol** & Netlfilter & NameSys & NetPlier & AutoFormat & Tupni & Combined \\ \hline L2CAP & 96.98 & 14.79 & 14.15 & 72.32 & 88.41 & 66.49 \\ SMP & 100.100 & 27.07 & 20.52 & 100.92 & 90.78 & 45.82 \\ APDU & 100.100 & 52.21 & 43.45 & 44.25 & 100.01 & 58.71 \\ GNDP & 100.100 & 17.11 & 10.16 & 74.31 & 89.47 & 43.52 \\ SQS & 100.100 & 25.11 & 18.81 & 99.54 & 41.67 \\ TCP/IP & 98.95 & 57 & 4/12 & 24/19 & 88.21 & 39.75 \\ ICMP/IP & 99.98 & 13.12 & 13.92 & 35.22 & 97.25 & 54.36 \\ QUC & 97.99 & 18.914 & 18.72 & 70.92 & 86.43 & 69.53 \\ AREL & 99.999 & 28.14 & 37.88 & 43.16 & 80.24 & 40.28 \\ IS-IS & 98.99 & 28.35 & 18/14 & 100.34 & 87.21 & 52.42 \\ \hline \end{tabular} \end{table} Table 2. Precision(%)/Recall(%) of Field Boundaries. Figure 13. The y-axis is the number of covered branches normalized to one. It shows the branch coverage averaged over twenty runs with a 95% confidence interval. packets based on the keyword values, and applies multi-sequence alignment to derive packet format. These techniques do not analyze code and, thus, are different from ours. **Dynamic Program Analysis (DPA).** DPA can be used over both source and binary code. They work by running protocol parsers against network packets and monitoring runtime control/data flows. Polyglot (Krishnan et al., 2016) uses dynamic taint analysis to infer fixed or variable length fields. AutoFormat (Yang et al., 2017) approximates the field hierarchical structure by monitoring call stacks. This approach then is extended to both bottom-up and top-down hierarchical structures (Yang et al., 2017). Wondracek et al. (2017) identify delimiters and length fields within a hierarchical structure. Tupni (Tupni, 2017) tracks fine-grained taint flows to identify packet fields. It also applies loop analysis to infer repeated fields and records path constraints to infer length or checksum fields. ReFormat (Yang et al., 2017) recognizes encrypted fields based on the observation that encrypted fields are processed by a high percentage of arithmetic/bitwise instructions. 
Our approach can be easily extended with the same observation, i.e., by counting relevant instructions to recognize an encrypted field. In addition to inferring the formats of received packets, Dispatcher (Dispcher, 2017) and P2C (Zhu et al., 2017) reverse engineer the formats of packets to be sent and, thus, are different from all aforementioned approaches as well as ours. **Static Program Analysis (SPA).** There are a few SPAs for reverse engineering protocols. However, they either infer the formats of packets to be sent via an imprecise abstract domain (Yang et al., 2017) or focus on cryptographic mechanisms (Zhu et al., 2017). Our approach precisely infers the format of received packets and, thus, is different from these works. ## 8. Conclusion In this work, we propose a static analysis that can infer protocol formats with both high precision and high recall. Hence, the formats significantly enhance network protocol fuzzing, network traffic auditing, and network intrusion detection. In particular, our format-inference technique has helped existing protocol fuzzers find 53 zero-days with 47 assigned CVEs. ## 9. Limitations and Future Work Our static analysis is currently implemented for C and does not support C++ due to the difficulty in analyzing virtual tables. We focus on the source code and do not handle inline assembly and libraries that do not have code available. We believe these limitations can be addressed with more engineering work. For instance, we can use class hierarchy analysis, e.g., (Zhu et al., 2017), to deal with virtual tables and support C++. We can use existing disassembly techniques, e.g., (Zhu et al., 2017), to support inline assembly. We leave them as our future work. As discussed earlier, Netlifter employs existing techniques to deal with pointers and loops. Thus, it inherits their limitations. A common limitation shared by both Netlifter and all recent techniques is that the quality of inferred formats relies on the protocol implementation. For instance, if the implementation ignores a field, the output formats will ignore it as well. Nevertheless, we have shown that Netlifter is promising in practice via a set of experiments.
2307.06490
Stability of Schwarzschild black holes in quadratic gravity with Weyl curvature domination
We study the linear stability of static and spherically symmetric (SSS) black holes (BHs) in the presence of a Weyl-squared curvature besides an Einstein-Hilbert term in the action. In this theory, there is always an exact Schwarzschild BH irrespective of the Weyl coupling constant $\alpha$, with the appearance of a non-Schwarzschild solution for a particular range of the coupling of order $|\alpha| \approx r_h^2$ (where $r_h$ is the horizon radius). On the SSS background, we show that the propagating degrees of freedom (DOFs) are three in the odd-parity sector and four in the even-parity sector. Since the number of total seven DOFs coincides with those on the Minkowski and isotropic cosmological backgrounds, the Weyl gravity does not pose a strong coupling problem associated with the vanishing kinetic term of dynamical perturbations. The odd-parity perturbations possess at least one ghost mode, but the propagation speeds of all three dynamical modes are luminal. In the even-parity sector, our analysis, based on the WKB approximation, shows that, besides the appearance of at least one ghost mode, the Schwarzschild solution is prone to both radial and angular Laplacian instabilities of several dynamical perturbations for the Weyl coupling in the range $|\alpha| \gg r_h^2$. For large radial and angular momentum modes, the time scales of such instabilities are much shorter than the horizon distance $r_h$ divided by the speed of light. In the coupling regime $|\alpha |\lesssim r_h^2$, the WKB approximation does not hold any longer, and a different analysis should be performed if one wants to state the stability of both the Schwarzschild and non-Schwarzschild BH solutions in this range of model parameters.
Antonio De Felice, Shinji Tsujikawa
2023-07-12T23:38:59Z
http://arxiv.org/abs/2307.06490v1
# Stability of Schwarzshild black holes in quadratic gravity ###### Abstract We study the linear stability of static and spherically symmetric (SSS) black holes (BHs) in the presence of a Weyl-squared curvature besides an Einstein-Hilbert term in the action. In this theory, there is always an exact Schwarzschild BH irrespective of the Weyl coupling constant \(\alpha\), with the appearance of a non-Schwarzschild solution for a particular range of the coupling of order \(|\alpha|\approx r_{h}^{2}\) (where \(r_{h}\) is the horizon radius). On the SSS background, we show that the propagating degrees of freedom (DOFs) are three in the odd-parity sector and four in the even-parity sector. Since the number of total seven DOFs coincides with those on the Minkowski and isotropic cosmological backgrounds, the Weyl gravity does not pose a strong coupling problem associated with the vanishing kinetic term of dynamical perturbations. The odd-parity perturbations possess at least one ghost mode, but the propagation speeds of all three dynamical modes are luminal. In the even-parity sector, our analysis, based on the WKB approximation, shows that, besides the appearance of at least one ghost mode, the Schwarzschild solution is prone to both radial and angular Laplacian instabilities of several dynamical perturbations for the Weyl coupling in the range \(|\alpha|\gg r_{h}^{2}\). For large radial and angular momentum modes, the time scales of such instabilities are much shorter than the horizon distance \(r_{h}\) divided by the speed of light. In the coupling regime \(|\alpha|\lesssim r_{h}^{2}\), the WKB approximation does not hold any longer, and a different analysis should be performed if one wants to state the stability of both the Schwarzschild and non-Schwarzschild BH solutions in this range of model parameters. + Footnote †: preprint: YITP-23-90, WUCG-23-08 ## I Introduction Black holes (BHs) are fundamental objects arising as vacuum solutions in General Relativity (GR). A static and spherically symmetric (SSS) vacuum configuration in GR gives rise to a Schwarzschild solution characterized by a horizon radius \(r_{h}\). In theories beyond GR, it is possible to realize non-Schwarzschild BH solutions whose geometries are modified by the presence of additional degrees of freedom (DOFs). In scalar-tensor or vector-tensor theories, for example, there are some asymptotically-flat BH solutions endowed with scalar or vector hairs [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. With the advent of gravitational astronomy [25], we can now probe the physics of strong gravity regimes and the possible deviations from GR [26; 27; 28]. GR is described by an Einstein-Hilbert action with the Lagrangian \(M_{\rm Pl}^{2}R/2\), where \(M_{\rm Pl}\) is the reduced Planck mass and \(R\) is the Ricci scalar. On strong gravitational backgrounds, it is expected that quadratic curvature terms may modify the spacetime structure and dynamics. The general quadratic curvature contributions to the action consist of the terms \(R^{2}\), \(R_{\mu\nu}R^{\mu\nu}\), and \(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\), where \(R_{\mu\nu}\) is the Ricci tensor and \(R_{\mu\nu\rho\sigma}\) is the Riemann tensor. 
Since the Gauss-Bonnet curvature invariant \(\mathcal{G}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\) is a topological term that does not affect the spacetime dynamics in four dimensions, the general gravitational action up to quadratic-order curvature terms is given by \[\mathcal{S}=\frac{M_{\rm Pl}^{2}}{2}\int\mathrm{d}^{4}x\sqrt{-g}\left(R-\alpha C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}+\beta R^{2}\right)\,, \tag{1}\] where \(g\) is a determinant of the metric tensor \(g_{\mu\nu}\), \(\alpha\) and \(\beta\) are coupling constants, and \(C_{\mu\nu\rho\sigma}\) is the Weyl tensor whose square is given by \[C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}=2R_{\mu\nu}R^{\mu\nu}-\frac{2}{3}R^{2}+\mathcal{G}\,. \tag{2}\] If we regard the higher-order curvature terms as one-loop quantum corrections, the theory is renormalizable at the price of having ghost DOFs induced by the Weyl term [29]. In theories without the Weyl curvature, i.e., \(\alpha=0\) in Eq. (1), there exists one healthy scalar mode ("scalaron") for \(\beta>0\) besides two tensor polarizations. On the cosmological background, the scalaron potential induced by the Lagrangian \(\beta R^{2}\) can drive an accelerated expansion of the Universe [30]. If we apply the same theory to BH physics, it is known that there are no hairy SSS solutions other than the Schwarzschild solution. This is related to the fact that \(f(R)\) gravity is equivalent to Brans-Dicke theories [31] with a scalar potential arising from a nonlinear function of \(f(R)\) [32; 33; 34]. In such theories, several authors showed that the no-hair property holds for the SSS BHs [35; 36; 37; 21; 38]. In the presence of the Weyl term, the Schwarzschild BH is an exact solution on the SSS background for arbitrary couplings \(\alpha\). When the coupling \(|\alpha|\) is of order \(r_{h}^{2}\), it is known that another asymptotically-flat, non-Schwarzschild branch appears besides the Schwarzschild branch [39; 40]. For \(|\alpha|\) exceeding the order of \(r_{h}^{2}\), there is only the Schwarzschild branch. In this coupling regime, the Weyl-squared term dominates over the Einstein-Hilbert term around the horizon. For \(\alpha>0\), the intersection of the Schwarzschild and non-Schwarzschild branches occurs at the point \(r_{h}/\sqrt{2\alpha}\simeq 0.876\) [41; 42; 43; 40; 41]. The non-Schwarzschild solution is present up to the value \(r_{h}/\sqrt{2\alpha}\simeq 1.143\), above which the BH mass becomes negative. The non-Schwarzschild branch exists in the range \(0.876\leq r_{h}/\sqrt{2\alpha}\leq 1.143\), while the Schwarzschild branch is present for any positive values of \(r_{h}/\sqrt{2\alpha}\). In theories with \(\beta=0\) in Eq. (1), the analysis of linear perturbations on the Minkowski and isotropic cosmological backgrounds shows that there are seven dynamical DOFs (four tensor, two vector, one scalar modes) in total [44; 45; 46; 47; 48]. Apart from the two massless tensor modes, the Weyl term generates a mass squared \(m_{W}^{2}=1/(2\alpha)\) for the other five DOFs on the Minkowski background. If \(\alpha>0\), the tachyonic instabilities of massive modes do not arise, but there are five ghosty propagating DOFs. Meanwhile, the Laplacian instabilities of such massive modes are absent, so they can be regarded as "soft ghosts" [49] at the classical level around the Minkowski vacuum. This should not be the case if the ghosts are coupled to other fields [50; 51; 52; 53].
The presence of Weyl ghosts on the Minkowski background implies that these new modes may lead to some instabilities on other curved backgrounds. In this paper, we would like to address the propagation and linear stability of dynamical perturbations on the SSS background. To extract the effect of the Weyl curvature term on the dynamics of perturbations, we study theories given by the action (1.1) with \(\beta=0\). If the number of propagating DOFs is less than seven on a particular background, this implies the presence of a strong coupling problem. This problem arises in some other higher-curvature gravity theories such as Einsteinian cubic gravity [54], where the propagating DOFs on the maximally symmetric background are smaller than those around general curved backgrounds [55; 56; 57; 58]. In Weyl gravity, we will show that there are seven dynamical DOFs (three in the odd-parity sector and four in the even-parity sector) for perturbations on the SSS background. Although there are ghosts in both odd- and even-parity sectors, the theory does not give rise to the strong coupling problem. To study the linear stability of BHs, we will exploit the WKB approximation in which the solution to perturbations is dominated by the large angular frequency and high radial and angular momenta. The WKB prescription loses its validity in the massive regime where the Weyl mass squared \(m_{W}^{2}=1/(2\alpha)\) provides non-negligible contributions to the solutions of perturbations outside the horizon. To avoid the breakdown of the WKB approximation in the vicinity of the horizon, we require the condition \(|m_{W}^{2}|r_{h}^{2}\ll 1\), i.e., \(|\alpha|\gg r_{h}^{2}\). This is the regime in which the hairy non-Schwarzschild branch is absent. However there exists the Schwarzschild branch, so we will use this background solution for studying the BH stability. This does not generally mean that the hairy non-Schwarzschild branch is stable. It just implies that we cannot apply the WKB approximation to accommodate their stability and that another study is necessary to clarify this issue. We note that, for the monopole mode (multipole \(l=0\)) with a negligible radial wavenumber \(k\) relative to mass terms, a long-wavelength instability analogous to the Gregory-Laflamme instability [59] was reported in Ref. [60] for the Schwarzschild BH in the coupling range \(|\alpha|\gg r_{h}^{2}\). For \(l=0\), the number of dynamical DOFs can be different and reduced in comparison to the modes \(l\geq 2\) (which is the case for other modified gravity theories [61; 62; 63; 64]). In this paper, we wish to clarify whether there are Laplacian instabilities of the Schwarzschild BH for large radial and angular momentum modes (\(kr_{h}\gg 1\) and \(l\gg 1\)), by properly dealing with all dynamical perturbations under the WKB approximation. We will show that, albeit with the appearance of at least one ghost mode, all of the odd-parity dynamical perturbations have luminal propagation speeds in both radial and angular directions. Hence the Laplacian instabilities in the odd-parity sector are absent at the classical level. In the even-parity sector, besides the appearance of at least one ghost mode, the Schwarzschild BH for the coupling range \(|\alpha|\gg r_{h}^{2}\) is subject to Laplacian instabilities of several dynamical perturbations around the horizon in both along the radial and angular directions. 
In particular, the time scales of instabilities of large radial and angular momentum modes are typically much shorter than \(r_{h}/c\), where \(c\) is the speed of light. These Laplacian instabilities destabilize the Schwarzschild BH much more rapidly in comparison to the long-wavelength instability mentioned above. This paper is organized as follows. In Sec. II, we revisit the SSS BH solutions present in Weyl gravity. In Sec. III, we will discuss the propagation of dynamical perturbations in the odd-parity sector and show that all the speeds of propagation are luminal without classical Laplacian instabilities. In Sec. IV, we will present the prescription for extracting four dynamical perturbations from the second-order action in the even-parity sector. We then show that Laplacian instabilities emerge for both large radial and angular modes in the coupling range \(|\alpha|\gg r_{h}^{2}\). Sec. V is devoted to conclusions. Throughout the paper, we will use the natural units where the speed of light \(c\) and the reduced Planck constant \(\hbar\) are 1. Black holes in quadratic Weyl gravity We study the linear stability of BHs given by the action \[\mathcal{S}=\frac{M_{\rm pl}^{2}}{2}\int{\rm d}^{4}x\sqrt{-g}\left(R-\alpha C_{ \mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}\right)\,, \tag{1}\] where \(g\) is a determinant of the metric tensor \(g_{\mu\nu}\). The Weyl squared term \(C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}\), which is given by Eq. (2), is equivalent to \(2R_{\mu\nu}R^{\mu\nu}-(2/3)R^{2}\) up to boundary terms. Taking into account the quadratic Ricci scalar \(\beta R^{2}\) allows for the possibility of constructing renormalizable theories of gravity [29]. However, the Weyl term gives rise to ghost DOFs, which violate the unitarity of theories. The ghosts arise from derivative terms higher than second order in the field equations of motion. Although the existence of ghosts in higher-derivative theories can be problematic, there are some arguments stating that the ghost can be "soft" in the sense that small classical perturbations are not subject to instabilities [49]. In this paper, we would like to study the number of propagating DOFs on the SSS background and the linear stability of SSS BHs in Weyl gravity to see whether BHs are not prone to Laplacian instabilities. For this purpose, we focus on the effect of the Weyl curvature term without taking into account the \(\beta R^{2}\) term. As in the analysis of Refs. [39; 40], we do not deal with Weyl gravity as an effective field theory where the Weyl term \(\alpha C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}\) is suppressed relative to \(R\). We consider the SSS background given by the line element \[{\rm d}s^{2}=-f(r){\rm d}t^{2}+h^{-1}(r){\rm d}r^{2}+r^{2}\left({\rm d}\theta^ {2}+\sin^{2}\theta\,{\rm d}\varphi^{2}\right)\,, \tag{2}\] where \(f\) and \(h\) are functions of the radial coordinate \(r\). We compute the action (1) on the background (2) and vary it with respect to \(f\) and \(h\). This gives fourth-order and third-order differential equations for \(f\), with respect to the radial derivatives. Since the latter contains derivatives of \(h\) up to the second order, we take the \(r\) derivative of it and eliminate \(f^{\prime\prime\prime}\) by using the former equation, where a prime represents the derivative with respect to \(r\). After this procedure, we obtain a third-order differential equation for \(f\). 
Combining it with the equation derived by the variation of \(h\) leads to a second-order differential equation for \(f\), as \[f^{\prime\prime}=\frac{r^{2}hf^{\prime 2}-4(rh^{\prime}+h-1)f^{2}-(rh^{ \prime}+4h)rff^{\prime}}{2r^{2}fh}\,, \tag{3}\] which does not contain the Weyl coupling constant \(\alpha\). We differentiate Eq. (3) with respect to \(r\) and eliminate \(f^{\prime\prime\prime}\) by exploiting the other third-order differential equation of \(f\). Then, we obtain \[h^{\prime\prime}=\frac{(1-h)f-rf^{\prime}h}{\alpha h(2f-rf^{ \prime})}+\frac{4f^{3}(h-1)(rh^{\prime}+2h)+r^{3}f^{\prime 2}h(fh^{ \prime}-hf^{\prime})+r^{2}f(3h^{2}f^{\prime 2}+2fhf^{\prime}h^{\prime}+3f^{2}h^{ \prime 2})}{2r^{2}f^{2}h(2f-rf^{\prime})}. \tag{4}\] From Eqs. (3) and (4), we find that the Schwarzschild metric components \[f=h=1-\frac{r_{h}}{r}\,, \tag{5}\] are the exact solution to the system, where \(r_{h}\) is the horizon radius. This Schwarzschild branch is present for any arbitrary coupling \(\alpha\) (\(\neq 0\)). In Refs. [39; 40], the authors numerically found the other non-Schwarzschild branch of BH solutions for positive \(\alpha\) of order \(r_{h}^{2}\). Although there are no exact solutions for this branch, it is possible to obtain an approximate solution by using a continued-fraction expansion of the non-GR solution [65]. The metric components of the non-Schwarzschild branch can be expressed in the form \[f(r)\simeq\left(1-\frac{r_{h}}{r}\right)A(r)\,,\qquad h(r)\simeq\left(1-\frac {r_{h}}{r}\right)\frac{A(r)}{B^{2}(r)}\,, \tag{6}\] where \(A(r)\) and \(B(r)\) are functions of \(r\) whose approximate formulas are given in Ref. [41]. This hairy BH solution has a horizon at \(r=r_{h}\) and it also respects the asymptotic flatness. In Ref. [39], it was numerically shown that this non-Schwarzschild branch is present in the range \[0.876\leq p\leq 1.143\,, \tag{7}\] where \[p\equiv\frac{r_{h}}{\sqrt{2\alpha}}\,. \tag{8}\] For \(p>1.143\), the mass of non-Schwarzschild BHs becomes negative. At the point \(p=0.876\), the non-Schwarzschild and Schwarzschild branches intersect with each other. The non-Schwarzschild branch can be extended to the region \(p<0.876\) down to the value \(p\approx 0.6\), but it was shown in Ref. [60] that the solution in the range \(p<0.876\) is prone to the long-wavelength instability related to the Gregory-Laflamme instability [59]. To avoid this instability for the non-Schwarzschild branch, the variable \(p\) needs to be in the range (2.7). As we mentioned above, the Schwarzschild branch (2.5) is present for any values of \(\alpha\). The mass squared arising from the Weyl curvature computed on the Minkowski background is given by \[m_{W}^{2}=\frac{1}{2\alpha}\,, \tag{2.9}\] which is positive if \(\alpha>0\). For \(p\gg 1\), i.e., in the "massive" regime characterized by \(m_{W}\gg r_{h}^{-1}\), the Weyl term \(\alpha C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}\) is suppressed relative to \(R\) outside the horizon. The other limit \(p\ll 1\), which corresponds to the "massless" regime characterized by \(m_{W}\ll r_{h}^{-1}\), the Weyl term dominates over the Ricci scalar. The latter is the regime in which the modification of gravity manifests itself in BH physics. Since there is only the Schwarzschild branch for the mass range \(m_{W}\ll r_{h}^{-1}\), we will exploit the background metric components (2.5) to study the propagation of dynamical perturbations and linear stability of BHs for high radial and angular momentum modes. 
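For instance, it is straightforward to check that the components (5) satisfy Eqs. (3) and (4). Using \(f=h=1-r_{h}/r\), so that \(f^{\prime}=h^{\prime}=r_{h}/r^{2}\) and \(rh^{\prime}+h-1=0\), the right-hand side of Eq. (3) reduces to
\[\frac{r^{2}hf^{\prime 2}-(rh^{\prime}+4h)rff^{\prime}}{2r^{2}fh}=\frac{fr_{h}^{2}/r^{2}-\left(fr_{h}^{2}/r^{2}+4f^{2}r_{h}/r\right)}{2r^{2}f^{2}}=-\frac{2r_{h}}{r^{3}}=f^{\prime\prime}\,,\]
while in Eq. (4) the term proportional to \(1/\alpha\) drops out because \((1-h)f-rf^{\prime}h=(r_{h}/r)f-(r_{h}/r)f=0\), and the remaining terms reduce to \(h^{\prime\prime}=-2r_{h}/r^{3}\). Hence (5) solves the system for any value of \(\alpha\).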
We note that the long-wavelength instability of Schwarzschild BHs is known to be present for \(p<0.876\)[60]. This long-wavelength mode is characterized by the monopole (\(l=0\)) with a negligible radial wavenumber relative to mass terms. Since some of the dynamical perturbations vanish for \(l=0\), we would like to clarify how all the dynamical perturbations propagate for the short-wavelength modes with \(kr_{h}\gg 1\) and \(l\gg 1\). ## III Odd-parity perturbations On the SSS background (2.2) with the metric tensor \(\bar{g}_{\mu\nu}\), we will study the linear stability of BHs in the presence of metric perturbations \(h_{\mu\nu}\). Namely, the metric tensor of a perturbed line element is given by \(g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}\). We focus on the BH stability in the external region of the horizon, i.e., \(f(r)>0\) and \(h(r)>0\). We first consider \(h_{\mu\nu}\) in the odd-parity sector where the perturbations have a parity \((-1)^{l+1}\) under the rotation in the \((\theta,\varphi)\) plane, with \(l\) being the multipole of the spherical harmonics \(Y_{lm}(\theta,\varphi)\). In the odd-parity sector, the components of \(h_{\mu\nu}\) are given by [66; 67; 68; 69; 70; 71] \[h_{tt}=h_{tr}=h_{rr}=0\,,\qquad h_{ab}=0\,,\] \[h_{ta}=\sum_{l,m}Q(t,r)E_{ab}\nabla^{b}Y_{lm}(\theta,\varphi)\,, \qquad h_{ra}=\sum_{l,m}W(t,r)E_{ab}\nabla^{b}Y_{lm}(\theta,\varphi)\,, \tag{3.1}\] where \(Q\) and \(W\) are functions of \(t\) and \(r\), the subscripts \(a\) and \(b\) denote either \(\theta\) or \(\varphi\), and \(E_{ab}\) is an antisymmetric tensor with nonvanishing components \(E_{\theta\varphi}=-E_{\varphi\theta}=\sin\theta\). Note that we have omitted the subscripts \(l\) and \(m\) in \(Q\) and \(W\) and chosen the gauge in which \(h_{ab}\) vanishes. Without loss of generality, we focus on the axisymmetric modes (\(m=0\)) and expand the action (2.1) up to the second order in odd-parity perturbations. After the integration with respect to \(\theta\) and \(\varphi\), we can write the second-order action of perturbations in the form \(\mathcal{S}^{(2)}_{\rm odd}=(M_{\rm Pl}^{2}/2)\int{\rm d}t{\rm d}r\,\tilde{L} _{\rm odd}\), where \(\tilde{L}_{\rm odd}\) is a function of \(t\) and \(r\) composed of the products of odd-parity perturbations. In \(\tilde{L}_{\rm odd}\), there exists the following combination \[(\tilde{L}_{\rm odd})_{K}\equiv-\frac{\alpha M_{\rm Pl}^{2}h^{1/2}l(l+1)}{2f^{ 3/2}}\left(\ddot{W}-\dot{Q}^{\prime}+\frac{2\dot{Q}}{r}\right)^{2}\,, \tag{3.2}\] where a dot represents the derivative with respect to \(t\). This is equivalent to the following Lagrangian \[(L_{\rm odd})_{K}=-\frac{\alpha M_{\rm Pl}^{2}h^{1/2}l(l+1)}{2f^{3/2}}\left[2 \chi\left(\ddot{W}-\dot{Q}^{\prime}+\frac{2\dot{Q}}{r}\right)-\chi^{2}\right]\,, \tag{3.3}\] where \(\chi\) is a Lagrange multiplier. The Lagrangian \(\tilde{L}_{\rm odd}-(\tilde{L}_{\rm odd})_{K}+(L_{\rm odd})_{K}\) is equivalent to the original one \(\tilde{L}_{\rm odd}\). Varying the former with respect to \(\chi\), it follows that \[\chi=\ddot{W}-\dot{Q}^{\prime}+\frac{2\dot{Q}}{r}\,, \tag{3.4}\] which corresponds to a new dynamical DOF. 
After integrating the action \((M_{\rm Pl}^{2}/2)\int{\rm d}t{\rm d}r\,[\tilde{L}_{\rm odd}-(\tilde{L}_{\rm odd}) _{K}+(L_{\rm odd})_{K}]\) by parts, we can express the second-order action (up to boundary terms) in the form \[{\cal S}_{\rm odd}^{(2)}=\frac{M_{\rm Pl}^{2}}{2}\int{\rm d}t{\rm d}r\,L_{\rm odd }\,, \tag{3.5}\] where \[L_{\rm odd} = a_{1}\dot{W}^{2}+a_{2}\dot{Q}^{2}+2a_{3}\dot{W}\dot{\chi}+a_{4} \left(\dot{W}^{\prime}-Q^{\prime\prime}+\frac{2Q^{\prime}}{r}\right)^{2}+a_{5} W^{\prime 2}+a_{6}Q^{\prime 2}+a_{7}W^{2}+a_{8}\chi^{2}+a_{9}Q^{2} \tag{3.6}\] \[+a_{10}W^{\prime}\dot{Q}+a_{11}\dot{W}Q^{\prime}+a_{12}\dot{\chi} Q^{\prime}+a_{13}\dot{W}Q+a_{14}\chi\dot{Q}\,,\] with \(a_{1},\cdots,a_{14}\) being functions of \(r\) alone. From this action, it is clear that there are three dynamical perturbations \(W\), \(Q\), and \(\chi\). The perturbation equations of motion follow by varying \(L_{\rm odd}\) with respect to those variables. We study the propagation of short-wavelength modes with the large angular frequency \(\omega\) and momentum \(k\) by assuming the solutions in the form \[\vec{\cal X}=\vec{\cal X}_{0}e^{i(\omega t-kr)}\,,\qquad{\rm with}\qquad\vec{ \cal X}_{0}=(W_{0},\chi_{0},Q_{0})\,, \tag{3.7}\] where \(W_{0}\), \(\chi_{0}\), and \(Q_{0}\) are assumed to be constants. We are interested in the values of \(k\) and \(l\) in the ranges \(kr_{h}\gg 1\) and \(l\gg 1\). Note that we also focus on the WKB regime in which the radial variation of \(\omega\) is small such that \(|\omega^{\prime}|\ll|k\omega|\simeq|\omega^{2}|\). In the limit that \(l\gg 1\), each coefficient in Eq. (3.6) has the following multipole dependence: \[a_{1}=b_{1}l^{4}\,,\quad a_{2}=b_{2}l^{4}\,,\quad a_{3}=b_{3}l^{ 2}\,,\quad a_{4}=b_{4}l^{2}\,,\quad a_{5}=b_{5}l^{4}\,,\quad a_{6}=b_{6}l^{4} \,,\quad a_{7}=b_{7}l^{6}\,,\quad a_{8}=b_{8}l^{2}\,,\] \[a_{9}=b_{9}l^{6}\,,\quad a_{10}=b_{10}l^{4}\,,\quad a_{11}=b_{11} l^{2}\,,\quad a_{12}=b_{12}l^{2}\,,\quad a_{13}=b_{13}l^{4}\,,\quad a_{14}=b_{14}l ^{2}\,, \tag{3.8}\] where \[b_{1}=\frac{2\alpha h^{1/2}}{r^{2}f^{1/2}}\,,\qquad b_{2}=-\frac {1}{2fh}b_{1}\,,\qquad b_{3}=\frac{r^{2}}{2f}b_{1}\,,\qquad b_{4}=\frac{r^{2} }{2}b_{1}\,,\qquad b_{5}=-\frac{fh}{2}b_{1}\,,\qquad b_{6}=b_{1}\,,\] \[b_{7}=-\frac{f}{2r^{2}}b_{1}\,,\qquad b_{8}=\frac{r^{2}}{2f}b_{1 }\,,\qquad b_{9}=\frac{1}{2r^{2}h}b_{1}\,,\qquad b_{10}=-b_{1}\,,\qquad b_{12 }=-\frac{r^{2}}{f}b_{1}\,. \tag{3.9}\] The explicit forms of \(b_{11}\), \(b_{13}\), and \(b_{14}\) are not shown, as they are not needed in the following discussion. Picking up the dominant contributions of \(\omega\), \(k\) and \(l\), the perturbation equations of motion are expressed as \[\mathbf{A}_{\rm odd}\vec{\cal X}_{0}^{\rm T}=0\,, \tag{3.10}\] where \(\mathbf{A}_{\rm odd}\) is a \(3\times 3\) matrix whose components are given by \[\mathbf{A}_{\rm odd}=\left(\begin{array}{ccc}2l^{2}\left[(b_{4}k^{2}+b_{1}l^{2 })\omega^{2}+b_{5}k^{2}l^{2}+b_{7}l^{4}\right]&2l^{2}b_{3}\omega^{2}&l^{2}(2b_ {4}k^{3}\omega-b_{10}l^{2}k\omega)\\ 2l^{2}b_{3}\omega^{2}&2l^{2}b_{8}&-l^{2}b_{12}k\omega\\ l^{2}(2b_{4}k^{3}\omega-b_{10}l^{2}k\omega)&-l^{2}b_{12}k\,\omega&2l^{2}\left( b_{2}l^{2}\omega^{2}+b_{4}k^{4}+b_{6}k^{2}l^{2}+b_{9}l^{4}\right)\end{array} \right)\,. \tag{3.11}\] The no-ghost conditions can be obtained by picking up terms proportional to \(\omega^{2}\) in \(\mathbf{A}_{\rm odd}\). 
The matrix \(\mathbf{K}_{\rm odd}\) associated with such kinetic terms is \[\mathbf{K}_{\rm odd}=2\omega^{2}\left(\begin{array}{ccc}{\cal K}_{11}&{\cal K}_{ 12}&0\\ {\cal K}_{12}&0&0\\ 0&0&{\cal K}_{33}\end{array}\right)\,, \tag{3.12}\] where \[{\cal K}_{11}=l^{2}(b_{4}k^{2}+b_{1}l^{2})\,,\qquad{\cal K}_{12}=l^{2}b_{3} \,,\qquad{\cal K}_{33}=l^{4}b_{2}\,. \tag{3.13}\] The absence of ghosts requires the following three conditions \[{\cal K}_{11}>0\,,\qquad-{\cal K}_{12}^{2}>0\,,\qquad-{\cal K}_{12}^{2}{\cal K }_{33}>0\,. \tag{3.14}\] The explicit form of \({\cal K}_{12}\) is given by \[{\cal K}_{12}=\frac{\alpha h^{1/2}l^{2}}{f^{3/2}}\,. \tag{3.15}\] Since the second inequality of (3.14) is violated for \(\alpha\neq 0\), there is at least one ghost mode in the odd-parity sector. After making the field redefinitions \(W=W_{2}-{\cal K}_{12}\,\chi_{2}/{\cal K}_{11}\), \(\chi=\chi_{2}\), and \(Q=rQ_{2}\), the kinetic matrix of new fields (\(W_{2},\chi_{2},Q_{2}\)) becomes diagonal with the elements \({\cal K}_{11}\), \(-{\cal K}_{12}^{2}/{\cal K}_{11}\), and \({\cal K}_{33}\), where \[{\cal K}_{11}=\frac{\alpha l^{2}\sqrt{h}(r^{2}hk^{2}+2l^{2})}{r^{2}\sqrt{f}}\,, \qquad-\frac{{\cal K}_{12}^{2}}{{\cal K}_{11}}=-\frac{\alpha l^{2}r^{2}\sqrt{h }}{f^{5/2}(r^{2}hk^{2}+2l^{2})}\,,\qquad{\cal K}_{33}=-\frac{\alpha l^{4}}{f^{ 3/2}h^{1/2}}\,. \tag{3.16}\] For \(\alpha>0\), there are two ghosts because the last two eigenvalues are negative. For \(\alpha<0\), one ghost is present because \(K_{11}\) is negative. It should be noticed that all the elements of Eq. (3.16), for large values of \(r\), do not vanish but tend to approach constant values. This is related to the fact that the mass squared for these modes, in the regime \(r\gg r_{h}\), is of the same order as \(m_{W}^{2}=1/(2\alpha)\), i.e., a finite value independent of the radial distance \(r\). From this analysis of the kinetic matrix, we can conclude that the odd-parity perturbations in this theory do not suffer from a strong coupling problem. This is in contrast with the odd-parity perturbations in Einsteinian cubic gravity, in which the strong coupling problem is present [57; 58]. For the radial propagation, we calculate the determinant of the matrix \(\mathbf{A}_{\rm odd}\) and expand it with respect to the large momentum \(k\) in the regime \(kr_{h}\gg l\gg 1\). The radial propagation speeds \(c_{r}={\rm d}r_{*}/{\rm d}\tau\), which are measured by the rescaled radial coordinate \(r_{*}=\int{\rm d}r/\sqrt{h}\) and the proper time \(\tau=\int\sqrt{f}\,{\rm d}t\), are given by \(c_{r}=(fh)^{-1/2}(\partial\omega/\partial k)\). The dominant terms in the equation \(\det\mathbf{A}_{\rm odd}=0\) are those proportional to \(k^{6}\), so that we obtain the following three solutions \[c_{r1}^{2} = \frac{b_{4}b_{8}f^{2}}{\alpha^{2}h^{2}}=1\,, \tag{3.17}\] \[c_{r2}^{2} = \frac{-(b_{1}+b_{6}+b_{10})+\sqrt{{\cal D}_{1}}}{2b_{2}fh}=1\,,\] (3.18) \[c_{r3}^{2} = \frac{-(b_{1}+b_{6}+b_{10})-\sqrt{{\cal D}_{1}}}{2b_{2}fh}=1\,, \tag{3.19}\] where \[{\cal D}_{1}=b_{1}^{2}+2b_{1}b_{6}-4b_{2}b_{5}+b_{6}^{2}+2(b_{1}+b_{6})b_{10}+ b_{10}^{2}=0\,. \tag{3.20}\] Thus, all three radial propagation speeds are luminal. For the angular propagation, we expand \(\det\mathbf{A}_{\rm odd}\) with respect to large \(l\) in the range \(l\gg kr_{h}\gg 1\). The angular propagation speeds \(c_{\Omega}=r{\rm d}\theta/{\rm d}\tau\) measured by the proper time are \(c_{\Omega}=r\omega/(\sqrt{f}l)\). 
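Before taking the angular limit, the identity (3.20) and the luminal value of the degenerate pair (3.18) and (3.19) can be cross-checked directly from the coefficients (3.9). A minimal sympy sketch (a consistency check only, keeping \(\alpha\), \(f\), \(h\), and \(r\) symbolic) reads:

```python
import sympy as sp

alpha, f, h, r = sp.symbols('alpha f h r', positive=True)

# the relevant coefficients from Eq. (3.9)
b1  = 2*alpha*sp.sqrt(h)/(r**2*sp.sqrt(f))
b2  = -b1/(2*f*h)
b5  = -f*h*b1/2
b6  = b1
b10 = -b1

D1 = b1**2 + 2*b1*b6 - 4*b2*b5 + b6**2 + 2*(b1 + b6)*b10 + b10**2
print(sp.simplify(D1))                           # 0, confirming Eq. (3.20)
print(sp.simplify(-(b1 + b6 + b10)/(2*b2*f*h)))  # 1, the degenerate radial speeds (3.18)-(3.19)
```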
The leading-order terms, which are proportional to \(l^{14}\), give rise to the following three solutions
\[c_{\Omega 1}^{2}=\frac{r^{2}(b_{1}b_{8}+\sqrt{{\cal D}_{2}})}{2b_{3}^{2}f}=1\,,\tag{3.21}\]
\[c_{\Omega 2}^{2}=\frac{r^{2}(b_{1}b_{8}-\sqrt{{\cal D}_{2}})}{2b_{3}^{2}f}=1\,,\tag{3.22}\]
\[c_{\Omega 3}^{2}=-\frac{b_{9}r^{2}}{b_{2}f}=1\,,\tag{3.23}\]
where
\[{\cal D}_{2}=b_{8}(b_{1}^{2}b_{8}+4b_{3}^{2}b_{7})=0\,.\tag{3.24}\]
Hence all three angular propagation speeds are also luminal. Since we have not used the background solutions of \(f(r)\) and \(h(r)\) to derive the above propagation speeds, they are valid for any BH solutions under the scheme of the WKB approximation. In this same short-wavelength regime, the analysis shows that the ghost modes present in the odd-parity sector are "soft." We have thus shown that there is at least one ghost mode in the odd-parity sector, but that classical Laplacian instabilities are absent along both the radial and angular directions.

## IV Even-parity perturbations

For the even-parity sector, we consider the components of metric perturbations \(h_{\mu\nu}\) as
\[h_{tt}=f(r)\sum_{l,m}H_{0}(t,r)Y_{lm}(\theta,\varphi)\,,\qquad h_{tr}=\sum_{l,m}H_{1}(t,r)Y_{lm}(\theta,\varphi)\,,\qquad h_{ta}=0\,,\]
\[h_{rr}=h(r)^{-1}\sum_{l,m}H_{2}(t,r)Y_{lm}(\theta,\varphi)\,,\qquad h_{ra}=\sum_{l,m}h_{1}(t,r)\nabla_{a}Y_{lm}(\theta,\varphi)\,,\qquad h_{ab}=0\,,\tag{10}\]
where \(H_{0}\), \(H_{1}\), \(H_{2}\), and \(h_{1}\) depend on \(t\) and \(r\). We have chosen the gauge conditions \(h_{ta}=0=h_{ab}\), which fix the residual gauge DOFs. We expand the action up to quadratic order in even-parity perturbations by setting \(m=0\). After the integration with respect to \(\theta\) and \(\varphi\), the second-order action can be expressed in the form
\[\mathcal{S}^{(2)}_{\rm even}=\int{\rm d}t{\rm d}r\,\tilde{L}_{\rm even}\,,\tag{11}\]
where \(\tilde{L}_{\rm even}\) is the Lagrangian containing the products of even-parity perturbations. First of all, we notice that there are higher-order time derivative terms \(\ddot{H}_{2}^{2}\) and \(\ddot{h}_{1}^{2}\) in \(\tilde{L}_{\rm even}\). To find the combinations of Lagrange multipliers \(\chi_{1}\) and \(\chi_{2}\) associated with \(\ddot{H}_{2}\) and \(\ddot{h}_{1}\), respectively, we consider the following Lagrangian
\[L_{\rm even}=\tilde{L}_{\rm even}+\frac{\alpha M_{\rm Pl}^{2}r^{2}}{6h^{1/2}f^{3/2}}\left(\ddot{H}_{2}+c_{1}H_{0}^{\prime\prime}+c_{2}\dot{H}_{1}^{\prime}+c_{3}H_{0}^{\prime}+c_{4}\dot{H}_{1}+c_{5}H_{0}+c_{6}H_{1}+c_{7}H_{1}^{\prime}-\chi_{1}\right)^{2}\tag{12}\]
\[+\frac{l(l+1)\alpha M_{\rm Pl}^{2}h^{1/2}}{2f^{3/2}}\left(\ddot{h}_{1}+d_{1}H_{0}^{\prime\prime}+d_{2}\dot{H}_{1}^{\prime}+d_{3}H_{0}^{\prime}+d_{4}\dot{H}_{1}+d_{5}H_{0}+d_{6}H_{1}+d_{7}H_{1}^{\prime}-\chi_{2}\right)^{2}\,.\]
We need to choose the \(r\)-dependent coefficients \(c_{i}\), \(d_{i}\) (\(i=1,2,\cdots,7\)) to eliminate the cross products such as \(\ddot{H}_{2}H_{0}^{\prime\prime}\) and \(\ddot{h}_{1}H_{0}^{\prime}\). Then, these coefficients are determined as
\[c_{1}=fh\,,\qquad c_{2}=-2h\,,\qquad c_{3}=hf^{\prime}+\frac{1}{2}fh^{\prime}-\frac{fh}{r}\,,\qquad c_{4}=\frac{2h}{r}-h^{\prime}\,,\]
\[c_{5}=\frac{l(l+1)f}{2r^{2}}\,,\qquad c_{6}=0\,,\qquad c_{7}=0\,,\tag{13}\]
\[d_{1}=0\,,\qquad d_{2}=0\,,\qquad d_{3}=f\,,\qquad d_{4}=-1\,,\qquad d_{5}=\frac{1}{2}\,f^{\prime}-\frac{f}{r}\,,\qquad d_{6}=0\,,\qquad d_{7}=0\,.
\tag{14}\] Varying \(L_{\rm even}\) with respect to \(\chi_{1}\) and \(\chi_{2}\), respectively, it follows that \[\chi_{1} = \ddot{H}_{2}+fhH_{0}^{\prime\prime}-2h\dot{H}_{1}^{\prime}+\left( hf^{\prime}+\frac{1}{2}fh^{\prime}-\frac{fh}{r}\right)H_{0}^{\prime}+\left( \frac{2h}{r}-h^{\prime}\right)\dot{H}_{1}+\frac{l(l+1)f}{2r^{2}}H_{0}\,, \tag{15}\] \[\chi_{2} = \ddot{h}_{1}+fH_{0}^{\prime}-\dot{H}_{1}+\left(\frac{1}{2}\,f^{ \prime}-\frac{f}{r}\right)H_{0}\,, \tag{16}\] both of which correspond to the propagating DOFs. As we will see below, there are four propagating dynamical perturbations in the even-parity sector (including \(\chi_{1}\) and \(\chi_{2}\)). The Lagrangian \(L_{\rm even}\) contains the following quadratic term \[L_{\rm even}\ni-\frac{\alpha M_{\rm Pl}^{2}l(l+1)[l(l+1)-2]\sqrt{f}}{8r^{2} \sqrt{h}}\,H_{0}^{2}\,, \tag{17}\] besides the linear terms in \(H_{0}\). Hence we will integrate the nondynamical perturbation \(H_{0}\) from the second-order action. The term proportional to \(\dot{H}_{1}^{2}\) disappears in \(L_{\rm even}\), because of the choice of coefficients \(c_{4}\) and \(d_{4}\) in Eqs. (13) and (14). The Lagrangian \(L_{\rm even}\) does not contain the time derivatives of \(H_{1}\), so the perturbation \(H_{1}\) is not dynamical either. For later convenience, we perform the following field redefinition \[\ddot{H}_{1}\equiv H_{1}f^{-1/4}h^{3/4}\,. \tag{18}\] Then, we find that the \(\bar{H}_{1}\)-dependent terms can be expressed as \[L_{\rm even}\ni\frac{1}{2}\alpha M_{\rm Pl}^{2}l(l+1)(\bar{H}_{1}^{ \prime})^{2}+e_{1}\bar{H}_{1}^{2}+\bar{H}_{1}\big{(}e_{2}\dot{h}_{1}^{\prime \prime}+e_{3}\dot{H}_{2}^{\prime\prime}+e_{4}\dot{h}_{1}^{\prime}+e_{5}\dot{H}_ {2}^{\prime}+e_{6}\dot{\chi}_{1}^{\prime}+e_{7}\dot{h}_{1}+e_{8}\dot{H}_{2}+e_{ 9}\dot{\chi}_{1}+e_{10}\dot{\chi}_{2}\big{)}, \tag{4.10}\] where the coefficients \(e_{i}\)'s (\(i=1\dots 10\)) are \(r\)-dependent functions. It should be noticed that \(\bar{H}_{1}\) does not appear anywhere else inside \(L_{\rm even}\). It is then clear that \(\bar{H}_{1}\) is not a propagating field that can be integrated out from the second-order action. This should leave only four dynamical DOFs in the even-parity sector. At this moment, to disentangle as much as possible the dynamics of \(\bar{H}_{1}\) with those of the other perturbations, we perform another field redefinition as \[-M_{\rm Pl}^{2}l(l+1)\bar{\chi}_{2}\equiv e_{2}h_{1}^{\prime\prime}+e_{3}H_{2 }^{\prime\prime}+e_{4}h_{1}^{\prime}+e_{5}H_{2}^{\prime}+e_{6}\chi_{1}^{\prime }+e_{7}h_{1}+e_{8}H_{2}+e_{9}\chi_{1}+e_{10}\chi_{2}\,. \tag{4.11}\] The new variable \(\bar{\chi}_{2}\) is used instead of \(\chi_{2}\) in the following discussion. At this point, we will proceed by performing several integrations by parts, as presented in Appendix A. This step is meant to bring the action in a canonical form just before integrating out the nondynamical field \(\bar{H}_{1}\). 
Then, up to boundary terms, we can express \(L_{\rm even}\) in the following form \[L_{\rm even} = A_{ij}^{(1)}\,\dot{\psi}_{i}^{\prime}\,\dot{\psi}_{j}^{\prime}+ A_{ij}^{(2)}\,\dot{\psi}_{i}\,\dot{\psi}_{j}+\frac{1}{2}\,B_{ij}^{(1)}\, \left(\dot{\psi}_{i}^{\prime}\,\dot{\psi}_{j}-\dot{\psi}_{j}^{\prime}\,\dot{ \psi}_{i}\right)+A_{ij}^{(3)}\,\psi_{i}^{\prime\prime\prime}\psi_{j}^{\prime \prime\prime}+\frac{1}{2}\,B_{ij}^{(2)}\,\left(\psi_{i}^{\prime\prime\prime} \,\psi_{j}^{\prime\prime}-\psi_{j}^{\prime\prime\prime}\,\psi_{i}^{\prime \prime}\right) \tag{4.12}\] \[+ A_{ij}^{(4)}\,\psi_{i}^{\prime\prime}\psi_{j}^{\prime\prime}+ \frac{1}{2}\,B_{ij}^{(3)}\,\left(\psi_{i}^{\prime\prime}\,\psi_{j}^{\prime}- \psi_{j}^{\prime\prime}\,\psi_{i}^{\prime}\right)+A_{ij}^{(5)}\,\psi_{i}^{ \prime}\psi_{j}^{\prime}+\frac{1}{2}\,B_{ij}^{(4)}\,\left(\psi_{i}^{\prime} \,\psi_{j}-\psi_{j}^{\prime}\,\psi_{i}\right)+A_{ij}^{(6)}\,\psi_{i}\psi_{j}\] \[+ \frac{1}{2}\alpha M_{\rm Pl}^{2}l(l+1)(\bar{H}_{1}^{\prime})^{2}+ e_{1}\bar{H}_{1}^{2}-M_{\rm Pl}^{2}l(l+1)\bar{H}_{1}\dot{\psi}_{4}\,,\] where \(A_{ij}^{(j)}\)'s and \(B_{ij}^{(j)}\)'s are symmetric and antisymmetric matrices, respectively, and \(\psi_{i}\)'s consist of four dynamical perturbations given by \[\psi_{1}=h_{1}\,,\qquad\psi_{2}=\chi_{1}\,,\qquad\psi_{3}=H_{2}\,,\qquad\psi_{ 4}=\bar{\chi}_{2}\,. \tag{4.13}\] In Eq. (4.12), there is a radial derivative term \((\bar{H}_{1}^{\prime})^{2}\). As such, one is required to set boundary conditions on the nondynamical field \(\bar{H}_{1}\), which may eventually influence the dynamics of other propagating fields, in a fashion similar to the one present in theories that possess shadowy modes [72; 73]. We need to take care when we integrate out the field \(\bar{H}_{1}\) from the second-order action. To compute the speeds of propagation of dynamical perturbations, we need to assume the WKB approximation to hold. This approximation is valid so long as the coefficients of the differential equations are slowly varying functions of \(r\). If we take a closer look at the equation of motion for \(\bar{H}_{1}\) and evaluate it on the Minkowski background for simplicity, we obtain the differential equation \[\bar{H}_{1}^{\prime\prime}-\left[m_{W}^{2}+\frac{l(l+1)}{r^{2}} \right]\bar{H}_{1}+2m_{W}^{2}\dot{\psi}_{4}\simeq 0\,, \tag{4.14}\] where \(m_{W}^{2}\) is given by Eq. (2.9). Recall that \(m_{W}^{2}\) corresponds to the mass squared arising from the Weyl curvature term. For \(\alpha>0\), the general solution to Eq. (4.14) can be expressed as \[\bar{H}_{1} = c_{1}\sqrt{r}I_{l+1/2}(m_{W}r)+c_{2}\sqrt{r}I_{l+1/2}(m_{W}r) \tag{4.15}\] \[+2m_{W}^{2}\sqrt{r}\left[K_{l+1/2}(m_{W}r)\int\sqrt{r}I_{l+1/2}( m_{W}r)\dot{\psi}_{4}{\rm d}r-I_{l+1/2}(m_{W}r)\int\sqrt{r}K_{l+1/2}(m_{W}r) \dot{\psi}_{4}{\rm d}r\right]\,,\] where \(c_{1}\), \(c_{2}\) are integration constants, and \(I_{l+1/2}(x)\) and \(K_{l+1/2}(x)\) are the modified Bessel functions of the first and second kinds, respectively. On using the growing-mode solution \(I_{l+1/2}(m_{W}r)\) in the regime \(m_{W}r\gg 1\), the radial derivative of \(\bar{H}_{1}\) can be estimated as \(|r\bar{H}_{1}^{\prime}/\bar{H}_{1}|\approx m_{W}r\gg 1\). We would like to consider the WKB regime in which the solution to \(\bar{H}_{1}\) is expressed in the form \(\bar{H}_{1}=\bar{\bar{H}}_{1}e^{-i(\omega t-kr)}\), where \(\omega\) and \(k\) are the constant angular frequency and momentum, respectively, and \(\bar{\bar{H}}_{1}\) is constant. 
In this regime, the radial variation of \(\bar{H}_{1}\) is dominated by large momentum modes \(k\), such that \(|r\bar{H}_{1}^{\prime}/\bar{H}_{1}|\approx kr\gg 1\). Since we need to avoid the large radial variation of \(\bar{H}_{1}\) induced by the mass term \(m_{W}\), we require the condition \(m_{W}r\ll 1\). For \(\alpha<0\), the solution to \(\bar{H}_{1}\) can be expressed in terms of \(J_{l+1/2}(\sqrt{-m_{W}^{2}}r)\) and \(Y_{l+1/2}(\sqrt{-m_{W}^{2}}r)\), where \(J_{l+1/2}(x)\) and \(Y_{l+1/2}(x)\) are the Bessel functions of first and second kinds, respectively. In this case the perturbation \(\bar{H}_{1}\) exhibits fast oscillations with respect to \(r\), so the validity of the WKB approximation requires the condition \(\sqrt{-m_{W}^{2}}r\ll 1\). The above discussion shows that for both \(\alpha>0\) and \(\alpha<0\), the condition \(|m_{W}^{2}|r_{h}^{2}\ll 1\) is needed to ensure the validity of the WKB approximation in the vicinity of the BH horizon \(r_{h}\). This translates to the condition \[|\alpha|\gg r_{h}^{2}\,. \tag{4.16}\] In the following, we will consider the finite region of \(r\) satisfying the inequality \[r_{h}^{2}\lesssim r^{2}\ll|\alpha|\,. \tag{4.17}\] Thus, our analysis based on the WKB approximation does not encompass the small coupling region \(|\alpha|\lesssim r_{h}^{2}\) due to the dominance of a heavy mass squared \(|m_{W}^{2}|\). In other words, the linear stability of even-parity perturbations on the Minkowski background with the coupling range \(|\alpha|\ll r_{h}^{2}\) cannot be obtained by simply taking the limit \(r\gg r_{h}\) for the BH stability conditions derived below. We should make it clear that the inequality \(|\alpha|\gg r_{h}^{2}\) is not a condition for stability, but merely a condition under which the WKB approximation works. In other words, in that region of the parameter space, we can trust the approximate but analytic prescriptions presented below. For the parameter range (4.16) there is only the Schwarzschild branch (2.5), but the non-Schwarzschild branch with the metric components (2.6) is not present. Hence the BH linear stability discussed below can be only applied to the Schwarzschild solution (2.5) in the coupling regime where the Weyl curvature term dominates over \(R\). If the parameter space in Weyl gravity lies outside the range (4.16), then the theory might still be unstable, but a different analysis, either analytical or numerical, is necessary to establish the stability of the background BH solution. For instance, with a largely negative value of the squared mass of perturbations, namely for \(-M^{2}<1/\alpha<0\) (where \(M\) is a cutoff mass scale of the theory), one could expect tachyonic instability to occur. For what we have discussed so far, we will not describe how the solutions depend on the choice of boundary conditions for \(\tilde{H}_{1}\) at spatial infinity. Instead, we will only consider some subset of solutions for the theory, which can be treated under the WKB approximation. Then, we assume the solutions to \(\psi_{i}\)'s and \(\tilde{H}_{1}\) in the forms \[\psi_{i}=\tilde{\psi}_{i}e^{-i(\omega t-kr)}\,,\qquad\tilde{H}_{1}=\tilde{ \tilde{H}}_{1}e^{-i(\omega t-kr)}\,, \tag{4.18}\] where \(\tilde{\psi}_{1}=\tilde{h}_{1}\), \(\tilde{\psi}_{2}=\tilde{\chi}_{1}\), \(\tilde{\psi}_{3}=\tilde{H}_{2}\), \(\tilde{\psi}_{4}=\tilde{\tilde{\chi}}_{2}\) and \(\tilde{\tilde{H}}_{1}\) are assumed to be constant. 
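As a small numerical aside, the growth-rate estimate \(|r\bar{H}_{1}^{\prime}/\bar{H}_{1}|\approx m_{W}r\) quoted above for the growing mode of Eq. (4.15) can be checked with scipy; the values \(m_{W}=1\) (so that \(r\) is measured in units of \(1/m_{W}\)) and \(l=2\) are illustrative choices:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

m_W, l = 1.0, 2
r = np.linspace(5.0, 50.0, 400)            # m_W*r well above unity
profile = np.sqrt(r)*iv(l + 0.5, m_W*r)    # growing mode sqrt(r) I_{l+1/2}(m_W r) of Eq. (4.15)
dlog = np.gradient(np.log(profile), r)     # d ln(Hbar_1)/dr
print(dlog[-1]/m_W)                        # close to 1, i.e. |r Hbar_1'/Hbar_1| ~ m_W r for m_W r >> 1
```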
Although we focus only on these solutions, if Laplacian instabilities occur for them, then the SSS background becomes also unstable in general. Under the WKB approximation, we turn our attention to the large \(k\) and \(l\) modes. Furthermore, for any \(r\)-dependent coefficient \(\mathcal{F}(r)\), we will exploit the approximation \((\mathcal{F}(r)\,\psi_{i})^{\prime}\approx\mathcal{F}(r)\,\psi_{i}^{\prime} \to ik\,\mathcal{F}(r)\,\tilde{\psi}_{i}\). In other words, when we vary the Lagrangian (4.12), the components of symmetric matrices \(A_{ij}^{(j)}\)'s and antisymmetric ones \(B_{ij}^{(j)}\)'s can be treated as constants. After integrating out \(\tilde{\tilde{H}}_{1}\), the field equations of motion for the four dynamical perturbations \(\vec{\psi}=(\tilde{h}_{1},\tilde{\chi}_{1},\tilde{H}_{2},\tilde{\tilde{\chi} }_{2})\) can be expressed in the form \[\mathbf{A}_{\rm even}\vec{\psi}^{\rm T}=0\,, \tag{4.19}\] where \(\mathbf{A}_{\rm even}\) is a \(4\times 4\) Hermitian matrix whose components satisfy \((\mathbf{A}_{\rm even})_{ij}=(\mathbf{A}_{\rm even})_{ji}^{*}\). We note that \((\mathbf{A}_{\rm even})_{ij}\)'s contain \(r\)-dependent background functions besides \(k\) and \(l\). ### No-ghost conditions The no-ghost conditions can be derived by considering terms proportional to \(\omega^{2}\) in \(\mathbf{A}_{\rm even}\). We express the matrix in \(\mathbf{A}_{\rm even}\) containing the \(\omega^{2}\) dependence as \(\mathbf{K}_{\rm even}=\omega^{2}\mathbf{K}\), where \(\mathbf{K}\) is a \(4\times 4\) matrix whose components are given by \(K_{ij}\). To avoid ghosts in the even-parity sector, the determinants of submatrices of \(\mathbf{K}\) need to be positive. In the limit of large values of \(k\) and \(l\), the no-ghost conditions translate to \[g_{4} =K_{44}\simeq-\frac{M_{\rm Pl}^{2}\,r^{2}h\,l^{2}}{\alpha\,(k^{2} r^{2}h+l^{2})}>0\,, \tag{4.20}\] \[g_{3} =\frac{K_{33}K_{44}-K_{34}K_{43}}{g_{4}}\simeq\frac{2\alpha M_{ \rm Pl}^{2}l^{2}}{3\sqrt{fh}}>0\,,\] (4.21) \[g_{2} =\frac{1}{g_{3}\,g_{4}}\det\left|\begin{array}{ccc}K_{22}&K_{23 }&K_{24}\\ K_{32}&K_{33}&K_{34}\\ K_{42}&K_{43}&K_{44}\end{array}\right|\simeq-\frac{\alpha\,r^{4}M_{\rm Pl}^{2 }}{6f^{5/2}\sqrt{h}\,l^{2}}>0\,, \tag{4.22}\] \[g_{1} = \frac{\det(\mathbf{K})}{g_{2}\,g_{3}\,g_{4}}\simeq-\frac{7M_{\rm Pl}^{2} (rhf^{\prime}+fh-f)l^{2}}{12\sqrt{fh}(rf^{\prime}-2f)} \tag{4.23}\] \[+ \frac{\alpha M_{\rm Pl}^{2}}{3f^{5/2}\sqrt{h}r^{2}\,(rf^{\prime}-2 f)}\Bigg{\{}\bigg{[}4\left(2k^{2}r^{2}-2k^{2}h^{\prime}r^{3}-l^{2}\right)h^{2}-l^{2} \left(\frac{15rh^{\prime}}{2}+8\right)h+\frac{rh^{\prime}\left(19rh^{\prime}-1 4\right)l^{2}}{4}\bigg{]}\,f^{3}\] \[+ 2\left[\left(k^{2}h^{\prime}r^{3}-2k^{2}r^{2}+\frac{61}{4}l^{2} \right)h^{2}-8h^{3}k^{2}r^{2}+\frac{l^{2}\left(23rh^{\prime}+2\right)h}{8}- \frac{17h^{\prime 2}l^{2}r^{2}}{32}\right]rf^{\prime}f^{2}\] \[+ {hf^{\prime}}^{2}r^{2}\left[10h^{2}k^{2}r^{2}+\left(k^{2}h^{ \prime}r^{3}-15l^{2}\right)h+\frac{13rh^{\prime}l^{2}}{8}\right]f-h^{2}{f^{ \prime}}^{3}\left(h\,k^{2}r^{2}-\frac{27l^{2}}{16}\right)r^{3}\Bigg{\}}>0\,,\] where \(g_{i}\)'s correspond to the eigenvalues of \(\mathbf{K}\). To derive the results (4.20)-(4.23), we have not specified the background to be the Schwarzschild solution (2.5). Irrespective of the signs of \(\alpha\), either of the three conditions (4.20)\(\sim\)(4.22) is violated and hence there is always at least one ghost mode in the even-parity sector. 
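The sign argument behind this conclusion can be made explicit with a short sympy sketch (a cross-check, not part of the original text), which evaluates the signs of (4.20)-(4.22) for either sign of \(\alpha\) with all background quantities taken positive:

```python
import sympy as sp

M, r, f, h, k, l, a = sp.symbols('M_Pl r f h k l a', positive=True)

def no_ghost_signs(alpha):
    g4 = -M**2*r**2*h*l**2/(alpha*(k**2*r**2*h + l**2))              # Eq. (4.20)
    g3 = 2*alpha*M**2*l**2/(3*sp.sqrt(f*h))                          # Eq. (4.21)
    g2 = -alpha*r**4*M**2/(6*f**sp.Rational(5, 2)*sp.sqrt(h)*l**2)   # Eq. (4.22)
    return [sp.simplify(g).is_positive for g in (g4, g3, g2)]

print(no_ghost_signs(a))    # alpha > 0: [False, True, False] -> (4.20) and (4.22) are violated
print(no_ghost_signs(-a))   # alpha < 0: [True, False, True]  -> (4.21) is violated
```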
The fourth condition (4.23) is complicated, but if we evaluate it on the Schwarzschild solution (2.5), we find \[g_{1}=\left[\frac{\left(8\beta^{2}-20\beta+15\right)l^{2}}{4\beta^{3}(\beta- 1)r_{h}^{2}}-\frac{4(\beta-1)(\beta-3)k^{2}}{3\beta^{2}}\right]\alpha M_{\rm Pl }^{2}\,, \tag{4.24}\] where \[\beta\equiv\frac{r}{r_{h}}\,. \tag{4.25}\] In the vicinity of the horizon (\(\beta=1\)), we have the approximate relation \(g_{1}\simeq 3\alpha M_{\rm Pl}^{2}l^{2}/[4r_{h}^{2}(\beta-1)]\) and hence \(g_{1}>0\) for \(\alpha>0\) and \(\beta>1\). ### Radial speeds of propagation To have nonvanishing solutions for \(\vec{\psi}=(\tilde{h}_{1},\tilde{\chi}_{1},\tilde{H}_{2},\tilde{\tilde{\chi}}_ {2})\), we require that Eq. (4.19) obeys the discriminant equation \[\det\mathbf{A}_{\rm even}=0\,. \tag{4.26}\] The propagation speeds \(c_{r}\) along the radial direction can be derived by using the approximation of large values of \(\omega\) and \(k\) in the range \(kr_{h}\gg l\gg 1\). Then, Eq. (4.26) is approximated as \[\mu_{1}k^{2}\omega^{8}+\mu_{2}k^{6}\omega^{6}+\mu_{3}k^{10}\omega^{4}+\mu_{4}k ^{14}\omega^{2}+\mu_{5}k^{16}\simeq 0\,, \tag{4.27}\] where we picked up the dominant contribution in \(k\) for each single \(\omega^{p}\) coefficient with \(p=\{0,2,4,6,8\}\), so that \(\mu_{i}\)'s depend on only \(r\) and \(l\). In general, we do not have the solution \(\omega=0\) to Eq. (4.27). To find approximate dispersion relations which are valid for large values of \(\omega\) and \(k\) (with \(kr_{h}\gg l\gg 1\)), we will proceed as follows. First, we search for solutions of the kind \(\omega=\sqrt{\mathcal{B}}\,k\propto k\). Substituting this expression into Eq. (4.27), it follows that \[k^{16}\left[\mu_{4}\mathcal{B}+\mu_{5}+\mathcal{O}(k^{-2})\right]\simeq 0\,, \tag{4.28}\] and hence we obtain \[\mathcal{B}=-\frac{\mu_{5}}{\mu_{4}}\simeq\sqrt{fh}\,, \tag{4.29}\] where we have finally taken the limit \(l\gg 1\) for the computation of \(\mathcal{B}\). Then, one of the dynamical perturbations has the dispersion relation \[\omega=\sqrt{fh}\,k\,, \tag{4.30}\] so that the corresponding radial propagation speed \(c_{r1,{\rm even}}=(fh)^{-1/2}(\partial\omega/\partial k)\) is given by \[c_{r1,{\rm even}}=1\,, \tag{4.31}\] which is luminal. On the other hand, we look for solutions of the kind \(\omega=\sqrt{\mathcal{C}}\,k^{2}\). In this case, Eq. (4.27) yields \[\mathcal{C}k^{18}\left[\mu_{1}\mathcal{C}^{3}+\mu_{2}\mathcal{C}^{2}+\mu_{3} \mathcal{C}+\mu_{4}+\mathcal{O}(k^{-2})\right]\simeq 0\,, \tag{4.32}\] which leads to three non-trivial solutions for \(\mathcal{C}\). Then, we already have a set of four required dispersion relations corresponding to dynamical perturbations. Indeed, another ansatz, such as \(\omega\propto k^{3}\) would not lead to any nontrivial solutions. For the Schwarzschild solution (2.5) with \(l\gg 1\), the coefficients are approximately given by \[\mu_{1} =l^{8}(\beta-3)\,,\qquad\mu_{2}=-\frac{l^{6}r_{h}^{2}\left(\beta- 1\right)^{2}\left(4\beta^{2}-34\beta+39\right)}{6\beta}\,,\] \[\mu_{3} =\frac{l^{2}r_{h}^{4}\left(352\beta^{4}-2792\beta^{3}+7512\beta^ {2}-8100\beta+2997\right)\left(\beta-1\right)^{4}}{36\beta^{3}}\,,\] \[\mu_{4} =-\frac{r_{h}^{6}\left(\beta-1\right)^{7}\left(28\beta^{2}-122 \beta+105\right)^{2}}{108\beta^{4}}\,. \tag{4.33}\] In general, the solution of Eq. (4.32) depends on the value of \(\beta=r/r_{h}\). Just to give an example, let us study the stability of the Schwarzschild BH at \(\beta=2\). 
In this case, we obtain the three solutions \(\mathcal{C}_{1}=13r_{h}^{2}/(12l^{2})\), \(\mathcal{C}_{2,3}=\pm 9\sqrt{13}r_{h}^{2}/(52l^{3})\), and hence the corresponding propagation speeds are \[c_{\tau 2,\text{even}} = \frac{2\sqrt{39}}{3}\,\frac{k\tau_{h}}{l}\,, \tag{4.34}\] \[c_{\tau 3,\text{even}} = \frac{6}{13^{1/4}}\frac{k\tau_{h}}{l^{3/2}}\,,\] (4.35) \[c_{\tau 4,\text{even}}^{2} = -\frac{36}{\sqrt{13}}\,\frac{k^{2}r_{h}^{2}}{l^{3}}\,. \tag{4.36}\] This shows that two dynamical perturbations corresponding to \(c_{\tau 2,\text{even}}\) and \(c_{\tau 3,\text{even}}\) are superluminal for \(kr_{h}\gg l\gg 1\). Since \(c_{\tau 4,\text{even}}^{2}\) is negative, there is Laplacian instability for one of the dynamical perturbations. In the above discussion, we considered the case \(\beta=r/r_{h}=2\), but the appearance of Laplacian instability for a distance close to the horizon is sufficient to exclude the Schwarzschild BH with the Weyl coupling constant \(\alpha\) in the range (4.16). On using the propagation speed squared (4.36), the time scale of Laplacian instability can be estimated as \[t_{\text{ins}}\approx\frac{1}{\sqrt{l}}\left(\frac{l}{kr_{h}}\right)^{2}r_{h} \ll r_{h}\,. \tag{4.37}\] For the Schwarzschild BH with the horizon size \(r_{h}\approx 10\) km, we have \(t_{\text{ins}}\ll 3\times 10^{-5}\) s. It is worthy of mentioning that the angular frequency \(\omega\) of long-wavelength Gregory-Laflamme type instability found for the Schwarzschild BH with the coupling \(|\alpha|\gg r_{h}^{2}\) is in the range \(\omega r_{h}\lesssim 0.1\)[60]. The time scale of this long-wavelength instability is larger than the order \(10r_{h}\), so the Laplacian instability mentioned above destabilizes the Schwarzschild BH much more quickly. In particular, because of the peculiar dispersion relation \(\omega\propto k^{2}\), the modes with larger values of \(k\) are subject to Laplacian instabilities with shorter time scales. We note that there are also mass terms for the dynamical perturbations \(\vec{\psi}\). By looking at the equations of motion and setting to vanish the \(r\)-derivatives of \(\vec{\psi}\), one finds that typical squared mass terms at large distances are of order \(\mathcal{O}(1/|\alpha|)\) (as in the case of odd-parity perturbations). Therefore, the Laplacian instability occurs if the parameters satisfy the constraints \(r_{h}^{2}\ll|\alpha|\) and \(|\alpha|^{-1}\ll|\omega^{2}|\ll M^{2}\), where \(M\) is a cutoff of the theory at most of order \(M_{\text{Pl}}\). Using the instability mode (4.36), for instance, these conditions are expressed as \[|m_{W}^{2}|=\frac{1}{2|\alpha|}\ll\frac{1}{r_{h}^{2}}\,,\qquad\text{and}\qquad |m_{W}^{2}|\ll\frac{9\sqrt{13}}{52}\,\frac{1}{r_{h}^{2}}\frac{(kr_{h})^{4}}{l ^{3}}\ll M^{2}\,. \tag{4.38}\] Since we are considering the radial propagation with \(kr_{h}\gg l\gg 1\), the latter inequality is satisfied for \(r_{h}^{-1}\ll M\). For a BH with the solar mass \(M_{\odot}=2\times 10^{30}\) kg, this range corresponds to \(r_{h}^{-1}=6.68\times 10^{-20}\) GeV \(\ll M\lesssim M_{\text{Pl}}\), with \(\sqrt{|m_{W}^{2}|}\ll 6.68\times 10^{-20}\) GeV. The finiteness of \(m_{W}^{2}=1/(2\alpha)\) (for large values of \(r\)) is associated with the fact that the eigenvalues of \(\mathbf{K}\) do not vanish for \(r>0\). The same thing was also happening for odd-parity perturbations in current theory. 
However, this is in contrast with the behavior of the kinetic matrix of odd-parity perturbations in Einsteinian cubic gravity [57], where \(\det(\mathbf{K})\) approaches \(0\) at spatial infinity. The latter behavior is typically a signal of the strong coupling problem [55; 56; 58]. In Einsteinian cubic gravity, this property also gives rise to a mass term of odd-parity perturbations growing as a function of \(r\). In quadratic Weyl gravity, the strong coupling problem is absent for both odd- and even-parity perturbations. On the SSS background, we have shown that there are seven dynamical DOFs in total, whose number coincides with those obtained on the Minkowski and isotropic cosmological backgrounds [45; 46; 48]. This fact also supports the absence of a strong coupling problem for the propagating modes. It should be noticed that the non-Schwarzschild BH branch with the metric components (2.6) is present for the quantity \(p=r_{h}/\sqrt{2\alpha}\) in the range (2.7), i.e.,
\[|\alpha|=\mathcal{O}(r_{h}^{2})\,.\tag{109}\]
This is the regime in which the WKB approximation starts to lose its validity. Hence our analysis based on the WKB approximation does not address the linear stability of such non-Schwarzschild BHs. To see whether some instabilities arise for BHs in the coupling range \(|\alpha|\lesssim r_{h}^{2}\), we need to resort to numerical integration by setting proper boundary conditions for the perturbations (in particular, \(\bar{H}_{1}\)) at spatial infinity and on the horizon.

### Angular speeds of propagation

To study the BH stability along the angular direction, we consider solutions to the discriminant Eq. (4.26) in the other limit \(\omega\simeq l/r_{h}\gg k\gg r_{h}^{-1}\). In this case, there are four solutions with the dispersion relation \(\omega\propto l\), where \(\omega\) obeys
\[\left(r^{2}\omega^{2}-fl^{2}\right)\left(\tilde{\mu}_{1}\omega^{6}+\tilde{\mu}_{2}l^{2}\omega^{4}+\tilde{\mu}_{3}l^{4}\omega^{2}+\tilde{\mu}_{4}l^{6}\right)=0\,,\tag{110}\]
where the \(\tilde{\mu}_{i}\)'s are \(r\)-dependent coefficients. The angular propagation speed \(c_{\Omega}=r\mathrm{d}\theta/\mathrm{d}\tau\), where \(\tau\) is the proper time, can be expressed as \(c_{\Omega}=r\omega/(\sqrt{f}l)\). For the Schwarzschild solution (2.5) with \(\beta=r/r_{h}\), the discriminant equation (110) reduces to
\[\left(c_{\Omega}^{2}-1\right)\left\{81\,[15+4\beta(2\beta-5)]\,c_{\Omega}^{6}+9[795+4\beta(116\beta-343)]c_{\Omega}^{4}-3[5697+4\beta(850\beta-2403)]c_{\Omega}^{2}+8865+20\beta(272\beta-753)\right\}=0\,.\tag{111}\]
In this equation, the coefficient of \(c_{\Omega}^{6}\) never vanishes for real values of \(\beta\). On the contrary, the last term, which does not have the \(c_{\Omega}\) dependence, may vanish for finite values of \(\beta\) in the range \(\beta>1\). However, that would only mean that we need to consider the next-to-leading-order coefficient \(\tilde{\mu}_{4}\) in Eq. (110). As an example, let us consider the BH stability at the distance \(\beta=r/r_{h}=2\). In this case, we have the following solutions
\[c_{\Omega 1,\mathrm{even}}^{2}=1\,,\tag{112}\]
\[c_{\Omega 2,\mathrm{even}}^{2}\simeq-0.7291\,,\tag{113}\]
\[c_{\Omega 3,\mathrm{even}}^{2}\simeq 1.1026-0.0761i\,,\tag{114}\]
\[c_{\Omega 4,\mathrm{even}}^{2}\simeq 1.1026+0.0761i\,.\tag{115}\]
The first value (112) corresponds to luminal propagation. Since the second propagation speed squared (113) is negative, there is Laplacian instability for this mode.
The third and fourth modes have the following time dependence
\[\psi_{i}\propto e^{-i\omega t}\propto\exp\biggl(\mp\frac{0.0128\,l}{r_{h}}t\biggr)\,,\tag{116}\]
where the minus and plus signs correspond to Eqs. (114) and (115), respectively, and we have neglected an oscillating term. While the amplitude of the third mode decreases in time, the fourth one exhibits exponential growth with oscillations. The time scale of Laplacian instabilities for the modes (113) and (115) can be estimated as \(t_{\mathrm{ins}}=\mathcal{O}(r_{h}/l)\) and \(t_{\mathrm{ins}}=\mathcal{O}(10^{2}r_{h}/l)\), respectively. For sufficiently large multipoles \(l\), these instability time scales are less than \(r_{h}\). The angular Laplacian instabilities are physical for the parameters in the ranges \(r_{h}^{2}\ll|\alpha|\) and \(|\alpha|^{-1}\ll|\omega^{2}|\ll M^{2}\), where \(M\) is the cutoff mass scale. On using the instability modes discussed above, these conditions translate to
\[|m_{W}^{2}|=\frac{1}{2|\alpha|}\ll\frac{1}{r_{h}^{2}}\,,\qquad\text{and}\qquad|m_{W}^{2}|\ll\frac{l^{2}}{r_{h}^{2}}\ll M^{2}\,,\tag{117}\]
where the latter gives the inequality \(r_{h}^{-2}\ll M^{2}\). These conditions are the same as those discussed for the radial propagating modes.

## V Conclusions

In this paper, we have investigated the linear stability of BHs in quadratic Weyl gravity given by the action (2.1). This theory generates derivatives of the metric higher than second order in the field equations of motion. On the SSS background (2.2) there are derivatives up to fourth order in the metric components \(f\) and \(h\), but they can be eliminated to give the second-order differential Eqs. (2.3) and (2.4). For any arbitrary Weyl coupling \(\alpha\), the Schwarzschild background (2.5) is always a solution to this theory. For the particular coupling range \(0.876\leq r_{h}/\sqrt{2\alpha}\leq 1.143\), there are also non-Schwarzschild hairy solutions where the metric components are approximately given by Eq. (2.6). In Sec. III, we studied the propagation of metric perturbations in the odd-parity sector. Since the second-order action of odd-parity perturbations can be expressed as Eq. (3.5) with the Lagrangian (3.6), there are three dynamical perturbations \(W\), \(Q\), and \(\chi\), where \(\chi\) contains the second time derivative of \(W\) as in Eq. (3.4). On using the WKB approximation with large values of the angular frequency \(\omega\), wavenumber \(k\), and multipole \(l\), the equations of motion of dynamical perturbations reduce to the form (3.10) with (3.11). We showed that there is at least one ghost mode in the odd-parity sector, but the strong coupling problem associated with a vanishing determinant of the kinetic matrix is absent in Weyl gravity. Moreover, independent of the radial distance \(r\), the squared masses of dynamical DOFs are at most of the order \(m_{W}^{2}=1/(2\alpha)\). These properties are different from those in Einsteinian cubic gravity, where the strong coupling problem leads to the blow-up of mass terms of odd-parity perturbations at large distances [57]. We also found that, under the WKB approximation, the speeds of propagation of odd modes along the radial and angular directions are all equivalent to 1. Since we did not specify the background solution to derive these results, they are valid for both the Schwarzschild and non-Schwarzschild BHs. Moreover, we do not need to specify the range of the Weyl coupling constant \(\alpha\) relative to \(r_{h}^{2}\).
In the even-parity sector, we first introduced two Lagrange multipliers \(\chi_{1}\) and \(\chi_{2}\) to remove several higher-order time derivatives from the second-order perturbed action. Defining the rescaled fields \(\bar{H}_{1}\) and \(\bar{\chi}_{2}\) as in Eqs. (9) and (11), respectively, we showed that the second-order Lagrangian can be expressed as Eq. (12) with four dynamical perturbations \(\vec{\psi}=(h_{1},\chi_{1},H_{2},\bar{\chi}_{2})\). The Lagrangian also contains contributions of the nondynamical perturbation \(\bar{H}_{1}\) and its radial derivative. On the background close to the Minkowski spacetime, the field \(\bar{H}_{1}\) obeys the differential Eq. (14), whose solution is given by Eq. (15). To ensure the validity of the WKB approximation in the vicinity of the horizon, we require that the Weyl mass squared should be in the range \(|m_{W}^{2}|r_{h}^{2}\ll 1\), i.e., \(|\alpha|\gg r_{h}^{2}\). This is the regime in which only the Schwarzschild branch (5) is present. Then, we investigated the linear stability of Schwarzschild BHs for the distance \(r\) close to the horizon (\(r_{h}^{2}\lesssim r^{2}\ll|\alpha|\)). Among the four dynamical perturbations in the even-parity sector, there is at least one ghost mode. For large radial momentum modes with \(kr_{h}\gg l\gg 1\), we found that one of the dispersion relations is given by \(\omega=\sqrt{fh}\,k\) and hence its propagation is luminal. The other three dynamical perturbations satisfy the unusual dispersion relations \(\omega\propto k^{2}\). At the distance \(\beta=r/r_{h}=2\) in the vicinity of the horizon, we showed that one of the squared propagation speeds \(c_{r4,\text{even}}^{2}\) is largely negative. This leads to the strong Laplacian instability whose time scale is much shorter than \(r_{h}\). For the coupling \(|\alpha|\gg r_{h}^{2}\), it is also known that there is a long-wavelength instability for Schwarzschild BHs [60]. However, the time scale of this Gregory-Laflamme type instability is greater than the order \(10r_{h}\), so the Laplacian instability found in this paper destabilizes the Schwarzschild BH much more quickly. For high angular momentum modes of even-parity perturbations (\(l\gg kr_{h}\gg 1\)), one of the dynamical DOFs propagates luminally. At the distance \(\beta=r/r_{h}=2\), we found that two dynamical perturbations are prone to Laplacian instabilities, with time scales shorter than \(r_{h}\) for large multipoles. We also note that the squared masses of even modes are at most of the order \(m_{W}^{2}=1/(2\alpha)\) at large distances and that the strong coupling problem is absent in the even-parity sector. Provided the model parameters are in the range \(|\alpha|^{-1}\ll r_{h}^{-2}\ll M^{2}\), where \(M\) is the cutoff mass scale at most of order \(M_{\text{Pl}}\), the Schwarzschild BH is excluded by Laplacian instabilities both along the radial and angular directions. Several issues may deserve further investigation. First of all, we did not address the stabilities of Schwarzschild and non-Schwarzschild BHs for the coupling range \(|\alpha|\lesssim r_{h}^{2}\) due to the limitation of the WKB approximation in the even-parity sector. For this purpose, we need to solve the differential equation for \(\bar{H}_{1}\) coupled with other dynamical perturbation equations of motion by giving proper boundary conditions on the horizon and at spatial infinity. 
It is also worth implementing the quadratic Ricci scalar term \(\beta R^{2}\) in the action to see how the propagation of dynamical perturbations is modified. These issues are left for future work. ## Acknowledgements The work of ADF was supported by the Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research No. 20K03969 and by grant PID2020-118159GB-C41 funded by MCIN/AEI/10.13039/501100011033. ST is supported by the Grant-in-Aid for Scientific Research Fund of the JSPS No. 22K03642 and Waseda University Special Research Project No. 2023C-473. ## Appendix A Integrating the even-parity second-order action by parts To derive the second-order action of even-parity perturbations in the form (4.12), we perform the following integrations by parts: 1. \(A\dot{\psi}^{\prime\prime}\dot{\psi}\to-A(\dot{\psi}^{\prime})^{2}+\frac{1}{2} A^{\prime\prime}\dot{\psi}^{2}\,,\) 2. \(A\dot{\psi}^{\prime}\dot{\psi}\to-\frac{1}{2}\,A^{\prime}\,\dot{\psi}^{2}\,,\) 3. \(A\psi^{\prime\prime}\psi\to-A\,(\psi^{\prime})^{2}+\frac{1}{2}\,A^{\prime\prime }\,\psi^{2}\,,\) 4. \(A\psi^{\prime\prime}\psi^{\prime}\to-\frac{1}{2}\,A^{\prime}\,(\psi^{\prime} )^{2}\,,\) 5. \(A\psi^{\prime}\psi\to-\frac{1}{2}\,A^{\prime}\,\psi^{2}\,,\) 6. \(A\dot{\psi}\psi\to 0\,,\) 7. \(A\dot{\psi}^{\prime\prime}_{1}\dot{\psi}_{2}+B\dot{\psi}^{\prime\prime}_{2} \dot{\psi}_{1}\to-(A+B)\dot{\psi}^{\prime}_{1}\dot{\psi}^{\prime}_{2}+\frac{1 }{2}(A^{\prime\prime}+B^{\prime\prime})\dot{\psi}_{1}\dot{\psi}_{2}-\frac{1}{ 2}(A^{\prime}-B^{\prime})(\dot{\psi}^{\prime}_{1}\dot{\psi}_{2}-\dot{\psi}_{1 }\dot{\psi}^{\prime}_{2})\,,\) 8. \(A\dot{\psi}_{1}\psi^{\prime}_{2}+B\dot{\psi}_{2}\psi^{\prime}_{1}\to\frac{1}{2 }\,(A+B)\,(\dot{\psi}_{1}\psi^{\prime}_{2}+\dot{\psi}_{2}\psi^{\prime}_{1})+ \frac{1}{4}\,(A^{\prime}-B^{\prime})\,(\psi_{1}\dot{\psi}_{2}-\psi_{2}\dot{ \psi}_{1})\,,\) 9. \(A\psi_{1}\dot{\psi}_{2}+B\psi_{2}\dot{\psi}_{1}\rightarrow\frac{1}{2}\,(A-B)( \dot{\psi}_{1}\dot{\psi}_{2}-\psi_{2}\dot{\psi}_{1})\,,\) 10. \(A\dot{\psi}^{\prime}_{1}\dot{\psi}_{2}+B\dot{\psi}^{\prime}_{1}\dot{\psi}^{ \prime}_{2}\rightarrow\frac{1}{2}\,(A-B)\,(\dot{\psi}^{\prime}_{1}\dot{\psi}_{ 2}-\dot{\psi}^{\prime}_{2}\dot{\psi}_{1})-\frac{1}{2}\,(A^{\prime}+B^{\prime}) \dot{\psi}_{1}\dot{\psi}_{2}\,,\) 11. \(A\dot{\psi}^{\prime}_{1}\psi^{\prime\prime}_{2}+B\dot{\psi}^{\prime}_{2}\psi^{ \prime\prime}_{1}\rightarrow\frac{1}{2}\,(A+B)(\dot{\psi}^{\prime}_{1}\psi^{ \prime\prime}_{2}+\dot{\psi}^{\prime}_{2}\psi^{\prime\prime}_{1})-\frac{1}{4} \,(A^{\prime}-B^{\prime})(\dot{\psi}^{\prime}_{1}\psi^{\prime}_{2}-\dot{\psi}^ {\prime}_{2}\psi^{\prime}_{1})\,,\) 12. \(A\psi^{\prime\prime\prime}\psi^{\prime\prime}\to-\frac{1}{2}\,A^{\prime}\,( \psi^{\prime\prime})^{2}\,,\) 13. \(A\psi^{\prime\prime\prime}\psi\to-A\,(\psi^{\prime\prime})^{2}+\frac{1}{2}\,A ^{\prime\prime\prime}\,(\psi^{\prime})^{2}\,,\) 14. \(A\psi^{\prime\prime\prime}\psi\to\frac{3}{2}\,A^{\prime}\,(\psi^{\prime})^{2}- \frac{1}{2}\,A^{\prime\prime\prime}\,(\psi)^{2}\,,\) 15. \(A\psi^{\prime}_{1}\psi_{2}+B\psi_{1}\psi^{\prime}_{2}\rightarrow\frac{1}{2}\,(A -B)(\psi^{\prime}_{1}\psi_{2}-\psi_{1}\psi^{\prime}_{2})-\frac{1}{2}\,(A^{ \prime}+B^{\prime})\psi_{1}\psi_{2}\,,\) 16. \(A\psi^{\prime\prime}_{1}\psi^{\prime}_{2}+B\psi^{\prime}_{1}\psi^{\prime\prime }_{2}\rightarrow\frac{1}{2}\,(A-B)(\psi^{\prime\prime}_{1}\psi^{\prime}_{2}- \psi^{\prime}_{1}\psi^{\prime\prime}_{2})-\frac{1}{2}\,(A^{\prime}+B^{\prime}) \psi^{\prime}_{1}\psi^{\prime}_{2}\,,\) 17. 
\(A\psi^{\prime\prime}_{1}\psi_{2}+B\psi_{1}\psi^{\prime\prime}_{2}\to-(A+B) \psi^{\prime}_{1}\psi^{\prime}_{2}-\frac{1}{2}\,(A^{\prime}-B^{\prime})(\psi^{ \prime}_{1}\psi_{2}-\psi_{1}\psi^{\prime}_{2})+\frac{1}{2}\,(A^{\prime\prime }+B^{\prime\prime})\psi_{1}\psi_{2}\,,\) 18. \(A\psi^{\prime\prime\prime}_{1}\psi_{2}\!+\!B\psi_{1}\psi^{\prime\prime\prime}_{2 }\!\rightarrow\!-\frac{1}{2}\,(A\!-\!B)(\psi^{\prime\prime}_{1}\psi^{\prime}_{2 }\!-\!\psi^{\prime}_{1}\psi^{\prime\prime}_{2})\!+\!\frac{3}{2}\,(A^{\prime}\!+ \!B^{\prime})\,\psi^{\prime}_{1}\psi^{\prime}_{2}\!+\!\frac{1}{2}\,(A^{\prime \prime}\!-\!B^{\prime\prime})(\psi^{\prime}_{1}\psi_{2}\!-\!\psi_{1}\psi^{\prime }_{2})\!-\!\frac{1}{2}\,(A^{\prime\prime\prime}\!+\!B^{\prime\prime\prime})\psi_ {1}\psi_{2}\,,\) 19. \(A\psi^{\prime\prime\prime}_{1}\psi^{\prime\prime}_{2}+B\psi^{\prime\prime\prime}_{ 2}\psi^{\prime\prime}_{1}\rightarrow\frac{1}{2}\,(A-B)(\psi^{\prime\prime\prime}_{ 1}\psi^{\prime\prime}_{2}-\psi^{\prime}_{1}\psi^{\prime\prime\prime}_{2})-\frac{1}{ 2}\,(A^{\prime}+B^{\prime})\,\psi^{\prime\prime}_{1}\psi^{\prime\prime}_{2}\,,\) 20. \(A\psi^{\prime\prime}_{1}\psi^{\prime}_{2}+B\psi^{\prime\prime\prime}_{2}\psi^{ \prime}_{1}\rightarrow\frac{1}{2}\,(A-B)(\psi^{\prime\prime}_{1}\psi^{\prime}_{2}- \psi^{\prime}_{1}\psi^{\prime\prime}_{2})-\frac{1}{2}\,(A^{\prime}+B^{\prime}) \,\psi^{\prime}_{1}\psi^{\prime}_{2}\,,\) 21. \(A\psi^{\prime\prime\prime}_{1}\psi^{\prime}_{2}+B\psi^{\prime\prime\prime}_{ 2}\psi^{\prime}_{1}\rightarrow-(A+B)\,\psi^{\prime\prime}_{1}\psi^{\prime \prime\prime}_{2}-\frac{1}{2}\,(A^{\prime}-B^{\prime})(\psi^{\prime\prime}_{1} \psi^{\prime}_{2}-\psi^{\prime}_{1}\psi^{\prime\prime}_{2})+\frac{1}{2}\,(A^{ \prime\prime}+B^{\prime\prime})\,\psi^{\prime}_{1}\psi^{\prime}_{2}\,,\) where \(A\) and \(B\) are \(r\)-dependent functions.
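As a consistency check of the reduction formulas listed above (an illustration added here, not part of the original appendix), individual identities can be verified with sympy by exhibiting the corresponding boundary term explicitly; items 2 and 12 serve as examples:

```python
import sympy as sp

r, t = sp.symbols('r t')
A   = sp.Function('A')(r)
psi = sp.Function('psi')(r, t)

# item 2:  A psidot' psidot -> -(1/2) A' psidot^2, up to the total derivative d/dr[(1/2) A psidot^2]
lhs2  = A*sp.diff(psi, t, r)*sp.diff(psi, t)
rhs2  = -sp.Rational(1, 2)*sp.diff(A, r)*sp.diff(psi, t)**2
bdry2 = sp.diff(sp.Rational(1, 2)*A*sp.diff(psi, t)**2, r)
print(sp.simplify(lhs2 - rhs2 - bdry2))     # 0

# item 12: A psi''' psi'' -> -(1/2) A' (psi'')^2, up to d/dr[(1/2) A (psi'')^2]
lhs12  = A*sp.diff(psi, r, 3)*sp.diff(psi, r, 2)
rhs12  = -sp.Rational(1, 2)*sp.diff(A, r)*sp.diff(psi, r, 2)**2
bdry12 = sp.diff(sp.Rational(1, 2)*A*sp.diff(psi, r, 2)**2, r)
print(sp.simplify(lhs12 - rhs12 - bdry12))  # 0
```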
2309.01653
Simulations of real-time system identification for superconducting cavities with a recursive least-squares algorithm
We explore the performance of a recursive least-squares algorithm to determine the bandwidth $\omega_{12}$ and the detuning $\Delta\omega$ of a superconducting cavity. We base the simulations on parameters of the ESS double-spoke cavities. Expressions for the signal-to-noise ratio of derived parameters are given to explore the applicability of the algorithm to other configurations.
Volker Ziemann
2023-09-04T15:06:33Z
http://arxiv.org/abs/2309.01653v2
Simulations of real-time system identification for superconducting cavities with a recursive least-squares algorithm ###### Abstract We explore the performance of a recursive least-squares algorithm to determine the bandwidth \(\omega_{12}\) and the detuning \(\Delta\omega\) of a superconducting cavity. We base the simulations on parameters of the ESS double-spoke cavities. Expressions for the signal-to-noise ratio of derived parameters are given to explore the applicability of the algorithm to other configurations. ## I Introduction Superconducting accelerating cavities are used to accelerate protons [1; 2], electrons [3; 4; 5], and heavy ions [6; 7; 8], both with pulsed [1; 4] and with continuous beams [3; 9]. Owing to the low losses, the cavities have a very narrow bandwidth on the order of Hz for bare cavities and a few \(100\,\mathrm{Hz}\) for cavities equipped with high-power couplers. In order to efficiently cool these cavities with liquid helium they are made of rather thin material, which makes them easily deformable and this changes their resonance frequency, often by an amount comparable to their bandwidth. In pulsed operation, the dominant deformation comes from the electro-magnetic pressure of the field inside the cavity, the Lorentz-force detuning [10; 11], while cavities operating continuously are perturbed by so-called microphonics [12; 13], caused by pressure variations of the liquid helium bath or mechanical perturbations, for example, by reciprocating pumps or by malfunctioning equipment. As a consequence of these perturbations, the cavities are detuned and force the power generators to increase their output to maintain fields necessary for stable operation of the beams. This reduces the efficiency of the system and requires an, often substantial, overhead of the power generation, forcing it to operate at a less than optimal working point. To avoid this sub-optimal mode of operation and to compensate the detuning, many accelerators employ active tuning systems that use stepper motors and piezo-actuators [14] to squeeze the cavities back in tune, which requires diagnostic systems to measure the detuning. These measurements are usually based on comparing the phase of the signal that excites the cavity, measured with a directional coupler just upstream of the input coupler, to the phase of the field inside the cavity, measured by a field probe or antenna inside the cavity. Both analog [12; 15] and digital [16; 17] signal processing systems are used; often as part of the low-level radio-frequency (LLRF) feedback system that stabilizes the fields in the cavity. Even more elaborate systems, based on various system identification algorithms, are used or planned [18; 19; 20; 21]. All these algorithms normally rely on low-pass filtering the often noisy signals from the directional couplers and antennas in order to provide a reliable estimate of the cavity detuning and the bandwidth. In this report, we focus on a complementary algorithm that continuously improves the estimated fit parameters by increasing the size of a system of equations. Instead of solving this rapidly growing system directly, we employ a recursive least-squares (RLS) algorithm [22; 23], which only requires moderate numerical expenditure in each time step. Remarkably, asymptotically the difference between the continuously improving estimates of the fit parameters and the "true" values--the so-called estimation error--approaches zero [24] albeit at the expense of a limited ability to resolve changing parameters. 
We therefore introduce a finite memory when solving the system, which downgrades old measurements in favor of new ones. This allows us to handle even changing parameters at the expense of an increased noise level of the fit parameters. In the following sections, we first introduce the model of the cavity and transform the continuous-time model to discrete time. In Section III we develop the RLS algorithm to identify the cavity parameters. In Section IV we explore the capabilities of the algorithm in simulations before calculating the signal-to-noise ratio in Section V and the conclusions. ## II Model Accelerating cavities can be described by an equivalent circuit composed of a resistor \(R\), an inductance \(L\), and a capacitor \(C\), all connected in parallel. This circuit is then excited by a current \(I\) and responds by a building up a voltage \(V=V_{r}+iV_{i}\) across the components. This voltage is decomposed into real (in-phase, I) and imaginary (out-of-phase, Q) components. After averaging over the fast oscillations the evolution of the real and imaginary parts of the voltage envelope is given by the following state-space representation [16] \[\left(\begin{array}{c}\frac{dV_{r}}{dt}\\ \frac{dV_{i}}{dt}\end{array}\right)=\left(\begin{array}{cc}-\omega_{12}&- \Delta\omega\\ \Delta\omega&-\omega_{12}\end{array}\right)\left(\begin{array}{c}V_{r}\\ V_{i}\end{array}\right)+\left(\begin{array}{cc}\omega_{12}R&0\\ 0&\omega_{12}R\end{array}\right)\left(\begin{array}{c}I_{r}\\ I_{i}\end{array}\right) \tag{1}\] of the system that describes the dynamics of the cavity voltage powered by a generator that provides the currents. The directional couplers used to measure the input signal, however, measure the forward component of the current \(\vec{I}^{+}\) rather than the total current \(\vec{I}=\vec{I}^{+}+\vec{I}^{-}\). Close to resonance, it is straightforward to show that the measured forward current \(\vec{I}^{+}\), which proportional to the signal from the directional coupler, is related to the total current \(\vec{I}\) by \[\vec{I}=\frac{2\beta}{1+\beta}\vec{I}^{+}=\frac{2Q_{L}}{Q_{E}}\vec{I}^{+} \tag{2}\] with the coupling factor \(\beta=Q_{0}/Q_{E}\) given by the ratio of the intrinsic quality factor of the cavity \(Q_{0}\) and the external quality factor \(Q_{E}\). Moreover, \(1/Q_{L}=1/Q_{0}+1/Q_{E}=(1+\beta)/Q_{0}\) defines the loaded quality factor \(Q_{L}\). Replacing the currents on the right-hand side of Equation 1 with the help of Equation 2 then leads to \[\left(\begin{array}{c}\frac{dV_{r}}{dt}\\ \frac{dV_{i}}{dt}\end{array}\right)=\left(\begin{array}{cc}-\omega_{12}&- \Delta\omega\\ \Delta\omega&-\omega_{12}\end{array}\right)\left(\begin{array}{c}V_{r}\\ V_{i}\end{array}\right)+\left(\begin{array}{cc}\omega_{E}R&0\\ 0&\omega_{E}R\end{array}\right)\left(\begin{array}{c}I_{r}^{+}\\ I_{i}^{+}\end{array}\right) \tag{3}\] with \(\omega_{E}=\hat{\omega}/Q_{E}\). We also introduce \(\omega_{12}=\hat{\omega}/2Q_{L}\) and the cavity resonance frequency \(\hat{\omega}\). Furthermore, we assume that magnitude and phase of all currents and voltages can be reliably measured after the hardware (antennas, cables, and amplifiers) is properly calibrated. Equation 3 is in the standard form of a linear dynamical system \(\dot{\vec{V}}=\bar{A}\vec{V}+\bar{B}\vec{I}^{+}\) where \(\vec{V}\) is the column vector with real and imaginary part of the voltages and \(\vec{I}^{+}\) that of the forward currents. 
The matrices \(\bar{A}\) and \(\bar{B}\) correspond to those in Equation 3 and are given by \[\bar{A}=\left(\begin{array}{cc}-\omega_{12}&-\Delta\omega\\ \Delta\omega&-\omega_{12}\end{array}\right)\qquad\mbox{and}\qquad\bar{B}=\left( \begin{array}{cc}\omega_{E}R&0\\ 0&\omega_{E}R\end{array}\right)\;. \tag{4}\] For the simulations we will convert the continuous-time system from Equation 3 to discrete time with time step \(\Delta t\), which corresponds to the sampling time if the system is implemented digitally. By replacing the derivatives of the voltages by finite differences \[\frac{d\vec{V}}{dt}\rightarrow\frac{\vec{V}_{t+1}-\vec{V}_{t}}{\Delta t} \tag{5}\] where we label the time steps by \(t\), Equation 3 becomes \[\vec{V}_{t+1}=A\vec{V}_{t}+B\vec{I}_{t}^{+}+\vec{w}_{t}\quad\text{with}\quad A= \left(\begin{array}{cc}1-\omega_{12}\Delta t&-\Delta\omega\Delta t\\ \Delta\omega\Delta t&1-\omega_{12}\Delta t\end{array}\right)\;, \tag{6}\] \(B=\omega_{E}\Delta tR\mathbf{1}\), and the process noise \(\vec{w}_{t}\). We assume that the noise is uncorrelated and has magnitude \(\sigma_{p}\). It is thus characterized by its expectation value \(E\left\{\vec{w}_{t}\vec{w}_{s}^{\top}\right\}=\sigma_{p}^{2}\delta_{ts} \mathbf{1}\). We add measurement noise \(\vec{w}_{t}^{\prime}\) by using \[\vec{V}_{t}^{\prime}=\vec{V}_{t}+\vec{w}_{t}^{\prime} \tag{7}\] in the system identification process. We assume it is uncorrelated, has magnitude \(\sigma_{m}\), and is characterized by \(E\left\{\vec{w}_{t}^{\prime}\vec{w}_{s}^{\prime\top}\right\}=\sigma_{m}^{2} \delta_{ts}\mathbf{1}\). ## III System identification Now we turn to the task of extracting \(\omega_{12}\Delta t\) and \(\Delta\omega\Delta t\) from continuously measured voltages \(\vec{V}_{t}^{\prime}\) and currents \(\vec{I}_{t}^{+}\). In order to isolate the sought parameters, we rewrite Equation 6 in the form \[\vec{V}_{t+1}^{\prime}=\left(\mathbf{1}+F\right)\vec{V}_{t}^{\prime}+B\vec{I} _{t}^{+}\qquad\text{with}\qquad F=\left(\begin{array}{cc}-\omega_{12}\Delta t &-\Delta\omega\Delta t\\ \Delta\omega\Delta t&-\omega_{12}\Delta t\end{array}\right) \tag{8}\] and \(B=\omega_{E}\Delta tR\mathbf{1}\). After reorganizing this equation to \[\vec{V}_{t+1}^{\prime}-\vec{V}_{t}^{\prime}-B\vec{I}_{t}^{+}=F\vec{V}_{t}^{\prime} \tag{9}\] we rewrite \(F\vec{V}_{t}^{\prime}\) on the right-hand side as \[F\vec{V}_{t}^{\prime}=-\omega_{12}\Delta t\left(\begin{array}{c}V_{r}^{ \prime}\\ V_{i}^{\prime}\end{array}\right)_{t}+\Delta\omega\Delta t\left(\begin{array}[] {c}-V_{i}^{\prime}\\ V_{r}^{\prime}\end{array}\right)_{t}=\left(\begin{array}{cc}-V_{r}^{\prime}&- V_{i}^{\prime}\\ -V_{i}^{\prime}&V_{r}^{\prime}\end{array}\right)_{t}\left(\begin{array}{c} \omega_{12}\Delta t\\ \Delta\omega\Delta t\end{array}\right)\;. \tag{10}\] We now introduce the abbreviations \[G_{t}=\left(\begin{array}{cc}-V_{r}^{\prime}&-V_{i}^{\prime}\\ -V_{i}^{\prime}&V_{r}^{\prime}\end{array}\right)_{t}\qquad\text{and}\qquad \vec{y}_{t+1}=\vec{V}_{t+1}^{\prime}-\vec{V}_{t}^{\prime}-B\vec{I}_{t}^{+} \tag{11}\] and stack Equation 9 for consecutive times on top of each other. 
In this way, we obtain a growing system of equations to determine \(\omega_{12}\Delta t\) and \(\Delta\omega\Delta t\) \[\left(\begin{array}{c}\vec{y}_{2}\\ \vec{y}_{3}\\ \vdots\\ \vec{y}_{T+1}\end{array}\right)=U_{T}\left(\begin{array}{c}\omega_{12} \Delta t\\ \Delta\omega\Delta t\end{array}\right)\quad\text{with}\quad U_{T}=\left( \begin{array}{c}G_{1}\\ G_{2}\\ \vdots\\ G_{T}\end{array}\right) \tag{12}\] that we solve in the least-squares sense with the Moore-Penrose pseudo-inverse [26] \[\vec{q}_{T}=\left(\begin{array}{c}\omega_{12}\Delta t\\ \Delta\omega\Delta t\end{array}\right)_{T}=\left(U_{T}^{\top}U_{T}\right)^{-1}U _{T}^{\top}\left(\begin{array}{c}\vec{y}_{2}\\ \vec{y}_{3}\\ \vdots\\ \vec{y}_{T+1}\end{array}\right). \tag{13}\] Here we introduce the abbreviation \(\vec{q}_{T}\) to denote the estimated parameters at time step \(T\). We can avoid lengthy evaluations by calculating Equation 13 recursively. With the definition \(P_{T}^{-1}=U_{T}^{\top}U_{T}\), its initial value \(P_{0}=p_{0}{\bf 1}\), and the definition of \(U_{T}\) from Equation 12 we express \(P_{T+1}\) through \(P_{T}\) in the following way \[P_{T+1}^{-1} = U_{T+1}^{\top}U_{T+1}\] \[= p_{0}{\bf 1}+G_{1}^{\top}G_{1}+G_{2}^{\top}G_{2}+\ldots+G_{T}^{ \top}G_{T}+G_{T+1}^{\top}G_{T+1}\] \[= P_{T}^{-1}+G_{T+1}^{\top}G_{T+1}\.\] We note that for all time steps \(t\) \[G_{t}^{\top}G_{t}=(V_{r}^{\prime 2}+V_{i}^{\prime 2})_{t}{\bf 1}=\vec{V}_{t}^{ \prime 2}{\bf 1} \tag{15}\] is proportional to the unit matrix \({\bf 1}\). This renders the fit into two orthogonal and independent parts; one for each of the fit parameters. To proceed, we introduce the scalar quantity \(p_{T}\) with \(P_{T}=p_{T}{\bf 1}\) and find that it obeys \[p_{T+1}^{-1}=p_{T}^{-1}+\vec{V}_{T}^{\prime 2}. \tag{16}\] Taking the reciprocal leads to \[p_{T+1}=\left[\frac{1}{1+p_{T}\vec{V}_{T}^{\prime 2}}\right]p_{T}. \tag{17}\] Note that we need to initialize this recursion with a non-zero value and set \(p_{0}=1\) in the simulations. Despite being numerically unity, we carry \(p_{0}\) through all equations, because it carries the inverse units of \(\vec{V}_{T}^{2}\). We now turn to finding \(\vec{q}_{T+1}\) by writing Equation 13 for \(T+1\) \[\vec{q}_{T+1} = p_{T+1}\left(G_{1}^{\top}\vec{y}_{2}+G_{2}^{\top}\vec{y}_{3}+ \ldots+G_{T}^{\top}\vec{y}_{T+1}+G_{T+1}^{\top}\vec{y}_{T+2}\right)\] \[= \left[\frac{1}{1+p_{T}\vec{V}_{T}^{\prime 2}}\right]p_{T}\left( \sum_{t=1}^{T}G_{t}^{\top}\vec{y}_{t+1}+G_{T+1}^{\top}\vec{y}_{T+2}\right)\] \[= \left[\frac{1}{1+p_{T}\vec{V}_{T}^{\prime 2}}\right]\left(\vec{q}_{ T}+p_{T}G_{T+1}^{\top}\vec{y}_{T+2}\right)\.\] Equations 17 and 18 constitute the algorithm to continuously update estimates for the two components of \(\vec{q}\), the bandwidth \(q(1)=\omega_{12}\Delta t\) and the detuning \(q(2)=\Delta\omega\Delta t\), as new voltage and current measurements-both enter in \(G_{T+1}\) and \(\vec{y}_{T+2}\)--become available. We refer to the MATLAB [27] code on github [28] for the details of the implementation. In Equations 17 and 18 new information from measurements are used to continuously improve the estimate of the fit parameters, but in situations where they change, we have to introduce a way to forget old information. 
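As a quick numerical sanity check (a sketch added here, not taken from Ref. [28]), the recursion of Equations 17 and 18 can be compared against the direct solution of Equation 13 on synthetic data; the \(p_{0}\) regularization is written explicitly in the batch solution so that the two agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
q_true = np.array([6.3e-4, 3.1e-4])     # stand-ins for omega_12*dt and Delta omega*dt
p0 = 1.0
p, q = p0, np.zeros(2)
rows, rhs = [], []

for _ in range(500):
    v = rng.standard_normal(2)                     # plays the role of the measured voltage
    G = np.array([[-v[0], -v[1]], [-v[1], v[0]]])  # Eq. (11)
    y = G @ q_true + 1e-3*rng.standard_normal(2)
    den = 1.0 + p*(v @ v)
    q = (q + p*(G.T @ y))/den                      # Eq. (18)
    p = p/den                                      # Eq. (17)
    rows.append(G); rhs.append(y)

U, Y = np.vstack(rows), np.concatenate(rhs)
q_batch = np.linalg.solve(U.T @ U + np.eye(2)/p0, U.T @ Y)   # Eq. (13) with the p0 term made explicit
print(np.allclose(q, q_batch))                               # True
```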
Therefore, in order to emphasize newly added information we follow [29; 22] and introduce a "forgetting factor" \(\alpha=1-1/N_{f}\), where \(N_{f}\) is the time horizon over which old information is downgraded in the last equality of Equation 14, which now reads
\[P_{T+1}^{-1}=\alpha P_{T}^{-1}+G_{T+1}^{\top}G_{T+1}. \tag{19}\]
We see that we only have to replace \(P_{T}\) by \(P_{T}/\alpha\), or equivalently \(p_{T}\) by \(p_{T}/\alpha\), in the derivation of Equations 17 and 18 and find for the update of \(p_{T}\)
\[p_{T+1}=\left[\frac{1}{\alpha+p_{T}\vec{V}_{T}^{\prime 2}}\right]p_{T} \tag{20}\]
and for the update of the estimated parameters \(\vec{q}_{T}\)
\[\vec{q}_{T+1}=\left[\frac{1}{\alpha+p_{T}\vec{V}_{T}^{\prime 2}}\right]\left(\alpha\vec{q}_{T}+p_{T}G_{T+1}^{\top}\vec{y}_{T+2}\right) \tag{21}\]
that are capable of following time-dependent system parameters. These expressions can be evaluated very efficiently. We find that the calculations in Equation 20 involve four multiplications and one inverse, whereas the calculations in Equation 21 involve ten multiplications if we reuse the expression in the square bracket. Thus, in total fourteen multiplications and one, computationally more expensive, inverse are required. This is about ten times the computational effort needed for a PI controller, which typically requires three multiplications. The processing delay of the system identification algorithm should therefore be correspondingly longer. The details of the timing depend, of course, on the hardware used to implement these algorithms. In particular, on a field-programmable gate array, many operations can be done in parallel.
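To make the preceding recipe concrete, the following Python sketch (an illustration, not the MATLAB implementation of Ref. [28]) drives the discrete-time model of Equations 6 and 7 and applies the forgetting-factor updates of Equations 20 and 21. The parameter values are those used in the next section; the normalization of the drive term \(B\) and switching the current on from the start are simplifying assumptions of this sketch.

```python
import numpy as np

dt   = 100e-9                      # sampling time, 100 ns
w12  = 2*np.pi*1000.0*dt           # omega_12 * dt  (f12 ~ 1 kHz)
dw   = 2*np.pi*500.0*dt            # Delta omega * dt (detuning ~ 500 Hz)
B    = w12                         # drive term set so that the undetuned steady state is unity (assumption)
sig_p, sig_m = 1e-4, 1e-3          # process and measurement noise levels
Nf   = 100
alpha = 1.0 - 1.0/Nf               # forgetting factor

A = np.array([[1.0 - w12, -dw], [dw, 1.0 - w12]])
rng = np.random.default_rng(0)
V, Vm_old = np.zeros(2), np.zeros(2)
q, p = np.zeros(2), 1.0            # p_0 = 1
I = np.array([1.0, 0.0])           # forward current, switched on from the start
history = []

for t in range(20000):                                   # 2 ms of data
    V  = A @ V + B*I + sig_p*rng.standard_normal(2)      # Eq. (6)
    Vm = V + sig_m*rng.standard_normal(2)                # Eq. (7)
    y  = Vm - Vm_old - B*I                               # Eq. (11)
    G  = np.array([[-Vm_old[0], -Vm_old[1]],
                   [-Vm_old[1],  Vm_old[0]]])            # Eq. (11), built from the previous sample
    den = alpha + p*(Vm_old @ Vm_old)
    q   = (alpha*q + p*(G.T @ y))/den                    # Eq. (21)
    p   = p/den                                          # Eq. (20)
    Vm_old = Vm
    if t > 5000:
        history.append(q.copy())

est = np.mean(history, axis=0)/(2*np.pi*dt)              # convert back to Hz
print("fitted f12, df [Hz]:", est, " true:", [1000.0, 500.0])
```

With these settings the averaged estimates land close to the bandwidth and detuning used to generate the data, with residual fluctuations governed by the measurement noise and the forgetting horizon \(N_{f}\).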
Right: the reconstructed fit parameters, the bandwidth \(f_{12}\) (black) and the detuning \(\Delta f\) (red). Note that the parameters are found despite the noise level (\(\sigma_{p}=10^{-4}\) and \(\sigma_{m}=10^{-3}\) of peak voltage) used in the simulation. the imaginary part (red line) stays zero. The voltages, shown on the upper panel slowly starts rising as the cavity is filled. Even the imaginary part of the voltage deviates from zero, owing to the finite value of the detuning. The right-hand side of Figure 1 shows the fit parameters \(f_{12}=\omega_{12}/2\pi\) and \(\Delta f=\Delta\omega/2\pi\) over the same 1000 iterations. We observe that during the first few hundred iterations the estimated fit parameters are very noisy, but settle on their correct value after this initial period. After about iteration 600 they meander quite closely around their "true" values. We can understand this behavior by noting that \(p_{T}\) is proportional to the diagonal element of the empirical covariance matrix \(P_{T}=\left(U_{T}^{\top}U_{T}\right)^{-1}\) of the least-squares fit in Equation 13. Therefore the square root of \(p_{T}\) is proportional to the error bars of the fit parameter. Figure 2 shows \(p_{T}\) for a simulation with \(N_{f}=100\) (black solid) and \(N_{f}=10\) (red dashes) for 5000 iterations. We observe that both curves initially increase during the period that the fit is noisy but then approach a constant value that determines the achievable error bars of the fit parameters. This value can be derived from Equation 20 by setting \(p_{T+1}=p_{T}=p_{\infty}\) and solving for \(p_{\infty}=1/N_{f}\vec{V}_{\infty}^{\prime 2}\). Here \(\vec{V}_{\infty}^{\prime}\) is the voltage inside the cavity. For the error bars of both components of \(\vec{q}\) we thus find \(\sigma_{m}/\sqrt{N_{f}\vec{V}_{\infty}^{\prime 2}})\), a value that corresponds to the rms deviations of the fit parameters, shown, for example on the second half in Figure 1. Figure 2: The variable \(p_{T}\) as a function of the iterations for \(N_{f}=100\) (solid) and \(N_{f}=10\) (dashes). Furthermore by construction, the off-diagonal elements of the matrix \(P_{T}=p_{T}\mathbf{1}\) are zero, which indicates that the fit of the bandwidth and the detuning are orthogonal and that makes the algorithm very robust. Moreover, we found that instead of operating open-loop, using a PI-controller to control the cavity voltage does not significantly alter the performance of the system identification process. We now explore the algorithm's ability to identify parameter changes during steady state operation. The left-hand side in Figure 3 illustrates the effect of microphonics on the the currents and voltages. We simulate this by an oscillation of \(\Delta f\) with amplitude of \(f_{12}/2\) and frequency \(1\,\mathrm{kHz}\). Especially \(v_{i}\) reveals this oscillation, though also \(v_{r}\) oscillates. The right-hand side of Figure 3 shows how the algorithm correctly identifies \(f_{12}\) and both the amplitude and oscillation frequency of \(\Delta f\). Increasing the oscillation frequency to \(20\,\mathrm{kHz}\) results in Figure 4 where we have reduced the duration of the simulation to \(10^{4}\) iterations in order to improve the visibility of oscillations on the plot. We see that the oscillations are still resolved, albeit at a lower amplitude, which is a consequence of the forgetting horizon \(N_{f}=100\). 
It implicitly introduces averaging over \(N_{f}\) iterations and thus behaves like a low-pass filter with a time constant of \(N_{f}\Delta t=10\,\mu\mathrm{s}\) or a cutoff frequency on the order of \(100\,\mathrm{kHz}\) that already causes some attenuation of the 20 kHz oscillation. Figure 3: The currents and voltages (left) and the fit parameters (right) for \(10^{5}\) iterations (\(10\) ms) while the detuning \(\Delta f\) oscillates with an amplitude of \(500\,\mathrm{Hz}\) and with a mechanical-mode frequency of \(1\,\mathrm{kHz}\). The oscillations are clearly visible on both phases of the voltage and the correctly reconstructed fit parameters. In Figure 5 we explore a rapid increase of the bandwidth, for example, due to a quench. In the simulation, we simply double the value of \(\omega_{12}\) after 5000 iterations. The plots in the top-left of Figure 5 show the currents and voltages and on the top-right the fit parameters. We find that the fitted bandwidth (black) is indeed doubled and that the reconstruction of the detuning is unaffected. The plot on the bottom left shows an enlarged view of the fit parameters around the time of the step. It shows that the doubled value is approached within about \(2\times N_{f}=200\) iterations. If we run the same simulation with a ten times reduced value of \(N_{f}=10\), we obtain the plot on the bottom right. We find that the changed value is approached within a few tens of iterations, albeit at the expense of an increased noise level, which is consistent with the discussion regarding Figure 2. Balancing the noise level and the response is just a matter of adjusting the value of \(N_{f}\), the topic of the following section. Figure 4: The reconstructed fit parameters for a 20 kHz mechanical oscillation of the detuning \(\Delta f\). The oscillations are still seen, but the amplitude is significantly reduced. This can be partially alleviated by decreasing \(N_{f}\), albeit at the expense of an increased noise level. Figure 5: The currents and voltages (top left) and fit parameters (top right) for \(10^{4}\) iterations (\(1\,\mathrm{ms}\)) as the bandwidth \(f_{12}\) is doubled at iteration \(5000\). The bottom row shows an enlarged view of fit parameters around the time of the change. On the left, we use \(N_{f}=100\) and on the right we use \(N_{f}=10\). ## V Signal to noise In Section IV we already found that the asymptotic noise level \(N\) for constant parameters is given by \[N=\frac{1}{\sqrt{N_{f}}}\frac{\sigma_{m}}{V_{\infty}^{\prime}}\, \tag{22}\] where we denote the magnitude of \(\vec{V}_{\infty}^{\prime}\) by \(V_{\infty}^{\prime}\). We now consider a situation where the system has reached a quasi-stationary state and that perturbations of the \(\omega_{12}\) and \(\Delta\omega\) are so small that they affect \(V_{\infty}^{\prime}\) very little. We can therefore also use it to write \(p_{\infty}=1/N_{f}V_{\infty}^{\prime 2}\) despite the temporally varying \(\omega_{12}\) and \(\Delta\omega\). Replacing \(p_{T}\) by \(p_{\infty}\) in Equation 21 then leads to \[\vec{q}_{T+1}=\alpha\vec{q}_{T}+\frac{1}{N_{f}V_{\infty}^{\prime 2}}G_{T+1}^{\top}\vec{y}_{T+2}. \tag{23}\] Using Equation 9 and 10 we rewrite \(\vec{y}_{T+2}\) as \[\vec{y}_{T+2}=G_{T+1}\left(\begin{array}{c}\omega_{12}\Delta t\\ \Delta\omega\Delta t\end{array}\right)_{hw} \tag{24}\] where the vector on the right-hand side with the subscript \(hw\) are the "true" values of the hardware.
Combining these equations, utilizing Equation 15, and replacing \(V_{T}^{\prime}\) by \(V_{\infty}^{\prime}\) we arrive at \[\vec{q}_{T+1}=\alpha\vec{q}_{T}+\frac{1}{N_{f}}\left(\begin{array}{c}\omega _{12}\Delta t\\ \Delta\omega\Delta t\end{array}\right)_{hw}. \tag{25}\] In the next step we use \(\alpha=1-1/N_{f}\) and reshuffle terms to obtain \[\frac{\vec{q}_{T+1}-\vec{q}_{T}}{\Delta t}=-\frac{1}{N_{f}\Delta t}\vec{q}_{T }-\frac{1}{N_{f}\Delta t}\left(\begin{array}{c}\omega_{12}\Delta t\\ \Delta\omega\Delta t\end{array}\right)_{hw}. \tag{26}\] Introducing \(\tau_{f}=N_{f}\Delta t\), replacing the finite difference by a differential, and Laplace-transforming the resulting equation we find \[\left(s+\frac{1}{\tau_{f}}\right)\tilde{\vec{q}}=\frac{1}{\tau_{f}}\left( \begin{array}{c}\tilde{\omega}_{12}\Delta t\\ \Delta\tilde{\omega}\Delta t\end{array}\right)_{hw} \tag{27}\] where \(s\) is the Laplace variable and we denote the Laplace transform of a variable by a tilde. We obtain the the time dependence by replacing \(s=i\omega=2\pi if\) \[\tilde{\vec{q}}=\frac{1}{1+i\omega\tau_{f}}\left(\begin{array}{c}\tilde{ \omega}_{12}\Delta t\\ \Delta\tilde{\omega}\Delta t\end{array}\right)_{hw} \tag{28}\] and find that the reconstructed system parameters \(\tilde{\vec{q}}\) are given by the hardware parameters passed through a low-pass filter with time constant \(\tau_{f}\). Of particular interest is the absolute value of the amplitude of the detuning \(\Delta\tilde{\omega}\) at frequency \(\omega\), which is given by \[S=\Delta\tilde{\omega}=\frac{\Delta\tilde{\omega}_{hw}}{\sqrt{1+(\omega\tau_{ f})^{2}}}. \tag{29}\] This constitutes the signal we strive to measure. For the signal-to-noise ratio \(S/N\) we then find \[S/N=\frac{\Delta\tilde{\omega}_{hw}}{\sqrt{1+\left(2\pi N_{f}f\Delta t\right)^ {2}}}\frac{\sqrt{N_{f}}}{(\sigma_{m}/V_{\infty}^{\prime})}\, \tag{30}\] where all parameters are explicitely written out in order to explore the trade-off among them. Apparently it depends on the magnitude (amplitude) of the detuning \(\Delta\tilde{\omega}_{hw}\) and the relative accuracy of the voltage measurement \(\sigma_{m}/V_{\infty}^{\prime}\), but also on the attenuation of an oscillation due to the forgetting time horizon \(N_{f}\). As long as \(S/N\) is sufficiently large, say 5 or so, the oscillation is discernible. ## VI Conclusions We worked out an algorithm to determine the cavity bandwidth \(f_{12}\) and the detuning \(\Delta f\) by correlating signal from a directional coupler before the cavity and the voltages inside the cavity. The calculations are very efficient and given by Equations 17 and 18 for static parameters and by Equations 20 and 21 for time-varying parameters. These recursion equations are very compact and require only moderate resources, for example, on a field-programmable gate array. Despite the absence of low-pass filtering, the RLS algorithm is resilient to noise of the measured voltages, because the forgetting horizon implicitly introduces a low-pass filter whose time constant is \(\tau_{f}=N_{f}\Delta t\). We can taylor the performance by selecting a large value of \(N_{f}\), which reduces the noise of the reconstructed parameters, whereas smaller values of \(N_{f}\) make the algorithm more responsive to parameter changes on faster times scales. The trade-off between achievable frequency resolution, \(N_{f}\), and measurement noise \(\sigma_{m}\) can be explored with the help of Equation 30. ###### Acknowledgements. 
Discussions with Tor Lofnes, Uppsala University are gratefully acknowledged.
2308.09126
EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding
We introduce EgoSchema, a very long-form video question-answering dataset, and benchmark to evaluate long video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5000 human curated multiple choice question answer pairs, spanning over 250 hours of real video data, covering a very broad range of natural human activity and behavior. For each question, EgoSchema requires the correct answer to be selected between five given options based on a three-minute-long video clip. While some prior works have proposed video datasets with long clip lengths, we posit that merely the length of the video clip does not truly capture the temporal difficulty of the video task that is being considered. To remedy this, we introduce temporal certificate sets, a general notion for capturing the intrinsic temporal understanding length associated with a broad range of video understanding tasks & datasets. Based on this metric, we find EgoSchema to have intrinsic temporal lengths over 5.7x longer than the second closest dataset and 10x to 100x longer than any other video understanding dataset. Further, our evaluation of several current state-of-the-art video and language models shows them to be severely lacking in long-term video understanding capabilities. Even models with several billions of parameters achieve QA accuracy less than 33% (random is 20%) on the EgoSchema multi-choice question answering task, while humans achieve about 76% accuracy. We posit that EgoSchema, with its long intrinsic temporal structures and diverse complexity, would serve as a valuable evaluation probe for developing effective long-term video understanding systems in the future. Data and Zero-shot model evaluation code are open-sourced for both public and commercial use under the Ego4D license at http://egoschema.github.io
Karttikeya Mangalam, Raiymbek Akshulakov, Jitendra Malik
2023-08-17T17:59:59Z
http://arxiv.org/abs/2308.09126v1
# EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding ###### Abstract We introduce EgoSchema, a very long-form video question-answering dataset, and benchmark to evaluate long video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over \(5000\) human curated multiple choice question answer pairs, spanning over 250 hours of real video data, covering a very broad range of natural human activity and behavior. For each question, EgoSchema requires the correct answer to be selected between five given options based on a three-minute-long video clip. While some prior works have proposed video datasets with long clip lengths, we posit that merely the length of the video clip does not truly capture the temporal difficulty of the video task that is being considered. To remedy this, we introduce temporal certificate sets, a general notion for capturing the intrinsic temporal understanding length associated with a broad range of video understanding tasks & datasets. Based on this metric, we find EgoSchema to have intrinsic temporal lengths over \(5.7\times\) longer than the second closest dataset and \(10\times\) to \(100\times\) longer than any other video understanding dataset. Further, our evaluation of several current state-of-the-art video and language models shows them to be severely lacking in long-term video understanding capabilities. Even models with several billions of parameters achieve QA accuracy less than 33% (random is 20%) on the EgoSchema multi-choice question answering task, while humans achieve about 76% accuracy. We posit that EgoSchema, with its long intrinsic temporal structures and diverse complexity, would serve as a valuable evaluation probe for developing effective long-term video understanding systems in the future. Data and Zero-shot model evaluation code are open-sourced under the Ego4D license at egoschema.github.io. ## 1 Introduction We introduce EgoSchema, a diagnostic benchmark for assessing very long-form video-language understanding capabilities of modern multimodal systems. Understanding long natural videos requires a host of interconnected abilities such as action and scene understanding, perceiving and tracking object states, long-term visual memory, abstract reasoning, hierarchical information aggregation, and more. Shown in Fig. 1 is an exemplar of the curated EgoSchema dataset. Consider the visual cognitive faculties involved in answering the question: 'What is the overarching behavior of C and the man in the video?'. First, is the spatial recognition capabilities for disambiguating the referred character 'C' (camera wearer) and 'the man' as well as the present objects such as 'cards', 'notebook', deck as so on. Next is short-term temporal recognition capabilities of understanding the atomic actions and movement of the characters such as 'playing', 'taking notes','shuffling' etc. Built upon these are the capabilities for visually understanding the mental states such 'distracted', 'attention' and social dynamics such as 'teaching','showing'. Next are medium-term actions such as 'organizing the deck' or 'keeping track'. Finally, long-term reasoning capabilities need to be employed for abstracting the 'overarching behavior' of the video from all the low-level signals to be able to rule out all the other wrong options and conclude option 3 to be correct. 
Note that even for humans, it is impossible to answer the illustrated questions with only the shown 9 uniformly sampled frames from the three-minute video (Fig. 1). While there have been some prior attempts to formulate long-form video tasks [50; 43], they broadly tend to fall into two failure modes. The first failure mode stems from the difficulty of capturing the explosive diversity of human behavior in narrow pre-defined label spaces that leading unduly narrow and oddly specific tasks, such as like ratio or relationship prediction [50]. Hence, we propose to probe video systems capturing the rich complexity of long-form video with something just as rich and complex - natural language. However, natural language outputs are notoriously difficult to evaluate with popular metrics such as BLEU [39] and ROUGE [32] having well-known shortcomings [6]. Hence, we propose to evaluate language understanding as a multiple-choice question-answering task, thereby using the well-defined benchmark metric of overall question-answering accuracy. The second failure mode for a long-term video task is that the proposed task happens to actually be a short-term one - only disguised as a long-term task. To measure the intrinsic "long-term" nature Figure 1: **The EgoSchema dataset** contains over 5000 very long-form video language understanding questions spanning over 250 hours of real, diverse, and high-quality egocentric video data. Each question requires choosing the correct answer out of five choices based on a _three minute_ long video clip. The questions are manually curated to require very long _temporal certificates_ (§3.2). EgoSchema median certificate length is about \(100\) seconds, which is \(5\times\) longer than the closest second dataset and \(10\times\) to \(100\times\) longer (Fig. 3) than any other video understanding dataset. State-of-the-Art video-language models consisting of billion of parameters achieve very low accuracy (< 33%) in Zero-shot evaluation (random is 20%) while humans achieve about 76%. ‘C’ refers to the camera wearer. Visualized clips are available at egoschema.github.io/explorer. of a video understanding task, we propose the notion of temporal _certificate length_[4]. Intuitively, certificate length (SS3.2) is the length of the video a human verifier needs to observe to be convinced of the veracity of the marked annotation. The idea of temporal certificates is not limited only to question-answering or vision-language tasks but is applicable to several video understanding tasks, including pure vision tasks such as action classification, detection, or even temporal action localization. Based on the length of the temporal _certificate_, we propose the following temporal understanding taxonomy for video tasks: Datasets with certificate length in the order of \(1\) second are termed short video tasks. Next, we name datasets with certificate length in the order of \(10\) seconds as, long-form video tasks. Finally, datasets with certificate length in the order of \(100\) seconds are termed as, very long-form video tasks. Fig. 3 presents estimates of the certificate lengths for a variety of datasets plotted against the temporal length of the video clip. We observe that the temporal certificate length is quite weakly correlated with the length of the video clip. 
This is due to the intentional design choice in defining the certificate set, which decouples the task of searching or retrieving the relevant sub-clip from a bigger clip from the task of visually understanding the retrieved sub-clip. And in this manner, using temporal certificate length as a metric for measuring the intrinsic temporal hardness of a dataset, avoids the failure mode of formulating an implicitly short-term task disguised as a long-term one. Section 3.2 details precise operationalizations for estimating the temporal certificate sets. In summary, our contributions are three-fold. _First_, we propose the notion of temporal certificates, a broadly applicable notion that measures the intrinsic temporal hardness of clips in a video understanding dataset. We estimate temporal certificate lengths for a broad variety of existing datasets and show that EgoSchema has a median temporal certificate of about \(100\) seconds, which is \(5\times\) longer than the dataset with the second longest certificate length [50], and \(25\times\) to \(100\times\) longer than all other existing video understanding datasets (with or without language). _Second_, building upon the notion of temporal certificates, we introduce EgoSchema, a diagnostic benchmark for assessing the very long-form video understanding capability of multimodal video-language systems. _Third_, we benchmark both state-of-the-art video-language systems and humans in Zero-shot settings on EgoSchema to find that even the most advanced current video-language understanding systems consisting of billion of parameters achieve very low accuracy in long-from multiple-choice question-answering (< 33%) while humans achieve about \(76\%\) accuracy in the unconstrained setting. Figure 3: **Certificate Length across video datasets** for a broad spectrum of tasks such as action classification, detection, relationship classification, concept classification, video classification, and multiple choice question-answering. §4.1 details the precise operationalizations. Figure 2: We introduce the notion of a temporal certificate set (top, §3.2), a tool to measure the intrinsic temporal length of a benchmark and show the EgoSchema certificate length distribution (bottom, §4.1) for randomly chosen \(100\) clips. ## 2 Related Works **Video Question-Answering Datasets.** Visual Question-Answering [3] is a popular video-language task with several large internet-scale datasets for video-language pre-training such as Ego4D [20], HowTo100M [34] and HowToVQA69M [33]. However, as the scope and size of pre-training datasets and models soar, it becomes critical to construct evaluations for assessing the model capabilities on various axes. Hence, many smaller datasets have been proposed for evaluating different aspects of video-language understanding such as compositional reasoning [21; 22], causal and common scene comprehension [52], instruction understanding [33; 56], video description ability [54], dynamic environments understanding [15], complex web video understanding [62], situated reasoning [49], spatiotemporal reasoning [27], social intelligence [64], dynamic neuro-symbolic reasoning [61], external knowledge-based reasoning [16] and many more [36; 60; 41; 10; 9; 13; 44; 55; 29; 30; 8; 63; 11; 31; 58; 65; 51; 24]. How2VQA69M [33] and iVQA [33] have leveraged HowTo100M [34] ASR text for generating questions. However, unlike Ego4D narrations that are used in EgoSchema, ASR text does not necessarily describe the visual elements in the scene. 
Hence, questions can suffer from biases where a key required information is visually absent. Additionally, generated question-answers also have quite short certificate lengths (iVQA in Fig. 2) due to the local nature of the ASR text. **Long-form Video Understanding Datasets** have been very sparsely explored in prior works. [50] posits a long-form video understanding benchmark but the proposed tasks are unduly narrow and specific, such as the 'like' ratio and view count prediction. Also, [50] average certificate length is about \(5.7\times\) smaller than EgoSchema. [35] proposes a dataset for benchmarking efficient video inference consisting of frame-wise object mask annotations from Mask-RCNN [25] but without any long-term annotations. [42] introduces a dataset of about 111 hours of video sourced from Kinetics-400 [7] for generic event boundary detection. While the task itself requires comprehensive understanding, the video clip length is only 10 seconds long, with temporal _certificates_ (SS3.2) being much shorter. [46] proposes a question-answering dataset based on long movie clips but due to the open-ended nature of questions, successful approaches tend to neglect the visual data and are biased purely with approaches using additional text such as story lines. [43] proposes MAD, a language grounding dataset with an average clip of \(110\) minutes. However, the length of the retrieved clip is quite short (average \(4.1\) seconds) thereby resulting in a temporal _certificate_ (SS3.2) only a few seconds long. Further, MAD [43] and several other movie-based datasets [26; 47; 53] do not release any video data because of copyright issues. In contrast, EgoSchema has an average certificate length of about \(100\) seconds. Further, EgoSchema will be publicly released under the Ego4D license, which allows direct public use of the video and text data for both research and commercial purposes. Figure 4: EgoSchema data pipeline. Stage I filters the suitable Ego4D RGB videos and narrations for question-answer generation (§3.1.1). Stage II uses narrations in a chained LLM prompting §3.1.2) procedure to generate multiple \(\mathcal{Q}\mathcal{A}\mathcal{W}\) triplets per three-minute video clip (§3.1.2). Stage III performs pre-filtering with rule-based and LLM-based logic (§3.1.3). Finally, Stage IV involves two rounds of human curation on filtered \(\mathcal{Q}\mathcal{A}\mathcal{W}\) for selecting very long-form video-language understanding data (§3.1.4). The stage width ratios are indicative of the filter selection ratios. ## 3 Collecting EgoSchema Collecting video and language datasets, even without a focus on very long-form video is quite challenging. Manually collecting, observing, and annotating videos with free-form language, in contrast to using images and pre-defined label categories, is both labor-intensive and time-consuming and thereby quite expensive. In addition to burgeoning cost, ensuring visual data diversity and minimizing visual and linguistic bias while ensuring high quality of marked annotations also contribute to the overall difficulty. All these factors get severely more challenging for long-form videos. In this work, we propose a staged data collection pipeline (Fig. 4) utilizing existing large-scale but short-term video datasets, rule-based filtering procedures, and exciting new capabilities afforded by LLMs to significantly lighten the burden on human annotators. 
We use the proposed pipeline for curating EgoSchema, a high-quality and diverse very long-form video question-answering dataset. Associated datasheets [17] and data cards [40] for EgoSchema are provided in the _supplementary_. ### EgoSchema Pipeline #### 3.1.1 Stage I: Raw Data Filtering Ego4D [20] has over 3670 hours of RGB video spread consisting of over 3.85 million narration instances covering over 1,772 unique verbs (activities) and 4,336 unique nouns (objects) [20]. The narrators are instructed to continuously pause and describe everything that the camera wearer (*C') does. This creates dense and precise narrations that accurately describe the visuals. Naturally, the collected video has non-uniform length and narration density. Since we would like to standardize the clip length for evaluation and have sufficiently rich narrations to allow interesting question-answer pairs to form in later stages, we filter the data based on the length and narration density. We choose to filter for non-overlapping three-minute clips each with at least 30 human annotated narrations (each narration is a timestamped sentence) to build EgoSchema. Detailed statistic of the number of viable clips for different possible length and narration density choices is discussed in _supplementary_. #### 3.1.2 Stage II: Question Answer Generation The filtered narrations are processed with a capable LLM to generate \(N\) Question-Answer triplets (\(\mathcal{QAW}\)), each consisting of the question \(\mathcal{Q}\), the correct answer \(\mathcal{A}\), and \(M\) wrong answers \(\mathcal{W}\), per clip. To achieve this, we experimented with several LLM inference call chaining procedures with trade-offs between quality and cost of generation that are briefly described next. **One-shot** is the simplest prompting procedure to prompt for all \(N\) instances of \(\mathcal{QAW}\) in one inference call. This is the most cost-efficient option but we found the generations to be of significantly low quality. The generated \(\mathcal{Q}\) often are very similar to each other and the generated \(\mathcal{AW}\) have a very high false positive rate for the correct answers as well as a false negative rate for the wrong answers. **N-shot** is the next natural prompting procedure where we generate one \(\mathcal{QAW}\) per LLM inference call. This significantly improves the false positive and false negative rates but since the generated \(\mathcal{Q}\) are independent and generated with the same prompt, they still tend to be very similar (comparable to one-shot), even at higher sampling temperatures. Further, the cost of generation also scales with \(N\) Figure 5: An abridged example of the generation and filtering prompts used in the EgoSchema data generation pipeline (§3). Full versions are provided in the _supplementary_. **QAW-shot** generates each of the \(N\) questions \(\mathcal{Q}\) in one inference call, followed by another inference call for generating \(N\) correct answer \(\mathcal{A}|\mathcal{Q}\) and finally, \(N\times M\) wrong answers, \(\mathcal{W}|\mathcal{Q},\mathcal{A}\). Since each of the \(N\)\(\mathcal{Q}\) is generated jointly, they can be forced to be distinct with appropriate prompting. Similarly, the generated \(\mathcal{A}\) and \(\mathcal{W}\) can also be made distinct. However, this requires 3 _chained_ LLM inference calls, and generation failures in earlier calls cascade steeply. 
**Q(AW)-shot** generates each of the \(N\) questions \(\mathcal{Q}\) in one inference call, followed by a final inference call for generating all the \(N\) correct and \(N\times M\) incorrect answers in one go \(\mathcal{A},\mathcal{W}|\mathcal{Q}\). It enjoys the same uniqueness properties as QAW-shot while having just two chained calls, making it both \(~{}30\%\) cheaper and less prone to generation failure cascading. Further, between Q(AW)-shot and QAW-shot, we observe Q(AW)-shot to have a higher generated \(\mathcal{A}\) quality, perhaps since LLM can jointly model \(\mathcal{W}\) while generating \(\mathcal{A}\). We choose this to be our main method of choice for generating \(\mathcal{QAW}\). **Prompt** for imputing narrations into the LLM has a tremendous effect on the quality of generated \(\mathcal{QAW}\). We experiment with several seed prompts for each of which we inspect the quality of the \(N\) generated \(\mathcal{QAW}\) for \(10\) clips. Based on this we iteratively improve the seed prompts manually in a zeroth order optimization fashion. In total, we experiment with a total of about \(85\) prompts in this fashion to arrive at our final EgoSchema prompts \(-\mathcal{P}_{\mathcal{Q}}\) for generating \(N\times\mathcal{Q}\) questions and \(\mathcal{P}_{\mathcal{A}\mathcal{W}}\) for generating all remaining options \((\mathcal{A}\mathcal{W})|\mathcal{Q}\). While we fix the \(\mathcal{P}_{\mathcal{Q}}\) prompt, we use multiple \(\mathcal{P}_{\mathcal{A}\mathcal{W}}\) prompts so as to avoid any unintended bias in the options. Fig. 5 shows an abridged example of \(\mathcal{P}_{\mathcal{Q}}\) and \(\mathcal{P}_{\mathcal{A}\mathcal{W}}\), full versions available in _supplementary_ material. **Choice of LLM** is extremely crucial for obtaining interesting long-form \(\mathcal{Q}\) and generating hard negatives for \(\mathcal{W}\). With weaker LLMs, the \(\mathcal{Q}\) diversity across video clips remains narrow, and \(\mathcal{W}\) tends to be either obviously wrong or, too similar to \(\mathcal{A}\) and thus a false negative. While we experimented with both GPT-3 [5] and ChatGPT [37] but only found good quality generated \(\mathcal{QAW}\) at a high enough rate with GPT-4 [38], Bard [18], and Claude [2]. For details please see _supplementary_. We generate \(N=3\) questions per three-minute clip as well as \(M=4\) wrong answers to every question in addition to the correct answer. We observe that larger \(N\) or \(M\) tends to generate similar questions and wrong answers putting unnecessary pressure on Stages III and IV for filtering. #### 3.1.3 Stage III: Generated Question Answer Filtering While Stage II produces several high-quality \(\mathcal{QAW}\), even the best LLM generations are prone to output format aberrations, hallucinations, and sometimes plain false outputs. Further, despite specific pinpointed prompts (Fig. 5), LLMs can fail to comply. Since, we want to ensure EgoSchema to be extremely high-quality and accurate, we set up several filtering rounds to ensure the correctness and high difficulty of questions. **Rule-based filtering.** Keywords from the prompts such as 'long-term', 'narrations', 'timestamp' etc. can sometimes bleed into the generated \(\mathcal{QAW}\) which are then discarded. The output generations can also fail to parse according to a specified format and are also then discarded and the concerned \(\mathcal{QAW}\) is regenerated. 
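As an illustration of the rule-based stage, a small sketch of the two checks described above. The JSON schema, key names, and exact keyword list are assumptions made for this example; only the listed example keywords and the parse-failure rule come from the text, and the real prompts and output format are given in the supplementary.

```python
import json

# example leak keywords named in the text; the full list used in practice is an assumption
LEAKED_KEYWORDS = ("long-term", "narration", "timestamp")

def rule_based_filter(raw_generation: str):
    """Return a parsed QAW dict if it survives the rule-based checks, else None.

    Assumes, hypothetically, that the LLM was asked for JSON with keys
    'question', 'correct_answer' and 'wrong_answers' (M = 4 distractors).
    """
    try:
        qaw = json.loads(raw_generation)                     # discard / regenerate on parse failure
        question = qaw["question"]
        options = [qaw["correct_answer"], *qaw["wrong_answers"]]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None
    text = " ".join([question, *map(str, options)]).lower()
    if any(kw in text for kw in LEAKED_KEYWORDS):            # prompt keywords bled into the QAW
        return None
    if len(qaw["wrong_answers"]) != 4:                       # expect exactly M = 4 wrong answers
        return None
    return qaw
```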
**LLM-based filtering.** While rule-based filtering needs out logic errors, we would like to further enrich \(\mathcal{QAW}\) before employing human labor. For example, we aim to ensure EgoSchema requires grounded visual reasoning to solve, and hence questions should not be answerable _ungrounded_, without carefully observing the video. Hence, we develop a "blind" baseline. **Blind filtering baseline** employs LLM to guess the correct answer based on the question, without having access to the video narrations conditioned on the shown filtering prompt (Fig. 5). All such ungrounded questions that can be answered blindly are filtered out. This also ensures that generated \(\mathcal{W}\) are indeed relevant and plausible answers to \(\mathcal{Q}\), since otherwise, the LLM would be able to guess \(\mathcal{A}\) based only on the setting of \(\mathcal{Q}\). Note that this is overly restrictive since it is possible that a question is guessed correctly through chance and is not necessarily ungrounded. However, we choose to optimize precision over recall since the amount of filtered \(\mathcal{QAW}\) is still large enough. **No-\(\mathcal{Q}\) baseline.** We also experimented with a No-\(\mathcal{Q}\) baseline, where the LLM is prompted to guess the correct answer using the narrations but without the question \(\mathcal{Q}\). This ensures that the wrong answers are relevant and plausible to the video clip. However, we found this baseline to have near random accuracy (\(\sim 20\%\)), highlighting the efficacy of Stage II. Hence, we decided to not use this filter in the final pipeline. Additional details including the full prompt are in _supplementary_. #### 3.1.4 Stage IV: Manual \(\mathcal{Q}\mathcal{A}\mathcal{W}\) Curation While LLM filtering ensures that the generated \(\mathcal{Q}\mathcal{A}\) relates to the video content, it's also necessary to ensure the veracity and a long temporal certificate length for every generated \(\mathcal{Q}\mathcal{A}\mathcal{W}\). This is achieved through a two-step manual curation process. In the first round of curation, annotators are tasked with three primary responsibilities: **(A)** First, they verify that \(\mathcal{Q}\) is well-formed and \(\mathcal{A}\) is indeed the correct answer to \(\mathcal{Q}\). **(B)** Next, they confirm that all the \(M\) distractors, \(\mathcal{W}\), are indeed wrong answers to \(\mathcal{Q}\). **(C)** Finally, they ensure that the temporal certificate length for answering \(\mathcal{Q}\) is at least 30 seconds. A \(\mathcal{Q}\mathcal{A}\mathcal{W}\) is discarded if any of these three conditions are not met. This reduces the number of admissible questions by a factor of about \(4\times\) to \(5\times\) within the first round itself. Next is a second round of re-curation, to reinforce the conditions and guarantee data of the highest quality. We find that more than \(97\%\) of the questions that pass the first round also pass the second round, speaking to the efficacy of the curation process. A crucial aspect of ensuring that the question assesses very long-form video-language understanding capabilities is the notion of temporal certificate length (condition (C) above), which we describe next. The detailed procedures for onboarding and training the human annotators, as well as the instructions for the curation process are provided in the _supplementary_. 
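Rounding out the Stage III description, a sketch of the blind-filtering check from above: the model answers from the question text alone, and any question it gets right is treated as insufficiently grounded and dropped. The `ask_llm` callable and the prompt wording are stand-ins introduced for this sketch; the actual filtering prompt is abridged in Figure 5 and given in full in the supplementary.

```python
import random

def answerable_blind(question, options, ask_llm, n_trials=1):
    """Return True if the LLM picks the correct option without seeing any narrations.

    options[0] is assumed (for this sketch) to hold the correct answer; ask_llm is a
    stand-in callable mapping a prompt string to the model's text reply.
    """
    letters = "ABCDE"
    shuffled = list(options)
    random.shuffle(shuffled)                      # avoid a fixed position for the correct answer
    correct_letter = letters[shuffled.index(options[0])]
    prompt = (
        "Answer the multiple-choice question using only the question text.\n"
        f"Question: {question}\n"
        + "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(shuffled))
        + "\nReply with a single letter."
    )
    guesses = [ask_llm(prompt).strip().upper()[:1] for _ in range(n_trials)]
    return all(g == correct_letter for g in guesses)

# QAW triplets for which answerable_blind(...) is True are filtered out of the pool
```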
### Temporal Certificates We define the temporal _certificate_ of a given video in a video understanding task to be the minimum set of _subclips_ of the video that are both _necessary_ and _sufficient_ to convince a human verifier that the marked annotation for that data (such as timestamps in temporal activity localization, class label in activity recognition or, the correct option in multiple-choice question-answering) is indeed correct, without having to watch the rest of the clip outside of the certificate set (Fig. 2). Naturally, we define certificate length to be the sum of the temporal lengths of the sub-clips present in the certificate set. **Meta-rules.** Datasets often have implicit rules that apply uniformly across the entire dataset. We call these conventions meta-rules and allow the human verifier to be well aware of them. For example, in temporal action localization datasets [28], an implicit assumption is that the action to be localized in a contiguous sub-clip and hence can be uniquely determined by the start and end timestamps. Since this rule is valid for all data, we consider it to be a meta-rule. A comprehensive understanding of _meta_-rules of a dataset is necessary for accurate estimation of the certificate set, and hence the certificate length. Otherwise, a spuriously long certificate might be necessary to ensure the veracity of the marked annotations. For example, consider the task of action classification on Kinetics-400. A valid meta-rule to be made available to the human verifier in this case is the mutual exclusivity of action classes i.e., each data point can belong only to one of the 400 classes present in Kinetics-400. Without this understanding, given, say a 10-second clip of a human skiing, the certificate set needs to necessarily encompass the entire 10 seconds since otherwise the human verifier might not be convinced that all of the other 399 actions are not occurring in the clip. However, with the knowledge of the label exclusivity meta-rule, the certificate length will be drastically reduced to just a fraction of a second since just observing the action of skiing in a few frames is sufficient for the human verifier to out-rule all other action classes. **Certificate Conventions**. For small certificate lengths, it is difficult for humans to estimate the exact sub-clip timestamps to be included in the certificate set. Hence, we choose to have a minimum length of \(0.1\) second for a certificate. Further, in the case of two non-contiguous certificates, we collapse them into one if their closest ends are \(<5\) seconds apart. In cases where a fact needs to be verified at several places throughout the video, we let the annotator make a reasonable judgment for the length of the certificate to be included as long as it follows the above conditions. ## 4 Benchmarking EgoSchema ### Evaluating Certificate Lengths Fig. 3 presents certificate lengths for a spectrum of tasks spread across \(15\) different datasets such as, action classification (Kinetics [7], Something-Something [19], UCF101 [45], HVU-Action [12]), detection (AVA [23]), relationship classification (LVU [50]), concept classification (HVU-Concept [12]), video classification (Youtube-8M [1]), Question-Answering (NextQA [52], AGQA [22], NextQA [52], IVQA [33], MSRVTT [54], ActivityNet-QA [62], EgoSchema). For EgoSchema we benchmark the certificate length for 5 hours of video data (\(100\mathcal{Q}\mathcal{AV}\)) chosen randomly. 
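The certificate conventions above reduce to a small bookkeeping utility. The sketch below follows them literally; treating a bridged gap as part of the collapsed certificate is our reading of "collapse them into one".

```python
def certificate_length(subclips, min_len=0.1, merge_gap=5.0):
    """Total certificate length (seconds) from annotated (start, end) sub-clips.

    Enforces the stated conventions: every sub-clip is at least `min_len` seconds,
    and sub-clips whose closest ends are under `merge_gap` seconds apart collapse
    into a single certificate.
    """
    if not subclips:
        return 0.0
    clips = sorted((s, max(e, s + min_len)) for s, e in subclips)
    merged = [clips[0]]
    for start, end in clips[1:]:
        prev_start, prev_end = merged[-1]
        if start - prev_end < merge_gap:          # closest ends under 5 s apart -> collapse
            merged[-1] = (prev_start, max(prev_end, end))
        else:
            merged.append((start, end))
    return sum(end - start for start, end in merged)

# the first two segments are 2 s apart, so they collapse into one certificate
print(certificate_length([(10.0, 12.0), (14.0, 20.5), (100.0, 101.0)]))  # 11.5
```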
For each other dataset, we ensure that (A) each annotated label class (if applicable) has at least 1 data sample evaluated and (B) at least two hours of human effort is applied. Fig. 2 shows the histogram of estimated EgoSchema temporal certificate lengths for the 100 clips. Fig. 3 plots the certificate length against the actual clip length. We observe that EgoSchema has temporal certificate length \(5.7\times\) longer than the second longest certificate length dataset, and \(10\times\) to \(100\times\) longer than all other video understanding datasets. ### Evaluating Multiple-choice Question Answering on EgoSchema In Table 6, we benchmark several state-of-the-art video-language models, with the intention of adding more models in the future, in a Zero-shot question-answering setting on EgoSchema. We evaluate each model in at least two settings. First is the conventional inference setting, where the model is assessed based on the same number of frames it was trained with. And second is a less challenging setting, where the model is tested on the maximum number of frames possible to execute inference with, using an 80G A100, without exceeding the GPU memory capacity. In both settings, frames are sampled uniformly from the input video clip. **FrozenBiLM**[57] adapts frozen multi-modal encoders trained on web-scale data for the task of question answering and achieves state-of-the-art zero-shot QA accuracy across \(8\) video question-answering datasets. We choose the How2QA FrozenBiLM model under both \(10\) and \(90\) frames. **VIOLET**[14] is a masked token modeling-based video language transformer that performs competitively on a variety of video-language tasks. We evaluate four of the best VIOLET models that are finetuned on different tasks for both \(5\) and \(75\) frames and choose the model with the best overall accuracy. More details are in _supplementary_. **mPLUG-Owl**[59] proposes a training strategy to add image & video modality to pretrained large language models. We adapt mPLUG to facilitate the multiple choice QA by prompting the model with each of the options individually in the format: 'Given question <question text>, is answer <answer text> correct?' along with the video frames. Then, we choose the option with the highest softmax score of the token 'Yes' in the output text. We observe accuracy to be non-monotonic in frame length, and report results in \(1\) to \(30\) frames in Table 6. **InternVideo**[48] proposes training video-language models jointly with masked video modeling and contrastive learning objectives. By default, InternVideo does not directly support multiple-choice video QA. We adapt the MSRVTT finetuned InternVideo model, which performs zero-shot multiple-choice tasks, by incorporating the question with each answer choice in the format: 'Question: <question text>? Is it <answer text>'. Then, we choose the option with the highest output score as the prediction. We report results spanning 10 to 90 input frames in Table 6. We observe that performance is monotonic with the number of frames but the gain saturates around just \(30\) frames. Figure 6: **Benchmarking Zero-shot QA on EgoSchema** **Human.** We also benchmark human performance on the multiple-choice question answering task on EgoSchema in Table 7. _First_ are time pressure settings where the annotators are asked to choose the correct answer under one ('In <1 min') and three ('In <3 min') minutes. Humans can already achieve an impressive 67.0% accuracy, in under 1 minute!
Interestingly, this only slightly increases (+1.0%) when allowed three minutes. We believe that this can inform about performance on EgoSchema in limited model inference capacities. We believe this could inform about the frame rate needed for long-form video understanding in future models. _Second_, we also benchmark human performance using only 1 fps video ('180 frames'). Surprisingly, we observe that just with 1 fps humans can achieve an impressive 67.2%. _Third_, we evaluate human performance in a restrictive setting where the annotator is forced to first watch the video without reading the text, and then answer the question without re-watching the video ('Video \(\longrightarrow\) Text'). Curiously, this achieves better accuracy than the 'No constraint' setting where the annotators are asked to simply answer without any constraints (76.2% vs. 75.0%). A possible hypothesis is that watching the video without text allows the annotator to focus more closely on the video, thereby benefiting performance than the setting where the attention is somewhat divided between the text and video. We believe this will help us understand the performance trade-offs in the early vs. late fusion of video and text modalities for long-form video-language models. Accuracy for 'No constraint' setting is estimated over 9 hours of video. All other accuracies are estimated over 5 hours of video. ## 5 Conclusion We present EgoSchema, a novel diagnostic benchmark designed for assessing very long-form video-language understanding capabilities of modern multimodal models. We also introduce the notion of a temporal _certificate_ set, a probe that can be applied to a wide array of video tasks and benchmarks for understanding their intrinsic temporal lengths. We estimate temporal certificates of 15 varied datasets and demonstrate EgoSchema to exhibit temporal certificate length approximately \(5.7\times\) longer than the next longest dataset and \(25\times\) to \(100\times\) longer than all other video understanding datasets. We also benchmark several state-of-the-art models on EgoSchema and find their Zero-shot question-answering accuracy to be less than \(33\%\) while humans achieve 76%. We believe that EgoSchema will play a key role in the development and evaluation of future very long-form video-language models. **Limitations.** EgoSchema RGB clips are sourced from Ego4D [20] and inherit Ego4D egocentric video biases. Further, the text is carefully curated for veracity, there are inevitable text data distribution biases that can occur in LLM-generated outputs due to biases present in web-scale LLM training data. Finally, human curation itself is far from perfect and while we perform two rounds of curation to minimize false positives, the collected EgoSchema is most likely to inevitably contain some small mislabelled or ill-formed question-answer sets. We plan to host a crowd-sourced errata board to minimize human curation error over time with the support of the open-source research community.
2310.13334
Convergence analysis on the alternating direction method of multipliers for the cosparse optimization problem
From a dual perspective of the sparse representation model, Nam et al. proposed the cosparse analysis model. In this paper, we aim to investigate the convergence of the alternating direction method of multipliers (ADMM) for the cosparse optimization problem. First, we examine the variational inequality representation of the cosparse optimization problem by introducing auxiliary variables. Second, ADMM is used to solve cosparse optimization problem. Finally, by utilizing a tight frame with a uniform row norm and building upon lemmas and the strict contraction theorem, we establish a worst-case $\mathcal{O}(1/t)$ convergence rate in the ergodic sense.
Zisheng Liu, Ting Zhang
2023-10-20T07:53:27Z
http://arxiv.org/abs/2310.13334v2
Convergence analysis on the alternating direction method of multipliers for the cosparse optimization problem ###### Abstract From a dual perspective of the sparse representation model, Nam et al. proposed the cosparse analysis model. In this paper, we aim to investigate the convergence of the alternating direction method of multipliers (ADMM) for the cosparse optimization problem. First, we examine the variational inequality representation of the cosparse optimization problem by introducing auxiliary variables. Second, ADMM is used to solve cosparse optimization problem. Finally, by utilizing a tight frame with a uniform row norm and building upon lemmas and the strict contraction theorem, we establish a worst-case \(\mathcal{O}(1/t)\) convergence rate in the ergodic sense. s + Footnote †: journal: Computer Science parse representation model, cosparse analysis model, alternating direction method of multipliers, variational inequality, convergence analysis ## 1 Introduction Low-dimensional signal recovery takes advantage of the inherent low-dimensionality of many natural signals, despite their high ambient dimension. Utilizing prior information about the low-dimensional space can significantly aid in recovering the signal of interest. Sparsity, a widely recognized form of prior information, serves as the foundation for the burgeoning field of compressive sensing (CS [1, 2, 3, 4, 5]). The recovery of sparse inputs has found numerous applications in areas such as imaging, speech, radar signal processing, sub-Nyquist sampling, and more [6, 7, 8, 9]. A typical sparse recovery problem is associated with the following linear system: \[y=Mx, \tag{1}\] where \(y\in R^{m}\) is an observed vector, \(M\in R^{m\times d}\) is a measurement matrix and \(x\in R^{d}\) is an unknown signal which would be estimated from \(y\). According to the Nyquist Shannon sampling theorem, if the \(k\)-space data is undersampled so much that it fails to meet the Nyquist sampling criterion, then reconstructing the data can be difficult or impossible without prior knowledge of \(x\). ### Sparse synthesis model Over the past decade, the application of compressed sensing significantly increased the image reconstruction speed and efficiency because of its capability to reconstruct images from highly undersampled signals. Sparse prior is widely used in CS-based reconstruction methods. For the sparse synthesis model, if a vector \(x\) is sufficient sparse, under the incoherence assumptions on the measurement matrix \(M\), \(x\) can be robustly estimated by the problem \[\begin{split}&\min_{x}\ \|x\|_{\tau}\\ & s.t.\ y=Mx,\end{split} \tag{2}\] where \(0\leq\tau\leq 1\). The advanced ideas and methods have been explored by applications in signals and image processing [10, 11, 12, 13]. After years of research, this model is becoming more and more mature and stable. ### Cosparse analysis model In the recent decade, the cosparse analysis model is an alternative approach has gained popularity [14, 15, 17, 18, 19, 20]. Within this framework, a potentially redundant analysis operator \(D\in\mathbb{R}^{n\times d}(n\geq d)\) is employed, and the analyzed vector \(Dx\) is expected to be sparse. This implies that a signal \(x\in R^{d}\) belongs to the cosparse analysis model with cosparsity \(\ell\) if \(\ell=n-\|Dx\|_{0}\). In this paper, the quantity \(\ell\) represents the number of rows in \(D\) that are orthogonal to the signal. Consequently, \(x\) is referred to as \(\ell\)-cosparse or simply cosparse. 
The specific definitions of cosparse and cosupport can be found in literature [15], for ease of reference, we have listed them below. **Definition 1.1** (Cosparse).: A signal \(x\in R^{d}\) is said to be cosparse with respect to an analysis operator \(D\in R^{n\times d}\) if the analysis representation vector \(Dx\) contains many zero elements. Further, the number of zero elements \[\ell=n-\|Dx\|_{0}\] is called the cosparsity of \(x\), we also say \(x\) is \(\ell\)-cosparse. **Definition 1.2** (Cosupport).: For a signal \(x\in R^{d}\) and a given analysis operator \(D\in R^{n\times d}\) with its rows \(D_{j}\in R^{d}(1\leq j\leq n)\), the cosupport is defined by \[\Lambda:=\{j|\langle D_{j},x\rangle=0\}.\] In this paper, \(D\) is a tight frame with uniform row norm. We remind the reader that a frame is defined as below. **Definition 1.3** (Frame[21, 22]).: Let \(\Phi=\{\varphi_{i}\}_{i=1}^{N}\subseteq R^{n}\) be a vector sequence of the Hilbert space with \(N\geq n\). If there exist constants \(0<A\leq B<\infty\) such that \[\forall x\in R^{n},\quad A\|x\|^{2}\leq\sum_{i=1}^{N}|\langle x,\varphi_{i} \rangle|^{2}\leq B\|x\|^{2}, \tag{3}\] then \(\Phi\) is referred to as a finite frame of \(R^{n}\). The constants \(A\) and \(B\) in the above formula are known as the lower and upper bounds of the finite frame \(\Phi\), respectively. They are considered to be the optimal bounds, with \(A\) being the supremum in the lower bound and \(B\) being the infimum in the upper bound. If \(A=B\), then the frame \(\Phi\) is called an \(A\)-tight frame. If \(A=B=1\), then \(\Phi\) is called a Parseval frame. If there exists a constant \(C\) such that each meta-norm \(\|\varphi_{i}\|=C\) of the frame \(\Phi\), then \(\Phi\) is called an iso-norm frame. In particular, for a tight frame, if \(C=1\), it is referred to as a uniformly tight frame. According to the definition of cosparsity, the cosparse analysis model focuses on the zero elements of the analysis representation vector \(Dx\), rather than the non-zero elements. This perspective contrasts with the sparse synthesis model. If the cosparsity \(\ell\) is significantly large, meaning that the number of zeros \(\ell\) is close to \(d\), we say that \(x\) has a cosparse representation. The cosupport set is identified by iteratively removing rows from \(D\) for which \(\langle D_{j},x\rangle\neq 0\) until the index set \(\Lambda\) remains unchanged, with \(|\Lambda|\geq\ell\). If the analysis representation vector \(Dx\) is sparse, similar to the sparse model, the estimation of \(x\) from the measurements can be achieved by \[\begin{split}&\min_{x}\ \|Dx\|_{0}\\ & s.t.\ y=Mx.\end{split} \tag{4}\] The minimization problem (4) is known to be NP-hard [15], necessitating the use of approximation methods. Similar to the sparse model, one option is to use the greedy analysis pursuit (GAP) approach, which is inspired by the orthogonal matching pursuit (OMP) algorithm [14, 15, 16]. Alternatively, the nonconvex \(\ell_{0}\) norm can be approximated by the convex \(\ell_{1}\) norm, leading to the relaxed problem known as analysis basis pursuit (ABP) [23]. In this case, \(x\) can be estimated by solving a modified optimization problem \[\begin{split}&\min_{x}\|Dx\|_{1}\\ & s.t.\ \|y-Mx\|_{2}\leq\epsilon,\end{split} \tag{5}\] where \(\|\cdot\|_{1}\) is the \(\ell_{1}\) norm that sums the absolute values of a vector and \(\epsilon\) is a upper bound on the noise level \(\|v\|_{2}\). 
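Definitions 1.1 and 1.2 translate directly into a few lines of NumPy. In the sketch below, the finite-difference operator and the numerical tolerance are illustrative choices only; the paper itself works with a tight frame \(D\) of uniform row norm and exact zeros.

```python
import numpy as np

def cosupport(D, x, tol=1e-10):
    """Indices of rows of D orthogonal to x (Definition 1.2), up to a numerical tolerance."""
    return np.flatnonzero(np.abs(D @ x) <= tol)

def cosparsity(D, x, tol=1e-10):
    """Cosparsity l = n - ||Dx||_0 (Definition 1.1)."""
    return D.shape[0] - np.count_nonzero(np.abs(D @ x) > tol)

# toy check: a 1-D finite-difference operator and a piecewise-constant signal
d = 8
D = np.eye(d - 1, d, k=1) - np.eye(d - 1, d)     # row j computes x[j+1] - x[j]
x = np.array([1., 1., 1., 4., 4., 4., 4., 2.])
print(cosparsity(D, x), cosupport(D, x))         # 5 zero differences, at rows 0, 1, 3, 4, 5
```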
ABP is equivalent to the unconstrained optimization \[\min_{x}\|Dx\|_{1}+\frac{\alpha}{2}\|y-Mx\|_{2}^{2}, \tag{6}\] which we call analysis LASSO (ALASSO). It can be said that ABP and ALASSO are equivalent in the sense that for any \(\epsilon>0\), there exists an \(\alpha\) such that the optimal solutions of ABP and ALASSO are identical. For the optimization problem (6), our previous work presented the modified GAP algorithm and error analysis [18, 24]. The simulations we conducted demonstrated the advantages of the proposed method for the cosparse optimization problem. These optimization problems can also be solved using interior point methods [25]. However, as the problem dimension increases, these techniques become time-consuming since they require solutions of linear systems. Other suggested approaches include the alternating direction method of multipliers (ADMM) [26, 27, 28, 37] and the accelerated alternating minimization method (AAM) [29]. In this paper, we propose a new way to analyze the convergence theory of the cosparse optimization problem based on the variational inequality. ### Organization of the paper Our focus in this paper is on the cosparse optimization problem and its convergence study based on a variational inequality. The paper is structured as follows: In Section 2, we introduce auxiliary variables and investigate the variational inequality characterization of the cosparse optimization problem. In Section 3, we present several lemmas that establish the strict contraction of the ADMM for the cosparse optimization problem. Using these lemmas and the strict contraction theorem, we establish a worst-case \(\mathcal{O}(1/t)\) convergence rate in the ergodic sense. Finally, Section 4 provides a brief conclusion. ## 2 Preliminaries To apply the ADMM for solving the cosparse optimization problem (6), we convert the unconstrained optimization problem mentioned above into a constrained optimization problem as follows \[\begin{split}&\min_{x,z}\lVert z\rVert_{1}+\frac{\alpha}{2}\lVert y -Mx\rVert_{2}^{2}\\ & s.t.\;Dx-z=0,\end{split} \tag{7}\] where an auxiliary variable \(z\in R^{n}\) is introduced in (6) to transfer \(Dx\) out of the nondifferentiable term \(\lVert\cdot\rVert_{1}\) and \(\alpha>0\) is a penalty parameter. In this section, we summarize the variational inequality (VI) characterization of (7). Initially, we present the optimality condition of the constrained optimization problem (7), which forms the foundation for our subsequent convergence analysis [30, 32]. We then proceed to express the Lagrangian function of (7) as follows \[L(z,x,\lambda)=\lVert z\rVert_{1}+\frac{\alpha}{2}\lVert y-Mx\rVert_{2}^{2}- \lambda^{T}(Dx-z). \tag{8}\] In (8), we assume that \(x\in\mathcal{X}\), \(z\in\mathcal{Z}\) and \(\lambda\in R^{n}\) where \(\mathcal{X}\subset R^{d}\) and \(\mathcal{Z}\subset R^{n}\) are closed convex sets, we call \((z^{*},x^{*},\lambda^{*})\in\Omega:=\mathcal{Z}\times\mathcal{X}\times R^{n}\) to be a saddle point of \(L(z,x,\lambda)\) if the following inequalities are satisfied \[L(z^{*},x^{*},\lambda)\leq L(z^{*},x^{*},\lambda^{*})\leq L(z,x,\lambda^{*}). \tag{9}\] Obviously, a saddle point \((z^{*},x^{*},\lambda^{*})\) can be characterized by the system \[\left\{\begin{array}{l}z^{*}=\arg\min\{L(z,x^{*},\lambda^{*})|z\in \mathcal{Z}\},\\ x^{*}=\arg\min\{L(z^{*},x,\lambda^{*})|x\in\mathcal{X}\},\\ \lambda^{*}=\arg\max\{L(z^{*},x^{*},\lambda)|\lambda\in R^{n}\},\end{array}\right. 
\tag{10}\] which can be rewritten as \[\left\{\begin{array}{l}z^{*}\in\mathcal{Z},L(z,x^{*},\lambda^{*})-L(z^{*},x ^{*},\lambda^{*})\geq 0,\\ x^{*}\in\mathcal{X},L(z^{*},x,\lambda^{*})-L(z^{*},x^{*},\lambda^{*})\geq 0,\\ \lambda^{*}\in R^{n},L(z^{*},x^{*},\lambda^{*})-L(z^{*},x^{*},\lambda)\geq 0.\end{array}\right. \tag{11}\] Below, we present a summary of the method for expressing the optimality condition of the cosparse analysis model (7) via a variational inequality. **Proposition 2.1**.: _Suppose \(\mathcal{X}\subset R^{d}\) is a closed convex set, and \(\theta(x):R^{d}\to R\) is a convex function. Furthermore, let \(f(x)\) be differentiable in \(\mathcal{X}\). We assume that the set of solutions for the minimization problem \(\min\{\theta(x)+f(x)|x\in\mathcal{X}\}\) is nonempty, then,_ \[x^{*}=\arg\min\{\theta(x)+f(x)|x\in\mathcal{X}\} \tag{12}\] _if and only if_ \[x,\ x^{*}\in\mathcal{X},\ \theta(x)-\theta(x^{*})+(x-x^{*})^{T}\nabla f (x^{*})\geq 0. \tag{13}\] The proof of Proposition 2.1 is available in [33]. Let \(\theta_{1}(z)=\|z\|_{1}\) and \(\theta_{2}(x)=\frac{\alpha}{2}\|y-Mx\|_{2}^{2}\), according to the above inequality (13), a saddle point \((z^{*},x^{*},\lambda^{*})\) of the Lagrangian function (8) can be characterized by a solution point of the following variational inequality \[\omega,\omega^{*}\in\Omega,\ \ \theta(u)-\theta(u^{*})+(\omega- \omega^{*})^{T}F(\omega^{*})\geq 0, \tag{14}\] where \[\theta(u)=\theta_{1}(z)+\theta_{2}(x),\ \ \Omega= \mathcal{Z}\times\mathcal{X}\times R^{n}, \tag{15}\] and \[\omega=\left(\begin{array}{c}z\\ x\\ \lambda\end{array}\right),\ \ u=\left(\begin{array}{c}z\\ x\end{array}\right),\ \ F(\omega)=\left(\begin{array}{c}\lambda\\ -D^{T}\lambda\\ Dx-z\end{array}\right), \tag{16}\] Since \(F\) is an affine operator, and \[F(\omega)=\left(\begin{array}{ccc}0&0&I\\ 0&0&-D^{T}\\ -I&D&0\end{array}\right)\left(\begin{array}{c}z\\ x\\ \lambda\end{array}\right), \tag{17}\] According to the antisymmetry of the affine matrix, it follows that \[(\omega-\bar{\omega})^{T}[F(\omega)-F(\bar{\omega})]\equiv 0,\ \forall\ \omega, \bar{\omega}\in\Omega. \tag{18}\] Using inequality (13) and combining (8), we derive the following conclusion with \((z^{*},x^{*},\lambda^{*})\in\Omega\), \[\left\{\begin{array}{l}\theta_{1}(z)-\theta_{1}(z^{*})+(z-z^{*})^{T}\lambda^{*} \geq 0,\\ \theta_{2}(x)-\theta_{2}(x^{*})+(x-x^{*})^{T}(-D^{T}\lambda^{*})\geq 0,\\ (\lambda-\lambda^{*})^{T}(Dx^{*}-z^{*})\geq 0.\end{array}\right. \tag{19}\] After conducting the aforementioned analysis, the linear constrained cosparse optimization problem is reformulated as a variational inequality. Consequently, the task is ultimately simplified to identifying a saddle point of the Lagrangian function. In the subsequent section, the convergence analysis of the ADMM method for addressing the cosparse optimization problem, as denoted by equation (7), will be discussed. ## 3 Convergence analysis of the cosparse optimization problem ### Variational inequality characterization of ADMM The augmented Lagrangian function of the problem (7) can be formulated as follows \[\mathcal{L}_{\beta}(z,x,\lambda)= \|z\|_{1}+\frac{\alpha}{2}\|y-Mx\|_{2}^{2}-\lambda^{T}(Dx-z)+ \frac{\beta}{2}\|Dx-z\|_{2}^{2}, \tag{20}\] where \(\lambda\) is the Lagrange multiplier and \(\beta>0\) is a penalty parameter for the linear constraints. 
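Since the operator \(F\) in (16)-(17) is affine with a skew-symmetric coefficient matrix, the identity (18) can be checked directly. The following small sketch (with illustrative dimensions and random points chosen by us) verifies it numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4
D = rng.standard_normal((n, d))    # any analysis operator works for this identity

# Skew-symmetric coefficient matrix of the affine operator F in (17),
# acting on omega = (z, x, lambda) in R^{n + d + n}.
A = np.block([
    [np.zeros((n, n)), np.zeros((n, d)), np.eye(n)],
    [np.zeros((d, n)), np.zeros((d, d)), -D.T],
    [-np.eye(n),       D,                np.zeros((n, n))],
])
assert np.allclose(A, -A.T)        # skew-symmetry of the coefficient matrix

F = lambda omega: A @ omega        # F(omega) as in (16)-(17)

# Identity (18): (omega - omega_bar)^T [F(omega) - F(omega_bar)] = 0 for all omega, omega_bar.
omega     = rng.standard_normal(2 * n + d)
omega_bar = rng.standard_normal(2 * n + d)
print((omega - omega_bar) @ (F(omega) - F(omega_bar)))   # ~ 0 up to round-off
```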
Thus, applying directly the augmented Lagrangian function (20) and starting with an initial iterate \((x^{0},\lambda^{0})\in\mathcal{X}\times R^{n}\), the ADMM generates its sequence via following iterative scheme \[\left\{\begin{array}{l}z^{k+1}=\arg\min\{\mathcal{L}_{\beta}(z,x^{k},\lambda ^{k})|z\in\mathcal{Z}\},\\ x^{k+1}=\arg\min\{\mathcal{L}_{\beta}(z^{k+1},x,\lambda^{k})|x\in\mathcal{X}\},\\ \lambda^{k+1}=\lambda^{k}-\beta(Dx^{k+1}-z^{k+1}),\;\lambda\in R^{n}\end{array}\right. \tag{21}\] the corresponding variational inequalities of (21) can be given as \[\left\{\begin{array}{l}\theta_{1}(z)-\theta_{1}(z^{k+1})+(z-z^{k+1})^{T}[ \lambda^{k}-\beta(Dx^{k}-z^{k+1})]\geq 0,\\ \theta_{2}(x)-\theta_{2}(x^{k+1})+(x-x^{k+1})^{T}[-D^{T}\lambda^{k}+\beta D^{ T}(Dx^{k+1}-z^{k+1})]\geq 0,\\ (\lambda-\lambda^{k+1})^{T}[(Dx^{k+1}-z^{k+1})+\frac{1}{\beta}(\lambda^{k+1}- \lambda^{k})]\geq 0.\end{array}\right. \tag{22}\] For some reviews on the classical ADMM, one can refer to literatures [28, 30, 31, 34, 35, 36]. ### Assertions To establish that \(\{\omega^{k}\}\) is strictly contractive with respect \(\Omega\), we first present several lemmas. **Lemma 3.1**.: _Let the sequence \(\{\omega^{k}\}\) be generated by (21). Then, we have_ \[\begin{split}&\theta(u)-\theta(u^{k+1})+(\omega-\omega^{k+1})^{T}F (\omega)\\ \geq&(z-z^{k+1})^{T}\beta(Dx^{k}-Dx^{k+1})+ \frac{1}{\beta}(\lambda-\lambda^{k+1})^{T}(\lambda^{k}-\lambda^{k+1}),\ \forall\omega\in\Omega.\end{split} \tag{23}\] Proof.: From (22) we know that \[\theta_{1}(z)-\theta_{1}(z^{k+1})+(z-z^{k+1})^{T}[\lambda^{k}-\beta(Dx^{k}-z^{k+1 })]\geq 0,\ \forall z\in\mathcal{Z} \tag{24}\] and \[\theta_{2}(x)-\theta_{2}(x^{k+1})+(x-x^{k+1})^{T}(-D^{T}\lambda^{k}+\beta D^{ T}(Dx^{k+1}-z^{k+1}))\geq 0,\ \forall x\in\mathcal{X}. \tag{25}\] Using \(\lambda^{k+1}=\lambda^{k}-\beta(Dx^{k+1}-z^{k+1})\) we can easily deduce \[\lambda^{k}=\lambda^{k+1}+\beta(Dx^{k+1}-z^{k+1}) \tag{26}\] and \[(Dx^{k+1}-z^{k+1})=\frac{1}{\beta}(\lambda^{k}-\lambda^{k+1}). \tag{27}\] Putting the formulations (26) and (27) into (24) and (25), respectively, then we have the following inequalities \[\theta_{1}(z)-\theta_{1}(z^{k+1})+(z-z^{k+1})^{T}[\lambda^{k+1}+\beta(Dx^{k+ 1}-z^{k+1})-\beta(Dx^{k}-z^{k+1})]\geq 0, \tag{28}\] \[\theta_{2}(x)-\theta_{2}(x^{k+1})+(x-x^{k+1})^{T}(-D^{T}\lambda^{k+1})\geq 0, \tag{29}\] and \[(\lambda-\lambda^{k+1})^{T}(Dx^{k+1}-z^{k+1})\geq(\lambda-\lambda^{k+1})^{T} \frac{1}{\beta}(\lambda^{k}-\lambda^{k+1}). \tag{30}\] Combining (28), (29) and (30) we have \[\left\{\begin{array}{l}\theta_{1}(z)-\theta_{1}(z^{k+1})+(z-z^{k+1})^{T} \lambda^{k+1}\geq(z-z^{k+1})^{T}\beta(Dx^{k}-Dx^{k+1}),\\ \theta_{2}(x)-\theta_{2}(x^{k+1})+(x-x^{k+1})^{T}(-D^{T}\lambda^{k+1})\geq 0, \\ (\lambda-\lambda^{k+1})^{T}(Dx^{k+1}-z^{k+1})\geq(\lambda-\lambda^{k+1})^{T} \frac{1}{\beta}(\lambda^{k}-\lambda^{k+1}),\end{array}\right. \tag{31}\] which is \[\theta(u)-\theta(u^{k+1})+(\omega-\omega^{k+1})^{T}F(\omega^{k+1})\] \[\geq (z-z^{k+1})^{T}\beta(Dx^{k}-Dx^{k+1})+(\lambda-\lambda^{k+1})^{T} \frac{1}{\beta}(\lambda^{k}-\lambda^{k+1}).\] Note that the matrix in the operator \(F\) is skew-symmetric, then, using (18), we have \[\theta(u)-\theta(u^{k+1})+(\omega-\omega^{k+1})^{T}F(\omega) \tag{32}\] \[\geq (z-z^{k+1})^{T}\beta(Dx^{k}-Dx^{k+1})+(\lambda-\lambda^{k+1})^{T} \frac{1}{\beta}(\lambda^{k}-\lambda^{k+1}).\] The Lemma 3.1 is proved. **Lemma 3.2**.: _Let the sequence \(\{\omega^{k}\}\) be generated by (21). 
Then, we have_ \[\begin{split}&\beta(z-z^{k+1})^{T}(Dx^{k}-Dx^{k+1})+\frac{1}{ \beta}(\lambda-\lambda^{k+1})^{T}(\lambda^{k}-\lambda^{k+1})\\ =&-\frac{1}{2\beta}\|\lambda^{k}-\lambda\|_{2}^{2}- \frac{\beta}{2}\|Dx^{k}-z\|_{2}^{2}+\frac{1}{2\beta}\|\lambda^{k+1}-\lambda\|_ {2}^{2}+\frac{\beta}{2}\|Dx^{k+1}-z\|_{2}^{2}\\ &+\frac{\beta}{2}\|Dx^{k}-z^{k+1}\|_{2}^{2}.\end{split} \tag{33}\] Proof.: Applying the identity \[(a-b)^{T}(c-d)=\frac{1}{2}\{\|a-d\|_{2}^{2}-\|a-c\|_{2}^{2}\}+\frac{1}{2}\{\|c- b\|_{2}^{2}-\|d-b\|_{2}^{2}\}\] to the left-hand side in (33) with \[a=z,\ b=z^{k+1},\ c=Dx^{k},\ d=Dx^{k+1},\] we obtain \[\begin{split}&\beta(z-z^{k+1})^{T}(Dx^{k}-Dx^{k+1})\\ =&\frac{\beta}{2}\{\|z-Dx^{k+1}\|_{2}^{2}-\|z-Dx^{k} \|_{2}^{2}\}+\frac{\beta}{2}\{\|Dx^{k}-z^{k+1}\|_{2}^{2}-\|Dx^{k+1}-z^{k+1}\|_ {2}^{2}\}.\end{split} \tag{34}\] Using the identity \[b^{T}(b-a)=\frac{1}{2}(\|b\|_{2}^{2}-\|a\|_{2}^{2}+\|b-a\|_{2}^{2}),\] and let \[a=\lambda-\lambda^{k},\ b=\lambda-\lambda^{k+1},\] we obtain \[\frac{1}{\beta}(\lambda-\lambda^{k+1})^{T}(\lambda^{k}-\lambda^{k+1})=\frac{1} {2\beta}\{\|\lambda-\lambda^{k+1}\|_{2}^{2}-\|\lambda-\lambda^{k}\|_{2}^{2}+\| \lambda^{k}-\lambda^{k+1}\|_{2}^{2}\}. \tag{35}\] Using \[\beta\|Dx^{k+1}-z^{k+1}\|_{2}^{2}=\frac{1}{\beta}\|\lambda^{k}- \lambda^{k+1}\|_{2}^{2},\] and combining (34) and (35), we complete the proof of this lemma. **Lemma 3.3**.: _Let the sequence \(\{x^{k}\}\), \(\{z^{k}\}\) and \(\{\lambda^{k}\}\) be generated by (21), then,_ \[\beta\|Dx^{k}-z^{k+1}\|_{2}^{2}\geq\beta\|Dx^{k}-Dx^{k+1}\|_{2}^{2}+\frac{1}{ \beta}\|\lambda^{k}-\lambda^{k+1}\|_{2}^{2}. \tag{36}\] Proof.: Based on the second inequality of inequality (31), we can derive the following result \[\left\{\begin{array}{l}\theta_{2}(x)-\theta_{2}(x^{k+1})+(x-x^{k+1})^{T}(-D^{T} \lambda^{k+1})\geq 0,\\ \theta_{2}(x)-\theta_{2}(x^{k})+(x-x^{k})^{T}(-D^{T}\lambda^{k})\geq 0.\end{array}\right. \tag{37}\] Let \(x=x^{k}\) and \(x=x^{k+1}\) in (37), respectively, then \[\left\{\begin{array}{l}\theta_{2}(x^{k})-\theta_{2}(x^{k+1})+(x^{k}-x^{k+1}) ^{T}(-D^{T}\lambda^{k+1})\geq 0,\\ \theta_{2}(x^{k+1})-\theta_{2}(x^{k})+(x^{k+1}-x^{k})^{T}(-D^{T}\lambda^{k}) \geq 0.\end{array}\right.\] From above inequalities, we have \[(\lambda^{k}-\lambda^{k+1})^{T}(Dx^{k}-Dx^{k+1})\geq 0. \tag{38}\] Using \[(Dx^{k+1}-z^{k+1})=\frac{1}{\beta}(\lambda^{k}-\lambda^{k+1}),\] then we obtain \[\beta\|Dx^{k}-z^{k+1}\|_{2}^{2}\] \[= \beta\|Dx^{k}-Dx^{k+1}+Dx^{k+1}-z^{k+1}\|_{2}^{2}\] \[= \beta\|Dx^{k}-Dx^{k+1}+\frac{1}{\beta}(\lambda^{k}-\lambda^{k+1} )\|_{2}^{2} \tag{39}\] \[\geq \beta\|Dx^{k}-Dx^{k+1}\|_{2}^{2}+\frac{1}{\beta}\|\lambda^{k}- \lambda^{k+1})\|_{2}^{2}.\] The proof of this lemma is completed. ### Strict contraction To present the main result of the paper, it is necessary to establish the strict contractility of the iterative sequence. The following subsection provides a proof of the strong contractility of the iterative sequence \(\{\omega^{k}\}\), which relies on Lemma 3.1, Lemma 3.2, and Lemma 3.3. **Theorem 3.4**.: _Assuming that the sequence \(\{\omega^{k}\}\) is generated by equation (21), we can state the following_ \[\|v^{k+1}-v^{*}\|_{H}^{2}\leq\|v^{k}-v^{*}\|_{H}^{2}-\|v^{k}-v^{k+1}\|_{H}^{2} \tag{40}\] _where_ \[v=\left(\begin{array}{c}\lambda\\ x\end{array}\right),\ \ H=\left(\begin{array}{cc}\frac{1}{\beta}I_{m}&0\\ 0&\beta I_{d}\end{array}\right),\ \ \mathcal{V}^{*}=\{(\lambda^{*},x^{*})|(z^{*},x^{*}, \lambda^{*})\in\Omega\}. 
\tag{41}\] Proof.: We can deduce from Lemma 3.1 and Lemma 3.2 that \[\begin{split}&\theta(u^{k+1})-\theta(u)+(\omega^{k+1}-\omega)^{T}F( \omega)\\ \leq&\frac{1}{2\beta}\|\lambda^{k}-\lambda\|_{2}^{2}+ \frac{\beta}{2}\|Dx^{k}-z\|_{2}^{2}-\frac{1}{2\beta}\|\lambda^{k+1}-\lambda\|_ {2}^{2}-\frac{\beta}{2}\|Dx^{k+1}-z\|_{2}^{2}\\ &-\frac{\beta}{2}\|Dx^{k}-z^{k+1}\|_{2}^{2}.\end{split} \tag{42}\] By utilizing Lemma 3.3, we can rewrite equation (42) as follows \[\begin{split} 0\leq&\theta(u^{k+1})-\theta(u^{*})+( \omega^{k+1}-\omega^{*})^{T}F(\omega^{*})\\ \leq&\frac{1}{2\beta}\|\lambda^{k}-\lambda^{*}\|_{2 }^{2}+\frac{\beta}{2}\|Dx^{k}-z^{*}\|_{2}^{2}-\frac{1}{2\beta}\|\lambda^{k+1} -\lambda^{*}\|_{2}^{2}-\frac{\beta}{2}\|Dx^{k+1}-z^{*}\|_{2}^{2}\\ -&\frac{1}{2\beta}\|\lambda^{k}-\lambda^{k+1}\|_{2 }^{2}-\frac{\beta}{2}\|Dx^{k}-Dx^{k+1}\|_{2}^{2}.\end{split} \tag{43}\] That is \[\begin{split}&\frac{1}{\beta}\|\lambda^{k+1}-\lambda^{*}\|_{2}^{2}+ \beta\|Dx^{k+1}-z^{*}\|_{2}^{2}\\ \leq&\frac{1}{\beta}\|\lambda^{k}-\lambda^{*}\|_{2 }^{2}+\beta\|Dx^{k}-z^{*}\|_{2}^{2}-(\frac{1}{\beta}\|\lambda^{k}-\lambda^{k+1} \|_{2}^{2}+\beta\|Dx^{k}-Dx^{k+1}\|_{2}^{2}).\end{split} \tag{44}\] Let \[Dx^{*}=z^{*},\ v=\left(\begin{array}{c}\lambda\\ x\end{array}\right)\ \text{and}\ H=\left(\begin{array}{cc}\frac{1}{\beta}I_{m}&0\\ 0&\beta D^{T}D\end{array}\right),\] therefore, the left-hand side of inequality (44) becomes \[\begin{split}&\frac{1}{\beta}\|\lambda^{k+1}-\lambda^{*}\|_{2}^{ 2}+\beta\|Dx^{k+1}-z^{*}\|_{2}^{2}\\ =&\left(\begin{array}{c}\lambda^{k+1}-\lambda^{*}\\ x^{k+1}-x^{*}\end{array}\right)^{T}\left(\begin{array}{cc}\frac{1}{\beta}I_ {m}&0\\ 0&\beta D^{T}D\end{array}\right)\left(\begin{array}{c}\lambda^{k+1}-\lambda^{ *}\\ x^{k+1}-x^{*}\end{array}\right)\\ =&(v^{k+1}-v^{*})^{T}\left(\begin{array}{cc}\frac{1}{\beta}I_{m}&0\\ 0&\beta D^{T}D\end{array}\right)(v^{k+1}-v^{*})\\ =&\|v^{k+1}-v^{*}\|_{H}^{2}.\end{split} \tag{45}\] Likewise, the sum of the first two terms on the right-hand side of inequality (44) is \[\begin{split}&\frac{1}{\beta}\|\lambda^{k}-\lambda^{*}\|_{2}^{2 }+\beta\|Dx^{k}-Dx^{*}\|_{2}^{2}\\ =&\left(\begin{array}{c}\lambda^{k}-\lambda^{*}\\ x^{k}-x^{*}\end{array}\right)^{T}\left(\begin{array}{cc}\frac{1}{\beta}I_ {m}&0\\ 0&\beta D^{T}D\end{array}\right)\left(\begin{array}{c}\lambda^{k}-\lambda^{ *}\\ x^{k}-x^{*}\end{array}\right)\\ =&(v^{k}-v^{*})^{T}\left(\begin{array}{cc}\frac{1}{\beta}I_ {m}&0\\ 0&\beta D^{T}D\end{array}\right)(v^{k}-v^{*})\\ =&\|v^{k}-v^{*}\|_{H}^{2}\end{split} \tag{46}\] and the sum of the last two terms on the right-hand side of inequality (44) is \[\begin{split}&\frac{1}{\beta}\|\lambda^{k}-\lambda^{k+1}\|_{2}^{2}+ \beta\|Dx^{k}-Dx^{k+1}\|_{2}^{2}\\ =&\left(\begin{array}{c}\lambda^{k}-\lambda^{k+1} \\ x^{k}-x^{k+1}\end{array}\right)^{T}\left(\begin{array}{cc}\frac{1}{\beta}I_{m} &0\\ 0&\beta D^{T}D\end{array}\right)\left(\begin{array}{c}\lambda^{k}-\lambda^{k+1 }\\ x^{k}-x^{k+1}\end{array}\right)\\ =&(v^{k}-v^{k+1})^{T}\left(\begin{array}{cc}\frac{1}{\beta}I_{m} &0\\ 0&\beta D^{T}D\end{array}\right)(v^{k}-v^{k+1})\\ =&\|v^{k}-v^{k+1}\|_{H}^{2}.\end{split} \tag{47}\] Since \(D\in R^{n\times d}\) is a unit tight frame, we have that \(D^{T}D=I_{d}\). By combining formulas (45), (46), and (47), we complete the proof of the Theorem 3.4. According to Theorem 3.4, we know that \(H\) is a positive definite matrix, and inequality (40) implies that the sequence \(\{v^{k}\}\) is bounded. 
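For reference, a minimal numerical sketch of the ADMM scheme (21) applied to (7) is given below, assuming \(\mathcal{Z}=R^{n}\), \(\mathcal{X}=R^{d}\) and a unit tight frame \(D\) with \(D^{T}D=I_{d}\). Under these assumptions the \(z\)-subproblem reduces to soft-thresholding, the \(x\)-subproblem to a fixed linear system, and the monitored quantity \(\|v^{k}-v^{k+1}\|_{H}\) should tend to zero, consistent with Theorem 3.4. The problem sizes, data, and parameter values are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_cosparse(y, M, D, alpha=10.0, beta=1.0, iters=300):
    """ADMM scheme (21) for problem (7):
        min_{x,z} ||z||_1 + (alpha/2)||y - Mx||_2^2   s.t.  Dx - z = 0,
    assuming Z = R^n, X = R^d and D^T D = I_d (unit tight frame)."""
    d, n = M.shape[1], D.shape[0]
    x, lam = np.zeros(d), np.zeros(n)
    # x-update solves (alpha M^T M + beta D^T D) x = alpha M^T y + D^T lam + beta D^T z;
    # with D^T D = I_d the system matrix is constant, so it is formed once.
    K = alpha * (M.T @ M) + beta * np.eye(d)
    hist = []
    for _ in range(iters):
        z = soft_threshold(D @ x - lam / beta, 1.0 / beta)                    # z^{k+1}
        x_new = np.linalg.solve(K, alpha * (M.T @ y) + D.T @ lam + beta * (D.T @ z))
        lam_new = lam - beta * (D @ x_new - z)                                # lambda^{k+1}
        # ||v^k - v^{k+1}||_H with H = diag(I/beta, beta I), cf. Theorem 3.4
        hist.append(np.sqrt(np.sum((lam - lam_new) ** 2) / beta
                            + beta * np.sum((x - x_new) ** 2)))
        x, lam = x_new, lam_new
    return x, z, lam, hist

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, m = 64, 32
    # Simplest unit tight frame (n = d); with this choice the analysis model reduces
    # to ordinary sparsity of x itself, which is enough to illustrate the iteration.
    D = np.eye(d)
    M = rng.standard_normal((m, d)) / np.sqrt(m)
    x_true = np.zeros(d)
    x_true[rng.choice(d, 5, replace=False)] = rng.standard_normal(5)
    y = M @ x_true + 0.01 * rng.standard_normal(m)
    x_hat, z_hat, lam_hat, hist = admm_cosparse(y, M, D)
    print("||v^k - v^{k+1}||_H at k = 0, 50, 299:", hist[0], hist[50], hist[-1])
    print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```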
Assuming that the initial vector is \(v_{0}=(\lambda_{0},x_{0})^{T}\), we can obtain the following expression by summing both sides of inequality (40) \[\sum_{k=0}^{\infty}\|v^{k}-v^{k+1}\|_{H}^{2}\leq\|v^{0}-v^{*}\|_{H}^{2}. \tag{48}\] The above equation indicates that \(\lim_{k\to\infty}\|v^{k}-v^{k+1}\|_{H}^{2}=0\). Therefore, any subsequence \(v^{k_{j}}\) of \(v^{k}\) also has \(\lim_{j\to\infty}\|v^{k_{j}}-v^{k_{j}+1}\|_{H}^{2}=0\). Suppose there exists a subsequence that converges to \(\bar{v}\), then formula (23) implies that \(\bar{v}\) is the solution of formula (21). This shows that any accumulation point of the sequence \(v^{k}\) is a solution of (21). According to formula (40), \(v^{k}\) cannot have more than one accumulation point, and hence \(v^{k}\) converges to \(\bar{v}\in\mathcal{V}^{*}\). ### Convergence rate in ergodic sense Combining with Theorem 3.4, we prove a worst-case \(\mathcal{O}(1/t)\) convergence rate in a ergodic sense of the ADMM scheme (21) for cosparse signal reconstruction problem. **Theorem 3.5**.: _Let the sequence \(\{\omega^{k}\}\) be generated by (21). Then, for any positive integer \(t\), we have_ \[\begin{split}&\theta(u^{t})-\theta(u)+(\omega^{t}-\omega)^{T}F( \omega)\\ \leq&\frac{1}{2(t+1)}[\frac{1}{\beta}\|\lambda^{0}- \lambda\|_{2}^{2}+\beta\|Dx^{0}-z\|_{2}^{2}],\ \forall\omega\in\Omega\end{split} \tag{49}\] _where_ \[\omega^{t}=\frac{1}{t+1}(\sum_{k=0}^{t}\omega^{k+1}). \tag{50}\] Proof.: For any integer \(k\), by (42) we obtain \[\theta(u^{k+1})-\theta(u)+(\omega^{k+1}-\omega)^{T}F(\omega) \tag{51}\] \[\leq \frac{1}{2\beta}\|\lambda^{k}-\lambda\|_{2}^{2}+\frac{\beta}{2}\|Dx ^{k}-z\|_{2}^{2}-\frac{1}{2\beta}\|\lambda^{k+1}-\lambda\|_{2}^{2}-\frac{\beta }{2}\|Dx^{k+1}-z\|_{2}^{2}.\] Suppose \(k=0,1,2,\ldots,t\) are non-negative integers. By summing the left and right ends of the inequality (51), we deduce that \[\sum_{k=0}^{t}\theta(u^{k+1})-(t+1)\theta(u)+\left[\sum_{k=0}^{t} \omega^{k+1}-(t+1)\omega\right]^{T}F(\omega) \tag{52}\] \[\leq \frac{1}{2\beta}\|\lambda^{0}-\lambda\|_{2}^{2}+\frac{\beta}{2} \|Dx^{0}-z\|_{2}^{2},\ \ \forall\omega\in\Omega.\] The left and right ends of the inequality (52) are multiplied by \(\frac{1}{t+1}\) at the same time, and let \[\omega^{t}=\frac{1}{t+1}\sum_{k=0}^{t}\omega^{k+1}, \tag{53}\] then, the inequality (52) is equivalent to \[\frac{1}{t+1}\sum_{k=0}^{t}\theta(u^{k+1})-\theta(u)+(\omega^{t} -\omega)^{T}F(\omega) \tag{54}\] \[\leq \frac{1}{2(t+1)}[\frac{1}{\beta}\|\lambda^{0}-\lambda\|_{2}^{2}+ \beta\|Dx^{0}-z\|_{2}^{2}].\] Given that the function \(\theta(u)\) is convex, let \[u^{t}=\frac{1}{t+1}\sum_{k=0}^{t}u^{k+1}=\frac{1}{t+1}(u^{1}+u^{2}+\cdots+u^{t }),\] we can derive the following expression \[\theta(u^{t})= \theta\left[\frac{1}{t+1}(u^{1}+u^{2}+\cdots+u^{t})\right] \tag{55}\] \[\leq \frac{1}{t+1}\left[\theta(u^{1})+\theta(u^{1})+\cdots+\theta(u^{ t})\right]\] \[= \frac{1}{t+1}\sum_{k=0}^{t}\theta(u^{k+1}).\] By utilizing equations (54) and (55), we complete the proof of the Theorem 3.5. 
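Theorem 3.5 is a statement about the ergodic point (50)/(53) rather than about the last iterate. As a small illustration (our own, with a hypothetical iterate history), the ergodic point is simply the running average of the ADMM iterates:

```python
import numpy as np

def ergodic_point(iterates):
    """omega^t = (1/(t+1)) * sum_{k=0}^{t} omega^{k+1}, as in (50)/(53);
    `iterates` is the list [omega^1, ..., omega^{t+1}] produced by scheme (21)."""
    return np.mean(np.asarray(iterates), axis=0)

# Hypothetical usage: stack omega^{k+1} = (z^{k+1}, x^{k+1}, lambda^{k+1}) after each
# ADMM step; Theorem 3.5 bounds the VI residual of this averaged point by O(1/t).
history = [np.concatenate([np.full(3, 0.1 * k), np.full(2, 1.0), np.full(3, -0.05 * k)])
           for k in range(1, 6)]
print(ergodic_point(history))
```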
After \(t\) iterations, the ergodic average \(\omega^{t}\) defined by (53) satisfies \[\omega^{t}\in\Omega\ \ \text{and}\ \ \sup_{\omega\in\mathcal{D}_{\omega^{t}}}\{\theta(u^{t})-\theta(u)+(\omega^{t}-\omega)^{T}F(\omega)\}\leq\frac{d}{2t}=\mathcal{O}(\frac{1}{t}),\] where \[\mathcal{D}_{\omega^{t}}=\{\omega\in\Omega\,|\,\|\omega-\omega^{t}\|\leq 1\},\ \ d:=\sup\{\frac{1}{\beta}\|\lambda^{0}-\lambda\|_{2}^{2}+\beta\|Dx^{0}-z\|_{2}^{2}\,|\,\omega\in\mathcal{D}_{\omega^{t}}\},\] and \(v^{0}=(\lambda^{0},x^{0})\) is the initial iterate. That is, \(\omega^{t}\) is an \(\mathcal{O}(\frac{1}{t})\) approximate solution of the variational inequality (14).

## 4 Conclusions

This paper presents a new approach, based on variational inequalities, to analyzing the convergence of the cosparse optimization problem. We first give the overall framework of the ADMM for solving the cosparse optimization problem. We then establish, through three lemmas, the basic inequalities required for the proof of the main theorem. Finally, our analysis yields a worst-case convergence rate of \(\mathcal{O}(1/t)\), which demonstrates the effectiveness of the approach. Researchers currently rely on a range of methods to solve separable convex optimization problems; two popular approaches are the generalized symmetric ADMM and parameterizable proximal point algorithms [37, 38, 39], which have demonstrated their effectiveness in various experiments. In future work, we plan to explore the potential of combining these methods to solve the cosparse signal reconstruction problem.

## Funding

The authors were supported by the National Natural Science Foundation of China Mathematics Tian Yuan Fund under grants No. 12226323 and No. 12226315, the National Natural Science Foundation of China under grant No. 62103136, and the Henan Province Undergraduate College Youth Backbone Teacher Training Program.

## Acknowledgments

The authors wish to thank Professor Zheng-Hai Huang for providing valuable comments which have significantly improved the quality of this paper.
2308.11769
Investigation of the mass spectra of singly heavy baryons $Σ_{Q}$, $Ξ^{\prime}_{Q}$ and $Ω_{Q}$ $(Q=c, b)$ in the Regge trajectory model
Very recently, LHCb Collaboration observed that two new $\Omega_{c}^{0}$ states decay into $\Xi^{+}_{c}K^{-}$ with masses of about $3185$ MeV and $3327$ MeV. However, their spin parity quantum numbers $J^{P}$ have not been determined. In this paper, we exploit the quark-diquark model, the linear Regge trajectory and the perturbation treatment method to analyze the mass spectra of the discovered experimental data for the singly heavy baryons $\Sigma_{c}/\Sigma_{b}$, $\Xi^{\prime}_{c}/\Xi^{\prime}_{b}$ and $\Omega_{c}/\Omega_{b}$. In addition, we further predict the mass spectra of several unobserved $\Sigma_{c}/\Sigma_{b}$, $\Xi^{\prime}_{c}/\Xi^{\prime}_{b}$ and $\Omega_{c}/\Omega_{b}$ baryons. In the case of the $\Omega_c(3185)^{0}$ and $\Omega_c(3327)^{0}$ states, we determine $\Omega_{c}(3185)^{0}$ as $2S$ state and $\Omega_{c}(3327)^{0}$ as $1D$ state with $J^{P}=1/2^{+}$ and $J^{P}=3/2^{+}$, respectively. An overall good agreement of the obtained predictions with available experimental data are found.
Ji-Hai Pan, Jisi Pan
2023-08-21T03:56:45Z
http://arxiv.org/abs/2308.11769v3
Investigation of the mass spectra of singly heavy baryons \(\Sigma_{Q}\), \(\Xi^{\prime}_{Q}\) and \(\Omega_{Q}(Q=c,d)\) in Regge trajectory model ###### Abstract Recently, the LHCb observed that two new very narrow excited states \(\Omega_{c}^{0}\) decays into \(\Xi_{c}^{+}K^{-}\) with masses of about 3185 MeV and 3327 MeV, but their parity quantum numbers have not been determined. In this paper, we exploited the quark-diquark model and the linear Regge trajectory model, combined with the perturbation treatment method to analyze and study the mass spectra of the discovered experimental data for the singly heavy baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\), and then we further predict the mass spectra of several unobserved the baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\). In the case of the baryons \(\Omega_{c}(3185)^{0}\) and \(\Omega_{c}(3327)^{0}\), we determine the \(\Omega_{c}(3185)^{0}\) is \(2S\) state and the \(\Omega_{c}(3327)^{0}\) is \(1D\) state with the parity quantum number \(J^{P}=1/2^{+}\) and \(J^{P}=3/2^{+}\), respectively. An overall good agreement of the obtained predictions with available experimental data is found. + Footnote †: Electronic address: [email protected] + Footnote †: Electronic address: [email protected] ## I Introduction With the discovery of more and more highly excited strongly interacting particles in experiments such as LHCb, Belle, BaBar, and CLEO, people become to further deepen our understanding of the singly heavy baryons. The singly heavy baryons are composed of the two light quarks (namely diquark) spin one (\(S_{d}=1\)) in quark-diquark picture form a anti-color triplet (\(\bar{3}_{c}\)), with one heavy quark (\(S_{d}=1/2\)). The latest review of particle physics by the PDG [1] is needed to learn more important information about the singly heavy baryons. From PDG [1], the establishment of the \(S-\)wave/\(P-\)wave/\(D-\)wave baryon states \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\) are gradually perfect. In the \(\Sigma_{c}/\Sigma_{b}\) baryons, \(\Sigma_{c}(2455)^{0,+,++}\), \(\Sigma_{c}(2520)^{0,+,++}\) can be well interpreted as the \(S-\)wave charmed baryons with \(J^{P}=1/2^{+}\), \(J^{P}=3/2^{+}\), respectively. The triplet of the excited \(\Sigma_{c}(2800)^{0,+,++}\), was observed by Belle [2] in 2005. The four ground state \(\Sigma_{b}\) of \(\Sigma_{b}(5815)^{-+}\) and \(\Sigma_{b}^{*}(5835)^{-+}\) have been observed by the CDF collaboration in [3] with \(J^{P}=1/2^{+}\) \(J^{P}=3/2^{+}\), respectively. In the \(\Xi_{c}^{\prime}/\Xi_{b}^{\prime}\) baryons, the neutral states \(\Xi_{c}^{0}\) and its charged partner \(\Xi_{c}(2645)^{+}\) were reported by the CLEO Collaboration [4; 5] in the \(\Xi_{c}^{+}\pi^{-}\) states and \(\Xi_{c}^{0}\pi^{+}\) states as \(S-\)wave with \(J^{P}=1/2^{+}\), \(J^{P}=3/2^{+}\), respectively. While the \(\Xi_{c}(2923)\)[6] and \(\Xi_{c}(2930)^{+}\)[7], which are the good candidates for \(P-\)wave baryons, but \(J^{P}\) can not be determined. Similarly, LHCb observed two new charged states \(\Xi_{b}^{\prime}(5935)^{-}\) and \(\Xi_{b}^{*}(5955)^{-}\) of the \(\Xi_{b}^{\prime}\) baryons in [8]. Note that, in addition, the \(\Xi_{b}^{\prime}\) baryon has the only one \(\Xi_{b}(5945)^{0}\) neutral state with \(J^{P}=3/2^{+}\) is based on quark model expectations. 
Therefore, the discovery of these singly heavy baryons have very important significance for research. This is also meaningful for the study of the \(\Omega_{c}/\Omega_{b}\) baryons, initially only two states \(\Omega_{c}^{0}\) and \(\Omega_{c}(2770)^{0}\) with \(J^{P}=1/2^{+}\), \(J^{P}=3/2^{+}\), have been discovered experimentally. In 2017, the LHCb Collaboration reported five new narrow excited states of \(\Omega_{c}\) in the decay channel of \(\Xi_{c}^{+}K^{-}\)[9], but the \(J^{P}\) are unknown, which were later confirmed by the Belle Collaboration [10], and many different discussions of the excited states \(\Omega_{c}\)[11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] and see also more analysis in [23; 24; 25; 26; 27]. In 2020, the LHCb Collaboration reported the discovery of four narrower excited states \(\Omega_{b}\) in the \(\Xi_{b}^{0}K^{-}\) decays channel [28]. For example, several analysis about that in [29; 30]. Very recently, the LHCb, again, which observed that the two new very narrow excited states \(\Omega_{c}\) decaying into \(\Xi_{c}^{+}K^{-}\)[31], with masses of \(\Omega_{c}(3185)^{0}\) and \(\Omega_{c}(3327)^{0}\) about 3185 MeV and 3327 MeV. In our model, the calculation results show that the two new very narrow excited states as \(2S_{1/2}\) and \(1D_{3/2}\) for \(\Omega_{c}(3185)^{0}\) and \(\Omega_{c}(3327)^{0}\), respectively. In the present paper, the purpose of this work is to calculate the mass spectra of the singly heavy baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi_{c}^{\prime}/\Xi_{b}^{\prime}\) and \(\Omega_{c}/\Omega_{b}\) excited states from the linear Regge trajectory (spin independent mass) formula and the spin dependent potential (spin dependent mass) with the corresponding spin-parity quantum numbers \(J^{P}\), respectively. Combining a simple scaling relationship to calculate the spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\). This paper is organized as follows. We analyze the Regge trajectory formula to give the spin-average masses of the baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi_{c}^{\prime}/\Xi_{b}^{\prime}\) and \(\Omega_{c}/\Omega_{b}\) excited states in Sec. II. In Sec. III, we review about the spin-dependent Hamiltonian and scaling relationship. We talk about the mass spectrums of the baryons \(\Omega_{c}/\Omega_{b}\) in Sec. IV. In Sec. V, We discuss the mass spectrums of the baryons \(\Sigma_{c}/\Sigma_{b}\). In Sec. VI, similar mass analysis is given for the baryons \(\Xi_{c}^{\prime}/\Xi_{b}^{\prime}\). Finally, we outline our conclusion in Section VII. The Regge trajectory and the spin average masses The singly heavy baryon is a bound state system composed of the heavy quark(\(Q=c,b\)) and diquark(\(d=qq\)), and the diquarks are regarded as double quarks which are in the color antitriplet state, it is interesting to discuss the color interaction between quark and diquark in the heavy-light baryons system. Here, in order to estimate the masses splitting in orbitally excited charm and bottom baryons, we use Regge-like masses relation [32; 33; 34] to comprehensively analyze the whole singly heavy system of baryons, \[(\bar{M}_{L}-M_{Q})^{2}=\alpha\pi L+a_{0}, \tag{1}\] where, \(\bar{M}_{L}\), \(M_{Q}\) are the spin-average mass and the heavy quark mass, respectively. We take the heavy quark masses \(M_{c}=1.44\)GeV and \(M_{b}=4.48\)GeV in [35]. \(L\) is the orbital angular momentum of the baryons system(\(L=0,1,2,\cdots\)). \(\alpha\) is the QCD string tension coefficient between the heavy quark and diquark. 
The intercept factor \(a_{0}\) depends on the diquark mass \(m_{d}\) and the non-relativistic kinematic energy \(P_{Q}^{2}/M_{Q}\) for the heavy quark, \[a_{0}=(m_{d}+\frac{P_{Q}^{2}}{M_{Q}})^{2}, \tag{2}\] with \(m_{d}\) the diquark effective mass, it is note that non-relativistic kinematic 3-momentum \(P_{Q}\) in Eq. (2) has been associated with both \(M_{Q}\) and \(v_{Q}\), \[P_{Q}\equiv M_{Q}v_{Q}\,\hskip 28.452756ptv_{Q}=(1-\frac{m_{\rm bareQ}^{2}}{M_{ Q}^{2}})^{\frac{1}{2}}, \tag{3}\] here, \(m_{\rm bareQ}\) is the heavy quark bare mass, \(v_{Q}\) is the velocity of the heavy quark, and the \(3-\)momentum \(P_{Q}\) is conserved in the heavy quark limit \(M_{Q}\rightarrow\infty\). Substituting Eqs. (2) and (3) into Eq. (1), one can obtain the spin-averaged masses [35; 36] become \[\bar{M}_{L}= M_{Q}+\sqrt{\alpha\pi L+\left(m_{d}+M_{Q}\left(1-\frac{m_{\rm bareQ }^{2}}{M_{Q}^{2}}\right)\right)^{2}}. \tag{4}\] For obtaining the spin-average mass of the baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\), that is meaningful to re-examine the Regge-like mass relation in Eq. (4). We propose through analysis and observation of experimental data in PDG [1] that the slope ratio of the Regge trajectory is \(1.37:1\) between the radial and angular-momentum. Then, \(\pi\alpha L\) in Eq. (4) is replaced by \(\pi\alpha(L+1.37n)\), \[\bar{M}_{L}=M_{Q}+\sqrt{\alpha\pi(L+1.37n)+\left(m_{d}+M_{Q}\left(1-\frac{m_{ \rm bareQ}^{2}}{M_{Q}^{2}}\right)\right)^{2}}, \tag{5}\] where, \(n\) is a radial quantum number(\(n=0,1,2,\cdots\)). We take the component masses \(m_{nn}=0.745\) GeV, \(m_{ns}=0.872\) GeV, \(m_{ss}=0.991\) GeV for the diquark, the bare mass of the heavy quark \(m_{\rm bare}=1.275\) GeV, \(m_{\rm bare}=4.18\) GeV with \(\alpha(\Sigma_{c})=0.212\) GeV, \(\alpha(\Xi^{\prime}_{c})=0.255\) GeV, \(\alpha(\Omega_{c})=0.316\) GeV and \(\alpha(\Sigma_{b})=0.246\) GeV, \(\alpha(\Xi^{\prime}_{b})=0.307\) GeV, \(\alpha(\Omega_{b})=0.318\) GeV for the singly heavy baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\) by giving the parameters in [35]. Therefore, using Eq. (5) to calculate the spin-average masses of the excited states \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\) are listed in Table I. Obtaining the shifted spin-averaged mass squared \((M-\bar{M})^{2}=\alpha\pi(L+1.37n)+(m_{d}+M_{Q}(1-\frac{m_{\rm bare}Q}{M_{Q}} ))^{2}\) by Regge trajectory Eq. (5) of the heavy-light hadron systems relating the orbital angular momentum \(L\) and the radial quantum number \(n\). Hence, one can compute the mass squared \((M-\bar{M})^{2}\) for the charm baryons in Fig. 1, 3, 5 and the bottom baryons in Fig. 2, 4, 6 with \(n=0\), 1, 2, 3 and 4. The (red) solid circles correspond to the observed (mean) masses, and the empty circles indicate the predicted value in Fig. 1-6, that shows the Regge trajectories taking \(L\) and \(n\). However, it can be seen that when \(n\) is certain, the mass squared \((M-\bar{M})^{2}\) increases linearly with \(L\), and as \(n\) increases lead to the mass squared \((M-\bar{M})^{2}\) also increases. ## III The spin-dependent potential and scaling relationship Even though the baryon is made up of one heavy quarks and diquark composed of a three-body system with the strong interaction, it is helpful to understand the measured mass data of the excited baryons using a simple heavy quark-diquark picture. 
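As a numerical cross-check of the spin-averaged masses entering this picture, Eq. (5) can be evaluated directly; the sketch below uses the parameter set quoted above (heavy-quark masses, bare masses, diquark masses and string tensions \(\alpha\)) and reproduces, for example, \(\bar{M}(\Sigma_{c},1S)\simeq 2496\) MeV and \(\bar{M}(\Omega_{b},1P)\simeq 6342\) MeV of Table 1. The code organization and function names are ours, not part of the original analysis.

```python
import numpy as np

# Parameter set quoted in the text (GeV).
M_Q    = {"c": 1.44, "b": 4.48}                         # heavy-quark masses
M_BARE = {"c": 1.275, "b": 4.18}                        # heavy-quark bare masses
M_DIQ  = {"nn": 0.745, "ns": 0.872, "ss": 0.991}        # diquark masses
ALPHA  = {("c", "nn"): 0.212, ("c", "ns"): 0.255, ("c", "ss"): 0.316,   # Sigma_c, Xi'_c, Omega_c
          ("b", "nn"): 0.246, ("b", "ns"): 0.307, ("b", "ss"): 0.318}   # Sigma_b, Xi'_b, Omega_b

def mean_mass(Q, diq, L, n):
    """Spin-averaged mass (MeV) from the Regge-like relation, Eq. (5)."""
    MQ, mb, md, a = M_Q[Q], M_BARE[Q], M_DIQ[diq], ALPHA[(Q, diq)]
    intercept = md + MQ * (1.0 - mb**2 / MQ**2)          # m_d + P_Q^2 / M_Q, cf. Eqs. (2)-(3)
    return 1000.0 * (MQ + np.sqrt(np.pi * a * (L + 1.37 * n) + intercept**2))

if __name__ == "__main__":
    print("Sigma_c 1S:", round(mean_mass("c", "nn", L=0, n=0), 2))   # ~2496.1 MeV (Table 1)
    print("Omega_c 2S:", round(mean_mass("c", "ss", L=0, n=1), 2))   # ~3188.0 MeV (Table 1)
    print("Omega_b 1P:", round(mean_mass("b", "ss", L=1, n=0), 2))   # ~6341.9 MeV (Table 1)
```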
To estimate the splitting masses for the singly heavy baryons, we consider the spin-dependent Hamiltonian \(H^{SD}\)[37; 38] between one heavy quark (\(Q\)) and the spin-1 diquark (\(d\)) is, \[H^{SD}=a_{1}{\bf L}\cdot{\bf S}_{d}+a_{2}{\bf L}\cdot{\bf S}_{Q}+bS_{12}+c{ \bf S}_{d}\cdot{\bf S}_{Q}, \tag{6}\] \[S_{12}=3({\bf S}_{d}\cdot\hat{\bf r})({\bf S}_{Q}\cdot\hat{\bf r})/r^{2}-{\bf S }_{d}\cdot{\bf S}_{Q},\] where, the first two terms are spin-orbit interactions, the third is the tensor energy, and the last is the contact interaction between the heavy quark spin \({\bf S}_{Q}\) and the diquark spin \({\bf S}_{d}\). Here, \(a_{1}\), \(a_{2}\), \(b\), \(c\) are the spin coupling parameters, and \(S_{12}\) in [27] with \(L=1\) and \(L=2\) can be given by \[L=1:S_{12}=-\frac{3}{5}[({\bf L}\cdot{\bf S}_{d})({\bf L}\cdot{\bf S}_{Q})+({ \bf L}\cdot{\bf S}_{Q})({\bf L}\cdot{\bf S}_{d})-\frac{4}{3}({\bf S}_{d}\cdot {\bf S}_{Q})]. \tag{7}\] \[L=2:S_{12}=-\frac{1}{7}[({\bf L}\cdot{\bf S}_{d})({\bf L}\cdot{\bf S}_{Q})+({ \bf L}\cdot{\bf S}_{Q})({\bf L}\cdot{\bf S}_{d})-4({\bf S}_{d}\cdot{\bf S}_{Q} )]. \tag{8}\] \begin{table} \begin{tabular}{c c c c c c} State(MeV) & \(\bar{M}(L=0)\) & \(\bar{M}(L=1)\) & \(\bar{M}(L=2)\) & \(\bar{M}(L=3)\) & \(\bar{M}(L=4)\) \\ \hline \(\Sigma_{c}(n=0)\) & 2496.09 & 2774.67 & 3004.41 & 3204.48 & 3384.07 \\ \(\Sigma_{c}(n=1)\) & 2864.00 & 3081.28 & 3272.98 & 3446.45 & 3606.07 \\ \(\Sigma_{c}(n=2)\) & 3154.71 & 3339.01 & 3506.94 & 3662.22 & 3807.34 \\ \(\Sigma_{c}(n=3)\) & 3402.82 & 3565.72 & 3716.99 & 3858.83 & 3992.79 \\ \(\Sigma_{c}(n=4)\) & 3622.91 & 3770.48 & 3909.24 & 4040.61 & 4165.65 \\ \hline \(\Sigma_{b}(n=0)\) & 5804.91 & 6036.09 & 6237.11 & 6417.38 & 6582.25 \\ \(\Sigma_{b}(n=1)\) & 6113.34 & 6305.88 & 6479.97 & 6640.07 & 6789.09 \\ \(\Sigma_{b}(n=2)\) & 6372.16 & 6540.65 & 6696.37 & 6841.85 & 6978.87 \\ \(\Sigma_{b}(n=3)\) & 6599.60 & 6751.29 & 6893.45 & 7027.70 & 7155.22 \\ \(\Sigma_{b}(n=4)\) & 6804.90 & 6943.98 & 7075.61 & 7200.89 & 7320.64 \\ \hline \(\Xi^{\prime}_{c}(n=0)\) & 2623.09 & 2923.52 & 3172.61 & 3390.14 & 3585.72 \\ \(\Xi^{\prime}_{c}(n=1)\) & 3020.26 & 3256.13 & 3464.71 & 3653.72 & 3827.81 \\ \(\Xi^{\prime}_{c}(n=2)\) & 3335.98 & 3536.63 & 3719.68 & 3889.09 & 4047.52 \\ \(\Xi^{\prime}_{c}(n=3)\) & 3606.16 & 3783.79 & 3948.88 & 4103.75 & 4250.10 \\ \(\Xi^{\prime}_{c}(n=4)\) & 3846.19 & 4007.27 & 4158.82 & 4302.36 & 4439.03 \\ \hline \(\Xi^{\prime}_{b}(n=0)\) & 5931.91 & 6185.62 & 6406.20 & 6604.00 & 6784.88 \\ \(\Xi^{\prime}_{b}(n=1)\) & 6270.41 & 6481.67 & 6672.66 & 6848.31 & 7011.79 \\ \(\Xi^{\prime}_{b}(n=2)\) & 6554.39 & 6739.24 & 6910.08 & 7069.67 & 7219.98 \\ \(\Xi^{\prime}_{b}(n=3)\) & 6803.92 & 6970.32 & 7126.28 & 7273.55 & 7413.43 \\ \(\Xi^{\prime}_{b}(n=4)\) & 7029.14 & 7181.71 & 7326.11 & 7463.53 & 7594.89 \\ \hline \(\Omega_{c}(n=0)\) & 2742.09 & 3079.57 & 3358.58 & 3601.87 & 3820.42 \\ \(\Omega_{c}(n=1)\) & 3188.00 & 3452.03 & 3685.22 & 3896.37 & 4090.75 \\ \(\Omega_{c}(n=2)\) & 3541.32 & 3765.58 & 3970.03 & 4159.15 & 4335.95 \\ \(\Omega_{c}(n=3)\) & 3843.25 & 4041.61 & 4225.88 & 4398.69 & 4561.95 \\ \(\Omega_{c}(n=4)\) & 4111.27 & 4291.04 & 4460.13 & 4620.24 & 4772.66 \\ \hline \(\Omega_{b}(n=0)\) & 6050.91 & 6341.93 & 6593.25 & 6817.70 & 7022.40 \\ \(\Omega_{b}(n=1)\) & 6438.68 & 6678.97 & 6895.47 & 7094.10 & 7278.67 \\ \(\Omega_{b}(n=2)\) & 6761.47 & 6970.81 & 7163.87 & 7343.94 & 7513.35 \\ \(\Omega_{b}(n=3)\) & 7043.93 & 7231.87 & 7407.77 & 7573.60 & 7731.13 \\ \(\Omega_{b}(n=4)\) & 7298.23 & 7470.23 & 7632.85 & 7787.49 & 
7935.22 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean masses of the excited \(\Sigma_{Q}\), \(\Xi^{\prime}_{Q}\) and \(\Omega_{Q}(Q=c,d)\) predicted by Eqs. (5). Figure 1: \(\Sigma_{c}\) baryons spin-average mass Combined with the experimental data in [9] of the \(\Omega_{c}\)(css), we use the Regge trajectory Eq. (4) to fit the effective masses of of the charm quark (\(c\)) and two strange quarks (\(ss\)) in [36] are \(M_{c}=1.44\)GeV and \(m_{ss}=0.991\)GeV, respectively. In the case of doubly strange \(Qss\) baryons with the \(ss\)-diquark comparable with the heavy quark \(Q\) in mass, the finite mass effect of the heavy quark may become important and makes it appropriate to go beyond the \(jj\) coupling. In other words, close to the masses of \(M_{c}\) and \(m_{ss}\). The purpose, therefore, is to treat the last term \(cS_{d}\cdot S_{Q}\) of in Eq. (6) as a perturbation, and the first three terms as operators used to define representations. Considering a new scheme of state classification named the \(Jl\)s mixing coupling in Ref. [36], the bases \(|J,j_{LS}=j^{\prime}\rangle\) diagonalize \(a_{1}({\bf L}\cdot{\bf S}_{d})+a_{2}({\bf L}\cdot{\bf S}_{Q})+bS_{12}\) instead of \(a_{1}({\bf L}\cdot{\bf S}_{d})\) in Ref. [38] solely. The operator is: \[H^{\prime}=a_{1}({\bf L}\cdot{\bf S}_{d})+a_{2}({\bf L}\cdot{\bf S}_{Q})+bS_{1 2}, \tag{9}\] diagonalizing the mass operator \(H^{\prime}\) to compute the masses shifts \(\Delta M\) in of \(P-\)wave in Eq. (B6) and \(D-\)wave in Eq. (C4) for the singly heavy baryons, see appendix B and C. In terms of this scheme five bases of \(P-\)wave that \({}^{2S+1}P_{J}={}^{2}P_{1/2},{}^{4}P_{1/2},{}^{2}P_{3/2},{}^{4}P_{3/2},{}^{4}P_ {5/2}\), and the six bases \({}^{2S+1}D_{J}={}^{4}D_{1/2},{}^{2}D_{3/2},{}^{4}D_{3/2},{}^{2}P_{5/2},{}^{4}P_ {5/2},{}^{4}P_{5/2}\) of \(D-\)wave. Specially, with \(L=0\) in \(S-\)wave in appendix A, then, the first three terms of Eq. (6) can be eliminated, only the last term survives in Eq. (A1). Next, It is necessary to estimate the four spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\) in heavy-light quark system, we employ the following scaling relation to the partner in heavy baryons, for example, see Ref. [38], \[a_{1}(b) = a_{1}(c),\] \[a_{2}(b) = \frac{M_{c}^{b}}{M_{b}^{b}}a_{2}(c), \tag{10}\] \[b(b) = \frac{M_{c}^{b}}{M_{b}^{b}}b(c).\] with the superscript refers to an effective quark mass in a baryon, these quantities which are considered roughly inversely proportional to the heavy quark mass(\(M_{Q}\)) or the diquark mass (\(m_{d}\)). It makes sense to extend this formalism Eq. (10) in terms of the whole the singly heavy baryon framework, the computation of the parameters \(a_{1}\), \(a_{2}\), \(b\) can be obtain in [39] by the principal quantum number \(N\) with the radial quantum number \(n\) and orbital quantum number \(L\), let us take the following preliminary expression, (i) The parameter \(a_{1}\) is proportional to \(\frac{1}{M_{Q}m_{d}}\langle\frac{1}{r}\rangle\). (ii) The parameter \(a_{2}\) is proportional to \(\frac{1}{M_{Q}m_{d}}\langle\frac{1}{r}\rangle\). (iii) The tensor parameter \(b\) is proportional to \(\frac{1}{M_{Q}m_{d}}\langle\frac{1}{r^{3}}\rangle\). Where, \(\langle 1/r\rangle=1/((n+L+1)^{2}a_{B})\), \(\langle 1/r^{3}\rangle=1/(L(L+1/2)(L+1)(n+L+1)^{2}a_{B}^{3})\) and the \(a_{B}\) is Bohr radius. 
In the case of \(a_{2}\), the scaling law might be in the same order as \(a_{1}\) to \(n,L\) in the highly excited states, the parameter \(b\) should be smaller than the \(a_{1}\), \(a_{2}\) as \(b\) scales like \(\langle\frac{1}{r^{3}}\rangle\). In addition, for obtaining the parameter \(c\) in Eq. (6) is essential to really the scale relationship like the parameters \(a_{1}\), \(a_{2}\), \(b\) in (i)-(iii), of course, in the highly excited states it is a small quantity and can be ignored in [38]. But in the S-wave it becomes dominant in determining the mass splitting in Eq. (A4) for the singly heavy baryons, the hyperfine structure term in [39; 40], \[H^{hp}=\frac{8}{9M_{Q}m_{d}}\bigtriangledown^{2}V{\bf S}_{d}\cdot{\bf S}_{Q}= \frac{32\pi\alpha_{s}}{9M_{Q}m_{d}}{\bf S}_{d}\cdot{\bf S}_{Q}\delta^{3}({\bf r }), \tag{11}\] here, \(\nabla^{2}\) is the Laplace operator, the derivative of the Coulomb potential V take \(\bigtriangledown^{2}V=4\pi\alpha_{s}\delta^{3}({\bf r})\) with the strong coupling \(\alpha_{s}\), \(\delta({\bf r})\) is delta function, taking the average \(\langle\delta^{3}({\bf r})\rangle=|\psi(0)|^{2}\) just established with the hydrogen-like atoms wave function \(\psi({\bf r})\) of the S-wave (L=0) in [41], the Eq. (11) become \[\langle H^{hp}\rangle=\frac{32\pi\alpha_{s}}{9M_{Q}m_{d}}\frac{1}{N^{3}a_{B}^ {3}}\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle, \tag{12}\] with \(N=n+L+1\). Although the hyperfine structure term in Eq. (6) is a small quantity in the highly excited state which results from a short-distance interaction, it plays a dominant role in \(S-\)wave for the singly heavy baryons, because \(L=0\), the first three terms of Eq. (6) can be eliminated. However, we would like to extend Eq. (12) to the orbital excited heavy baryons, \[\langle H^{hp}\rangle=\frac{32\pi\alpha_{s}}{9M_{Q}m_{d}}\frac{1}{(L+\lambda) N^{3}a_{B}^{3}}\,\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle, \tag{13}\] where, we take the parameter \(\lambda=3.3\) for systematic analysis of experimental values. Analyzing the coefficient in Eq. (13), which is inversely proportional to the \(M_{Q}\), \(m_{d}\) and the \((L+\lambda)N^{3}\), and then corresponding to the parameter \(c\) can be determined as follows, (iv) The parameter \(c\) is proportional to \(\frac{1}{M_{Q}m_{d}}\frac{1}{(L+\lambda)N^{3}}\). One can obtain the mass scale relation of the spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\) for the baryon system in Eq. (6) \[\left\{\begin{array}{l}a_{1}(B_{a},(n+1)L)\ =\ \frac{M_{Q}^{\prime}m_{d}^{ \prime}}{M_{Q}m_{d}}\frac{N_{a_{1}}^{\prime}}{N_{a_{1}}}a_{1}(B_{a}^{\prime}, (n^{\prime}+1)L^{\prime}),\\ a_{2}(B_{a},(n+1)L)\ =\ \frac{M_{Q}^{\prime}m_{d}^{\prime}}{M_{Q}m_{d}}\frac{N_{a _{2}}^{\prime}}{N_{a_{2}}}a_{2}(B_{a}^{\prime},(n^{\prime}+1)L^{\prime}),\\ b(B_{a},(n+1)L)\ =\ \ \frac{M_{Q}^{\prime}m_{d}^{\prime}}{M_{Q}m_{d}}\frac{N_{b}^{ \prime}}{N_{b}}b(B_{a}^{\prime},(n^{\prime}+1)L^{\prime}),\\ c(B_{a},(n+1)L)\ =\ \frac{M_{Q}^{\prime}m_{d}^{\prime}}{M_{Q}m_{d}}\frac{N_{c}^{ \prime}}{N_{b}}c(B_{a}^{\prime},((n^{\prime}+1)L^{\prime})),\end{array}\right. \tag{14}\] where, \(n,n^{\prime}=0,1,2,\cdots\), \(L,L^{\prime}=S,P,D,F,\cdots\), and \(B_{a},B^{\prime}_{a}\) are baryons, with \(N_{a_{1}}=(n+L+1)^{2}=N_{a_{2}}\), \(N_{b}=L(L+1/2)(L+1)(n+L+1)^{3}\), \(N_{c}=(L+\lambda)(n+L+1)^{3}\) corresponding to the similar form of \(N^{\prime}_{a_{1}}\), \(N^{\prime}_{a_{2}}\), \(N^{\prime}_{b}\), \(N^{\prime}_{c}\), respectively. 
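The scaling rules (i)-(iv), summarized in Eq. (14), can be implemented directly. The sketch below rescales the reference \(\Omega_{c}(1P)\) values of Eq. (16) and reproduces, for example, \(c(\Omega_{c},1S)\simeq 42.11\) MeV, \(a_{1}(\Omega_{c},1D)\simeq 11.98\) MeV and \(a_{1}(\Sigma_{b},1P)\simeq 11.53\) MeV used in the following sections; the function names and code structure are ours.

```python
LAMBDA = 3.3                                        # parameter lambda in Eq. (13)
M_Q    = {"c": 1.44, "b": 4.48}                     # heavy-quark masses (GeV)
M_DIQ  = {"nn": 0.745, "ns": 0.872, "ss": 0.991}    # diquark masses (GeV)

# Reference P-wave Omega_c parameters, Eq. (16), in MeV, at (n', L') = (0, 1).
REF = {"a1": 26.96, "a2": 25.76, "b": 13.51, "c": 4.04}
REF_Q, REF_DIQ, REF_N, REF_L = "c", "ss", 0, 1

def N_factor(kind, n, L):
    """State-dependent factors N_{a1} = N_{a2}, N_b, N_c entering Eq. (14)."""
    Np = n + L + 1
    if kind in ("a1", "a2"):
        return Np**2
    if kind == "b":
        return L * (L + 0.5) * (L + 1) * Np**3      # only meaningful for L >= 1
    return (L + LAMBDA) * Np**3                     # kind == "c"

def scaled_parameter(kind, Q, diq, n, L):
    """Spin-coupling parameter (MeV) of the (n+1)L state of baryon Q(diq), via Eq. (14)."""
    mass_ratio = (M_Q[REF_Q] * M_DIQ[REF_DIQ]) / (M_Q[Q] * M_DIQ[diq])
    return mass_ratio * N_factor(kind, REF_N, REF_L) / N_factor(kind, n, L) * REF[kind]

if __name__ == "__main__":
    print("c(Omega_c, 1S) :", round(scaled_parameter("c", "c", "ss", 0, 0), 2))   # ~42.11, Eq. (18)
    print("c(Omega_c, 2S) :", round(scaled_parameter("c", "c", "ss", 1, 0), 2))   # ~5.26,  Eq. (20)
    print("a1(Omega_c, 1D):", round(scaled_parameter("a1", "c", "ss", 0, 2), 2))  # ~11.98, Eq. (22)
    print("b(Omega_c, 1D) :", round(scaled_parameter("b", "c", "ss", 0, 2), 2))   # ~0.80,  Eq. (24)
    print("a1(Sigma_b, 1P):", round(scaled_parameter("a1", "b", "nn", 0, 1), 2))  # ~11.53, Eq. (34)
    print("c(Sigma_c, 1S) :", round(scaled_parameter("c", "c", "nn", 0, 0), 2))   # ~56.02, Eq. (30)
```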
## IV The baryons \(\Omega_{c}\) and \(\Omega_{b}\) For the \(\Omega_{c}\) baryons family, recently, it was a pleasant surprise that the LHCb Collaboration discovered five new very narrow \(\Omega_{c}\) states observed in \(\Xi_{c}^{+}K\) decay channels: \(\Omega_{c}(3000)^{0}\), \(\Omega_{c}(3050)^{0}\), \(\Omega_{c}(3065)^{0}\), \(\Omega_{c}(3090)^{0}\), \(\Omega_{c}(3120)^{0}\), the spin parity quantum number \(J^{P}\) which is unknown. For example, many different model discussions of \(\Omega_{c}\) excited states [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22], and using different methods to analyze the narrow \(\Omega_{c}\) states corresponding to the five states of the \(P-\)wave, the mass \(M(1/2,0^{\prime})\), \(M(1/2,1^{\prime})\), \(M(3/2,1^{\prime})\), \(M(3/2,2^{\prime})\), \(M(5/2,2^{\prime})\) and the spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\) are given in [27; 36; 42]. \[M(\Omega_{c},1P):3000.41\mathrm{MeV},3050.19\mathrm{MeV},3065.54 \mathrm{MeV},3090.10\mathrm{MeV},3119.10\mathrm{MeV}, \tag{15}\] \[a_{1}=26.96\mathrm{MeV},\quad a_{2}=25.76\mathrm{MeV},\quad b=13.51\mathrm{MeV},\quad c=4.04\mathrm{MeV}, \tag{16}\] utilizing the parameters of Eq. (16) to be taken as the object of the scaling relation in Eq. (14). In earlier times, the observed \(1S\) wave states \(\Omega_{c}^{0}\) and \(\Omega_{c}(2770)^{0}\) with \(J^{P}=1/2^{+}\) and \(J^{P}=3/2^{+}\) correspond to the masses at \(M(\Omega_{c},1/2^{+})\)=2695.2 MeV and \(M(\Omega_{c},3/2^{+})\)=2765.9 MeV, respectively, had already been established. As seen in our model calculations, the spin-averaged mass and the parameter are given in Eq. (5) and Eq. (14) with \(L=0,n=0\), \[\bar{M}(\Omega_{c},1S)=M_{c}+\left(m_{ss}+M_{c}\left(1-\frac{m_{ \mathrm{bare}}^{2}}{M_{c}^{2}}\right)\right)=2742.09\mathrm{MeV}, \tag{17}\] \[c(\Omega_{c},1S)=\frac{N^{\prime}_{c}}{N_{c}}c(\Omega_{c},1P)= \frac{(1+3.3)(0+1+1)^{3}}{(0+3.3)(0+0+1)^{3}}4.04\mathrm{MeV}=42.11\mathrm{ MeV}, \tag{18}\] with the masses \(M_{c}=1.44\) GeV, \(m_{ss}=0.991\) GeV. Substituting these parameters Eq. (17) and Eq. (18) into Eq. (14), closer to our predictions \(M(\Omega_{c},1/2^{+})\)=2699.98 MeV and \(M(\Omega_{c},3/2^{+})\)=2763.15 MeV in Table 4 Corresponding to experimental value. In addition, the observation of two new states \(\Omega_{c}\) baryons in [28], masses at 3185.1 MeV and 3327.1 MeV for \(\Omega_{c}(3185)^{0}\), \(\Omega_{c}(3327)^{0}\). As shown in Table. 4, the \(\Omega_{c}(3185)^{0}\) can be grouped into the \(2S\) family, we analyze the \(2S-\)wave masses of which are 3185.20 MeV and 3193.09 MeV with \(J^{P}=1/2^{+}\) and \(J^{P}=3/2^{+}\), implying that the experimental value 3185.1 MeV as \(\Omega_{c}\) state with \(J^{P}=1/2^{+}\), which are extrapolated from Regge-trajectory of the \(\Omega_{c}\) spectra in Table 1. For example, the spin-averaged mass and the parameter are given in Eq. (5) and Eq. 
(14) in the \(2S-\)wave (\(L=0,n=1\)), \[\bar{M}(\Omega_{c},2S) = M_{c}+\sqrt{\pi\alpha(\Omega_{c})\times 1.37+\left(m_{ss}+M_{c} \left(1-\frac{m_{\rm bare}^{2}}{M_{c}^{2}}\right)\right)^{2}}=3188.00{\rm MeV}, \tag{19}\] \[c(\Omega_{c},2S) = \frac{(L^{\prime}+3.3)(n^{\prime}+L^{\prime}+1)^{3}}{(L+3.3)(n+L +1)^{3}}c(\Omega_{c},1P)=\frac{(1+3.3)(0+1+1)^{3}}{(0+3.3)(1+0+1)^{3}}4.04{\rm MeV}\] (20) \[= 5.26{\rm MeV},\] and the \(1D-\)wave (\(L=2,n=0\)), \[\bar{M}(\Omega_{c},1D) = M_{c}+\sqrt{2\pi\alpha(\Omega_{c})+\left(m_{ss}+M_{c}\left(1- \frac{m_{\rm bare}^{2}}{M_{c}^{2}}\right)\right)^{2}}=3358.58{\rm MeV}, \tag{21}\] \[a_{1}(\Omega_{c},1D) = \frac{(n^{\prime}+L^{\prime}+1)^{2}}{(n+L+1)^{2}}a_{1}(\Omega_{ c},1P)=\frac{(0+1+1)^{2}}{(0+2+1)^{2}}26.96{\rm MeV}=11.98{\rm MeV},\] (22) \[a_{2}(\Omega_{c},1D) = \frac{(n^{\prime}+L^{\prime}+1)^{2}}{(n+L+1)^{2}}a_{2}(\Omega_{ c},1P)=\frac{(0+1+1)^{2}}{(0+2+1)^{2}}25.76{\rm MeV}=11.45{\rm MeV},\] (23) \[b(\Omega_{c},1D) = \frac{L^{\prime}(L^{\prime}+\frac{1}{2})(L^{\prime}+1)(n^{\prime }+L^{\prime}+1)^{3}}{L(L+\frac{1}{2})(L+1)(n+L+1)^{3}}b(\Omega_{c},1P)=\frac{ (1+\frac{1}{2})(1+1)(0+1+1)^{3}}{2(2+\frac{1}{2})(2+1)(0+2+1)^{3}}13.51{\rm MeV}\] (24) \[= 0.80{\rm MeV},\] \[c(\Omega_{c},1D) = \frac{(L^{\prime}+3.3)(n^{\prime}+L^{\prime}+1)^{3}}{(L+3.3)(n+L +1)^{3}}c(\Omega_{c},1P)=\frac{(1+3.3)(0+1+1)^{3}}{(2+3.3)(0+2+1)^{3}}4.04{\rm MeV}\] (25) \[= 0.97{\rm MeV}.\] The above Eq. (22)-(25) with our assignments, we revised our predictions of \(\Omega_{c}(3327)^{0}\). It shows that the state at mass parameter \(M(\Omega_{c})=3327.1\) MeV, assigned by us to \(J^{P}=3/2^{+}\) state in the \(D-\)wave rather than the \(2S\) state, as the level-splitting \(\Delta E\)=3327 MeV-3185 MeV=142 MeV\(\gg\) 5.26 MeV. One might have speculated that the \(2S\) states in [43] or the mixed state of \(\Omega_{c}(3327)^{0}\). But, we still need more observable objects to clarify about their internal structure, in our work, we calculated the spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\) by the scaling relation Eq. (14) listed in Table 2 for the \(\Omega_{c}\) baryons, and show as our mass results to compare other models in Tabel IV for the \(\Omega_{c}\) baryons. For the \(\Omega_{b}\) baryons family, in the quark model \(\Omega_{b}^{-}\) is \(\Omega_{b}\) ground state which apply the bottom systems \(b(ss)\) which consists of a bottom quark \(b\) and a spin-1 diquark \(ss\) in which mass \(M(\Omega_{b})=6045.2\) MeV in [1]. Recently, the LHCb experiments [28] reported four extremely narrow \(\Omega_{b}\) baryon excited states in \(\Xi_{b}^{0}K\) decay. According to our predictions with the observations, we suggested that the four states \(\Omega_{b}(6316)^{-}\), \(\Omega_{b}(6330)^{-}\), \(\Omega_{b}(6340)^{-}\), \(\Omega_{b}(6350)^{-}\) may be assigned as the \(1P-\)wave states around the spin-average mass \(\bar{M}=6341.93\) MeV with \(J^{P}=1/2^{-}\), \(1/2^{-}\), \(3/2^{-}\) and \(3/2^{-}\), respectively. Based on the masses of the \(1P-\)wave states \(\Omega_{b}\), we predict there exists another excited \(\Omega_{b}\) baryon with \(J^{P}=5/2^{?}\) in addition to the four \(\Omega_{b}\) observed by the LHCb, its mass \(M(5/2,2^{\prime})\) is about 6357 MeV. The corresponding spin coupling parameters and the masses as \(M(1/2,0^{\prime})\), \(M(1/2,1^{\prime})\), \(M(3/2,1^{\prime})\), \(M(3/2,2^{\prime})\), \(M(5/2,2^{\prime})\) are \[\Omega_{b}(1P):\bar{M}=6341.93MeV,\ a_{1}=8.67\mathrm{MeV},a_{2}= 8.28\mathrm{MeV},b=4.34\mathrm{MeV},c=1.30\mathrm{MeV}. 
\tag{26}\] \[M(\Omega_{b},1P):6318.95\mathrm{MeV},6334.95\mathrm{MeV},6339.9 0\mathrm{MeV},6347.80\mathrm{MeV},6357.09\mathrm{MeV}. \tag{27}\] To comparison, we can refer to the prediction data of other workers [44]. For the \(\Omega_{b}\) baryons, we calculate the parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\) listed in Table 3 and show as our mass results to compare other models in Tabel V. ## V The baryons \(\Sigma_{c}\) and \(\Sigma_{b}\) By analyzing the existing experimental data in PDG [1], we explore some patterns of the odd-parity \(\Sigma_{Q}(Q=c,b)\) baryons consisting of a light isospin-one nonstrange diquark (\(nn=uu,ud,dd\)) in a state of the orbital angular momentum \(L\) with respect to the spin-1/2 heavy quark \(Q=(c,b)\). Regarding the \(\Sigma_{Q}(Q=c,b)\) baryons, it has also been observed in experiments, and the data can be referenced from PDG [1]. \begin{table} \begin{tabular}{c c c c c} \hline \hline State: & \(a_{1}\) & \(a_{2}\) & b & c \\ \hline 1s & & & & 42.11 \\ 2s & & & & 5.26 \\ 3s & & & & 1.56 \\ 4s & & & & 0.66 \\ 5s & & & & 0.34 \\ \hline 1P & 26.96 & 25.76 & 13.51 & 4.04 \\ 2P & 11.98 & 11.45 & 4.00 & 1.20 \\ 3P & 6.74 & 6.44 & 1.69 & 0.51 \\ 4P & 4.31 & 4.12 & 0.86 & 0.26 \\ 5P & 3.00 & 2.86 & 0.50 & 0.15 \\ \hline 1D & 11.98 & 11.45 & 0.80 & 0.97 \\ 2D & 6.74 & 6.44 & 0.34 & 0.41 \\ 3D & 4.31 & 4.12 & 0.17 & 0.21 \\ 4D & 3.00 & 2.86 & 0.10 & 0.12 \\ 5D & 2.20 & 2.10 & 0.06 & 0.08 \\ \hline \hline \end{tabular} \end{table} Table 2: The spin coupling parameters of the baryon \(\Omega_{c}\). In the \(\Sigma_{c}\) baryon family, in [1] the two masses \(M(\Sigma_{c},1/2^{+})=2452.65\) MeV, \(M(\Sigma_{c},3/2^{+})=2517.4\) MeV for the \(\Sigma_{c}(2455)^{+}\), \(\Sigma_{c}(2520)^{+}\) with \(J^{P}=1/2^{+}\) and \(3/2^{+}\), respectively, which was discovered and identified as the \(S-\)wave baryons by the LHCb experiment. Accordingly, which yields the experimental mean masses in \(1S\), \[\bar{M}=\Sigma(2J+1)M/\Sigma(2J+1)=(2\times 2453.75+4\times 2517.5)/6=2496.25MeV, \tag{28}\] while the level-splitting between the \(\Sigma_{c}(2455)^{+}\) and \(\Sigma_{c}(2520)^{+}\), \(\Delta E\)=2517.4 MeV\(-\)2452.65 MeV=64.75 MeV. For a better comparison of the experimental mean mass, in this work, we list the parameters in Tabel VI and the masses in Tabel VIII. We employ Eq. (5) to calculate the spin-averaged mass \(\bar{M}_{1S}\) of \(1S-\)wave with \(L=0\), \(n=0\) which become \[\bar{M}(\Sigma_{c},1S)=M_{c}+\left(m_{nn}+M_{c}\left(1-\frac{m_{\rm bare}^{2} }{M_{c}^{2}}\right)\right)=2496.09{\rm MeV}, \tag{29}\] with \(m_{\rm bare}=1.275\) GeV, \(m_{nn}=0.745\) GeV, as well as the following rough estimate of the parameter \(c\) by Eq. (14) \[c(\Sigma_{c},1S)=\frac{M_{c}m_{ss}}{M_{c}m_{nn}}\frac{N_{c}^{\prime}}{N_{c}}c (\Omega_{c},1P)=\frac{m_{ss}}{m_{nn}}\frac{(1+3.3)(0+1+1)^{3}}{(0+3.3)(0+0+1)^ {3}}4.04{\rm MeV}=56.02MeV, \tag{30}\] with \(c=4.04\) MeV givn in Eq. (16) and here the heavy quark \(M_{Q}(Q=c)\) cancel out. 
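For the \(S\)-wave states only the contact term \(c\,\mathbf{S}_{d}\cdot\mathbf{S}_{Q}\) of Eq. (6) survives, so the splitting around the Regge mean mass follows from \(\langle\mathbf{S}_{d}\cdot\mathbf{S}_{Q}\rangle=-1\) for \(J=1/2\) and \(+1/2\) for \(J=3/2\) (spin-1 diquark coupled to a spin-1/2 heavy quark). The sketch below is our own illustration of this first-order treatment; it reproduces the quoted \(\Omega_{c}(1S)\) masses 2699.98 MeV and 2763.15 MeV and is then applied to the \(\Sigma_{c}(1S)\) values \(\bar{M}=2496.09\) MeV and \(c=56.02\) MeV obtained above (table entries for excited levels may include further small corrections).

```python
def swave_masses(mean_mass, c):
    """First-order S-wave splitting Delta M = c * <S_d . S_Q>, with
    <S_d . S_Q> = [J(J+1) - S_d(S_d+1) - S_Q(S_Q+1)] / 2 = -1 (J=1/2) or +1/2 (J=3/2)."""
    return {"J=1/2": mean_mass - c, "J=3/2": mean_mass + 0.5 * c}

if __name__ == "__main__":
    # Check against the Omega_c ground state: mean 2742.09 MeV and c = 42.11 MeV
    # give ~2699.98 and ~2763.15 MeV, the values quoted in Sec. IV.
    print("Omega_c(1S):", swave_masses(2742.09, 42.11))
    # Sigma_c(1S), using the mean mass (29) and the parameter c of Eq. (30).
    print("Sigma_c(1S):", swave_masses(2496.09, 56.02))
```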
\begin{table} \begin{tabular}{c c c c c} \hline \hline State: & \(a_{1}\) & \(a_{2}\) & b & c \\ \hline 1s & & & & 13.54 \\ 2s & & & & 1.69 \\ 3s & & & & 0.50 \\ 4s & & & & 0.21 \\ 5s & & & & 0.11 \\ \hline 1P & 8.67 & 8.28 & 4.34 & 1.30 \\ 2P & 3.85 & 3.68 & 1.29 & 0.38 \\ 3P & 2.17 & 2.07 & 0.54 & 0.16 \\ 4P & 1.39 & 1.32 & 0.28 & 0.08 \\ 5P & 0.96 & 0.92 & 0.16 & 0.05 \\ \hline 1D & 3.85 & 3.68 & 0.26 & 0.31 \\ 2D & 2.17 & 2.07 & 0.11 & 0.13 \\ 3D & 1.39 & 1.32 & 0.06 & 0.07 \\ 4D & 0.96 & 0.920 & 0.03 & 0.04 \\ 5D & 0.71 & 0.68 & 0.02 & 0.03 \\ \hline \hline \end{tabular} \end{table} Table 3: The spin coupling parameters of the baryon \(\Omega_{b}\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline State \(J^{p}\) & Baryon & Mass & Ours & EFG [45] & Ref.[46] & Ref.[47] \\ \hline \({}^{1}\)S\({}_{1/2}\) 1/2\({}^{+}\) & \(\Omega_{c}^{0}\) & 2695.2 & 2699.98 & 2698 & 2695 & 2702 \\ \({}^{1}\)S\({}_{1/2}\) 3/2\({}^{+}\) & \(\Omega_{c}(2770)^{0}\) & 2765.9 & 2763.15 & 2768 & 2767 & 2772 \\ \({}^{2}\)S\({}_{1/2}\) 1/2\({}^{+}\) & \(\Omega_{c}(3185)^{0}\) & 3185.1 & 3185.20 & 3088 & 3100 & 3164 \\ \({}^{2}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3193.09 & 3123 & 3126 & 3197 \\ \({}^{3}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 3543.86 & 3489 & 3436 & 3566 \\ \({}^{3}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3548.95 & 3510 & 3450 & 3571 \\ \({}^{4}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 3847.96 & 3814 & 3737 & 3928 \\ \({}^{4}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3848.95 & 3830 & 3745 & 3910 \\ \({}^{5}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 4117.37 & 4102 & 4015 & 4259 \\ \({}^{5}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 4117.88 & 4114 & 4021 & 4222 \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & \(\Omega_{c}(3000)^{0}\) & 3000.41 & 3001.93 & 2966 & 3011 & \\ \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & \(\Omega_{c}(3005)^{0}\) & 3050.19 & 3051.74 & 3035 & 2976 & \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{-}\) & \(\Omega_{c}(3050)^{0}\) & 3065.54 & 3071.74 & 3029 & 3028 & 3049 \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{-}\) & \(\Omega_{c}(3009)^{0}\) & 3090.10 & 3091.72 & 3054 & 2993 & \\ \({}^{1}\)P\({}_{1/2}\) 5/2\({}^{-}\) & \(\Omega_{c}(3120)^{0}\) & 3119.10 & 3120.64 & 3051 & 2947 & 3055 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3422.48 & 3384 & 3345 & \\ \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3442.70 & 3435 & 3315 & \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & & 3447.59 & 3415 & 3359 & 3408 \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & & 3460.73 & 3433 & 3330 & \\ \({}^{2}\)P\({}_{1/2}\) 5/2\({}^{-}\) & & & 3473.23 & 3427 & 3290 & 3393 \\ \hline \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3752.49 & 3717 & 3641 & \\ \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3763.38 & 3754 & 3620 & \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3765.54 & 3737 & 3656 & 3732 \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3773.58 & 3752 & 3632 & \\ \({}^{3}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 3780.50 & 3744 & 3601 & 3700 \\ \hline \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 4036.37 & 4099 & 3926 & \\ \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 4043.18 & 4037 & 3903 & \\ \({}^{4}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 4044.32 & 4023 & 3938 & 4031 \\ \({}^{4}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & & 4049.72 & 4036 & 3915 & \\ \({}^{4}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 4051.10 & 4028 & 3884 & 3983 \\ \hline \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 4290.35 & & & \\ \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 4295.00 & & & \\ \({}^{5}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & & 4295.68 & & & \\ \({}^{5}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & & 4299.55 & & & 
\\ \({}^{5}\)P\({}_{1/2}\) 5/2\({}^{-}\) & & & 4302.57 & & & 4248 \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & & 3308.41 & 3287 & 3215 & \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{+}\) & \(\Omega_{c}(3327)^{0}\) & 3327.1 & 3326.92 & 3282 & 3231 & \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & & 3342.64 & 3298 & 3262 & \\ \({}^{1}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & & 3356.96 & 3286 & 3188 & 3360 \\ \({}^{1}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & & 3373.08 & 3297 & 3173 & \\ \({}^{1}\)P\({}_{1/2}\) 7/2\({}^{+}\) & & & 3377.52 & 3283 & 3136 & 3314 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & & 3659.91 & 3623 & 3524 & \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & & 3670.21 & 3613 & 3538 & \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & & 3679.26 & 3627 & 365 & \\ \({}^{2}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & & 3687.03 & 3614 & 3502 & 3680 \\ \({}^{2}\)P\({}_{1/2}\ \begin{table} \begin{tabular}{c c c c c c c} \hline \hline State \(J^{p}\) & Baryon & Mass & Ours & EFG [45] & Ref.[48] & Ref.[49] \\ \hline \({}^{1}\)S\({}_{1/2}\) 1/2\({}^{+}\) & \(\Omega_{b}\)\({}^{-}\) & 6045.2 & 6040.25 & 6064 & 6046 & 6054 \\ \({}^{1}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 6000.55 & 6088 & 6082 & 6074 \\ \({}^{2}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 6439.49 & 6450 & 6438 & 6455 \\ \({}^{2}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 6442.03 & 6461 & 6462 & 6481 \\ \({}^{3}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 6763.25 & 6804 & 6740 & 6832 \\ \({}^{3}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 6764.01 & 6811 & 6753 & 6864 \\ \({}^{4}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 7045.87 & 7091 & 7022 & 7190 \\ \({}^{4}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 7046.19 & 7096 & 7030 & 7226 \\ \({}^{5}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 7300.17 & 7338 & 7290 & 7531 \\ \({}^{5}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 7300.33 & 7343 & 7296 & 7572 \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & \(\Omega_{b}(6330)^{-}\) & 6315.6 & 6318.95 & 6330 & 4344 & \\ \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & \(\Omega_{b}(6330)^{-}\) & 6333.3 & 6334.95 & 6339 & 4345 & \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{-}\) & \(\Omega_{b}(6330)^{-}\) & 6339.7 & 6339.90 & 6331 & 4311 & 6348 \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{-}\) & \(\Omega_{b}(6350)^{-}\) & 6349.8 & 6347.80 & 6340 & 4343 & \\ \({}^{1}\)P\({}_{1/2}\) 5/2\({}^{-}\) & & 6357.09 & 6334 & 4339 & 6362 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 6670.62 & 6706 & 6596 & \\ \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 67671.12 & 6710 & 6597 & \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & 6678.69 & 6699 & 6594 & 6662 \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & 6682.91 & 6705 & 6595 & \\ \({}^{2}\)P\({}_{1/2}\) 5/2\({}^{-}\) & & 6686.93 & 6700 & 6592 & 6653 \\ \hline \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 6967.16 & 7003 & 6829 & \\ \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 6970.66 & 7009 & 6830 & \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 6971.35 & 6998 & 6827 & 6962 \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 6973.94 & 7002 & 6828 & \\ \({}^{3}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 6976.16 & 6996 & 6826 & 6689 \\ \hline \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 7230.27 & 7257 & 7044 & \\ \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 7232.46 & 7265 & 7043 & \\ \({}^{4}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 7232.83 & 7250 & 7043 & 7249 \\ \({}^{4}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & 7234.56 & 7258 & 7043 & \\ \({}^{4}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 7235.97 & 7251 & 7042 & 7200 \\ \hline \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 7469.70 & & & \\ \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 7471.19 & & & \\ \({}^{5}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & 7471.41 & & & \\ 
\({}^{5}\)P\({}_{1/2}\) 3/2\({}^{-}\) & & 7472.65 & & & \\ \({}^{5}\)P\({}_{1/2}\) 5/2\({}^{-}\) & & 7473.62 & & & 7488 \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 6578.47 & 6540 & 6485 & \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & 6584.41 & 6530 & 6480 & \\ \({}^{1}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & 6589.46 & 6549 & 6482 & \\ \({}^{1}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & 6594.07 & 6520 & 6476 & 6629 \\ \({}^{1}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & 6599.25 & 6529 & 6478 & \\ \({}^{1}\)P\({}_{1/2}\) 7/2\({}^{+}\) & & 6607.10 & 6517 & 6472 & 6638 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 6880.04 & 6857 & 6730 & \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & 6891.35 & 6846 & 6726 & \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & 6894.26 & 6863 & 6727 & \\ \({}^{2}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & 6896.75 & 6837 & 6723 & 6659 \\ \({}^{2}\)P\({}_{1/2}\) 5/2\({}^{+}\) & & 6899.76 & 6846 & 6724 & \\ \({}^{2}\)P\({}_{1/2}\) 7/2\({}^{+}\) & & 6904.12 & 6834 & 6720 & 6643 \\ \hline \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 7159.80 & & 6956 & \\ \({} The \(\Sigma_{c}(2800)\), which might be a good candidate of \(1P-\)wave excitations by the Belle observed in [2]. The discussion model of \(\Sigma_{Q}(Q=c,b)\) can be referred to [38]. In the \(Jls\) coupling frame, we also compute the \(P-\)wave masses of the \(\Sigma_{c}(2800)\). \[\Sigma_{c}(1P):\bar{M}=2774.67{\rm MeV},\ a_{1}=35.86{\rm MeV},a_ {2}=34.27{\rm MeV},b=17.97{\rm MeV},c=5.37{\rm MeV}. \tag{31}\] \[M(\Sigma_{c},1P):2668.86{\rm MeV},2735.11{\rm MeV},2755.59{\rm MeV },2788.31{\rm MeV},2826.76{\rm MeV}. \tag{32}\] The Belle observed the excited state \(\Sigma_{c}(2800)\) in the \(\Lambda_{c}^{+}\pi\) decay which mass at \(M(\Sigma_{c})\)= 2792 MeV, the \(J^{P}\) has not been determined, making it difficult to determine its properties. The mass of \(\Sigma_{c}(2800)\) calculated by our model is 2788.31 MeV with \(J^{P}=3/2^{-}\), the result of our calculation is in agreement with the experiment as show in Table 8. Hence, we should advocate the four state \(|^{4}P_{3/2},3/2^{-}\rangle\) in \(P-\)wave for \(\Sigma_{c}(2800)\). The discussion of its nature see [50; 51]. In the \(\Sigma_{b}\) baryon family, there are four \(\Sigma_{b}\) states masses at \(M(\Sigma_{b}^{+},1/2^{+})=5810.64\) MeV and \(M(\Sigma_{b}^{*+},3/2^{+})=5830.32\) MeV in PDG [1] for the \(\Sigma_{b}^{+}\) and \(\Sigma_{b}^{*+}\), respectively. \(M(\Sigma_{b}^{-},1/2^{+})=5815.64\) MeV and \(M(\Sigma_{b}^{*-},3/2^{+})=5834.74{\rm MeV}\) for the \(\Sigma_{b}^{-}\), \(\Sigma_{b}^{*-}\), respectively. It should be pointed out that the neutral \(1S\) states \(\Sigma_{b}^{0}\), \(\Sigma_{b}^{*0}\) are still missing. In addition, the \(\Sigma_{b}(6097)\), has been measured using fully reconstructed \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\pi^{-}\) and \(\Lambda_{c}^{+}\to\rho\kappa_{c}^{+}\pi^{+}\) decays in [3]. In our calculations, the \(\Sigma_{b}(6097)\) can be a good candidate of \(1P-\)wave excitations, the spin-averaged mass and the parameters are given in Eq. (5) and Eq. 
(14) in the \(1P\)-wave (\(L=1,n=0\)), \[\bar{M}(\Sigma_{b},1P) = M_{b}+\sqrt{2\pi\alpha(\Sigma_{b})+\left(m_{nn}+M_{b}\left(1- \frac{m_{\rm bare}^{2}}{M_{b}^{2}}\right)\right)^{2}} \tag{33}\] \[= 6036.09{\rm MeV},\] \[a_{1}(\Sigma_{b},1P) = \frac{M_{c}m_{ss}}{M_{b}m_{nn}}\frac{(n^{\prime}+L^{\prime}+1)^{2 }}{(n+L+1)^{2}}a_{1}(\Omega_{c},1P)\] (34) \[= \frac{1.44\times 0.991}{4.48\times 0.745}\frac{(0+1+1)^{2}}{(0+ 1+1)^{2}}26.96{\rm MeV}\] \[= 11.53{\rm MeV},\] \[a_{2}(\Sigma_{b},1P) = \frac{M_{c}m_{ss}}{M_{b}m_{nn}}\frac{(n^{\prime}+L^{\prime}+1)^{2 }}{(n+L+1)^{2}}a_{2}(\Omega_{c},1P)\] (35) \[= \frac{1.44\times 0.991}{4.48\times 0.745}\frac{(0+1+1)^{2}}{(0+ 1+1)^{2}}25.76{\rm MeV}\] \[= 11.01{\rm MeV},\] \[b(\Sigma_{b},1P) = \frac{M_{c}m_{ss}}{M_{b}m_{nn}}\frac{L^{\prime}(L^{\prime}+\frac{ 1}{2})(L^{\prime}+1)(n^{\prime}+L^{\prime}+1)^{3}}{L(L+\frac{1}{2})(L+1)(n+L+1) ^{3}}b(\Omega_{c},1P)\] (36) \[= \frac{1.44\times 0.991}{4.48\times 0.745}\frac{(1+\frac{1}{2})(1+1 )(0+1+1)^{3}}{(1+\frac{1}{2})(1+1)(0+1+1)^{3}}13.51{\rm MeV}\] \[= 5.78{\rm MeV},\] \[c(\Sigma_{b},1P) = \frac{M_{c}m_{ss}}{M_{b}m_{nn}}\frac{(L^{\prime}+3.3)(n^{\prime}+L^{ \prime}+1)^{3}}{(L+3.3)(n+L+1)^{3}}c(\Omega_{c},1P) \tag{37}\] \[= \frac{1.44\times 0.991}{4.48\times 0.745}\frac{(1+3.3)(0+1+1)^{3}} {(1+3.3)(0+1+1)^{3}}4.04\mathrm{MeV}\] \[= 1.73\mathrm{MeV},\] with \(J^{P}=5/2^{-}\). Evidently, the parameters \(a_{1}\), \(a_{2}\), \(b\) are within a reasonable condition in (i)-(iii), the \(c\) in (iv) becomes a non-vanishing but small value of the highly excited states see Tabel VII and the masses see Tabel IX for the \(\Sigma_{b}\) baryons. ## VI The baryons \(\Xi_{c}^{\prime}\) and \(\Xi_{b}^{\prime}\) Similar method can apply to the excited \(\Xi_{Q}^{\prime}(csn\) or \(bsn)\) baryons system by the bases of the \(Jls\) coupling in this section, to analyze their masses and the parameters. For the \(\Xi_{c}^{\prime}\) baryons system, the ground \(1S-\)wave states (\(L=0\)) with the spin-parity \(J^{P}=1/2^{+}\) and \(J^{P}=3/2^{+}\), which correspond to \(\Xi_{c}^{\prime 0}\) and \(\Xi_{c}(2645)^{0}\) in which masses at \(M(\Xi_{c}^{\prime 0},1/2^{+})=2578.7\) MeV and \(M(\Xi_{c}^{0},3/2^{+})=\)2646.16 MeV listed in PDG [1], have been established. In this work, the calculation results are shown the spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\) listed in Table 10 to the \(\Xi_{c}^{\prime}\), and show as our mass results to compare other models in Table 11. \begin{table} \begin{tabular}{c c c c c} \hline \hline State: & \(a_{1}\) & \(a_{2}\) & b & c \\ \hline 1S & & & & 56.02 \\ 2S & & & & 7.00 \\ 3S & & & & 2.07 \\ 4S & & & & 0.88 \\ 5S & & & & 0.45 \\ \hline 1P & 35.86 & 34.27 & 17.97 & 5.37 \\ 2P & 15.94 & 15.23 & 5.32 & 1.59 \\ 3P & 8.97 & 8.57 & 2.25 & 0.67 \\ 4P & 5.74 & 5.48 & 1.15 & 0.34 \\ 5P & 3.98 & 3.81 & 0.67 & 0.20 \\ \hline 1D & 15.94 & 15.23 & 1.06 & 1.29 \\ 2D & 8.97 & 8.57 & 0.45 & 0.55 \\ 3D & 5.74 & 5.48 & 0.23 & 0.28 \\ 4D & 3.98 & 3.81 & 0.13 & 0.16 \\ 5D & 2.93 & 2.80 & 0.08 & 0.10 \\ \hline \hline \end{tabular} \end{table} Table 6: The spin coupling parameters of the baryon \(\Sigma_{c}\). A classification of the \(P-\)wave states (\(L=1\)) is similar to the other charm baryons can be used the mass scale relation Eq. (14) to calculate the spin coupling parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\). 
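The same scale relation generates all of the excited-state parameters quoted in the tables. As a concrete numerical check (our illustration, not part of the original calculation), the short Python sketch below applies Eq. (14) to the \(\Omega_{c}(1P)\) values quoted in Eqs. (34)-(37) and recovers the \(\Sigma_{b}(1P)\) parameters \(a_{1}=11.53\) MeV, \(a_{2}=11.01\) MeV, \(b=5.78\) MeV and \(c=1.73\) MeV; the variable names are ours, and the primed quantum numbers are taken to refer to the reference \(\Omega_{c}\) state.

```python
# Sketch: scale the Omega_c(1P) spin-coupling parameters to Sigma_b(1P) with
# the mass scale relation Eq. (14); the input numbers are those quoted in
# Eqs. (34)-(37), and the variable names are ours.
M_c, M_b = 1.44, 4.48         # heavy-quark masses used in the text (GeV)
m_ss, m_nn = 0.991, 0.745     # diquark masses used in the text (GeV)
n_ref, L_ref = 0, 1           # reference state: Omega_c(1P)  (primed in Eq. 14)
n, L = 0, 1                   # target state:    Sigma_b(1P)

# Omega_c(1P) parameters (MeV) quoted in Eqs. (34)-(37)
a1_ref, a2_ref, b_ref, c_ref = 26.96, 25.76, 13.51, 4.04

scale = (M_c * m_ss) / (M_b * m_nn)                   # quark/diquark mass factor
radial = (n_ref + L_ref + 1) ** 2 / (n + L + 1) ** 2  # unity here: both are 1P

a1 = scale * radial * a1_ref
a2 = scale * radial * a2_ref
b = scale * (L_ref * (L_ref + 0.5) * (L_ref + 1) * (n_ref + L_ref + 1) ** 3) \
          / (L * (L + 0.5) * (L + 1) * (n + L + 1) ** 3) * b_ref
c = scale * ((L_ref + 3.3) * (n_ref + L_ref + 1) ** 3) \
          / ((L + 3.3) * (n + L + 1) ** 3) * c_ref

print(f"a1 = {a1:.2f}, a2 = {a2:.2f}, b = {b:.2f}, c = {c:.2f} MeV")
# -> a1 = 11.53, a2 = 11.01, b = 5.78, c = 1.73 MeV, as in Eqs. (34)-(37)
```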
Analyzing the masses calculated in Table 11, we find that the \(\Xi_{c}(2923)^{0}\) and the \(\Xi_{c}(2930)^{0}\), both with spin-parity \(J^{P}=3/2^{-}\), might be good candidates for \(P\)-wave states of the \(\Xi_{c}^{\prime}\) baryons. The predicted masses, \(M(\Xi_{c}(2923)^{0})=2907.21\) MeV and \(M(\Xi_{c}(2930)^{0})=2935.17\) MeV, agree with the experimental values within a reasonable range; one of our masses is about 10 MeV lower than the measured \(\Xi_{c}(2923)^{0}\). Further analysis of the \(\Xi_{c}^{\prime}\) states can be found in [52; 54]. In addition, the \(\Xi_{c}(3123)\) was confirmed by the BaBar Collaboration, with resonance parameter \(M({\Xi_{c}}^{+})=3122.9\) MeV listed in PDG [1]. From our data in Table 11, the mass splitting in the \(P\)-wave is relatively small, about 20 MeV. Since the quantum numbers of the \(\Xi_{c}(3123)\) have not yet been determined, within our framework the \(\Xi_{c}(3123)\) with \(J^{P}=3/2^{+}\) can be regarded as a good candidate for a \(1D\) state of the \(\Xi_{c}^{\prime}\) baryons. For the \(\Xi_{b}^{\prime}\) baryon system, in 2015 the LHCb Collaboration observed two new charged states, \(\Xi_{b}^{\prime}(5935)^{-}\) and \(\Xi_{b}^{*}(5955)^{-}\), in the \(\Xi_{b}^{\prime 0}\pi^{-}\) decay channel [8], with masses \(M(\Xi_{b}^{\prime-},1/2^{+})=5935.02\) MeV and \(M(\Xi_{b}^{*-},3/2^{+})=5955.33\) MeV; these were proposed to be the ground states \(\Xi_{b}^{\prime-}\) and \(\Xi_{b}^{*-}\) with \(J^{P}=1/2^{+}\) and \(J^{P}=3/2^{+}\), respectively. Our ground
\begin{table} \begin{tabular}{c c c c c} \hline \hline State: & \(a_{1}\) & \(a_{2}\) & b & c \\ \hline 1S & & & & 18.00 \\ 2S & & & & 2.25 \\ 3S & & & & 0.67 \\ 4S & & & & 0.28 \\ 5S & & & & 0.14 \\ \hline 1P & 11.53 & 11.01 & 5.78 & 1.73 \\ 2P & 5.12 & 4.90 & 1.71 & 0.51 \\ 3P & 2.88 & 2.75 & 0.72 & 0.22 \\ 4P & 1.84 & 1.76 & 0.37 & 0.11 \\ 5P & 1.28 & 1.22 & 0.21 & 0.06 \\ \hline 1D & 5.12 & 4.90 & 0.34 & 0.42 \\ 2D & 2.88 & 2.75 & 0.14 & 0.18 \\ 3D & 1.84 & 1.76 & 0.07 & 0.09 \\ 4D & 1.28 & 1.22 & 0.04 & 0.05 \\ 5D & 0.94 & 0.90 & 0.03 & 0.03 \\ \hline \hline \end{tabular} \end{table} Table 7: The spin coupling parameters of the baryon \(\Sigma_{b}\).
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline State \(J^{\bar{p}}\) & Baryon & PDG & Ours & EFG [45] & Ref.[52] & Ref.[46] \\ \hline \({}^{1}\)S\({}_{1/2}\) 1/2\({}^{+}\) & \(\Sigma_{c}\)(2453)\({}^{+}\) & 2452.65 & 2440.07 & 2443 & 2456 & 2452 \\ \({}^{1}\)S\({}_{1/2}\) 3/2\({}^{+}\) & \(\Sigma_{c}\)(2520)\({}^{+}\) & 2517.4 & 2524.10 & 2519 & 2515 & 2518 \\ \({}^{2}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 2857.00 & 2901 & 2850 & 2891 \\ \({}^{2}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 2867.50 & 2936 & 2876 & 2917 \\ \({}^{3}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 3152.63 & 3271 & 3091 & 3261 \\ \({}^{3}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 3155.75 & 3293 & 3109 & 3274 \\ \({}^{4}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 3401.95 & 3581 & & 3593 \\ \({}^{4}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 3403.26 & 3598 & & 3601 \\ \({}^{5}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & 3622.47 & 3861 & & 3900 \\ \({}^{5}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & 3623.14 & 3873 & & 3906 \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 2698.86 & 2713 & 2702 & 2809 \\ \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 2735.11 & 2799 & 2765 & 2755 \\ \({}^{1}\)P\({}_{2/2}\) 3/2\({}^{-}\) & & 2735.59 & 2773 & 2785 & 2835 \\ \({}^{1}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 2792 & 2788.31 & 2798 & 2798 & 2782 \\ \({}^{1}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 2826.76 & 2789 & 2790 & 2710 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3037.05 & 3125 & 2971 & 3174 \\ \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3063.95 & 3172 & 3018 & 3128 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3070.46 & 3151 & 3036 & 3196 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3087.94 & 3172 & 3044 & 3151 \\ \({}^{2}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 3104.56 & 3161 & 3040 & 3090 \\ \hline \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3314.88 & 3455 & & 3305 \\ \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3329.38 & 3488 & & 3465 \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3332.25 & 3469 & & 3525 \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3342.95 & 3486 & & 3485 \\ \({}^{3}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 3352.15 & 3475 & & 3633 \\ \hline \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3550.56 & 3743 & & 3814 \\ \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3559.61 & 3770 & & 3777 \\ \({}^{4}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3561.13 & 3753 & & 3832 \\ \({}^{4}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3568.32 & 3768 & & 2796 \\ \({}^{4}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 3574.14 & 3757 & & 3747 \\ \hline \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3760.07 & & & \\ \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & 3766.26 & & & \\ \({}^{5}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3767.17 & & & \\ \({}^{5}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & 3772.32 & & & \\ \({}^{5}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & 3776.33 & & & \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 2933.33 & 3041 & 2949 & 3036 \\ \({}^{1}\)P\({}_{3/2}\) 3/2\({}^{+}\) & & 2957.94 & 3040 & 2952 & 3112 \\ \({}^{1}\)P\({}_{3/2}\) 3/2\({}^{+}\) & & 2978.85 & 3043 & 2964 & 3061 \\ \({}^{1}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & 2967.90 & 3023 & 2942 & 2993 \\ \({}^{1}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & 3019.35 & 3038 & 2963 & 2968 \\ \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 3051.86 & 3013 & 2943 & 2909 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 3233.06 & 3370 & & 3376 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{+}\) & & 3246.75 & 3364 & & 3398 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{+}\) & & 3258.79 & 3366 & & 3442 \\ \({}^{2}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & 3269.13 & 3349 & & 3316 \\ \({}^{2}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & 3281.56 & 3365 & & 3339 \\ \({}^{2}\)P\({}_{3/2}\) 7/2\({}^{+}\) 
& & 3299.62 & 3342 & & 3265 \\ \hline \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & 3181.42 & & & \\ \({}^{3}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & 3400.12 & & & \\ \({}^{3}\)P\({}_{1/2}\ \begin{table} \begin{tabular}{c c c c c c c} State \(J^{\pi}\) & Baryon & Mass & Ours & EFG [45] & Ref.[53] & Ref.[49] \\ \hline \({}^{1}\)S\({}_{1/2}\) & 1/2\({}^{+}\) & 5810.56 & 5801.27 & 5808 & 5811 & 5811 \\ \({}^{1}\)S\({}_{3/2}\) & 3/2\({}^{+}\) & 5830.32 & 5828.25 & 5834 & 5832 & 5830 \\ \({}^{2}\)S\({}_{1/2}\) & 1/2\({}^{+}\) & & 6167.69 & 6213 & 6282 & 6273 \\ \({}^{2}\)S\({}_{3/2}\) & 3/2\({}^{+}\) & & 6171.06 & 6226 & 6278 & 6291 \\ \({}^{3}\)S\({}_{1/2}\) & 1/2\({}^{+}\) & & 6458.62 & 6575 & 6605 & 6707 \\ \({}^{3}\)S\({}_{3/2}\) & 3/2\({}^{+}\) & & 6459.62 & 6583 & 6614 & 6720 \\ \({}^{4}\)S\({}_{1/2}\) & 1/2\({}^{+}\) & & 6711.06 & 6869 & 6927 & 7113 \\ \({}^{4}\)S\({}_{3/2}\) & 3/2\({}^{+}\) & & 6711.14 & 6876 & 6933 & 7124 \\ \({}^{5}\)S\({}_{1/2}\) & 1/2\({}^{+}\) & & 6937.48 & 7124 & 7231 & 7497 \\ \({}^{5}\)S\({}_{3/2}\) & 3/2\({}^{+}\) & & 6937.70 & 7129 & 7235 & 7506 \\ \hline \({}^{1}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6048.86 & 6905 & 6104 & \\ \({}^{1}\)P\({}_{2/2}\) & 1/2\({}^{-}\) & & 6070.13 & 6101 & 6106 & \\ \({}^{1}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6076.71 & 6087 & 6100 & 6105 \\ \({}^{1}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6087.21 & 6096 & 6102 & \\ \({}^{1}\)P\({}_{3/2}\) & 5/2\({}^{-}\) & \(\Sigma_{6}(6097)^{-}\) & 6098.0 & 6099.56 & 6084 & 6097 & 6118 \\ \hline \({}^{2}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6371.29 & 6430 & 6355 & & \\ \({}^{2}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6379.93 & 6410 & 6356 & & \\ \({}^{2}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6382.02 & 6124 & 6353 & 6306 \\ \({}^{2}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6387.63 & 6430 & 6354 & & \\ \({}^{2}\)P\({}_{3/2}\) & 5/2\({}^{-}\) & & 6392.96 & 6421 & 6351 & 6489 \\ \hline \({}^{3}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6638.42 & 6742 & 6578 & & \\ \({}^{3}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6631.07 & 6756 & 6579 & & \\ \({}^{3}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6644.00 & 6736 & 6577 & 6884 \\ \({}^{3}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6647.43 & 6742 & 6577 & & \\ \({}^{3}\)P\({}_{3/2}\) & 5/2\({}^{-}\) & & 6503.38 & 6732 & 6575 & 6840 \\ \hline \({}^{4}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6873.75 & 7008 & 6778 & & \\ \({}^{4}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 6876.66 & 7024 & 6779 & & \\ \({}^{4}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6877.15 & 7003 & 6777 & 7242 \\ \({}^{4}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 6879.45 & 7009 & 6778 & & \\ \({}^{4}\)P\({}_{3/2}\) & 5/2\({}^{-}\) & & 6881.31 & 6999 & 6776 & 7174 \\ \hline \({}^{5}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 7087.08 & & & & \\ \({}^{5}\)P\({}_{1/2}\) & 1/2\({}^{-}\) & & 7080.06 & & & & \\ \({}^{5}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 7089.35 & & & & \\ \({}^{5}\)P\({}_{3/2}\) & 3/2\({}^{-}\) & & 7091.01 & & & & \\ \({}^{5}\)P\({}_{3/2}\) & 5/2\({}^{-}\) & & 7092.30 & & & & 783 \\ \hline \({}^{1}\)P\({}_{1/2}\) & 1/2\({}^{+}\) & & 6285.89 & 6311 & 6303 & & \\ \({}^{1}\)P\({}_{3/2}\) & 3/2\({}^{+}\) & & 6293.79 & 6285 & 6298 & & \\ \({}^{1}\)P\({}_{3/2}\) & 3/2\({}^{+}\) & & 6300.50 & 6326 & 6300 & & \\ \({}^{1}\)P\({}_{3/2}\) & 5/2\({}^{+}\) & & 6306.62 & 6270 & 6294 & 6386 \\ \({}^{1}\)P\({}_{3/2}\) & 5/2\({}^{+}\) & & 6313.51 & 6284 & 6295 & & \\ \({}^{1}\)P\({}_{3/2}\) & 7/2\({}^{+}\) & & 6323.94 & 6260 & 6290 & 6393 \\ \hline \({}^{2}\)P\({}_{1/2}\) & 1/2\({}^{+}\) & & 6506.15 & 6636 & 6533 & & \\ \({}^{2}\)P\({}_{3/2}\) & 3/2\({}^{+}\) & & 6570.54 & 6612 & 6529 & & \\ 
\({}^{2}\)P\({}_{3/2}\) & 3/2\({}^{+}\) & & 6574.41 & 6647 & 6530 & & \\ \({}^{2}\)P\({}_{3/2}\) & 5/2\({}^{+}\) & & 6577.73 & 6598 & 6526 & 6778 \\ \({}^{2}\)P\({}_{3/2}\) & 5/2\({}^{+}\) & & 6581.72 & 6612 & 6527 & & \\ \({}^{2}\)P\({}_{3/2}\) & 7/2\({}^{+}\) & & 6587.52 & 6590 & 6524 & 6751 \\ \hline \({}^{3}\)P\({}_{1/2}\) & 1/2\({}^{+}\) \(1S-\)wave states \(\Xi^{\prime}_{b}(5935)^{-}\) and \(\Xi^{*}_{b}(5955)^{-}\) in Table 17 are in good agreement with other theoretical predictions as well as experimental measurements (see [8]). The \(\Xi_{b}(6227)\) baryons, which in our model are identified with the second excitation of \(\Xi_{b}(6227)\) to \(L=1\) with the \(J^{P}=1/2^{-}\), \[\Xi^{\prime}_{b}:\bar{M}=6185.62\mathrm{MeV},\ a_{1}=9.85\mathrm{ MeV},a_{2}=9.41\mathrm{MeV},b=4.94\mathrm{MeV},c=1.49\mathrm{MeV}. \tag{38}\] \[M(\Xi^{\prime}_{b},1P):6215.82\mathrm{MeV},6233.90\mathrm{MeV},6 239.61\mathrm{MeV},6248.59\mathrm{MeV},6259.13\mathrm{MeV}. \tag{39}\] However, its predicted mass is compatible with the experimental value, closer to the second state or mixed with the first state. The other same conclusion holds for the \(\Xi^{\prime}_{b}\) masses, as shown in Table 18 and Table 19. Also, about discussing the \(\Xi_{b}(6227)\) baryons in different models in [48; 55] and well-matched experiment. ## VII Summary In this paper, these discoveries greatly stimulate interest of people as to study of the mass spectra of the heavy baryons and internal structure. By comparing with the discovered experimental data of the singly heavy baryons with predictions of the existing theoretical models, the internal interaction of hadrons and the structure of the \(\Sigma_{Q}\), \(\Xi^{\prime}_{Q}\), \(\Omega_{Q}(Q=c,b)\) are being explored. \begin{table} \begin{tabular}{c c c c c} \hline \hline State: & \(a_{1}\) & \(a_{2}\) & b & c \\ \hline 1S & & & & & 47.86 \\ 2S & & & & & 5.98 \\ 3S & & & & & 1.77 \\ 4S & & & & & 0.75 \\ 5S & & & & & 0.38 \\ \hline 1P & 30.64 & 29.28 & 15.35 & 4.59 \\ 2P & 13.62 & 13.01 & 4.55 & 1.36 \\ 3P & 7.66 & 7.32 & 1.92 & 0.57 \\ 4P & 4.90 & 4.68 & 0.98 & 0.29 \\ 5P & 3.40 & 3.25 & 0.57 & 0.17 \\ \hline 1D & 13.62 & 13.01 & 0.91 & 1.10 \\ 2D & 7.66 & 7.32 & 0.38 & 0.47 \\ 3D & 4.90 & 4.68 & 0.20 & 0.24 \\ 4D & 3.40 & 3.25 & 0.11 & 0.14 \\ 5D & 2.50 & 2.39 & 0.07 & 0.09 \\ \hline \hline \end{tabular} \end{table} Table 17: The spin coupling parameters of the baryon \(\Xi^{\prime}_{c}\). In this work, we analyze the Regge trajectory and the Hamiltonian in quark-diquark picture. Using the \(Jls\) mixing scheme to study the \(S-\)wave/\(P-\)wave/\(D-\)wave masses for the \(\Sigma_{Q}\), \(\Xi^{\prime}_{Q}\), \(\Omega_{Q}(Q=c,b)\), and establish a simple mass scale relation to determine the parameters \(a_{1}\), \(a_{2}\), \(b\), \(c\). We analyze the mass spectra of the discovered experimental data in PDG for the \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\), and then we further predict the mass spectra of several unobserved the baryons \(\Sigma_{c}/\Sigma_{b}\), \(\Xi^{\prime}_{c}/\Xi^{\prime}_{b}\) and \(\Omega_{c}/\Omega_{b}\). In addition, we determine the \(\Omega_{c}(3185)^{0}\) is \(2S\) state and the \(\Omega_{c}(3327)^{0}\) is \(1D\) state with the parity quantum number \(J^{P}=1/2^{+}\) and \(J^{P}=3/2^{+}\), respectively. 
To this end, we identify the spin-average mass \(\bar{M}\) and the spin coupling parameters (\(a_{1}\), \(a_{2}\), \(b\), \(c\)) as the effective parameters, and consider the calculated and measured mass of the baryons by matching, which means that the central potential plus the spin interaction describes the main characteristics of most established baryons. Because of their connection to the interquark potential, these knowledge of spin coupling is useful and forms the basis for further understanding of QCD interactions within the hadrons. ## Appendix A \(S-\)wave Analyzing \(S-\)wave masses splitting with the the orbital angular momentum \(L=0\) for the singly heavy baryon are considered in the one heavy quark-light diquark approximation, the one heavy \begin{table} \begin{tabular}{c c c c c} \hline \hline State: & \(a_{1}\) & \(a_{2}\) & b & c \\ \hline 1s & & & & 15.38 \\ 2s & & & & 1.92 \\ 3s & & & & 0.57 \\ 4s & & & & 0.24 \\ 5s & & & & 0.12 \\ \hline 1P & 9.85 & 9.41 & 4.94 & 1.48 \\ 2P & 4.38 & 4.18 & 1.46 & 0.44 \\ 3P & 2.46 & 2.35 & 0.62 & 0.18 \\ 4P & 1.58 & 1.51 & 0.32 & 0.09 \\ 5P & 1.09 & 1.05 & 0.18 & 0.05 \\ \hline 1D & 4.38 & 4.18 & 0.29 & 0.35 \\ 2D & 2.46 & 2.35 & 0.12 & 0.15 \\ 3D & 1.58 & 1.51 & 0.06 & 0.08 \\ 4D & 1.09 & 1.05 & 0.04 & 0.04 \\ 5D & 0.80 & 0.77 & 0.02 & 0.03 \\ \hline \hline \end{tabular} \end{table} Table 12: The spin coupling parameters of the baryon \(\Xi^{\prime}_{b}\). \begin{table} \begin{tabular}{|c c c c c c c|} \hline State \(J^{p}\) & Baryon & Mass & Ours & EFG [45] & Ref.[52] & Ref.[46] \\ \hline \({}^{1}\)S\({}_{1/2}\) 1/2\({}^{+}\) & \(\Xi_{c}^{0}\) & 2578.70 & 2575.23 & 2579 & 2579 & 2471 \\ \({}^{1}\)S\({}_{1/2}\) 3/2\({}^{+}\) & \(\Xi_{c}(2645)^{0}\) & 2646.16 & 2647.02 & 2649 & 2649 & 2647 \\ \({}^{2}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 3014.28 & 2983 & 2977 & 2937 \\ \({}^{2}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3023.25 & 3026 & 3007 & 3004 \\ \({}^{3}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 3334.21 & 3377 & 3215 & 3303 \\ \({}^{3}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3336.87 & 3396 & 3236 & 3338 \\ \({}^{4}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 3605.41 & 3695 & 3626 \\ \({}^{4}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3606.54 & 3709 & & 3646 \\ \({}^{5}\)S\({}_{1/2}\) 1/2\({}^{+}\) & & & 3845.81 & 3978 & & 3921 \\ \({}^{5}\)S\({}_{1/2}\) 3/2\({}^{+}\) & & & 3846.39 & 3989 & & 3934 \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 2833.11 & 2854 & 2839 & 2877 \\ \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 2887.1 & 2936 & 2900 & 2834 \\ \({}^{1}\)P\({}_{2/2}\) 3/2\({}^{-}\) & \(\Xi_{c}(2923)^{0}\) & 2923.04 & 2907.21 & 2912 & 2921 & 2899 \\ \({}^{1}\)P\({}_{3/2}\) 3/2\({}^{-}\) & \(\Xi_{c}(2930)^{0}\) & 2938.55 & 2935.17 & 2935 & 2932 & 2856 \\ \({}^{1}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 2968.02 & 2929 & 2927 & 2798 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3218.35 & 3267 & 3094 & 3222 \\ \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3241.33 & 3113 & 3144 & 3189 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3246.89 & 3293 & 3172 & 3239 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3261.82 & 3311 & 3165 & 3206 \\ \({}^{2}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 3276.02 & 3303 & 3170 & 3162 \\ \hline \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3516.01 & 3598 & & 3541 \\ \({}^{3}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3528.40 & 3630 & & 3512 \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3530.85 & 3613 & & 3561 \\ \({}^{3}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3539.99 & 3628 & & 3528 \\ \({}^{3}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 3547.85 & 3619 & & 3844 
\\ \hline \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3770.84 & 3887 & & 3837 \\ \({}^{4}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3778.57 & 3912 & & 3808 \\ \({}^{4}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3779.87 & 3898 & & 3851 \\ \({}^{4}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 3786.01 & 3911 & & 3823 \\ \({}^{4}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 3790.99 & 3902 & & 3784 \\ \hline \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 3998.38 & & & \\ \({}^{5}\)P\({}_{1/2}\) 1/2\({}^{-}\) & & & 4003.67 & & & \\ \({}^{5}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 4004.44 & & & \\ \({}^{5}\)P\({}_{3/2}\) 3/2\({}^{-}\) & & & 4008.84 & & & \\ \({}^{5}\)P\({}_{3/2}\) 5/2\({}^{-}\) & & & 4012.27 & & & \\ \hline \({}^{1}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & & 3111.88 & 3163 & 3075 & 3147 \\ \({}^{1}\)P\({}_{3/2}\) 3/2\({}^{+}\) & \(\Xi_{c}(3123)^{+}\) & 3122.9 & 3128.64 & 3160 & 3089 & 3109 \\ \({}^{1}\)P\({}_{3/2}\) 3/2\({}^{+}\) & & & 3150.77 & 3167 & 3081 & 3090 \\ \({}^{1}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & & 3167.05 & 3133 & 3091 & 3008 \\ \({}^{1}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & & 3153.58 & 3166 & 3077 & 30839 \\ \({}^{1}\)P\({}_{1/2}\) 7/2\({}^{+}\) & & & 3213.14 & 3147 & 3078 & 2995 \\ \hline \({}^{2}\)P\({}_{1/2}\) 1/2\({}^{+}\) & & & 3430.60 & 3505 & & 3470 \\ \({}^{2}\)P\({}_{1/2}\) 3/2\({}^{+}\) & & & 3442.30 & 3497 & & 3417 \\ \({}^{2}\)P\({}_{3/2}\) 3/2\({}^{+}\) & & & 3452.58 & 3506 & & 3434 \\ \({}^{2}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & & 3461.41 & 3493 & & 3701 \\ \({}^{2}\)P\({}_{3/2}\) 5/2\({}^{+}\) & & & 3472.04 & 3504 & & 3388 \\ \({}^{2}\)P\({}_{3/2}\) 7/2\({}^{+}\) & & & 3487.47 & 3486 & & 3330 \begin{table} \begin{tabular}{c c c c c c c} \hline \hline State \(J^{F}\) & Baryon & Mass & Ours & EFG [45] & Ref.[48] & Ref.[49] \\ \hline \({}^{11}\)S\({}_{1/2}\) / \(1^{2}\) & \(\Xi_{5}(9935)^{-}\) & 5935.02 & 5930.03 & 5936 & 5935 & 5935 \\ \({}^{11}\)S\({}_{3/2}\) / \(3^{2}\) & \(\Xi_{5}(9955)^{-}\) & 5955.33 & 5953.08 & 5963 & 5958 & \\ \({}^{21}\)S\({}_{1/2}\) / \(1^{2}\) & & & 6314.53 & 6329 & 6328 & 6329 \\ \({}^{21}\)S\({}_{3/2}\) / \(3^{2}\) & & & 6344.41 & 6342 & 6343 & \\ \({}^{3}\)S\({}_{1/2}\) / \(1^{2}\) & & & 6696.60 & 6887 & 6625 & 6700 \\ \({}^{3}\)S\({}_{3/2}\) / \(3^{2}\) & & & 6670.46 & 6695 & 6634 & \\ \({}^{41}\)S\({}_{1/2}\) / \(1^{2}\) & & & 653.79 & 6978 & 6092 & 7051 \\ \({}^{41}\)S\({}_{3/2}\) / \(3^{2}\) & & & 6554.15 & 6984 & 6097 & \\ \({}^{51}\)S\({}_{3/2}\) / \(1^{2}\) & & & 7208.35 & 7229 & 7161 & 7386 \\ \({}^{5}\)S\({}_{3/2}\) / \(3^{2}\) & & & 7208.53 & 7234 & 7165 & \\ \hline \({}^{12}\)P\({}_{1/2}\) / \(1^{2}\) & \(\Xi_{5}(6227)^{-}\) & 6227.9 & 6233.90 & 6233 & 6237 & \\ \({}^{12}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6239.61 & 6224 & 6232 & 6229 \\ \({}^{12}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6248.59 & 6234 & 6234 & \\ \({}^{14}\)P\({}_{3/2}\) / \(5/2^{-}\) & & & 6259.13 & 6226 & 6229 & \\ \hline \({}^{2}\)P\({}_{1/2}\) / \(1^{2}\) & & & 6574.81 & 6604 & 6491 & \\ \({}^{21}\)P\({}_{1/2}\) / \(1^{2}\) & & & 6582.19 & 6611 & 6195 & \\ \({}^{2}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6539.89 & 6598 & 6492 & 6005 \\ \({}^{21}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6588.77 & 605 & 6493 & \\ \({}^{24}\)P\({}_{3/2}\) / \(5/2^{-}\) & & & 6593.33 & 6596 & 6490 & \\ \hline \({}^{3}\)P\({}_{1/2}\) / \(1^{2}\) & & & 6874.07 & 6905 & 6731 & \\ \({}^{3}\)P\({}_{1/2}\) / \(1^{2}\) & & & 6878.04 & 6906 & 6732 & \\ \({}^{3}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6878.53 & 6897 & 6729 & 6961 \\ \({}^{3}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6881.77 & 6900 & 6730 & \\ \({}^{3}\)P\({}_{3/2}\) / \(5/2^{-}\) & & & 6884.27 & 6897 & 
6728 & \\ \hline \({}^{4}\)P\({}_{1/2}\) / \(1^{2}\) & & 7138.00 & 7164 & 6949 & \\ \({}^{4}\)P\({}_{1/2}\) / \(1^{2}\) & & & 7140.48 & 7174 & 6950 & \\ \({}^{4}\)P\({}_{3/2}\) / \(3^{2}\) & & & 7140.90 & 7159 & 6948 & 7299 \\ \({}^{4}\)P\({}_{3/2}\) / \(3^{2}\) & & & 7142.87 & 7163 & 6949 & \\ \({}^{4}\)P\({}_{3/2}\) / \(5/2^{-}\) & & & 7144.47 & 7156 & 6947 & \\ \hline \({}^{5}\)P\({}_{1/2}\) / \(1^{2}\) & & & 7377.26 & & & \\ \({}^{5}\)P\({}_{1/2}\) / \(1^{2}\) & & & 7378.96 & & & \\ \({}^{5}\)P\({}_{3/2}\) / \(3^{2}\) & & & 7379.20 & & & \\ \({}^{5}\)P\({}_{3/2}\) / \(3^{2}\) & & & 7380.61 & & & \\ \({}^{5}\)P\({}_{3/2}\) / \(3^{2}\) & & & 7381.72 & & & \\ \hline \({}^{12}\)P\({}_{1/2}\) / \(1^{2}\) & & & 6480.78 & 6447 & 6380 & \\ \({}^{12}\)P\({}_{3/2}\) / \(3^{2}\) & & & 6487.54 & 6431 & 6375 & \\ \({}^{14}\)D\({}_{3/2}\) / \(3^{2}\) & & & 6493.27 & 6459 & 6377 & \\ \({}^{12}\)D\({}_{3/2}\) / \(5^{2}\) & & & 6985.50 & 6420 & 6371 & 6510 \\ \({}^{12}\)D\({}_{3/2}\) / \(5^{2}\) & & & 6504.38 & 6432 & 6373 & \\ \({}^{1}\)D\({}_{3/2}\) / \(1^{2}\) & & & 6513.30 & 6141 & 6368 & \\ \hline \({}^{21}\)D\({}_{1/2}\) / \(1^{2}\) & & & 6794.07 & 6767 & 6632 & \\ \({}^{22}\)D\({}_{3/2}\) / \(3^{2}\) & & & 6797.83 & 6751 & 6628 & \\ \({}^{2}\)D\({}_{3/2}\) / \(3^{2+}\) & & & 6801.13 & 6775 & 6630 & \\ \({}^{21}\)D\({}_{3/2}\) / \(5^{2}\) & & & 6803.97 & 6740 & 6625 & 6751 \\ \({}^{2}\)D\({}_{3/2}\) / \(5^{2}\) & & & 6807.38 & 6751 & 6626 & \\ \({}^{21}\)D\({}_{3/2}\) / \(7^{2+}\) & & & 6812.33 & 6736 & 6621 & \\ \hline \({}^{3}\)P\({}_{1/2}\) / \(1^{2}\) & & & 7067.14 & & 6861 & \\ \({}^{3}\)P\({}_{1/2}\) / \(3^{2+}\) & & & 7069.53 & & 6859 & \\ \({}^{3}\)P\({}_{3/2}\) / \(3^{2}\) & & & 7017.67 & & 6800 & \\ \({}^{3}\)P\({}_{3/2}\) / \(5^{2}\) & & & quark \(Q\) spin \(S_{Q}=1/2\) and diquark spin \(S_{d}=1\), respectively. This, Therefore, can be two kinds of the total spin \(S\), one is \(1/2\) and the other is \(3/2\). In the scheme of \(LS\) coupling, note that the total angular momentum \(J=S+L\). Coupling of \(L=0\) with the spin \(S=1/2\) gives states with the total angular momentum \(J=1/2\), while coupling with \(S=3/2\) leads to states the angular momentum \(J=3/2\). We consider the \(S\)-wave(\(L=0\)) states in \(Qqq\) baryons case. Then, the first three terms of in Eq. (6) are eliminated, only the last term survives, \[H_{S}^{SD}=c{\bf S}_{d}\cdot{\bf S}_{Q}. \tag{10}\] It is very convenient to analyze spin-spin interaction into the non-trivial terms for the mass splitting, the eigenvalues (two diagonal elements) of \(<{\bf S}_{d}\cdot{\bf S}_{Q}>\) can be obtained, \[<{\bf S}_{d}\cdot{\bf S}_{Q}>=[S(S+1)-S_{Q}(S_{Q}+1)-S_{d}(S_{d}+1)]/2, \tag{11}\] \[<{\bf S}_{d}\cdot{\bf S}_{Q}>=\left[\begin{array}{cc}-1&0\\ 0&\frac{1}{2}\end{array}\right], \tag{12}\] combining with Eqs. (5) and (12), the \(S\)-wave masses are, \[M(S)=\bar{M}_{L}+c\left[\begin{array}{cc}-1&0\\ 0&\frac{1}{2}\end{array}\right]. \tag{13}\] ## Appendix B \(P\)-wave Let us consider the \(P-\)wave system with the the orbital angular momentum \(L=1\), the diqurk spin \(S_{d}=1\) can be coupled with the heavy quark spin \(S_{Q}=1/2\) give five states with the total spin \(J=1/2,3/2\) or \(1/2^{\prime},3/2^{\prime},5/2\) with negative parity \(P=-1\). The matrix elements of \({\bf L}\cdot{\bf S}_{d}\), \({\bf L}\cdot{\bf S}_{Q}\), \(S_{12}\), \({\bf S}_{d}\cdot{\bf S}_{Q}\) in Eq. 
(6) in the \(L-S\) basis can be constructed as a linear combinations states \(|S_{d3},S_{Q3},L_{3}\rangle\) of the third component of respective angular momentum, \[|^{2}P_{1/2},J_{3} = 1/2\rangle=\frac{\sqrt{2}}{3}|1,-\frac{1}{2},0\rangle-\frac{1}{ 3}|0,\frac{1}{2},0\rangle-\frac{\sqrt{2}}{3}|0,-\frac{1}{2},1\rangle+\frac{2} {3}|-1,\frac{1}{2},1\rangle,\] \[|^{4}P_{1/2},J_{3} = 1/2\rangle=\frac{1}{\sqrt{2}}|1,\frac{1}{2},-1\rangle-\frac{1}{ 3}|1,-\frac{1}{2},0\rangle-\frac{\sqrt{2}}{3}|0,\frac{1}{2},0\rangle+\frac{1} {3}|0,-\frac{1}{2},1\rangle+\frac{1}{3\sqrt{2}}|-1,\frac{1}{2},1\rangle,\] \[|^{2}P_{3/2},J_{3} = 3/2\rangle=\sqrt{\frac{2}{3}}|1,-\frac{1}{2},1\rangle-\sqrt{ \frac{1}{3}}|0,\frac{1}{2},1\rangle,\] \[|^{4}P_{3/2},J_{3} = 3/2\rangle=\sqrt{\frac{3}{5}}|1,\frac{1}{2},0\rangle-\sqrt{\frac{ 2}{15}}|1,-\frac{1}{2},1\rangle-\frac{2}{\sqrt{15}}|0,\frac{1}{2},1\rangle,\] \[|^{4}P_{5/2},J_{3} = 5/2\rangle=|1,\frac{1}{2},1\rangle. \tag{14}\] The matrix elements of \(\langle{\bf L}\cdot{\bf S}_{i}\rangle(i=d,Q)\), \(\langle S_{12}\rangle\), \(\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle\) are given by \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{1}{2}} = \left[\begin{array}{cc}-\frac{4}{3}&-\frac{\sqrt{2}}{3}\\ -\frac{\sqrt{2}}{3}&-\frac{5}{3}\end{array}\right],\langle{\bf L}\cdot{\bf S}_ {Q}\rangle_{J=\frac{1}{2}}=\left[\begin{array}{cc}\frac{1}{3}&\frac{\sqrt{2} }{3}\\ \frac{\sqrt{2}}{3}&-\frac{5}{6}\end{array}\right],\langle S_{12}\rangle_{J= \frac{1}{2}}=\left[\begin{array}{cc}0&\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&-1\end{array}\right],\] \[\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle_{J=\frac{1}{2}} = \left[\begin{array}{cc}-1&0\\ 0&\frac{1}{2}\end{array}\right],\] \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{3}{2}} = \left[\begin{array}{cc}\frac{2}{3}&\frac{\sqrt{5}}{3}\\ -\frac{\sqrt{5}}{3}&\frac{2}{3}\end{array}\right],\langle{\bf L}\cdot{\bf S}_ {Q}\rangle_{J=\frac{3}{2}}=\left[\begin{array}{cc}-\frac{1}{6}&\frac{\sqrt{5 }}{3}\\ \frac{\sqrt{5}}{3}&-\frac{1}{3}\end{array}\right],\langle S_{12}\rangle_{J= \frac{3}{2}}=\left[\begin{array}{cc}0&-\frac{\sqrt{5}}{10}\\ -\frac{\sqrt{5}}{10}&\frac{4}{5}\end{array}\right],\] \[\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle_{J=\frac{3}{2}} = \left[\begin{array}{cc}-1&0\\ 0&\frac{1}{2}\end{array}\right],\] \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{5}{2}} = 1,\quad\langle{\bf L}\cdot{\bf S}_{Q}\rangle_{J=\frac{5}{2}}= \frac{1}{2},\quad\langle S_{12}\rangle_{J=\frac{5}{2}}=-\frac{1}{5},\quad \langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle_{J=\frac{5}{2}}=\frac{1}{2}, \tag{101}\] the matrix forms of these mass shift interactions are \[\Delta{\cal M}_{J=1/2} = \left[\begin{array}{cc}\frac{1}{3}(a_{2}-4a_{1})&\frac{\sqrt{2} }{3}(a_{2}-a_{1})+\frac{b}{\sqrt{2}}\\ \frac{\sqrt{2}}{3}(a_{2}-a_{1})+\frac{b}{\sqrt{2}}&-\frac{5}{3}(a_{1}+\frac{1}{ 2}a_{2})-b\end{array}\right] \tag{102}\] \[+\left[\begin{array}{cc}-c&0\\ 0&\frac{1}{2}c\end{array}\right],\] \[\Delta{\cal M}_{J=3/2} = \left[\begin{array}{cc}\frac{2}{3}a_{1}-\frac{1}{6}a_{2}&\frac{ \sqrt{5}}{3}(a_{2}-a_{1})-\frac{b}{2\sqrt{5}}\\ \frac{\sqrt{5}}{3}(a_{2}-a_{1})-\frac{b}{2\sqrt{5}}&-\frac{1}{3}(2a_{1}+a_{2}) +\frac{4b}{5}\end{array}\right] \tag{103}\] \[+\left[\begin{array}{cc}-c&0\\ 0&\frac{1}{2}c\end{array}\right],\] \[\Delta{\cal M}_{J=5/2} = a_{1}+\frac{1}{2}a_{2}-\frac{b}{5}+\frac{c}{2}. 
\tag{104}\] Diagonalizing the mass shift operator \(a_{1}{\bf L}\cdot{\bf S}_{d}+a_{2}{\bf L}\cdot{\bf S}_{Q}+bS_{12}\), one can compute the mass shifts \(\Delta M\), \[\Delta M(J = 1/2,0^{\prime})=\frac{1}{4}\left(-6a_{1}-a_{2}-2b-\sqrt{\Delta_{ 1}(a_{1},a_{2},b)}\right)+c\Delta_{3}^{+}(a_{1},a_{2},b),\] \[\Delta M(J = 1/2,1^{\prime})=\frac{1}{4}\left(-6a_{1}-a_{2}-2b+\sqrt{\Delta_{ 1}(a_{1},a_{2},b)}\right)+c\Delta_{3}^{+}(a_{1},a_{2},b),\] \[\Delta M(J = 3/2,1^{\prime})=\frac{1}{20}\left(-5a_{2}+8b-\sqrt{\Delta_{ 2}(a_{1},a_{2},b)}\right)+c\Delta_{4}^{+}(a_{1},a_{2},b),\] \[\Delta M(J = 3/2,1^{\prime})=\frac{1}{20}\left(-5a_{2}+8b+\sqrt{\Delta_{ 2}(a_{1},a_{2},b)}\right)+c\Delta_{4}^{+}(a_{1},a_{2},b),\] \[\Delta M(J = 5/2,2^{\prime})=a_{1}+\frac{a_{2}}{2}-\frac{b}{5}+\frac{c}{2}, \tag{105}\] where six functions \(\Delta_{1,2}(a_{1},a_{2},b),\Delta_{3}^{\pm}(a_{1},a_{2},b)\) and \(\Delta_{4}^{\pm}(a_{1},a_{2},b)\) are defined by \[\Delta_{1}(a_{1},a_{2},b) = 4(a_{1})^{2}-8a_{1}b+12b^{2}-4a_{1}a_{2}+20ba_{1}+9(a_{2})^{2},\] \[\Delta_{2}(a_{1},a_{2},b) = 400(a_{1})^{2}-80a_{1}b+84b^{2}-400a_{1}a_{2}-160ba_{1}+225(a_{2})^{ 2},\] \[\Delta_{3}^{+}(a_{1},a_{2},b) = \frac{4-(-2-\frac{7a_{2}}{a_{1}}-\frac{6b}{a_{1}}+\frac{3}{a_{1}} \sqrt{\Delta_{1}(a_{1},a_{2},b)})^{2}/(-2+\frac{2a_{2}}{a_{1}}+\frac{3b}{a_{1}} )^{2}}{8+(-2-\frac{7a_{2}}{a_{1}}-\frac{6b}{a_{1}}+\frac{3}{a_{1}}\sqrt{\Delta_ {1}(a_{1},a_{2},b)})^{2}/(-2+\frac{2a_{2}}{a_{1}}+\frac{3b}{a_{1}})^{2}},\] \[\Delta_{3}^{-}(a_{1},a_{2},b) = \Delta_{3}^{+}\left(\sqrt{\Delta_{1}}\rightarrow-\sqrt{\Delta_{1 }}\right).\] \[\Delta_{4}^{+}(a_{1},a_{2},b) = \frac{10-(40+\frac{5a_{2}}{a_{1}}-\frac{24b}{a_{1}}-\frac{3}{a_{1 }}\sqrt{\Delta_{2}(a_{1},a_{2},b)})^{2}/(10-\frac{10a_{2}}{a_{1}}+\frac{3b}{a_ {1}})^{2}}{20+(40+\frac{5a_{2}}{a_{1}}-\frac{24b}{a_{1}}-\frac{3}{a_{1}}\sqrt{ \Delta_{2}(a_{1},a_{2},b)})^{2}/(10-\frac{10a_{2}}{a_{1}}+\frac{3b}{a_{1}})^{ 2}},\] \[\Delta_{4}^{-}(a_{1},a_{2},b) = \Delta_{4}^{+}\left(\sqrt{\Delta_{2}}\rightarrow-\sqrt{\Delta_{ 2}}\right), \tag{100}\] with \(\Delta_{3,4}^{-}(a_{1},a_{2},b)\) obtained from \(\Delta_{3,4}^{+}(a_{1},a_{2},b)\) by merely replacing \(\sqrt{\Delta_{1,2}}\rightarrow-\sqrt{\Delta_{1,2}}\). ## Appendix C \(D-\)wave For analyzing \(D-\)wave system, the diqurk spin \(S_{d}=1\) can be coupled with the heavy quark spin \(S_{Q}=1/2\) to determine the total spin \(S=1/2,3/2\). 
Coupling of the the orbital angular momentum \(L=2\) give six states with the total spin \(J=1/2,3/2,5/2\) or \(3/2^{\prime},5/2^{\prime},7/2\) with positive parity \(P=+1\), the relevant linear combinations of six basis states are, \[|^{4}D_{1/2},J_{3} = 1/2\rangle=\frac{1}{\sqrt{10}}|1,\frac{1}{2},-1\rangle-\frac{1}{ \sqrt{15}}|1,-\frac{1}{2},0\rangle-\sqrt{\frac{2}{15}}|0,\frac{1}{2},0\rangle+ \frac{1}{\sqrt{5}}|0,-\frac{1}{2},1\rangle+\frac{1}{\sqrt{10}}|-1,\frac{1}{2},1\rangle\] \[- \sqrt{\frac{2}{5}}|-1,-\frac{1}{2},2\rangle,\] \[|^{2}D_{3/2},J_{3} = 3/2\rangle=\sqrt{\frac{2}{15}}|1,-\frac{1}{2},1\rangle-\frac{1}{ \sqrt{15}}|0,\frac{1}{2},1\rangle-\frac{2}{\sqrt{15}}|0,-\frac{1}{2},2\rangle+ \sqrt{\frac{8}{15}}|-1,\frac{1}{2},2\rangle,\] \[|^{4}D_{3/2},J_{3} = 3/2\rangle=\frac{1}{\sqrt{5}}|1,\frac{1}{2},0\rangle-\sqrt{\frac {2}{15}}|1,\frac{1}{2},1\rangle-\frac{2}{\sqrt{15}}|0,\frac{1}{2},1\rangle+ \frac{2}{\sqrt{15}}|0,-\frac{1}{2},2\rangle+\sqrt{\frac{2}{15}}|-1,\frac{1}{2},2\rangle,\] \[|^{2}D_{5/2},J_{3} = 5/2\rangle=\sqrt{\frac{2}{3}}|1,-\frac{1}{2},2\rangle-\sqrt{ \frac{1}{3}}|0,\frac{1}{2},2\rangle,\] \[|^{4}D_{5/2},J_{3} = 5/2\rangle=\frac{3}{\sqrt{21}}|1,\frac{1}{2},1\rangle-\frac{2}{ \sqrt{21}}|1,-\frac{1}{2},2\rangle-\frac{2\sqrt{2}}{\sqrt{21}}|0,\frac{1}{2},2\rangle,\] \[|^{4}D_{7/2},J_{3} = 7/2\rangle=|1,\frac{1}{2},2\rangle. \tag{101}\] The matrix elements of \(\langle{\bf L}\cdot{\bf S}_{i}\rangle(i=d,Q)\), \(\langle S_{12}\rangle\), \(\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle\) are \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{1}{2}} = -3,\quad\langle{\bf L}\cdot{\bf S}_{Q}\rangle_{J=\frac{1}{2}}=-\frac{3}{2 },\quad\langle S_{12}\rangle_{J=\frac{1}{2}}=-1,\quad\langle{\bf S}_{d}\cdot{ \bf S}_{Q}\rangle_{J=\frac{1}{2}}=\frac{1}{2},\] \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{3}{2}} = \left[\begin{array}{cc}-2&-1\\ -1&-2\end{array}\right],\langle{\bf L}\cdot{\bf S}_{Q}\rangle_{J=\frac{3}{2}}= \left[\begin{array}{cc}\frac{1}{2}&1\\ 1&-1\end{array}\right],\langle S_{12}\rangle_{J=\frac{3}{2}}=\left[\begin{array} []{cc}0&\frac{1}{2}\\ \frac{1}{2}&0\end{array}\right],\] \[\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle_{J=\frac{3}{2}}= \left[\begin{array}{cc}-1&0\\ 0&\frac{1}{2}\end{array}\right],\] \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{5}{2}} = \tag{104}\] \[\langle{\bf S}_{d}\cdot{\bf S}_{Q}\rangle_{J=\frac{5}{2}}=\left[ \begin{array}{cc}-1&0\\ 0&\frac{1}{2}\end{array}\right],\] \[\langle{\bf L}\cdot{\bf S}_{d}\rangle_{J=\frac{7}{2}} = 2,\quad\langle{\bf L}\cdot{\bf S}_{Q}\rangle_{J=\frac{7}{2}}=1, \quad\langle S_{12}\rangle_{J=\frac{7}{2}}=-\frac{2}{7},\quad\langle{\bf S}_{ d}\cdot{\bf S}_{Q}\rangle_{J=\frac{7}{2}}=\frac{1}{2},\] the matrix forms of these mass shift interactions are \[\Delta{\cal M}_{J=1/2} = -3a_{1}-\frac{3a_{2}}{2}-b+\frac{c}{2},\] \[\Delta{\cal M}_{J=3/2} = \left[\begin{array}{cc}-2a_{1}+\frac{1}{2}a_{2}&-a_{1}+a_{2}+ \frac{1}{2}b\\ -a_{1}+a_{2}+\frac{1}{2}b&-2a_{1}-a_{2}\end{array}\right]+\left[\begin{array} []{cc}-c&0\\ 0&\frac{1}{2}c\end{array}\right],\] \[\Delta{\cal M}_{J=5/2} = \left[\begin{array}{cc}\frac{4}{3}a_{1}-\frac{1}{3}a_{2}&- \frac{\sqrt{14}}{3}a_{1}+\frac{\sqrt{14}}{3}a_{2}-\frac{\sqrt{14}}{14}b\\ -\frac{\sqrt{14}}{3}a_{1}+\frac{\sqrt{14}}{3}a_{2}-\frac{\sqrt{14}}{14}b&- \frac{1}{3}a_{1}-\frac{1}{6}a_{2}+\frac{5}{7}b\end{array}\right]+\left[ \begin{array}{cc}-c&0\\ 0&\frac{1}{2}c\end{array}\right],\] \[\Delta{\cal M}_{J=7/2} = 2a_{1}+a_{2}-\frac{2}{7}b+\frac{1}{2}c. 
\tag{105}\] Diagonalizing the mass shift operator \(a_{1}{\bf L}\cdot{\bf S}_{d}+a_{2}{\bf L}\cdot{\bf S}_{Q}+bS_{12}\), one can compute the six mass shifts \(\Delta M\), \[\Delta M(J = 1/2,0^{\prime})=-3a_{1}-\frac{3a_{2}}{2}-b+\frac{c}{2},\] \[\Delta M(J = 3/2,1^{\prime})=\frac{1}{4}\left(-8a_{1}-a_{2}-\sqrt{\Theta_{1 }(a_{1},a_{2},b)}\right)+c\Theta_{3}^{+}(a_{1},a_{2},b),\] \[\Delta M(J = 3/2,1^{\prime})=\frac{1}{4}\left(-8a_{1}-a_{2}+\sqrt{\Theta_{1 }(a_{1},a_{2},b)}\right)+c\Theta_{3}^{-}(a_{1},a_{2},b),\] \[\Delta M(J = 5/2,2^{\prime})=\frac{1}{28}\left(14a_{1}-7a_{2}+10b-\sqrt{ \Theta_{2}(a_{1},a_{2},b)}\right)+c\Theta_{4}^{+}(a_{1},a_{2},b),\] \[\Delta M(J = 5/2,2^{\prime})=\frac{1}{28}\left(14a_{1}-7a_{2}+10b+\sqrt{ \Theta_{2}(a_{1},a_{2},b)}\right)+c\Theta_{4}^{-}(a_{1},a_{2},b),\] \[\Delta M(J = 7/2,2^{\prime})=2a_{1}+a_{2}-\frac{2}{7}b+\frac{c}{2},\] where six functions \(\Theta_{1,2}(a_{1},a_{2},b),\Theta_{3}^{\pm}(a_{1},a_{2},b)\) and \(\Theta_{4}^{\pm}(a_{1},a_{2},b)\) are defined by \[\Theta_{1}(a_{1},a_{2},b) = 16(a_{1})^{2}-32a_{1}a_{2}+25(a_{2})^{2}-16a_{1}b+16a_{2}b+4b^{2},\] \[\Theta_{2}(a_{1},a_{2},b) = 1764(a_{1})^{2}-2548a_{1}a_{2}+1225(a_{2})^{2}+56a_{1}b-476a_{2}b +156b^{2},\] \[\Theta_{3}^{+}(a_{1},a_{2},b) = \frac{2-(\frac{3a_{2}}{a_{1}}-\frac{1}{a_{1}}\sqrt{\Theta_{1}(a_{ 1},a_{2},b)})^{2}/(2-\frac{2a_{2}}{a_{1}}-\frac{b}{a_{1}})^{2}}{4+(\frac{3a_{2 }}{a_{1}}-\frac{1}{a_{1}}\sqrt{\Theta_{1}(a_{1},a_{2},b)})^{2}/(2-\frac{2a_{2 }}{a_{1}}-\frac{b}{a_{1}})^{2}},\] \[\Theta_{3}^{-}(a_{1},a_{2},b) = \Theta_{3}^{+}\left(\sqrt{\Theta_{1}}\to-\sqrt{\Theta_{1}}\right).\] \[\Theta_{4}^{+}(a_{1},a_{2},b) = \frac{28-(70-\frac{7a_{2}}{a_{1}}-\frac{30b}{a_{1}}-\frac{3}{a_{ 1}}\sqrt{\Theta_{2}(a_{1},a_{2},b)})^{2}/(2-\frac{2a_{2}}{a_{1}}-\frac{b}{a_{ 1}})^{2}}{56+(70-\frac{7a_{2}}{a_{1}}-\frac{30b}{a_{1}}-\frac{3}{a_{1}}\sqrt{ \Theta_{2}(a_{1},a_{2},b)})^{2}/(2-\frac{2a_{2}}{a_{1}}-\frac{b}{a_{1}})^{2}},\] \[\Theta_{4}^{-}(a_{1},a_{2},b) = \Theta_{4}^{+}\left(\sqrt{\Theta_{2}}\to-\sqrt{\Theta_{2}}\right), \tag{106}\] with \(\Theta^{-}_{3,4}(a_{1},a_{2},b)\) obtained from \(\Theta^{+}_{3,4}(a_{1},a_{2},b)\) by merely replacing \(\sqrt{\Theta_{1,2}}\to-\sqrt{\Theta_{1,2}}\).
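To make the use of these formulas concrete, the following Python sketch (our illustration, not the authors' code) assembles the \(J^{P}=1/2^{-}\) and \(J^{P}=3/2^{-}\) mass-shift matrices of Appendix B, diagonalizes them numerically rather than through the closed-form expressions above, and adds the unmixed \(J^{P}=5/2^{-}\) shift. With the \(\Sigma_{c}(1P)\) parameters of Eq. (31) it reproduces the five masses listed in Eq. (32).

```python
# Sketch (not the authors' code): build and diagonalize the 1P mass-shift
# matrices Delta M_{J=1/2} and Delta M_{J=3/2}, add the unmixed J=5/2 shift,
# and check against the Sigma_c(1P) masses quoted in Eq. (32).
import numpy as np

def p_wave_masses(M_bar, a1, a2, b, c):
    """Five 1P masses (MeV) for spin-coupling parameters a1, a2, b, c."""
    s2, s5 = np.sqrt(2.0), np.sqrt(5.0)
    # J = 1/2 block of a1 L.Sd + a2 L.SQ + b S12 + c Sd.SQ in the {|2P>,|4P>} basis
    m_half = np.array([
        [(a2 - 4 * a1) / 3 - c,          s2 / 3 * (a2 - a1) + b / s2],
        [s2 / 3 * (a2 - a1) + b / s2,    -5 / 3 * (a1 + a2 / 2) - b + c / 2],
    ])
    # J = 3/2 block
    m_three = np.array([
        [2 * a1 / 3 - a2 / 6 - c,            s5 / 3 * (a2 - a1) - b / (2 * s5)],
        [s5 / 3 * (a2 - a1) - b / (2 * s5),  -(2 * a1 + a2) / 3 + 4 * b / 5 + c / 2],
    ])
    shift_52 = a1 + a2 / 2 - b / 5 + c / 2     # J = 5/2 state is unmixed
    shifts = np.concatenate([np.linalg.eigvalsh(m_half),
                             np.linalg.eigvalsh(m_three),
                             [shift_52]])
    return M_bar + np.sort(shifts)

# Sigma_c(1P): spin-averaged mass and parameters from Eq. (31)
print(p_wave_masses(2774.67, 35.86, 34.27, 17.97, 5.37))
# -> roughly 2668.8, 2735.1, 2755.6, 2788.3, 2826.8 MeV, matching Eq. (32)
# to within rounding.
```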
2303.04770
Impact of Changing Stellar and Planetary Magnetic Fields on (Exo)planetary Environments and Atmospheric Mass Loss
The magnetic activity of a star -- which modulates the stellar wind outflow -- shapes the immediate environments of orbiting planets and induces atmospheric loss thereby impacting their habitability. We perform a detailed parameter space study using three dimensional magnetohydrodynamic simulations to understand the effect of changing stellar wind magnetic field and planetary magnetic field strengths on planetary magnetospheric topology and atmospheric losses. It is observed that the relative strengths of stellar and planetary magnetic fields play a significant role in determining the steady state magnetospheric configuration and atmospheric erosion. When the stellar field is strengthened or the planetary field is weakened, stellar magnetic field accumulation occurs at the day-side of the planet which forces the magnetopause to shift closer to its surface. The magnetotail opens up leading to the formation of Alfv\'{e}n wings in the night-side wake region. We demonstrate how reconnection processes and wind conditions lead to the bifurcation of the magnetotail current sheet. With increasing stellar wind magnetic field strength, the day-side reconnection point approaches the planet thereby enhancing mass loss. We establish an analytic equation which successfully captures the modeled mass-loss rate variations of planets with changing magnetic field strengths. Our results are relevant for understanding how the interplay of stellar and planetary magnetism influence (exo)planetary environments and their habitability in star-planet systems with differing relative magnetic field strengths, or in a single star-planet system over the course of their evolution with age.
Sakshi Gupta, Arnab Basak, Dibyendu Nandy
2023-03-08T18:09:18Z
http://arxiv.org/abs/2303.04770v2
Impact of Changing Stellar and Planetary Magnetic Fields on (Exo)planetary Environments and Atmospheric Mass Loss

###### Abstract

The magnetic activity of a star - which modulates the stellar wind outflow - shapes the immediate environments of orbiting planets and induces atmospheric loss thereby impacting their habitability. We perform a detailed parameter space study using three dimensional magnetohydrodynamic simulations to understand the effect of changing stellar wind magnetic field and planetary magnetic field strengths on planetary magnetospheric topology and atmospheric losses. It is observed that the relative strengths of stellar and planetary magnetic fields play a significant role in determining the steady state magnetospheric configuration and atmospheric erosion. When the stellar field is strengthened or the planetary field is weakened, stellar magnetic field accumulation occurs at the day-side of the planet which forces the magnetopause to shift closer to its surface. The magnetotail opens up leading to the formation of Alfven wings in the night-side wake region. We demonstrate how reconnection processes and wind conditions lead to the bifurcation of the magnetotail current sheet. With increasing stellar wind magnetic field strength, the day-side reconnection point approaches the planet thereby enhancing mass loss. We establish an analytic equation which successfully captures the modeled mass-loss rate variations of planets with changing magnetic field strengths. Our results are relevant for understanding how the interplay of stellar and planetary magnetism influence (exo)planetary environments and their habitability in star-planet systems with differing relative magnetic field strengths, or in a single star-planet system over the course of their evolution with age.

Unified Astronomy Thesaurus concepts: Stellar winds (1636); Magnetohydrodynamical simulations (1966); Planetary magnetospheres (997); Exoplanet atmosphere (487); Exoplanet atmospheric evolution (2308); Star-planet interactions (2177); Stellar magnetic fields (1610)

Sakshi Gupta, Arnab Basak, Dibyendu Nandy

## 1 Introduction

A star interacts with the planets it hosts via magnetized stellar wind which directly impacts their magnetospheric configuration and atmospheric loss. Gravitational (tidal) interactions can be neglected if the planet lies far away (as is the case for the Sun-Earth system), and in such a scenario the stellar wind independently affects the planetary environment without the possibility of the coronal structure of the star getting modified. The evolution of planetary atmospheres and the nature of their interaction with the stellar wind are profoundly affected by changes in the respective magnetic field strengths (Cohen et al., 2015; Basak and Nandy, 2021). The discovery of several exoplanets (Mayor and Queloz, 1995; Pepe et al., 2014; Lunine et al., 2009) has sparked interest in exploring the signatures of life-sustaining conditions and understanding habitability from the perspective of the origin and evolution of planetary atmospheres (Lammer et al., 2009; Lammer, 2013; Pollack and Yung, 1980; Cridland et al., 2017). Habitability - in the astrophysical context - depends on the ability of the stellar-wind-forced planet to hold on to its atmosphere (Alvarado-Gomez et al., 2020), among other aspects; this work focuses on stellar wind-planet interactions.
The age of the star determines its activity which dictates the properties of the outflowing wind. Observations confirm the varying magnetic activity of the Sun during its evolutionary phases (Nandy and Martens, 2007; Vidotto, 2021), as well as variations in field strength of solar system planets and stars outside our solar system (Stevenson, 1983; Kiefer et al., 2017; J.-D. do Nascimento et al., 2016). Exploration of magnetic fields on exoplanets is a research pursuit of high topical interest (Oklopcic et al., 2020; Griessmeier, 2015). With the detection of diverse magnetic activity in a number of star-planet systems, the question that naturally arises is - what is the consequence of magnetic interactions of (exo)planets with their host star? This is a critical question because the answer to this determines whether a planet can retain its atmosphere during its long-term interaction with the harboring star and consequently, its habitability time-scale. We explore how the nature of this interaction is determined by the relative strengths of the stellar wind magnetic field and planetary magnetosphere. Several efforts have been made to understand how stellar winds influence the planetary magnetosphere as well as its atmosphere (Lammer et al., 2012; See et al., 2014; Vidotto and Cleary, 2020; Harbach et al., 2021; Nandy et al., 2021). Variation in stellar magnetic activity (Nandy, 2004; Nandy and Martens, 2007; Tripathi et al., 2021; Brun et al., 2014; Vidotto, 2021; Nandy et al., 2021) introduces variations in stellar radiation (Spina et al., 2020), stellar wind speed (Finley et al., 2018), magnetic field strength of plasma winds (Vidotto et al., 2015) and may lead to magnetic storms as well. As a consequence of these interconnected phenomena, the planetary dipolar field is deformed resulting in a magnetospheric structure that may vary from what we often observe on the Earth (Gallet et al., 2016). In this paper, we perform a thorough parameter space study using numerical modelling for understanding interactions in star-planet systems and the effect of stellar activity evolution on planets with different magnetospheric strengths. The magnetized stellar wind and the planet's inherent magnetic field are projected to have a significant impact on planetary habitability and atmospheric mass loss (Nandy, 2004; Khodachenko et al., 2008; Nandy et al., 2017). We carry out three dimensional global magnetohydrodynamic (MHD) simulations using a broad range of stellar and planetary magnetic field values for providing a comprehensive picture of the interaction process. Most extrasolar giant planets that have been discovered so far are anticipated to have significant ionospheres. To simplify the global simulation and based on prior studies (Koskinen et al., 2010; Yan et al., 2019; Gronoff et al., 2011), we consider the planet to be surrounded by a perfectly conducting plasma atmosphere. We study the impact of increasing stellar wind magnetic field on magnetopause stand-off distance and explore the conditions that lead to the magnetic pile-up in the day-side region. Another significant occurrence we examine is the formation of Alfven wings where the magnetotail opens up when the upstream wind Alfvenic Mach number is low and the current sheet length shortens and bifurcates in the night-side region of the planet. 
We find a criterion for current sheet bifurcation that is consistent with all cases of our study and establish an analytical relation for the variation of mass loss rate with changing stellar and planetary magnetic fields. This study is relevant for any star-planet system and in those systems in which the magnetic fields of either or both entities vary considerably with time. Our findings are significant for a better understanding of (exo)planetary atmospheres and determining the impact of varying magnetic field strength on habitability. This paper begins with the model description in section 2 in which we give an overview of the theory and numerical setup employed in this study. The detailed findings are presented in section 3 followed by the conclusion in section 4. ## 2 Model Description We adapt the Star-Planet Interaction Module (CESSI-SPIM) developed by Das et al. (2019) for simulating different configurations of stellar wind and planetary magnetospheres. The governing set of resistive MHD equations are given by: \[\partial_{t}\rho+\nabla\cdot(\rho\vec{v})=0 \tag{1}\] \[\partial_{t}\vec{v}+(\vec{v}\cdot\nabla)\vec{v}+\frac{1}{4\pi\rho}\vec{B} \times(\nabla\times\vec{B})+\frac{1}{\rho}\nabla P=\vec{g} \tag{2}\] \[\partial_{t}E+\nabla\cdot[(E+P)\vec{v}-\vec{B}(\vec{v}\cdot\vec{B})+(\eta \cdot\vec{J})\times\vec{B}]=\rho\vec{v}\cdot\vec{g} \tag{3}\] \[\partial_{t}\vec{B}+\nabla\times(\vec{B}\times\vec{v})+\nabla\times(\eta \cdot\vec{J})=0 \tag{4}\] where the symbols \(\rho\), \(v\), \(\vec{B}\), \(\vec{J}\), \(P\), \(E\), and \(\vec{g}\) denote density, velocity, magnetic field, current density, pressure, total energy density and gravitational acceleration due to the planet respectively. The expression for total energy density is given by \[E=\frac{P}{\gamma-1}+\frac{\rho v^{2}}{2}+\frac{B^{2}}{8\pi} \tag{5}\] for an ideal gas equation of state. The computational domain extends from -80 \(R_{p}\) to 200 \(R_{p}\) in the \(x\)-direction, -45 \(R_{p}\) to 45 \(R_{p}\) in the \(y\)-direction and -200 \(R_{p}\) to 200 \(R_{p}\) in the \(z\)-direction, where \(R_{p}\) is the radius of the planet which is located at the origin of the Cartesian box. The region extending from -2 \(R_{p}\) to 2 \(R_{p}\) is resolved by 12 grids in all three directions i.e. 1 \(R_{p}\) is resolved using 3 grids in the planetary vicinity. Keeping in mind our modest computational facility, a combination of stretched and uniform grid types is used in the \(x\) and \(z\) directions. In the \(x\)-direction, the regions extending from -80 \(R_{p}\) to -2 \(R_{p}\) and 2 \(R_{p}\) to 10 \(R_{p}\) are resolved using 156 and 16 grids respectively i.e. 1 \(R_{p}\) is resolved using 2 grids. Grids with stretching ratio 1.005175 are used from 10 \(R_{p}\) to 200 \(R_{p}\). In the \(y\)-direction, the regions from -45 \(R_{p}\) to -2 \(R_{p}\) and from 2 \(R_{p}\) to 45 \(R_{p}\) are resolved by 86 grids each i.e. 1 \(R_{p}\) is resolved using 2 grids. In the \(z\)-direction, the regions from -22 \(R_{p}\) to -2 \(R_{p}\) and 2 \(R_{p}\) to 22 \(R_{p}\) are resolved by 40 grids each i.e. 1 \(R_{p}\) is resolved using 2 grids whereas grids with stretching ratio 1.004958 are used for the regions extending from -200 \(R_{p}\) to -22 \(R_{p}\) and 22 \(R_{p}\) to 200 \(R_{p}\). In this study, an Earth-like planet is considered with similar mass, radius and tilt while the stellar wind is assumed to have a speed of 270 km/s corresponding to quiet times. 
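Because the Alfvenic Mach number of the incident wind turns out to control much of the downstream morphology (Section 3), it is useful to note its upstream value for the parameter range explored below. The short sketch that follows is our own back-of-the-envelope illustration (not part of CESSI-SPIM); it combines the 270 km/s wind speed quoted above with the wind density listed in Table 1, in Gaussian CGS units.

```python
# Back-of-the-envelope sketch (not part of CESSI-SPIM): upstream Alfven speed
# v_A = B / sqrt(4 pi rho) and Alfvenic Mach number M_A = v_sw / v_A for the
# range of stellar wind field strengths explored below. Gaussian CGS units;
# 1 nT = 1e-5 G; rho_sw = 4 * rho_amb and v_sw are taken from Table 1.
import math

rho_sw = 4 * 1.5e-23                  # g cm^-3
v_sw = 2.7e7                          # cm s^-1 (270 km/s)

for b_nt in [0.5, 2, 10, 30, 50, 75]:
    B = b_nt * 1e-5                   # Gauss
    v_a = B / math.sqrt(4 * math.pi * rho_sw)
    print(f"B_sw = {b_nt:5.1f} nT  ->  v_A = {v_a / 1e5:6.1f} km/s,  M_A = {v_sw / v_a:6.2f}")

# The flow becomes near- or sub-Alfvenic only for the strongest fields
# (M_A ~ 1.5 at 50 nT and ~1 at 75 nT), which is where the Alfven-wing
# behaviour discussed in Section 3 is most pronounced.
```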
We perform a set of simulations by varying the magnetic field strength of both the planetary magnetosphere (\(0.1B_{e},0.5B_{e},1B_{e},2B_{e}\,\text{and}\,5B_{e}\)) and stellar wind (0.1 nT, 0.5 nT, 1 nT, 2 nT, 5 nT, 10 nT, 30 nT, 50 nT and 75 nT) with all possible combinations. Here \(B_{e}=3.1\times 10^{4}\) nT denotes surface equatorial magnetic field strength of the Earth's dipolar field. In order to study the effect of stellar activity evolution on planetary atmospheric loss which directly impacts habitability, we initialize a conducting plasma atmosphere surrounding the planet. The density profile in the vicinity of the planet is defined by \[\begin{gathered}\rho_{pl}=10^{6}\rho_{atm}\qquad r\leq R_{p}\\ \rho_{atm}(r)=\rho_{pl}+\frac{(\rho_{amb}-\rho_{pl})}{2}\Big{[} \text{tanh}\Big{\{}9\Big{(}\frac{r}{R_{p}}-2\Big{)}\Big{\}}+1\Big{]}\\ R_{P}\leq r\leq 3R_{p}\,,\end{gathered} \tag{6}\] where \(\rho_{pl}\) and \(\rho_{amb}\) are densities of the planet and ambient medium respectively and \(r\) is the radial distance from the origin. For more details on the justifications for the choice of the above atmospheric profile, interested readers may refer to Sec. 2.1 of Basak & Nandy (2021). The pressure distribution in the atmosphere is evaluated by numerical integration of the equation \[\frac{dP}{dr}=-\rho_{atm}(r)g(r) \tag{7}\] Here, the gravitational field is given by \(g(r)=-\frac{GM_{pl}}{r^{2}}\) where \(M_{pl}\) is the planetary mass. The pressure inside the planet is evaluated by extrapolating the value of pressure at planet-ionosphere boundary. The density and pressure in the region \(r>3R_{pl}\) is initialized to be equal to that in the ambient medium (Table 1). \begin{table} \begin{tabular}{l c c} \hline \hline Physical quantity & Notation & Value used \\ \hline Density in ambient medium & \(\rho_{amb}\) & 1.5 \(\times 10^{-23}\) g cm\({}^{-3}\) \\ Pressure in ambient medium & P\({}_{amb}\) & 2.49 \(\times 10^{-11}\) dyne cm\({}^{-2}\) \\ Density of stellar wind & \(\rho_{sw}\) & 4 \(\rho_{amb}\) \\ Velocity of stellar wind & v\({}_{sw}\) & 2.7 \(\times 10^{7}\) cm s\({}^{-1}\) \\ Adiabatic index & \(\gamma\) & 5/3 \\ Planetary mass & M\({}_{pl}\) & 5.972 \(\times 10^{27}\) g \\ Planetary radius & R\({}_{pl}\) & 6.371 \(\times 10^{8}\) cm \\ Intrinsic planetary magnetic field & B\({}_{pl}\) & 0.1B\({}_{e}\) - 5B\({}_{e}\)\({}^{*}\) \\ Stellar wind magnetic field & B\({}_{sw}\) & 0.1 nT - 75 nT \\ Magnetic diffusivity & \(\eta\) & \(10^{13}\) cm\({}^{2}\) s\({}^{-1}\) \\ Magnetospheric tilt angle & \(\theta_{pl}\) & \(11^{\circ}\) \\ \hline \end{tabular} *_The symbol B\({}_{e}\) represents the magnetic field for the case of the Earth’s dipole._ \end{table} Table 1: Values of physical parameters used in the simulations and their respective notations. The input parameters at the stellar wind injection boundary (at \(x\)= -80 \(R_{p}\) in \(yz\) plane) are obtained by solving the Rankine-Hugoniot magnetized jump conditions for the given shock velocity. A southward oriented (SIMF) stellar wind is injected with density \(\rho_{sw}\) = 4 \(\rho_{amb}\). For all other boundary faces of the Cartesian box, force-free outflow boundary condition is implemented. The magnetic fields of the wind and (or) planet are varied in each run and the steady state configuration of the planetary magnetosphere and atmospheric loss are analyzed. 
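To make the initial atmosphere of Eqs. (6)-(7) concrete, the sketch below tabulates the tanh density profile and integrates the hydrostatic balance \(dP/dr=-\rho_{atm}(r)\,GM_{pl}/r^{2}\) numerically. It is an illustration rather than the module's source: we assume the interior value is \(\rho_{pl}=10^{6}\rho_{amb}\), which is how we read Eq. (6), and that the integration is anchored to the ambient pressure at \(r=3R_{p}\) and carried inward to the planetary surface.

```python
# Sketch of the initial atmosphere of Eqs. (6)-(7); illustrative only.
# Assumptions stated above: rho_pl = 1e6 * rho_amb for the interior value, and
# the hydrostatic integration starts from P_amb at r = 3 R_p and runs inward.
import numpy as np

G = 6.674e-8                          # cm^3 g^-1 s^-2
M_pl, R_p = 5.972e27, 6.371e8         # g, cm (Table 1)
rho_amb, P_amb = 1.5e-23, 2.49e-11    # g cm^-3, dyne cm^-2 (Table 1)
rho_pl = 1e6 * rho_amb                # assumed planetary interior density

def rho_atm(r):
    """Tanh transition from ~rho_pl at the surface to rho_amb at 3 R_p (Eq. 6)."""
    return rho_pl + 0.5 * (rho_amb - rho_pl) * (np.tanh(9.0 * (r / R_p - 2.0)) + 1.0)

# dP/dr = -rho_atm(r) * G M_pl / r^2, integrated inward from r = 3 R_p (Eq. 7)
r = np.linspace(3 * R_p, R_p, 2001)   # radii in decreasing order
dPdr = -rho_atm(r) * G * M_pl / r**2
P = P_amb + np.concatenate(([0.0], np.cumsum(0.5 * (dPdr[1:] + dPdr[:-1]) * np.diff(r))))

print(f"P(3 R_p) = {P[0]:.2e}, P(R_p) = {P[-1]:.2e} dyne cm^-2")
# The surface value is what the paper extrapolates into the planetary interior.
```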
## 3 Results and Discussion When the stellar wind impinges upon the planetary magnetosphere, a dynamic pressure balance is achieved at the day-side of the planet resulting in the formation of a magnetopause which inhibits the penetration of stellar plasma and protects the inner atmosphere from erosion. The balance between thermal, magnetic and dynamic pressures determines the shape and location of the magnetopause. An earlier study by Basak and Nandy (2021) considers the magnetopause stand-off distance (\(R_{mp}\)) to be the location of balance between the dynamic \(P_{d}\) (incoming wind) and magnetic pressure \(P_{m}\) (near the planet) curves. This approach works well if the stellar wind magnetic field (\(B_{sw}\)) is weak since the magnetic pressure of the incoming wind is very small and may be neglected in the calculations. However, when the stellar magnetic field is relatively stronger, it accumulates outside the magnetopause and the magnetic pressure in the magnetosheath region cannot be ignored any more. Our study shows that the point of intersection between \(P_{m}\) and \(P_{d}\) migrates outward nearer to the bowshock as \(B_{sw}\) is increased (Figure 1). For a strongly magnetized wind, it is found that the dynamic pressure \(P_{d}\) is sufficiently high even after this intersection point which indicates that stellar wind plasma is able to penetrate beyond this location and therefore, it cannot be regarded as the magnetopause stand-off distance. Thus, we need a different approach for evaluating the magnetopause stand-off distance for the case of strongly magnetized winds. It has been shown in previous studies (Schield, 1969; Shue and Chao, 2013; Lu et al., 2015) that at the magnetopause, the sum of magnetic and thermal pressures (denoted by \(P_{th+m}\) henceforth) on the planetary side balances the incoming stellar wind (see also Shue and Chao, 2013). These considerations result in the relation \[\left[P_{th}+P_{m}\right]_{\rm planet-side}=\left[P_{th}+P_{m}\right]_{\rm star -side}. \tag{8}\] The distance from the planet where the \(P_{th+m}\) curve peaks and its gradient becomes zero along the subsolar line is then considered to be the magnetopause stand-off distance. The intersection of \(P_{th}\) and \(P_{d}\) curves gives the location of bow shock stand-off distance. Figure 1 shows the pressure balance for obtaining bow shock distance (\(R_{bs}\)) and the magnetopause stand-off distance (\(R_{mp}\)) for the case of planetary magnetosphere \(B_{p}=B_{e}\) (\(B_{e}=3.1\times 10^{4}\) nT) with varying stellar wind magnetic field. Note that only southward interplanetary magnetic field (SIMF) is considered for this study. The total thermal pressure (\(P_{th}\)) are estimated from the input parameters (Mignone et al., 2007) while dynamic pressure of stellar wind (\(P_{d}=\frac{\rho_{sw}v^{2}}{2}\)) and magnetic pressure (\(P_{m}=\frac{B^{2}}{8\pi}\)) are evaluated from evolving simulation variables. As the stellar wind magnetic field strength is increased, the magnetopause moves closer to the planet and the bow shock moves further away. For weakly magnetized stellar wind as shown in Figure 1(a)-(c), the stellar field accumulation is not significant and the intersection of \(P_{d}\) and \(P_{m}\) gives the \(R_{mp}\) location as depicted in Basak and Nandy (2021). As \(B_{sw}\) increases further, it is evident that stellar wind magnetic pressure builds up significantly in front of the planet (Figure 1 (d)-(f)). 
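The two stand-off diagnostics described above are straightforward to apply in post-processing. The sketch below is our illustration (the array names are hypothetical): given one-dimensional profiles of thermal, magnetic and dynamic pressure sampled along the subsolar line, the magnetopause stand-off distance is taken at the peak of \(P_{th}+P_{m}\), where its gradient vanishes, and the bow shock at the outermost crossing of the \(P_{th}\) and \(P_{d}\) curves.

```python
# Sketch of the stand-off diagnostics; x, p_th, p_m, p_d are assumed to be 1-D
# arrays sampled along the subsolar line, with x the distance from the planet's
# centre increasing toward the star (e.g. extracted from a simulation snapshot).
import numpy as np

def standoff_distances(x, p_th, p_m, p_d):
    """Return (R_mp, R_bs) estimated from the subsolar pressure profiles."""
    # magnetopause: peak of P_th + P_m, i.e. where its gradient changes sign
    r_mp = x[np.argmax(p_th + p_m)]

    # bow shock: crossing of P_th and P_d; take the crossing farthest upstream
    diff = p_th - p_d
    crossings = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    if crossings.size == 0:
        return r_mp, None
    i = crossings[-1]
    frac = diff[i] / (diff[i] - diff[i + 1])   # linear interpolation in the cell
    r_bs = x[i] + frac * (x[i + 1] - x[i])
    return r_mp, r_bs
```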
With the increase in \(B_{sw}\), both thermal and magnetic pressure increase in the magnetosheath region initially up to some distance. However, the enhancement in magnetic pressure gets prominent gradually due to the accumulation of stellar wind magnetic field outside the magnetopause. Just after the magnetopause, magnetic pressure drops to nearly zero representing the day-side reconnection \(X\) point. Thermal pressure rapidly increases near the magnetopause due to energy conversion in the magnetic reconnection region. (see e.g. Lu et al., 2015) Pressure balance for different planetary magnetic field strength (\(B_{p}\)) keeping the stellar wind magnetic field constant reveals that with decreasing \(B_{p}\) both bow shock and magnetopause move towards the planet (Basak and Nandy, 2021). This implies that for lower values of \(B_{p}\), stellar wind is able to penetrate closer to the planet resulting in greater atmospheric loss. We find significant increment of magnetic pressure in the magnetosheath region for the case of weaker planetary magnetic field as compared to stronger field cases. Thus, either strengthening the stellar wind magnetic field or weakening the planetary magnetic field leads to magnetic field accumulation at the day side of the planet. Hence, the ratio of the intrinsic planetary magnetic field and stellar wind magnetic fields is an essential factor which determines the dynamics of the star-planet interaction. This is in keeping with expectations. The interaction of plasma winds and planetary magnetosphere results in certain deformation in the planetary dipolar field. This deformation leads to a tear-drop shaped magnetospheric structure that we usually observe for the Earth. A series of simulation results of interaction of planetary magnetosphere with stellar wind magnetic field is shown in Figure 2. In each subplot, magnetic field lines (white streamlines) are traced over the current density (colormap) in the \(y=0\) plane with stellar wind plasma flowing from left to right. In plots (a)-(c), the simulations show the formation of extended magnetotail structures for varying planetary magnetic field \(B_{p}=0.1\ B_{e}\), 1 \(B_{e}\), and 5 \(B_{e}\) respectively. Stellar wind magnetic field strength is fixed at \(B_{sw}=2\) nT for all cases. As \(B_{sw}\) is increased to 10 nT [panels (d)-(f)] for the same set of planetary magnetic field values, the magnetotail begins to open up and on further increase of \(B_{sw}\) to 50 nT [panels (g)-(i)] the lobes of the tail open up even further (Turpenney et al., 2020). The Alfven Mach number is an important control parameter that governs the behaviour of plasma embedded in magnetic field. With increasing stellar wind magnetic field strength, the upstream Alfven Mach number decreases which leads to the appearance of Alfven wings across the planet (Ridley, 2007). Figure 3 illustrates the sub-Alfvenic plasma interaction with the magnetized planet and the formation of three dimensional Alfven wings in the planetary magnetotail as shown by the region enclosed by Alfvenic Mach number \(M_{A}\) isosurface (\(M_{A}\leq 0.3\)). When stellar wind magnetic field is strong, field lines bend but do not strongly drape the planet. Chane et al. (2012) have shown the observational evidence of Alfven wings formation on the Earth during 24 and 25 May 2002. Several moons in our solar system including Ganymede, Io, Europa, and Enceladus have been observed to form Alfven wings. Saur, J. et al. 
(2013) have found signatures of sub-Alfvenic plasma interactions in several star-(exo)planet systems. With decreasing upstream Alfven Mach number, wings form at larger angles from the equatorial plane and the length of the current sheet becomes shorter i.e., reconnection takes place closer to the planet in the magnetotail region as shown in column-wise panels of Figure 2. The row-wise panels of Figure 2 show that for a given stellar wind magnetic field strength, Alfven wings are more prevalent in case of weaker planetary magnetic field. Here, we present only a few simulation results to show how the magnetotail opens up. A similar pattern is found for other simulations that involve all possible combinations of parameters (described in Table 1). Vernisse et al. (2017) have shown that interactions which involve either a weakly magnetized obstacle or sub-Alfvenic upstream plasma velocities or both can lead to the dominance of an Alfven wing structure - and this is corroborated by our simulations. To analyze the opening up of the magnetotail in greater detail, we locate the distance from the planet where the current sheet bifurcates in the night-side region. Figure 4(a) shows the current density plot in \(xz\)-plane (left) and corresponding current density magnitude (\(J_{mag}\)) variation along the subsolar line (right). Since the current sheet has a finite thickness along \(y\) and \(z\) directions, we consider a hypothetical rectangular box outside the planet's atmosphere encapsulating the current sheet and extending from -0.5 \(R_{p}\) to 0.5 \(R_{p}\) along both \(y\) and \(z\) directions. Thereafter, the spatially averaged \(J_{mag}\) is calculated on \(yz\) slices of the cube and its variation along \(x\)-direction is plotted. We find that the current sheet starts to bifurcate typically when the current density magnitude drops off to about 45% - 50% of its peak value. These are represented by line B and line A respectively in Figure 4(a). This result is robust considering all different sets (Table 1) of our simulations. The current sheet, extending from the planet to its bifurcation point on the night-side region, is hereby referred to as the current sheet length. Variation of current sheet length with planetary & stellar wind magnetic field is shown in Figure 4(b). As the planetary magnetic field \(B_{p}\) increases, the bifurcation point recedes away from the planet. For \(B_{p}=2B_{e}\) we do not see any bifurcation within the simulation domain until \(B_{sw}=30\) nT. For a stronger planetary magnetic field of \(B_{p}=5B_{e}\), bifurcation starts around \(B_{sw}\)= 50 nT onwards. It is evident from Figure 4(b) that the current sheet bifurcation point shifts nearer to the planet when either the stellar wind magnetic field increases or the planetary magnetic field weakens. We also investigate how the variations in stellar wind and planetary magnetic field strengths impact the mass-loss rate of planetary atmospheres. To compute the total mass-loss rate, we consider a cube of edge length 6.6 \(R_{p}\) (extending from -3.3 \(R_{p}\) to 3.3 \(R_{p}\) in all three directions) with its origin coinciding with the centre of the planet such that the whole planet including its atmosphere is enclosed within the cube. Mass loss rates are computed across all six faces of the cube which contribute to the total mass loss. The rate of atmospheric mass loss depends on the extent of stellar wind penetration into the planetary atmosphere. 
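A sketch of the mass-loss bookkeeping just described: for fields stored on a uniform Cartesian grid, the rate can be estimated as the net outward flux of \(\rho\mathbf{v}\) summed over the six faces of the cube. The function below is an illustration only; the name, the grid interface, and the omission of unit conversion and of any masking that separates atmospheric material from the ambient stellar wind are our assumptions, and the actual simulation diagnostics may differ.

```python
import numpy as np

def mass_loss_rate(rho, vx, vy, vz, x, y, z, half_width=3.3):
    """Net outward mass flux through a cube of half-width `half_width` (grid units,
    here R_p) centred on the planet; rho, vx, vy, vz are arrays indexed as [i, j, k]
    on the coordinates x[i], y[j], z[k]. The cube is assumed to lie inside the grid."""
    dx, dy, dz = x[1] - x[0], y[1] - y[0], z[1] - z[0]
    ix0, ix1 = np.searchsorted(x, [-half_width, half_width])
    iy0, iy1 = np.searchsorted(y, [-half_width, half_width])
    iz0, iz1 = np.searchsorted(z, [-half_width, half_width])
    sx, sy, sz = slice(ix0, ix1), slice(iy0, iy1), slice(iz0, iz1)

    flux  = np.sum((rho * vx)[ix1, sy, sz]) * dy * dz   # +x face (outward normal +x)
    flux -= np.sum((rho * vx)[ix0, sy, sz]) * dy * dz   # -x face (outward normal -x)
    flux += np.sum((rho * vy)[sx, iy1, sz]) * dx * dz   # +y face
    flux -= np.sum((rho * vy)[sx, iy0, sz]) * dx * dz   # -y face
    flux += np.sum((rho * vz)[sx, sy, iz1]) * dx * dy   # +z face
    flux -= np.sum((rho * vz)[sx, sy, iz0]) * dx * dy   # -z face
    return flux
```

The extent of stellar wind penetration that controls this loss is quantified next.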
This is essentially denoted by a point at the dayside -- the reconnection \(X\) point -- beyond which planetary magnetic field lines merge with the stellar field. If the magnetic field strength of the stellar wind is increased gradually keeping that of the planet fixed, the \(X\)-point is expected to shift closer to the planet inducing greater atmospheric loss. However, the distance of the \(X\)-point from the planet does not decrease continually with increasing stellar field. The reconnection neutral point saturates at a certain level and then onwards, the stellar field starts accumulating at the day-side [Figure 1 (d)-(f)] similar to that of an imposed magnetosphere; at this stage mass loss rate attains a limiting value. Figure 5 (a) confirms increase in atmospheric mass-loss rate with increasing stellar wind magnetic field strength. Figure 5 (b) shows the plot of the \(X\)-point distance from the planetary surface with increasing stellar wind magnetic field for different values of the planetary field. This signifies that for a weak planetary field, the \(X\)-point distance saturates to a steady value at lower stellar magnetic fields while for a stronger planetary field, the saturation occurs for higher values of the stellar wind field. Figure 5 (a) shows that for a weaker planetary field, the loss rate increases initially and then saturates at lower values of the stellar wind field while for the stronger planetary field, the loss rate requires higher values of stellar wind field to attain saturation. The atmospheric loss rate is found to have a strong negative correlation with the X-point distance from the planet. The Pearson correlation coefficient values are -0.9800, -0.9683, -0.9753 and -0.9659 for planetary magnetic fields \(0.5B_{e}\), \(1B_{e}\), \(2B_{e}\), and \(5B_{e}\) respectively. Figure 5(a) also indicates that for a conducting plasma atmosphere, as considered in the present study, the mass loss rate increases with increasing strength of the planetary magnetosphere. This is because a stronger magnetosphere provides a larger interaction area with the ambient medium and therefore, may lead to greater loss when it interacts with the stellar wind. Moreover, as the atmospheric matter in our study is not considered to be charge neutral, it is expected that the atmospheric plasma will take part in magnetic reconnections and be lost in the process. Earlier studies show that the atmospheric matter stretches out and escapes through polar caps and cusps during reconnections (Gunell et al., 2018; Egan et al., 2019; Sakata et al., 2020; Carolan et al., 2021) resulting in a larger atmospheric mass loss for a stronger magnetosphere. Gunell et al. (2018) shows that an intrinsic magnetic field does not necessarily protect a planet against atmospheric escape and the escape rate can be higher even for a highly magnetized planet. Therefore the role of intrinsic magnetic field in the protection of planetary atmosphere is a complex question. ### Semi-Analytical Expression for Planetary Mass Loss Rates Our study shows that the global dynamics of the planet, current sheet length and atmospheric mass loss rates are significantly influenced by the relative strength of the planetary and stellar wind magnetic fields. For different sets (Table 1) of our simulations, Figure 6 shows that ratio of planetary to stellar wind magnetic fields is an important factor in governing atmospheric mass loss rates. 
Based on theoretical considerations we establish an analytical relationship between these two in order to ascertain the dependence of mass loss rate on the variation of planetary and stellar wind magnetic fields. The unpeturbed planetary dipolar magnetic field falls as \(\frac{1}{r^{3}}\), thus pressure balance at magnetopause can be written as \[p_{dyn\,sw}+\frac{B_{sw}^{2}}{2\mu_{0}}+p_{th\,sw}=\frac{k_{m}^{2}B_{p}^{2}R_ {p}^{6}}{2\mu_{0}R_{mp}^{6}}+p_{th\,p}\,, \tag{9}\] where dynamic pressure due to stellar wind, thermal pressure due to stellar wind and thermal pressure due to planetary magnetosphere at magnetopause are depicted by \(p_{dyn\,sw}\) and \(p_{th\,sw}\), \(p_{th\,p}\) respectively. The symbols \(B_{sw}\) and \(B_{p}\) represent the stellar wind and planetary magnetic field strengths respectively while \(R_{mp}\) is the magnetopause standoff distance in terms of planetary radius \(R_{p}\). The factor \(k_{m}\) is a measure of the compression of the dipolar magnetic field by the stellar wind at the magnetopause (Mead and Beard, 1964; Tsyganenko, 2005). From equation 9, the magnetopause stand-off distance can be expressed as \[\Big{(}\frac{R_{mp}}{R_{p}}\Big{)}=\Big{[}\frac{k_{m}^{2}(\frac{B_{p}}{B_{sw}} )^{2}}{1+\frac{C}{B_{p}^{2}}(\frac{B_{p}}{B_{sw}})^{2}}\Big{]}^{1/6}\,, \tag{10}\] where \(C=2\mu_{0}(p_{dyn\,sw}+p_{th\,sw}-p_{th\,p})\). As pointed out in section 3, the atmospheric mass loss rate (\(\dot{M}\)) is linearly anti-correlated with the day side X-point distance (\(X_{point}\)) and can be expressed as \(\dot{M}=-AX_{point}+B\) where \(A\) and \(B\) are fitting parameters that need to be determined. Moreover, in case of an SIMF the magnetospheric stand-off distance, \(R_{mp}\) is approximately equal to the day side reconnection point (\(X_{point}\)). Therefore, equation 10 can be expressed as \[\dot{M}\simeq-AR_{p}\Big{[}\frac{(\frac{B_{p}}{B_{sw}})^{2}}{1+\frac{C}{B_{p}^ {2}}(\frac{B_{p}}{B_{sw}})^{2}}\Big{]}^{1/6}+B \tag{11}\] Equation 11 gives an analytical relationship between mass loss rate and the ratio of planetary and stellar wind magnetic field strengths. Figure 6 shows that our modelled output of atmospheric mass loss rate fits reasonably well with the analytical expression given by 11. The value of \(R_{square}\) (goodness of fit) for the modelled fit is greater than 0.99 for all instances of planetary magnetic fields. ## 4 Conclusions In this study, we have used 3D global magnetohydrodynamic modelling to explore the effect of varying stellar and planetary magnetic field strengths on the steady state magnetospheric configuration and atmospheric loss rates of planets. This study is important for understanding how the magnetic activity evolution of stars affects the environment and habitability of planets with different magnetospheric strengths. Our results are therefore applicable to a wide domain of far-out (exo)planetary systems. Computing pressure balance at the day-side of the planet illustrates that either strengthening the stellar wind magnetic field or weakening the planetary magnetosphere results in stellar field accumulation in front of the planet similar to that of an imposed magnetosphere. In such a scenario, the magnetopause stand-off distance cannot be obtained using the conventional procedure of balancing the dynamic pressure of the incoming wind and magnetic pressure close to the planet but rather the sum of both magnetic and thermal pressures need to be considered. 
It is found that for a particular strength of the planetary magnetosphere, increasing the magnetic field of the stellar wind makes the bowshock shift away from the planet while the magnetopause is formed closer, implying greater wind penetration. We find that the relative strength of planetary and stellar wind magnetic fields plays a critical role in determining the magnetic reconnection in the magnetotail region. The long extended planetary magnetotail which exists for a moderately magnetized stellar wind or strong intrinsic planetary magnetic field ceases to exist when the stellar wind magnetic field becomes extremely strong (i.e. the Alfvenic Mach number decreases) or the planetary magnetic field becomes very weak. This leads to the formation of Alfven wings in the night-side wake region. Our findings suggest that the magnetotail begins to open up and the point of bifurcation shifts closer to the planet when either the magnetic strength of the stellar wind is increased or the intrinsic planetary magnetic field is reduced. We observe that the magnetotail current sheet starts to bifurcate when the current density magnitude drops to about 45-50% of its maximum value. We also find that the atmospheric mass-loss rate increases with increasing stellar wind magnetic field due to greater wind penetration close to the planet. The atmospheric loss rate saturates at lower values of the stellar wind magnetic field for a weak planetary field. In the case of a stronger planetary field, however, a stronger stellar wind magnetic field is required for the saturation of the atmospheric loss rate. The variation of the day-side \(X\)-point distance from the planet with stellar wind magnetic field provides a thorough explanation for this saturation, since the day-side \(X\)-point distance and the atmospheric loss rate are found to have a strong negative correlation. By modelling the impact of the planetary magnetic field on atmospheric escape processes, we corroborate the finding of Gunell et al. (2018) that the escape rate can be higher for strongly magnetized planets. We establish an analytical relationship between the mass loss rate and the ratio of planetary and stellar wind magnetic fields. Our study shows that the global dynamics of the planet, the current sheet length, and the atmospheric mass loss rate are significantly influenced by the relative magnetic field strengths of the star and the planet. The results of our numerical simulations indicate that the analytical expression derived in this study describes the modelled atmospheric mass loss rates quite well. The detailed parameter space study presented in this paper will be helpful for identifying stellar and planetary conditions that lead to a particular magnetospheric configuration in solar and exoplanetary systems. The results also illustrate the impact of magnetic field variability on atmospheric loss rates, which is relevant for understanding the habitability of (exo)planets. ## 5 Acknowledgments The authors thank Souvik Roy, Soumyaranjan Dash and Chitradeep Saha for useful discussions. S.G. acknowledges fellowship support from the University Grants Commission, Government of India. The development of the Star-planet Interaction module (CESSI-SPIM) and the simulation runs were carried out at the Center of Excellence in Space Sciences India (CESSI), which is funded by IISER Kolkata, Ministry of Education, Government of India.
2305.00698
Remarks on Interior Regularity Criteria Without Pressure for the Navier-Stokes Equations
In this note we investigate interior regularity criteria for suitable weak solutions to the 3D Navier-Stokes equations, and show that the solutions are regular in the interior if the $L^p_tL_x^q(Q_1)$ norm of the velocity is sufficiently small, where $1\leq \frac{2}{p}+\frac{3}{q}<2$ and $2\leq p\leq \infty$. It improves the recent result for $p,q>2$ by Kwon \cite{Kwon} (J. Differential Equations 357 (2023), 1--31.), and also generalizes Chae-Wolf's $L_t^\infty L_x^{\frac32+}$ criterion \cite{CW2017} (Arch. Ration. Mech. Anal. 225 (2017), no. 1, 549--572.).
Shuai Li, Wendong Wang, Daoguo Zhou
2023-05-01T07:44:06Z
http://arxiv.org/abs/2305.00698v1
# Remarks on interior regularity criteria without pressure for the Navier-Stokes equations ###### Abstract In this note we investigate interior regularity criteria for suitable weak solutions to the 3D Navier-Stokes equations, and show that the solutions are regular in the interior if the \(L^{p}_{t}L^{q}_{x}(Q_{1})\) norm of the velocity is sufficiently small, where \(1\leq\frac{2}{p}+\frac{3}{q}<2\) and \(2\leq p\leq\infty\). It improves the recent result for \(p,q>2\) by Kwon [15] (J. Differential Equations 357 (2023), 1-31.), and also generalizes Chae-Wolf's \(L^{\infty}_{t}L^{\frac{3}{2}+}_{x}\) criterion [3] (Arch. Ration. Mech. Anal. 225 (2017), no. 1, 549-572.). Key words and phrases: Navier-Stokes equations, Interior Regularity, Suitable weak solutions 2010 Mathematics Subject Classification: Primary 76D03; 76D05; Secondary 35B33; 35Q35 ## 1. Introduction Consider the 3D Navier-Stokes equations describing a viscous incompressible fluid in \(\mathbb{R}^{3}\times(0,T)\): \[\left\{\begin{array}{l}\partial_{t}u-\Delta u+u\cdot\nabla u+\nabla\pi=0,\\ \operatorname{div}u=0\end{array}\right. \tag{1.1}\] with a smooth and rapidly decaying solenoidal initial vector field \(u(x,0)=u_{0}(x)\) in \(\mathbb{R}^{3}\). Here \(u(x,t)\) denotes the velocity of the fluid and the scalar function \(\pi(x,t)\) denotes the pressure. In a seminal paper [18], Leray proved the global existence of weak solutions with finite energy to the Navier-Stokes equations in three dimensions. See also the global existence of weak solutions in a bounded domain by Hopf [13]. However, the regularity of weak solutions is still an outstanding open problem in mathematical fluid mechanics. One type of condition ensuring regularity is that \[\|u\|_{L^{p}((0,T);L^{q}(\mathbb{R}^{3}))}<+\infty,\quad\frac{2}{p}+\frac{3}{q}=1,\quad q\in[3,+\infty], \tag{1.2}\] and we refer to Ladyzenskaja [16], Prodi [19], Serrin [23], Struwe [24] and the references therein. The endpoint case of \(p=\infty,q=3\) is highly nontrivial, and was resolved by Escauriaza-Seregin-Sverak in [6]. In a series of papers [20, 21], Scheffer began the partial regularity theory of the Navier-Stokes equations. Caffarelli, Kohn and Nirenberg [2] improved the results of Scheffer by proving that the set \(\mathcal{S}\) of possible interior singular points of a suitable weak solution is of one-dimensional parabolic Hausdorff measure zero, i.e. \(\mathcal{P}^{1}(\mathcal{S})=0\), which rests on the following two \(\varepsilon\)-regularity criteria for suitable weak solutions to (1.1).
There is an absolute constant \(\varepsilon>0\) such that \(u\) is regular at \((0,0)\) if one of the following conditions holds: \[\|u\|_{L^{3}(Q_{1})}+\|u\pi\|_{L^{1}(Q_{1})}+\|\pi\|_{L^{\frac{5}{4}}L^{1}(Q_{1})}\leq\varepsilon, \tag{1.3}\] Wolf's result was further generalized by Wang, Wu and Zhou in [28] by proving that \[\|u\|_{L^{q}(Q_{1})}\leq\varepsilon_{0},\quad q>\frac{5}{2}.
\tag{1.8}\] Chae-Wolf in [3] also proved that \(u\in L^{\infty}_{t}L^{\frac{3}{2}+}_{x}\) implies regularity. Recently, Kwon [15] proved the interior regularity under the assumption \[\|u\|_{L^{r}_{t}L^{m}_{x}(Q_{2})}\leq\varepsilon_{0},\quad\text{with}\quad\frac{2}{r}+\frac{3}{m}<2,\quad r,m\in(2,+\infty],\] by using a compactness method for dissipative weak solutions, a notion first introduced by Duchon-Robert in [5]. For more results, we refer to [1, 4, 10, 28] etc. _It is interesting to ask whether the range of \(p,q\) can be relaxed to the range as in [11]_, that is to say, \[1\leq\frac{2}{p}+\frac{3}{q}\leq 2,\ 1\leq p,q\leq\infty.\] We investigate this issue in this note, and show that \(p\geq 2\) is sufficient when \(\frac{2}{p}+\frac{3}{q}=2-\). Our main result is as follows. **Theorem 1.2**.: _Suppose that \((u,\pi)\) is a suitable weak solution to (1.1) in \(Q_{1}\). For any \((p,q)\) satisfying \(1\leq 2/p+3/q<2\), \(2\leq p\leq\infty\), there exists a positive constant \(\varepsilon_{0}\) such that if_ \[\|u\|_{L^{p}L^{q}(Q_{1})}\leq\varepsilon_{0},\] _then \(u\) is regular at \((0,0)\)._ **Remark 1.3**.: _We say that \(u\) is regular at a certain point if there exists a neighborhood of this point in which \(u\) is bounded. The above theorem improves the result of Kwon [15] by covering the borderline case \(p=2\) with arbitrary \(q\), and also generalizes Chae-Wolf's \(u\in L^{\infty}_{t}L^{\frac{3}{2}+}_{x}\) criterion in [3]._ **Remark 1.4**.: _The restriction \(p\geq 2\) in Theorem 1.2 seems to be sharp, which comes from the harmonic part of the pressure projection. For example, the term \(\int_{Q_{\rho}}\nabla\pi_{h}\cdot\nabla\nabla\pi_{h}\cdot v_{B}\) (see the term \(M_{5}\) in (4.57)) in the proof forces \(p\geq 2\), since the harmonic parts \(\nabla\pi_{h,B}\) and \(\nabla\nabla\pi_{h,B}\) can only be controlled by \(\|u\|_{L^{p}_{t}L^{q}_{x}}\) due to the iteration._ The paper is organized as follows. In Section 2, we introduce some definitions and technical lemmas, in particular the Stokes decomposition of the pressure. In Section 3, we prove a Cacciopolli inequality, which plays an important role in our proof. Theorem 1.2 is proved in Section 4. Throughout this article, \(C\) denotes an absolute constant independent of \(u\) and may change from line to line. ## 2. Preliminaries: some technical lemmas ### Local pressure projection Let us introduce Wolf's pressure decomposition as in [29, 30]. For a bounded \(C^{2}\)-domain \(G\subset\mathbb{R}^{n}\) and \(1<s<\infty\), for any \(F\in W^{-1,s}(G)\), there exists a unique pair \((v,\pi)\in W^{1,s}_{0}\times L^{s}_{0}(G)\) which solves the following steady Stokes system in the weak sense, due to the \(L^{p}\)-theory of the steady Stokes system (see, for example, [7]) \[\left\{\begin{array}{ll}-\Delta v+\nabla\pi=F,&\text{in}\qquad G,\\ \text{div }v=0,&\text{in}\qquad G,\\ v=0,&\text{on}\qquad\partial G,\end{array}\right. \tag{2.9}\] where \(\pi\in L^{s}_{0}(G)\) means \[\pi\in L^{s}(G)\quad\text{with}\quad\int_{G}\pi dx=0.\] Define the operator \(E_{G}\) as follows: \[E_{G}:W^{-1,s}(G)\to W^{-1,s}(G),\quad E_{G}(F)=\nabla\pi,\] where \(\nabla\pi\) denotes the gradient functional in \(W^{-1,s}(G)\) defined by \[<\nabla\pi,\phi>=-\int_{G}\pi\nabla\cdot\phi dx,\quad\phi\in W^{1,s^{\prime}}_{0}(G).\] The operator \(E_{G}\) is bounded from \(W^{-1,s}(G)\) into itself with \(E_{G}(\nabla\pi)=\nabla\pi\) for all \(\pi\in L^{s}_{0}(G)\), and \[\|\nabla\pi\|_{L^{s}(G)}\leq C\|F\|_{W^{-1,s}(G)}.
\tag{2.10}\] The norm of \(E_{G}\) depends only on \(s\) and the geometric properties of \(G\). Specially, the norm of \(E_{G}\) is independent of \(G\), if \(G\) is a ball or an annulus, which is due to the scaling properties of the Stokes equation. Moreover, if \(F\in L^{s}(G)\), by the embedding \(L^{s}(G)\hookrightarrow W^{-1,s}(G)\) and the regularity of elliptic equations, there hold the estimate \[\|\pi\|_{L^{s}(G)}\leq C\|F\|_{L^{s}(G)}. \tag{2.11}\] ### Local suitable weak solutions In this subsection, we define the local suitable weak solution as in [29, 30]. **Definition 2.1**.: _Let \(\Omega\) be a domain in \(\mathbb{R}^{3}\) and let \(Q_{T}:=\Omega\times(-T,0)\). We say \(u\) is a local suitable weak solution to the Navier-Stoeks equations (1.1) in \(Q_{T}\) if (i). \(u\in L^{\infty}_{\rm loc}(-T,0;L^{2}_{\rm loc}(\Omega))\cap L^{2}_{\rm loc}(-T,0;W^{1,2}_{\rm loc}(\Omega))\); (ii). \(u\) is a distributional solution to (1.1), i.e. for every \(\varphi\in C^{\infty}_{c}(\Omega\times(-T,0))\) with \(\nabla\cdot\varphi=0\),_ \[\int\int_{\Omega\times(-T,0)}-u\cdot\partial_{t}\varphi-u\otimes u:\nabla \varphi+\nabla u:\nabla\varphi=0;\] _(iii). For every ball \(B\subset\Omega\) the following energy inequality holds: for almost all \(s\in(-T,0)\) and for all non negative \(\phi\in C^{\infty}_{c}(B\times(-T,0))\),_ \[\int|v_{B}(x,s)|^{2}\phi(x,s)dx+2\int\int|\nabla v_{B}(x,\tau)|^{ 2}\phi(x,\tau)dxd\tau \tag{2.12}\] \[\leq \int\int|v_{B}(x,\tau)|^{2}(\partial_{t}\phi+\Delta\phi)dxd\tau+ \int\int|v_{B}|^{2}(v_{B}-\nabla\pi_{h,B})\cdot\nabla\phi dxd\tau\] \[+2\int\int v_{B}\cdot\nabla\nabla\pi_{h,B}\cdot v_{B}\phi dxd\tau- 2\int\int\nabla\pi_{h,B}\cdot\nabla\nabla\pi_{h,B}\cdot v_{B}\phi dxd\tau\] \[+2\int\int(\pi_{1,B}+\pi_{2,B})v_{B}\cdot\nabla\phi dxd\tau,\] _with \(v_{B}=u+\nabla\pi_{h,B}\). Here,_ \[\nabla\pi_{h,B}=-E_{B}(u),\quad\nabla\pi_{1,B}=-E_{B}(u\cdot\nabla u),\quad \nabla\pi_{2,B}=E_{B}(\Delta u). \tag{2.13}\] Noting the properties of the projection operator \(E_{B}\) and using (2.10), (2.11), there holds \[\|\nabla\pi_{h,B}\|_{L^{s}(B)}\leq C\|u\|_{L^{s}(B)}\quad\text{for}\quad s>1, \tag{2.14}\] \[\|\pi_{1,B}\|_{L^{s^{\prime}}(B)}\leq C\|u\|^{2}_{L^{2s^{\prime}}(B)},\quad \text{for}\quad s^{\prime}>1, \tag{2.15}\] \[\|\pi_{2,B}\|_{L^{2}(B)}\leq C\|\nabla u\|_{L^{2}(B)}. \tag{2.16}\] **Remark 2.2**.: _Wolf [29] proved the existence of a local suitable weak solution to the Navier-Stokes equations (1.1). Chae and Wolf [3] proved that if \((u,\pi)\) is a suitable weak solution to the Navier-Stokes equations (1.1), then \(u\) is a local suitable weak solution in the sense of Definition 2.1._ We also need the following Riesz potential estimate (see, for example, [9, p159]). **Lemma 2.3**.: _Let \(\Omega\) be a bounded domain, \(\mu\in(0,1]\), \(1\leq q\leq\infty\), \(0\leq\delta=1/p-1/q<\mu\) and \(V_{\mu}f(x)=\int_{\Omega}|x-y|^{n(\mu-1)}f(y)dy\), then we have_ \[\|V_{\mu}f(x)\|_{L^{q}(\Omega)}\leq\big{(}\frac{1-\delta}{\mu- \delta}\big{)}^{1-\delta}w_{n}^{1-\mu}|\Omega|^{\mu-\delta}\|f\|_{L^{p}( \Omega)},\] _where \(w_{n}\) is the volume of unit ball in \(\mathbb{R}^{n}\)._ ## 3. Cacciopolli's inequality In this section, we establish a Cacciopolli's inequality for the Navier-Stokes equations in term of velocity only for the value range of p in \([2,\infty]\). **Proposition 3.1**.: _Assume that \((u,\pi)\) is a suitable weak solution to (1.1) in \(Q_{1}\). For any \((p,q)\) satisfying \(1\leq 2/p+3/q<2\) with \(2\leq p\leq\infty\), \(u\in L^{p}_{t}L^{q}_{x}(Q_{1})\). 
Then the following Cacciopolli's inequality holds true:_ \[\|u\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\frac{3}{4}})}^{2}+\|\nabla u \|_{L^{2}L^{2}(Q_{\frac{3}{4}})}^{2}\leq C\|u\|_{L^{p}L^{q}(Q_{1})}^{2}+C\|u\| _{L^{p}L^{q}(Q_{1})}^{4}+C\|u\|_{L^{p}L^{q}(Q_{1})}^{2}.\] _where \(\alpha=\frac{2}{\frac{2}{p}+\frac{3}{q}}\)._ **Remark 3.2**.: _The case \(2\leq p<3\) is specially, since \(\|u\|_{L^{3}L^{3}}\) can not be controlled by \(\|u\|_{L^{p}L^{q}}\) and energy norm for \(p\in[2,3)\) with respect to time direction \(t\). It is worth mentioning that the energy norms of \(v_{B}=u+\nabla\pi_{h}\) include all indicators for \(p\geq 2\) with respect to time direction \(t\) but it is not involved into the iterative process since \(\nabla\pi_{h}\) is related with the domain \(B_{\rho}\)._ ### Proof of Proposition 3.1: The case of \(p>3\) Firstly, for \(\frac{3}{4}\leq\varrho<\rho\leq 1\), let \(Q_{\rho}=(-\rho^{2},0)\times B_{\rho}\), and \(B_{\rho}=\{x\in\mathbb{R}^{3};|x|\leq\rho\}\). Write \(B=B_{\rho}\) and define \(v_{B}=u+\nabla\pi_{h,B}\) with \(\nabla\pi_{h,B}=-E_{B}(u)\). Choose a cut-off function as \[\phi=1\quad\text{in}\quad Q_{\sigma_{1}}\quad\text{with}\quad \sigma_{1}=\frac{2\varrho+\rho}{3},\] \[\phi=1\quad\text{on}\quad Q_{\sigma_{2}}^{c}\quad\text{with}\quad \sigma_{2}=\frac{\varrho+2\rho}{3},\] and satisfies \[|\nabla\phi|\leq C(\rho-\varrho)^{-1},\quad|\partial_{t}\phi|+| \nabla^{2}\phi|\leq C(\rho-\varrho)^{-2}. \tag{3.17}\] Secondly, choosing \(\varphi=\phi^{2}\) in the local energy inequality (2.12), we have \[\int_{B_{\rho}}|v_{B}(x,s)|^{2}\phi^{2}dx+\int_{Q_{\rho}}|\nabla v_{ B}(x,\tau)|^{2}\phi^{2}dxd\tau \tag{3.18}\] \[\leq C(\rho-r)^{-2}\int_{Q_{\rho}}|v_{B}(x,\tau)|^{2}dxd\tau+C(\rho-r) ^{-1}\int_{Q_{\rho}}|v_{B}|^{2}|v_{B}-\nabla\pi_{h,B}|dxd\tau\] \[+C\int_{Q_{\sigma_{2}}}|v_{B}|^{2}|\nabla\nabla\pi_{h,B}|dxd\tau+C \int_{Q_{\sigma_{2}}}|\nabla\pi_{h,B}||\nabla\nabla\pi_{h,B}||v_{B}|dxd\tau\] \[+C(\rho-r)^{-1}\int_{Q_{\rho}}|\pi_{1,B}+\pi_{2,B}||v_{B}|dxd\tau\] \[:= I_{1}+I_{2}+I_{3}+I_{4}+I_{5},\] where \[\nabla\pi_{1,B}=-E_{B}(u\cdot\nabla u),\quad\mbox{and}\quad\nabla\pi_{2,B}=E_{ B}(\triangle u).\] **Step I: Estimate of local energy inequality via \(\|u\|_{L^{3}(Q_{\rho})}\).** By (2.14), there holds for any \(s\in(1,\overline{6})\), \[\|v_{B}\|_{L^{s}(B_{\rho})}=\|u+\nabla\pi_{h,B}\|_{L^{s}(B_{\rho})}\leq C\|u\|_ {L^{s}(B_{\rho})}, \tag{3.19}\] which yields that \[I_{1}\leq C(\rho-\varrho)^{-2}\int_{Q_{\rho}}|u|^{2}dxd\tau.\] Similarly, by (3.19) and (2.14), the estimate of \(I_{2}\) is that \[I_{2}\leq C(\rho-\varrho)^{-1}\int_{Q_{\rho}}|u|^{3}dxd\tau.\] For the part of \(\nabla\nabla\pi_{h,B}\), noting that \(-\Delta v_{h}+\nabla\pi_{h}=-u\), which implies that \(\Delta\pi_{h}=0\), and \(\pi_{h}\) is harmonic. Recall the estimates of harmonic function (see, for example, [14]): for any \(1\leq p,q\leq\infty\), \(0<\varrho<\rho\), and any harmonic function \(h\), it holds \[\|\nabla^{k}h\|_{L^{q}(B_{\varrho})}\leq\frac{C\varrho^{\frac{3}{q}}}{(\rho- \varrho)^{\frac{3}{p}+k}}\|h\|_{L^{p}(B_{\rho})}. 
\tag{3.20}\] Using (3.19), (2.14), (3.20) and the Holder's inequality, for the term \(I_{3}\), there holds \[I_{3} = C\int_{Q_{\sigma_{2}}}|v_{B}|^{2}|\nabla\nabla\pi_{h,B}|dxd\tau\] \[\leq C\left(\int_{Q_{\rho}}|v_{B}|^{3}dxd\tau\right)^{\frac{2}{3}} \left(\int_{Q_{\sigma_{2}}}|\nabla\nabla\pi_{h,B}|^{3}dxd\tau\right)^{\frac{1 }{3}}\] \[\leq C\left(\int_{Q_{\rho}}|u|^{3}dxd\tau\right)^{\frac{2}{3}}\left(C \sigma_{2}^{3}(\rho-\varrho)^{-6}\int_{Q_{\rho}}|\nabla\pi_{h,B}|^{3}dxd\tau \right)^{\frac{1}{3}}\] \[\leq C\rho(\rho-\varrho)^{-2}\int_{Q_{\rho}}|u|^{3}dxd\tau.\] Similarly, \[I_{4}=C\int_{Q_{\sigma_{2}}}|v_{B}||\nabla\pi_{h}||\nabla\nabla\pi_{h}|dxd\tau \leq C\rho(\rho-r)^{-2}\int_{Q_{\rho}}|u|^{3}dxd\tau.\] For the term \(I_{5}\), using (3.19), (2.15) and (2.16), we infer that \[I_{5} = C(\rho-\varrho)^{-1}\int_{Q_{\rho}}|\pi_{1,B}+\pi_{2,B}||v_{B}|dxd\tau \tag{3.21}\] \[\leq C(\rho-\varrho)^{-1}\left(\int_{Q_{\rho}}|v_{B}|^{3}dxd\tau \right)^{\frac{1}{3}}\left(\int_{Q_{\rho}}|\pi_{1,B}|^{\frac{3}{2}}dxd\tau \right)^{\frac{2}{3}}\] \[\qquad+C(\rho-\varrho)^{-1}\left(\int_{Q_{\rho}}|v_{B}|^{2}dxd \tau\right)^{\frac{1}{2}}\left(\int_{Q_{\rho}}|\pi_{2,B}|^{2}dxd\tau\right)^{ \frac{1}{2}}\] \[\leq C(\rho-\varrho)^{-1}\int_{Q_{\rho}}|u|^{3}dxd\tau+C(\rho-\varrho )^{-1}\left(\int_{Q_{\rho}}|u|^{2}dxd\tau\right)^{\frac{1}{2}}\left(\int_{Q_{ \rho}}|\nabla u|^{2}dxd\tau\right)^{\frac{1}{2}}\] \[\leq C(\rho-\varrho)^{-1}\int_{Q_{\rho}}|u|^{3}dxd\tau+C(\rho-\varrho )^{-2}\int_{Q_{\rho}}|u|^{2}dxd\tau+\frac{1}{16}\int_{Q_{\rho}}|\nabla u|^{2} dxd\tau.\] Collecting the term \(I_{1}-I_{5}\), we arrive at \[\int_{B_{\rho}}|v_{B}(x,s)|^{2}\phi dx+\int_{Q_{\rho}}|\nabla v_{B }(x,\tau)|^{2}\phi dxd\tau\] \[\leq C(\rho-\varrho)^{-2}\int_{Q_{\rho}}|u|^{2}+C\left((\rho-\varrho) ^{-1}+\rho(\rho-\varrho)^{-2}\right)\int_{Q_{\rho}}|u|^{3}+\frac{1}{16}\| \nabla u\|^{2}_{L^{2}L^{2}(Q_{\rho})}\] \[:= J_{1}+J_{2}+J_{3}.\] **Step II: Estimate the term \(\|u\|_{L^{3}(Q_{\rho})}\).** It follows from Holder's inequality that \[J_{1}\leq C\rho^{\frac{5}{3}}(\rho-\varrho)^{-2}\|u\|^{2}_{L^{3}(Q_{\rho})} \leq C(\rho-\varrho)^{-2}\|u\|^{2}_{L^{3}(Q_{\rho})},\] since \(\rho\leq 1\). Next we deal with the term \(\|u\|_{L^{3}(Q_{\rho})}\). For any \(1\leq\tau\leq\frac{p}{2}\) and \(\frac{3p}{3p-4}\leq\kappa\leq 3\) satisfying \(\frac{2}{\tau}+\frac{3}{\kappa}=3\), by interpolation inequality, we see that for any \(f\), \[\|f^{2}\|_{L^{\tau}L^{\kappa}(Q_{\rho})}=\|f\|^{2}_{L^{2\tau}L^{2 \kappa}(Q_{\rho})}\leq C\left(\|f\|^{2}_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}+ \|f\|^{2}_{L^{2}L^{6}(Q_{\rho})}\right). \tag{3.22}\] For a fixed \(p>3\), let \(q\in\left(\frac{3}{2-\frac{2}{p}},\frac{3}{\frac{2}{p-2}-\frac{2}{p}}\right]\) and \(\alpha(\frac{2}{p}+\frac{3}{q})=2\), then \(1<\alpha\leq p-2\). For \(1<\alpha\leq\min\{2,p-2\}\), let \[r=\frac{p}{\alpha}=\frac{p}{2}(\frac{2}{p}+\frac{3}{q}),\quad s=\frac{q}{ \alpha}=\frac{q}{2}(\frac{2}{p}+\frac{3}{q}),\] then \[\frac{1}{s}+\frac{1}{s^{\prime}}=1,\quad\frac{1}{r}+\frac{1}{r^{\prime}}=1, \quad\frac{2}{r^{\prime}}+\frac{3}{s^{\prime}}=3,\] where, \(s^{\prime},r^{\prime}\) are the conjugate index of \(s,r\). 
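As a quick check added here for the reader's convenience, the last relation in the display above follows directly from the choice of \(r\) and \(s\): \[\frac{2}{r}+\frac{3}{s}=\alpha\Big{(}\frac{2}{p}+\frac{3}{q}\Big{)}=2,\qquad\text{so}\qquad\frac{2}{r^{\prime}}+\frac{3}{s^{\prime}}=\Big{(}2-\frac{2}{r}\Big{)}+\Big{(}3-\frac{3}{s}\Big{)}=5-2=3.\]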
Noting that \[\iint_{Q_{\rho}}|u|^{3}dxdt=\iint_{Q_{\rho}}|u|^{\alpha}|u|^{3-\alpha}dxdt\leq \||u|^{\alpha}\|_{L^{r}L^{s}(Q_{\rho})}\||u|^{3-\alpha}\|_{L^{r^{\prime}}L^{s^{ \prime}}(Q_{\rho})},\] by (3.22), we get \[\int_{Q_{\rho}}|u|^{3}dxdt \leq C\|u\|_{L^{\alpha\sigma}L^{\alpha s}(Q_{\rho})}^{\alpha}\Big{(}\|u \|^{\frac{3-\alpha}{2}}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+\|\|u\|^{ \frac{3-\alpha}{2}}\|_{L^{2}L^{6}(Q_{\rho})}^{2}\Big{)}\] \[\leq C\|u\|_{L^{\alpha\sigma}L^{\alpha s}(Q_{\rho})}^{\alpha}\Big{(}\|u \|_{L^{\frac{3-\alpha}{2}p_{\rho}}L^{\frac{3-\alpha}{2}\frac{6p}{3p-4}}(Q_{ \rho})}^{3-\alpha}+\|u\|_{L^{3-\alpha}L^{g-3\alpha}(Q_{\rho})}^{3-\alpha} \Big{)},\] since \(\frac{p}{p-2}\leq r\leq\infty\) and \(r^{\prime}\in[1,p/2]\) due to \(\alpha\leq p-2\). Then Holder's inequality implies that \[\|u\|_{L^{\frac{3-\alpha}{2}p_{\rho}}L^{\frac{3-\alpha}{2}\frac{6p}{3p-4}}(Q_ {\rho})}^{3-\alpha}+\|u\|_{L^{3-\alpha}L^{g-3\alpha}(Q_{\rho})}^{3-\alpha}\leq C \rho^{\frac{3}{2}(\alpha-1)}\left(\|u\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^ {3-\alpha}+\|\nabla u\|_{L^{2}(Q_{\rho})}^{3-\alpha}\right),\] which means \[\int_{Q_{\rho}}|u|^{3}dxdt\leq C\|u\|_{PL^{q}(Q_{\rho})}^{\alpha}\Big{(}\|u\|_{ L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+\|\nabla u\|_{L^{2}(Q_{\rho})}^{2} \Big{)}^{\frac{3-\alpha}{2}}. \tag{3.23}\] In addition, for the case of \(q>\frac{3}{\frac{3}{p-2}-\frac{2}{p}}\), the estimate of (3.23) still holds due to the Holder's inequality. Using (3.23) and Young's inequality, there holds \[J_{1}\leq\frac{1}{32}\left(\|u\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+\| \nabla u\|_{L^{2}(Q_{\rho})}^{2}\right)+C(\rho-\varrho)^{-\frac{6}{\alpha}}\| u\|_{L^{p}L^{q}(Q_{\rho})}^{2}. \tag{3.24}\] Similarly, \[J_{2}\leq\frac{1}{32}\left(\|u\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+\| \nabla u\|_{L^{2}(Q_{\rho})}^{2}\right)+C(\rho-\varrho)^{-\frac{4}{\alpha-1}} \|u\|_{L^{p}L^{q}(Q_{\rho})}^{\frac{2\alpha}{-1}}. \tag{3.25}\] Combining (3.21), (3.24) and (3.25), we arrive at \[\int_{B_{\rho}}|v_{B}|^{2}\phi^{2}+\int_{Q_{\rho}}|\nabla v_{B}|^ {2}\phi^{2}\leq\frac{1}{8}\left(\|u\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2} +\|\nabla u\|_{L^{2}(Q_{\rho})}^{2}\right)\] \[+C(\rho-\varrho)^{-\frac{6}{\alpha}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{ 2}+C(\rho-\varrho)^{-\frac{4}{\alpha-1}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{\frac{2 \alpha}{-1}}. \tag{3.26}\] **Step III: Estimate the terms including \(v_{B}\).** Noting that \(\frac{2}{p}+\frac{3}{\frac{6p}{3p-4}}=\frac{3}{2}\), using Young's inequality and Sobolev's embedding, there holds \[\|v_{B}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\varrho})}^{2}\leq C\int_{B_{\varrho}}|v_{B}(t)|^{2}dx+C \int_{Q_{\varrho}}|\nabla v_{B}|^{2}dxdt. 
\tag{3.27}\] Since \(\nabla\pi_{h,B}\) is harmonic function, using (3.20), there holds \[\|\nabla\pi_{h}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\sigma_{1}})}^{2} \leq C\rho^{3-\frac{4}{p}}(\rho-\varrho)^{-\frac{6}{q}}||\nabla\pi_{h} ||_{L^{p}L^{q}(Q_{\sigma_{2}})}^{2} \tag{3.28}\] \[\leq C\rho^{3-\frac{4}{p}}(\rho-\varrho)^{-\frac{6}{q}}||u||_{L^{p}L^ {q}(Q_{\rho})}^{2}.\] Using (3.26), (3.27) and (3.28), we conclude by the triangle inequality that \[\|u\|_{L^{pL}\frac{6p}{3p-4}(Q_{\varrho})}^{2} \leq 2\|v_{B}\|_{L^{pL}\frac{6p}{3p-4}(Q_{\varrho})}^{2}+2\|\nabla\pi_{ h,B}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\varrho})}^{2} \tag{3.29}\] \[\leq C\|v_{B}\phi\|_{L^{\infty}L^{2}(Q_{\rho})}^{2}+C\|\nabla v_{B}\phi \|_{L^{2}(Q_{\rho})}^{2}+2\|\nabla\pi_{h,B}\|_{L^{pL}\frac{6p}{3p-4}(Q_{\sigma _{1}})}^{2}\] \[\leq \frac{1}{4}\left(\|u\|_{L^{pL}\frac{6p}{3p-4}(Q_{\rho})}^{2}+\| \nabla u\|_{L^{2}(Q_{\rho})}^{2}\right)+C(\rho-\varrho)^{-\frac{6}{a}}\|u\|_{L^ {pL}q(Q_{\rho})}^{2}\] \[+C\left((\rho-\varrho)^{-1}+(\rho-\varrho)^{-2}\right)^{\frac{2} {a-1}}\|u\|_{L^{pL}q(Q_{\rho})}^{\frac{2\alpha}{a-1}}\] \[+C(\rho-\varrho)^{-\frac{6}{a}}\|u\|_{L^{pL}q(Q_{\rho})}^{2}.\] Similarly, noting that \(\nabla u=\nabla v_{B}-\nabla\nabla\pi_{h,B}\), for almost \(t\in I_{\rho}\), (3.20) and (3.19) implies \[\|\phi(t)\nabla u(t)\|_{L^{2}(B_{\rho})}^{2} \tag{3.29}\] \[= \int_{B_{\rho}}|\phi(t)\nabla v_{B}(t)|^{2}dx-\int_{B_{\rho}}( \nabla v_{B}(t)+\nabla u(t)):(\nabla v_{B}(t)-\nabla u(t))\phi^{2}(t)dx\] \[= \int_{B_{\rho}}|\phi(t)\nabla v_{B}(t)|^{2}dx-\int_{B_{\rho}}( \nabla v_{B}(t)+\nabla u(t)):\nabla\nabla\pi_{h,B}\phi^{2}(t)dx\] \[= \int_{B_{\rho}}|\phi(t)\nabla v_{B}(t)|^{2}dx+\int_{B_{\rho}}(v_{ B}(t)+u(t))\cdot\nabla\nabla\pi_{h,B}\cdot\nabla\phi^{2}(t)dx\] \[\leq \int_{B_{\rho}}|\phi(t)\nabla v_{B}(t)|^{2}dx+C(\rho-r)^{-1}\left( \int_{B_{\rho}}|v_{B}+u|^{2}dx\right)^{\frac{1}{2}}\left(\int_{B_{\sigma_{2}}} |\nabla^{2}\pi_{h}|^{2}dx\right)^{\frac{1}{2}}\] \[\leq \int_{B_{\rho}}|\phi(t)\nabla v_{B}(t)|^{2}dx+C(\rho-r)^{-2}\int_{ B_{\rho}}|u|^{2}dx.\] Integrating with respect to \(t\), we have \[\int_{Q_{\rho}}|\nabla u\phi|^{2}\leq\int_{Q_{\rho}}|\phi\nabla v_{B}|^{2}dx+C( \rho-r)^{-2}\int_{Q_{\rho}}|u|^{2}dx. \tag{3.30}\] Then for \(\frac{3}{4}\leq\varrho<\rho\leq 1\), \(\alpha=\frac{2}{\frac{3}{4}+\frac{3}{p}}\), combining (3.29), (3.30), (3.26) and (3.24), we have \[\|u\|_{L^{pL}\frac{6p}{3p-4}(Q_{\varrho})}^{2}+\|\nabla u\|_{L^{2 }(Q_{\varrho})}^{2} \leq \frac{3}{4}(\|u\|_{L^{pL}\frac{6p}{3p-4}(Q_{\rho})}^{2}+\|\nabla u \|_{L^{2}(Q_{\rho})}^{2})\] \[+C(\rho-\varrho)^{-\frac{6}{\alpha}}\|u\|_{L^{pL}q(Q_{\rho})}^{2} +C(\rho-\varrho)^{-\frac{6}{a}}\|u\|_{L^{pL}q(Q_{\rho})}^{2}\] \[+C(\rho-\varrho)^{-\frac{4}{\alpha-1}}\|u\|_{L^{pL}q(Q_{\rho})}^{ \frac{2\alpha}{\alpha-1}}.\] Applying the iteration lemma (see [8, Lemma V.3.1, p.161 ] ), we end up with \[\|u\|_{L^{pL}\frac{6p}{3p-4}(Q_{\varrho})}^{2}+\|\nabla u\|_{L^{2 }(Q_{\varrho})}^{2} \leq C(\rho-\varrho)^{-\frac{6}{\alpha}}\|u\|_{L^{pL}q(Q_{\rho})}^{2}+C( \rho-\varrho)^{-\frac{6}{q}}||u||_{L^{pL}q(Q_{\rho})}^{2}\] \[+C(\rho-\varrho)^{-\frac{4}{\alpha-1}}\|u\|_{L^{pL}q(Q_{\rho})}^{ \frac{2\alpha}{\alpha-1}},\] which leads to that \[\|u\|_{L^{pL}\frac{6p}{3p-4}(Q_{\frac{3}{4}})}^{2}+\|\nabla u\|_{L^{2}(Q_{ \frac{3}{4}})}^{2}\leq C||u||_{L^{pL}q(Q_{1})}^{2}+C||u||_{L^{pL}q(Q_{1})}^{ \frac{2\alpha}{\alpha-1}}.\] The proof is complete. 
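Before turning to the case \(2\leq p\leq 3\), we record (for the reader's convenience) the elementary exponent identity behind the space \(L^{p}L^{\frac{6p}{3p-4}}\) used in (3.22), (3.27) and again below: \[\frac{2}{p}+\frac{3}{\frac{6p}{3p-4}}=\frac{2}{p}+\frac{3p-4}{2p}=\frac{4+3p-4}{2p}=\frac{3}{2},\] which is exactly the parabolic scaling shared by all interpolation spaces between \(L^{\infty}L^{2}\) and \(L^{2}L^{6}\).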
### Proof of Proposition 3.1: The case of \(2\leq p\leq 3\) Let \(\frac{3}{4}\leq\varrho<\rho\leq 1\) and we still use the cut-off function \(\phi\) as in the last subsection. Taking the test function \(\varphi=\phi^{2\beta}\)in the local energy inequality (2.12), we have \[\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho})}^{2}+2\|\nabla(v_ {B}\phi^{\beta})\|_{L^{2}L^{2}(Q_{\rho})}^{2}\] \[\leq C\left|\int_{Q_{\rho}}|v_{B}|^{2}(\partial_{t}(\phi^{2\beta})+ \Delta(\phi^{2\beta})+(\nabla\phi)^{2}\phi^{2\beta-2})\right|+\int_{Q_{\rho}}u \cdot\nabla(\phi^{2\beta})|v_{B}|^{2}\] \[+2\int_{Q_{\rho}}u\cdot\nabla\nabla\pi_{h,B}\cdot v_{B}\phi^{2 \beta}+2\int_{Q_{\rho}}\pi_{1,B}v_{B}\cdot\nabla(\phi^{2\beta})+2\int_{Q_{ \rho}}\pi_{2,B}v_{B}\cdot\nabla(\phi^{2\beta})\] \[:= K_{1}+K_{2}+K_{3}+K_{4}+K_{5}.\] **Estimate of \(K_{1}\).** Noting that \(q>\frac{9}{4}\) for \(p\in[2,3]\) since \(\frac{2}{p}+\frac{3}{q}<2\), using (3.19) and (3.17), there hold \[K_{1}\leq C(\rho-\varrho)^{-2}\int_{Q_{\rho}}|u|^{2}dxdt\leq C( \rho-\varrho)^{-2}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2}. \tag{3.31}\] **Estimate of \(K_{2}\).** Taking \(\beta=\beta_{0}\) large enough such that \(2\beta_{0}-1\geq(3-\alpha)\beta_{0}\) with \(\alpha=\frac{2}{\frac{2}{p}+\frac{3}{q}}\), using Holder's inequality, we have \[K_{2} \leq C(\rho-\varrho)^{-1}\iint_{Q_{\rho}}|u||v_{B}|^{2}\phi^{2\beta- 1}dxdt \tag{3.32}\] \[\leq C(\rho-\varrho)^{-1}\|u\|_{L^{p}L^{q}(Q_{\rho})}\|\|v_{B}|^{ \alpha-1}\|_{L^{\frac{p}{\alpha-1}}L^{\frac{q}{\alpha-1}}(Q_{\rho})}\|v_{B} \phi^{\beta}|^{3-\alpha}\|_{L^{r}L^{s}(Q_{\rho})}.\] Here, \[r=\frac{p}{p-\alpha},\quad s=\frac{q}{q-\alpha},\quad\alpha= \frac{2}{\frac{2}{p}+\frac{3}{q}}.\] Claim that: \(r(3-\alpha)\geq 2\) and \(2\leq s(3-\alpha)\leq 6\) for \(q\in\left(\frac{3}{2-\frac{2}{p}},\frac{7}{2-\frac{2}{p}}\right]\). First, since \(p\leq 3\), we have \[r(3-\alpha)\geq 3.\] Second, due to \(q\leq\frac{7}{2-\frac{8}{p}}\) there holds \[s(3-\alpha)=(3-\alpha)\frac{q}{q-\alpha}\geq 2,\] and \[s(3-\alpha)\leq 6,\] since \(q>\frac{9}{4}\). Noting that \[\frac{2}{r(3-\alpha)}+\frac{3}{s(3-\alpha)}=\frac{3}{3-\alpha}> \frac{3}{2},\] with \(\frac{2}{r}+\frac{3}{s}=3\), by Holder's inequality and (3.19) again, there holds \[K_{2} \leq C(\rho-r)^{-1}\rho^{\frac{3}{2}(\alpha-1)}\|u\|_{L^{p}L^{q}}\|v_{ B}\|_{L^{p}L^{q}(Q_{\rho})}^{\alpha-1}\Big{(}\|v_{B}\phi^{\beta}\|_{L^{ \infty}L^{2}(Q_{\rho})}^{2}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}(Q_{\rho})}^{ 2}\Big{)}^{\frac{3-\alpha}{2}} \tag{3.33}\] \[\leq \frac{1}{8}\Big{(}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho} )}^{2}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}(Q_{\rho})}^{2}\Big{)}+C(\rho- \varrho)^{-\frac{2}{\alpha-1}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{\frac{2\alpha}{ \alpha-1}}.\] In addition, for the case of \(q>\frac{7}{2-\frac{2}{p}}\), one can take \(q_{0}=\frac{7}{2-\frac{2}{p}}\) and (3.33) holds for \(q_{0}\). Then the Holder inequality applies and (3.33) holds for and \(q>q_{0}\). 
**Estimate of \(K_{3}\).** Noting that \(q>\frac{9}{4}\) since \(p\in[2,3]\) and \(\rho\leq 1\), using (3.20), (3.19) and Holder's inequality, we know that \[K_{3} \leq C\rho^{\frac{7}{2}-\frac{3}{q}-\frac{4}{p}}\|v_{B}\phi^{\beta}\|_ {L^{\infty}L^{2}(Q_{\rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}\|\nabla\nabla\pi_{h,B} \|_{L^{p}L^{\infty}(Q_{\sigma_{2}})} \tag{3.34}\] \[\leq C(\rho-\varrho)^{-1-\frac{3}{q}}\|v_{B}\phi^{\beta}\|_{L^{ \infty}L^{2}(Q_{\rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}\] \[\leq \frac{1}{8}\Big{(}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho })}^{2}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}(Q_{\rho})}^{2}\Big{)}\] \[+C(\rho-\varrho)^{-2-\frac{6}{q}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{4}.\] **Estimate of \(K_{4}\).** Recall that \[K_{4}=-4\beta\int_{Q_{\rho}}v_{B}\phi^{\beta}\cdot\nabla\phi\ \pi_{1,B}\phi^{ \beta-1}dxds.\] Now we rewrite the first equation of (1.1) as \[\partial_{t}v_{B}-\Delta v_{B}+u\cdot\nabla u+\nabla\pi_{1,B}+\nabla\pi_{2,B}=0,\] with \(v_{B}=u+\nabla\pi_{h,B}\) and \(\nabla\pi_{h,B}=-E_{B_{\rho}}(u)\). Using the representation formula of pressure \(\pi_{1,B}\), we have \[\pi_{1,B}\xi = R_{i}R_{j}(\xi u_{i}u_{j})-N*(\partial_{ij}\xi u_{i}u_{j})+ \partial_{j}N*(u_{i}u_{j}\partial_{i}\xi) \tag{3.35}\] \[+\partial_{i}N*(u_{i}u_{j}\partial_{j}\xi)-N*(\pi_{1,B}\Delta\xi )+2\partial_{j}N*(\partial_{j}\xi\pi_{1,B}),\] where \(\xi\) is a cutoff function, \(N=-\frac{1}{4\pi|x|}\) is the kernel of Poisson equation and \(R_{i}=\frac{\partial_{i}}{\sqrt{-\Delta}}\) is Riesz transform. Rewrite \(K_{4}=-4\beta(K_{41}+\cdots+K_{46})\). Choose \(\xi=\phi^{\beta-1}\), and note that \(\beta=\beta_{0}\) satisfying \(\beta_{0}-1\geq(2-\alpha)\beta_{0}\). Then \[K_{41} = \int_{Q_{\rho}}v_{B}\phi^{\beta}\cdot\nabla\phi R_{i}R_{j}(\xi u _{i}u_{j})dxdt\] \[= \int\big{(}R_{i}R_{j}(\xi u_{i}(v_{B})_{j})-R_{i}R_{j}(\xi u_{i}( \nabla\pi_{h,B})_{j})\big{)}\phi^{\beta}v_{B}\cdot\nabla\phi dxdt\] Using Lemma 2.3, the same estimates as \(K_{2}\) in (3.32) and \(K_{3}\) in (3.34) yields that \[K_{41} \leq C(\rho-r)^{-1}\|v_{B}\phi^{\beta}\|_{L^{(3-\alpha)r}L^{(3-\alpha )s}(Q_{\rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}\|v_{B}\|_{L^{p}L^{q}(Q_{\rho})}^{ \alpha-1}\|v_{B}\phi^{\beta}\|_{L^{(3-\alpha)r}L^{(3-\alpha)s}(Q_{\rho})}^{2- \alpha} \tag{3.36}\] \[\ \ \ \ \ +C(\rho-r)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}\|\nabla\pi_{h,B}\|_{L^{p}L^{\infty}(Q_{ \sigma_{2}})}\] \[\leq \frac{1}{8}\Big{(}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho} )}^{2}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}(Q_{\rho})}^{2}\Big{)}+C(\rho-r)^{- \frac{2}{\alpha-1}}\rho^{3}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{\frac{2\alpha}{\alpha-1 }}\] \[\ \ \ \ +C(\rho-r)^{-2-\frac{6}{q}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{4}.\] For \(K_{42}\), using Holder's inequality and Young's inequality for convolution form, there holds \[K_{42} \leq C(\rho-r)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho})}\|N* (\partial_{ij}\xi u_{i}u_{j})\|_{L^{1}L^{2}(Q_{\sigma_{2}})}\] \[\leq C(\rho-r)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho})}( \rho-r)^{-2}\|N\|_{L^{\infty}L^{2}(Q_{*})}\|u\|^{2}\|_{L^{1}L^{1}(Q_{\sigma_{2}})}\] where \(Q_{*}=(-\rho^{2},0)\times B_{*}\), \(B_{*}=\{x:|x|\leq 2\sigma_{2}\}\). It follows that \[\|N\|_{L^{\infty}L^{2}(Q_{*})}\leq C\rho^{\frac{1}{2}}\leq C. 
\tag{3.37}\] Noting that \(q\in(\frac{9}{4},+\infty)\) since \(p\in[2,3]\), using (3.37), we deduce \[K_{42} \leq C\rho^{\frac{1}{2}}(\rho-\varrho)^{-3}\|v_{B}\phi^{\beta}\|_{L^{ \infty}L^{2}(Q_{\rho})}\|u\|_{L^{2}L^{2}(Q_{\rho})}^{2} \tag{3.38}\] \[\leq C(\rho-\varrho)^{-3}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2}.\] For the term of \(K_{43}\), we have \[K_{43}\leq C(\rho-\varrho)^{-2}\left(\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_ {\rho})}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}L^{2}(Q_{\rho})}\right)\|u\|_{L^{p }L^{q}(Q_{\rho})}^{2}. \tag{3.39}\] First, when \(p=2\), i follows that \(q>3\) since \(\frac{2}{p}+\frac{3}{q}<2\). Using Holder's inequality and Young's inequality, we arrive at \[K_{43} \leq C(\rho-\varrho)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}\|\partial_{j}N*(u_{i}u_{j}\partial_{i}\xi)\|_{L^{1}L^{2}(Q_{\sigma_{2} })}\] \[\leq C(\rho-\varrho)^{-2}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}\|\partial_{j}N\|_{L^{\infty}L^{\frac{2q}{3q-4}}(Q_{*})}\||u|^{2}\|_{L^ {1}L^{\frac{q}{2}}(Q_{\rho})}\] \[\leq C(\rho-\varrho)^{-2}\left(\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2} (Q_{\rho})}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}L^{2}(Q_{\rho})}\right)\|u\|_{ L^{p}L^{q}(Q_{\rho})}^{2},\] where the term \(\|\partial_{j}N\|_{L^{\infty}_{\infty}L^{\frac{2q}{3q-4}}(Q_{\rho})}\) is integrable since \(\frac{2q}{3q-4}<\frac{3}{2}\) when \(q\in(3,4]\). When \(q>4\), the above inequality holds for \(q_{0}=4\), then it is still true for \(q>4\) by the Holder inequality. Second, For the case \(2<p\leq 3\), using Holder's inequality and Young's inequality again, we get \[K_{43} \leq C(\rho-\varrho)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\frac{p}{p-2}}L^{ \frac{6p}{8-p}}(Q_{\rho})}\|\partial_{j}N*(u_{i}u_{j}\partial_{i}\xi)\|_{L^{ \frac{p}{p}}L^{\frac{6p}{7p-8}}(Q_{\sigma_{2}})}\] \[\leq C(\rho-\varrho)^{-2}\|v_{B}\phi^{\beta}\|_{L^{\frac{p}{p-2}}L^{ \frac{6p}{8-p}}(Q_{\rho})}\|\partial_{j}N\|_{L^{\infty}L^{\frac{13}{6}-\frac{ 3}{2}(\frac{2}{p}+\frac{3}{q})}(Q_{*})}\||u|^{2}\|_{L^{\frac{p}{2}}L^{\frac{q} {2}}(Q_{\rho})}^{2}.\] Obviously, the term \(\|\partial_{j}N\|_{L^{\infty}L^{\frac{13}{6}-\frac{3}{2}(\frac{2}{p}+\frac{3} {q})}(Q_{*})}\) is integrable since \[1\leq\frac{1}{\frac{13}{6}-\frac{2}{3}(\frac{2}{p}+\frac{3}{q})}<\frac{3}{2},\quad q\leq\frac{3}{\frac{7}{4}-\frac{2}{p}}\] Note that \(\frac{2(p-2)}{p}+\frac{8-p}{2p}=\frac{3}{2}\), where \(\frac{p}{p-2}\geq 2\) and \(2\leq\frac{6p}{8-p}\leq 6\). Thus, \[\|v_{B}\phi^{\beta}\|_{L^{\frac{p}{p-2}}L^{\frac{6p}{8-p}}(Q_{\rho})}\leq C \left(\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(B_{\rho})}+\|\nabla(v_{B}\phi^{ \beta})\|_{L^{2}L^{2}(B_{\rho})}\right)\] Thus (3.39) is proved for \(q\leq\frac{3}{\frac{7}{4}-\frac{2}{p}}\). When \(q>\frac{3}{\frac{7}{4}-\frac{2}{p}}\), the above inequality holds for \(q_{0}=\frac{3}{\frac{7}{4}-\frac{2}{p}}\), then it is still true for \(q>\frac{3}{\frac{7}{4}-\frac{2}{p}}\) by the Holder inequality. The proof of (3.39) is complete. Similarly, \[K_{44}\leq C(\rho-\varrho)^{-2}\left(\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(B_ {\rho})}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}L^{2}(B_{\rho})}\right)\|u\|_{L^{p }L^{q}(Q_{\rho})}^{2}. 
\tag{3.40}\] For \(K_{45}\), noting that \(q>\frac{9}{4}\) and using (3.37), there holds \[K_{45} \leq C(\rho-\varrho)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}\|N*(\pi_{1,B}\Delta\xi)\|_{L^{1}L^{2}(Q_{\rho})} \tag{3.41}\] \[\leq C(\rho-\varrho)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}(\rho-r)^{-2}\|N\|_{L^{\infty}L^{2}(Q_{*})}\|\pi_{1,B}\|_{L^{1}L^{1}(Q_{ \rho})}\] \[\leq C^{\frac{1}{2}}\rho^{5-\frac{6}{4}-\frac{4}{p}}(\rho-\varrho)^{-3 }\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho})}\|\pi_{1,B}\|_{L^{\frac{p}{2} }L^{\frac{q}{2}}(Q_{\rho})}\] \[\leq C(\rho-\varrho)^{-3}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2}.\] For \(K_{46}\), Holder's inequality and Young's inequality, we deduce that \[K_{46} \leq C(\rho-\varrho)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\frac{p}{p-2}}L^{ \frac{6p}{8-p}}(Q_{\rho})}\|\partial_{j}N*(\pi_{1,B}\partial_{j}\xi)\|_{L^{ \frac{p}{2}}L^{\frac{6p}{7p-8}}(Q_{\rho})}\] \[\leq C(\rho-\varrho)^{-1}\|v_{B}\phi^{\beta}\|_{L^{\frac{p}{p-2}}L^{ \frac{6p}{8-p}}(Q_{\rho})}\|\partial_{j}N\|_{L^{\infty}L^{\frac{11}{6}-\frac{ 3}{2}(\frac{2}{p}+\frac{3}{q})}(Q_{*})}\|\pi_{1,B}\partial_{j}\xi\|_{L^{\frac{p} {2}}L^{\frac{q}{2}}(Q_{\rho})},\] which is similar as \(K43\) and \(\frac{p}{p-2}=\infty\) for \(p=2.\) The same arguments yields that \[K_{46}\leq C(\rho-\varrho)^{-2}\left(\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{ \rho})}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}L^{2}(Q_{\rho})}\right)\|u\|_{L^{p} L^{q}(Q_{\rho})}^{2}. \tag{3.42}\] Combining (3.36), (3.38), (3.39), (3.40), (3.41) and (3.42), choosing \(\beta=\beta_{0}\) and using Young's inequality, we have \[K_{4}\leq\tfrac{1}{16}\Big{(}\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{ 2}(Q_{\rho})}^{2}+\|\nabla(v_{B}\phi^{\beta})\|_{L^{2}(Q_{\rho})}^{2}\Big{)}+C (\rho-\varrho)^{-\frac{2}{\alpha-1}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{\frac{2 \alpha}{\alpha-1}}\] \[+C(\rho-\varrho)^{-6}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{4}. \tag{3.43}\] **Estimate of \(K_{5}\).** Using (2.16) and integration by parts, there holds \[K_{5}\leq C\rho^{\frac{5}{2}-\frac{2}{p}-\frac{3}{q}}(\rho-\varrho )^{-1}\|v_{B}\|_{L^{p}L^{q}(Q_{\rho})}\|\pi_{2,B}\|_{L^{2}L^{2}(Q_{\rho})}\] \[\leq C(\rho-\varrho)^{-2}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2}+\frac{1 }{4}\|\nabla u\|_{L^{2}L^{2}(Q_{\rho})}^{2}. \tag{3.44}\] Combining (3.31), (3.33), (3.34), (3.43) and (3.44), we have \[\|v_{B}\phi^{\beta}\|_{L^{\infty}L^{2}(Q_{\rho})}^{2}+\|\nabla(v_{B}\phi^{ \beta})\|_{L^{2}L^{2}(Q_{\rho})}^{2}\leq C(\rho-r)^{-\frac{2}{\alpha-1}}\|u\|_ {L^{p}L^{q}(Q_{\rho})}^{\frac{2\alpha}{\alpha-1}}\] \[+C(\rho-r)^{-8}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{4}+C(\rho-r)^{-2}\|u\|_{L^{p}L^{q} (Q_{\rho})}^{2}+\frac{1}{4}\|\nabla u\|_{L^{2}L^{2}(Q_{\rho})}^{2}. \tag{3.45}\] Noting that \(\frac{2}{p}+\frac{3}{\frac{6p}{3p-4}}=\frac{3}{2}\) with \(p\geq 2\) and \(q>\frac{9}{4}\), we have \[\|v_{B}\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}+\|\nabla(v_{B}\phi^ {\beta})\|_{L^{2}L^{2}(Q_{\rho})}^{2}\leq C(\rho-r)^{-\frac{2}{\alpha-1}}\|u\| _{L^{p}L^{q}(Q_{\rho})}^{\frac{2\alpha}{\alpha-1}}\] \[+C(\rho-r)^{-8}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{4}+C(\rho-r)^{-2}\|u\|_{L^{p}L^{q} (Q_{\rho})}^{2}+\frac{1}{4}\|\nabla u\|_{L^{2}L^{2}(Q_{\rho})}^{2}. 
\tag{3.46}\] On the other hand, it follows from (2.14), (3.19) and (3.20), there holds \[\|u\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2} \leq \|v_{B}\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+ \|\nabla\pi_{h,B}\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\sigma_{2}})}^{2} \tag{3.47}\] \[\leq \|v_{B}\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+C( \rho-\varrho)^{-\frac{3}{q}}\|\nabla\pi_{h,B}\|_{L^{p}L^{q}(Q_{\rho})}^{2}\] \[\leq \|v_{B}\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+C( \rho-\varrho)^{-\frac{3}{q}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2},\] and \[\|\nabla u\phi^{\beta}\|_{L^{2}L^{2}(Q_{\rho})}^{2} \leq \|\nabla v_{B}\phi^{\beta}\|_{L^{2}L^{2}(Q_{\rho})}^{2}+\|\nabla \nabla\pi_{h,B}\phi^{\beta}\|_{L^{2}L^{2}(Q_{\sigma_{2}})}^{2} \tag{3.48}\] \[\leq \|\nabla(v_{B}\phi^{\beta})\|_{L^{2}L^{2}(Q_{\rho})}^{2}+\|v_{B} \nabla\phi^{\beta}\|_{L^{2}L^{2}(Q_{\rho})}^{2}\] \[+C(\rho-\varrho)^{-\frac{3}{q}-1}\|\nabla\pi_{h,B}\|_{L^{p}L^{q}(Q _{\rho})}^{2}\] \[\leq \|\nabla(v_{B}\phi^{\beta})\|_{L^{2}L^{2}(Q_{\rho})}^{2}+C(\rho- \varrho)^{-2}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2}\] \[+C(\rho-\varrho)^{-\frac{3}{q}-1}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2},\] Combining (3.46), (3.47) and (3.48), we arrive at \[\|u\phi^{\beta}\|_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\rho})}^{2}+\| \nabla u\phi^{\beta}\|_{L^{2}L^{2}(Q_{\rho})}^{2}\] \[\leq C(\rho-r)^{-\frac{2}{\alpha-1}}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{\frac{2 \alpha}{\alpha-1}}+C(\rho-r)^{-8}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{4}\] \[+C(\rho-r)^{-2}\|u\|_{L^{p}L^{q}(Q_{\rho})}^{2}+\frac{3}{4}\|\nabla u \|_{L^{2}L^{2}(Q_{\rho})}^{2}.\] Finally, using the iterative lmma (see [8, Lemma V.3.1, p.161 ]), the following Caccioppoli inequality holds \[\|u\|^{2}_{L^{p}L^{\frac{6p}{3p-4}}(Q_{\frac{3}{4}})}+\|\nabla u\|^{2}_{L^{2}L^{ 2}(Q_{\frac{3}{4}})}\leq C\|u\|^{2}_{L^{p}L^{q}(Q_{1})}+C\|u\|^{4}_{L^{p}L^{q}( Q_{1})}+C\|u\|^{\frac{2\alpha}{\alpha-1}}_{L^{p}L^{q}(Q_{1})}.\] for \(2\leq p\leq 3\). The proof is complete. ## 4. Proof of Theorem 1.2 ### Case I: \(2\leq p<3\) The proof is divided into three steps. **Step I: Decay estimates from local energy inequality.** Choose a cut-off function as in [2]. Let \(G(x,t)=(4\pi t)^{-\frac{3}{2}}\exp{(-\frac{|x|^{2}}{4t})}\) be the Gaussian kernel. For \(r>0\), denote \[\Phi(x,t)=r^{2}G(x,r^{2}-t),\quad(x,t)\in\mathbb{R}^{3}\times(-\infty,0),\] By direct calculation, there holds for any \(0<4r<\rho\leq\frac{1}{2}\), \[\Phi(x,t)\geq Cr^{-1},\quad(x,t)\in Q_{r};\] \[\Phi(x,t)\leq Cr^{-1},\quad|\nabla\Phi(x,t)|\leq Cr^{-2},\quad(x,t)\in Q_{\rho}; \tag{4.49}\] \[\Phi(x,t)\leq Cr^{2}\rho^{-3},\quad|\nabla\Phi(x,t)|\leq Cr^{2} \rho^{-4},\quad(x,t)\in Q_{\rho}\setminus Q_{\frac{\rho}{2}}.\] Let \(\eta:\mathbb{R}^{3}\times\mathbb{R}\to[0,1]\) be a smooth cut-off function suitable on \(Q_{\rho}\setminus Q_{\frac{\rho}{2}}\) with \(|\partial_{t}\eta|+|\nabla^{2}\eta|\leq C\rho^{-2}\) and \(|\nabla\eta|\leq C\rho^{-1}\). Substitute \(\phi=\Phi\eta\) in the local energy inequality. Obviously, \[\partial_{t}\phi+\Delta\phi=(\partial_{t}\Phi+\Delta\Phi)\eta+\Phi\partial_{t }\eta+2\nabla\Phi\cdot\nabla\eta+\Phi\Delta\eta.\] Noting that \(\partial_{t}\Phi+\Delta\Phi=0\), we arrive at \[|\partial_{t}\phi+\Delta\phi|\leq Cr^{2}\rho^{-5}. \tag{4.50}\] Similarly, \[|\nabla\phi|=|\nabla\phi\eta+\phi\nabla\eta|\leq Cr^{-2}+Cr^{2}\rho^{-4}\leq Cr ^{-2}. \tag{4.51}\] Take a fixed ball \(B=B_{\frac{3}{4}}\) for \(v_{B}\). 
Write \(v=v_{B}=u+\nabla\pi_{h}\), \(\nabla\pi_{h}=-E_{B_{\frac{3}{4}}}(u)\), \(\nabla\pi_{1}=-E_{B_{\frac{3}{4}}}(u\cdot\nabla u)\) and \(\nabla\pi_{2}=E_{B_{\frac{3}{4}}}(\Delta u)\). Then it follows from the local energy inequality (2.12):
\[\int_{B_{\frac{3}{4}}}|v(x,s)|^{2}\phi(x,s)dx+2\int_{Q_{\frac{3}{4}}}|\nabla v(x,\tau)|^{2}\phi(x,\tau)dxd\tau \tag{4.52}\]
\[\leq \int_{Q_{\frac{3}{4}}}|v(x,\tau)|^{2}(\partial_{t}\phi+\Delta\phi)dxd\tau+\int_{Q_{\frac{3}{4}}}|v_{B}|^{2}(v-\nabla\pi_{h})\cdot\nabla\phi dxd\tau\]
\[+2\int_{Q_{\frac{3}{4}}}v_{B}\cdot\nabla\nabla\pi_{h}v\phi dxd\tau-2\int_{Q_{\frac{3}{4}}}\nabla\pi_{h}\cdot\nabla\nabla\pi_{h}v\phi dxd\tau\]
\[+2\int_{Q_{\frac{3}{4}}}(\pi_{1}+\pi_{2})v_{B}\cdot\nabla\phi dxd\tau.\]
Using (4.49), (4.50) and (4.51), we estimate in turn the terms on the right-hand side of (4.52), denoted by \(M_{1},\dots,M_{7}\).

**Estimate of \(M_{6}\):** Noting that \(\nabla\pi_{2}=-E_{B_{\frac{3}{4}}}(\Delta u)\) is harmonic and \(\|\pi_{2}\|_{L^{2}(B_{\frac{3}{4}})}\leq C\|\nabla u\|_{L^{2}(B_{\frac{3}{4}})}\) by (2.16), using (3.20) there holds
\[M_{6} \leq Cr^{-2}\|v\|_{L^{\infty}L^{2}(Q_{\rho})}\|\pi_{2}\|_{L^{1}L^{2}(Q_{\rho})}\leq C\left(\frac{\rho}{r}\right)^{2}I(\rho)^{\frac{1}{2}}\|\pi_{2}\|_{L^{1}L^{2}(Q_{\frac{3}{4}})} \tag{4.58}\]
\[\leq C\left(\frac{\rho}{r}\right)^{2}I(\rho)^{\frac{1}{2}}\|\nabla u\|_{L^{1}L^{2}(Q_{\frac{3}{4}})}\leq\delta I(\rho)+C(\delta)\left(\frac{\rho}{r}\right)^{4}\|\nabla u\|_{L^{2}L^{2}(Q_{\frac{3}{4}})}^{2}.\]

**Estimate of \(M_{7}\):** Holder's inequality, Young's inequality and (2.15) imply that
\[M_{7} \leq Cr^{-2}\|v\|_{L^{\infty}L^{2}(Q_{\rho})}\|\pi_{1}-(\pi_{1})_{B_{\rho}}\|_{L^{1}L^{2}(Q_{\rho})} \tag{4.59}\]
\[\leq C(\delta)\left(\frac{\rho}{r}\right)^{6}I(\rho)^{\frac{3}{2}}+\delta\left(\rho^{-\frac{3}{2}}\|\pi_{1}-(\pi_{1})_{B_{\rho}}\|_{L^{1}L^{2}(Q_{\rho})}\right)^{\frac{3}{2}}\]
\[\leq C(\delta)\left(\frac{\rho}{r}\right)^{6}I(\rho)^{\frac{3}{2}}+
\delta I(\rho).\] **Estimate of \(r^{-\frac{3}{2}}\|\pi_{1}-(\pi_{1})_{B_{r}}\|_{L^{1}L^{2}(Q_{r})}\):** Noting that the function \(\pi_{1}\) satisfies \[-\Delta v_{1}+\nabla\pi_{1}=-u\cdot\nabla u,\quad\nabla\cdot v_{1}=0\quad\mbox {in}\quad B_{\frac{3}{4}},\] we have \(-\Delta\pi_{1}=\partial_{i}\partial_{j}(u_{i}u_{j})\) in \(B_{\frac{3}{4}}\). Let \(\zeta\) be a cutoff function which equals \(1\) in \(Q_{\frac{\rho}{2}}\) and vanishes outside of \(Q_{\rho}\) with \(0<4r<\rho<\frac{1}{2}\). Set \(\pi_{1}-(\pi_{1})_{B_{r}}=p_{1}-(p_{1})_{B_{r}}+p_{2}-(p_{2})_{B_{r}}\) with \[p_{1}=\frac{1}{4\pi}\int_{\mathbb{R}^{3}}\frac{1}{|x-y|}\partial_{i}\partial_{j }(u_{i}u_{j}\zeta)(y)dy,\] and \(p_{2}-(p_{2})_{B_{r}}\) is harmonic function in \(Q_{\frac{\rho}{2}}\). For any \(p^{\prime}>1\), according to the Calderon-Zygmund inequality, we deduce that \[\int_{B_{\frac{\rho}{2}}}|p_{1}|^{\prime\prime}dx\leq C\int_{B_{\rho}}|u|^{2p^ {\prime}}dx. \tag{4.60}\] And for the harmonic part, there holds \[\int_{B_{r}}|p_{2}-(p_{2})_{B_{r}}|^{p^{\prime}}dx\leq C\left( \frac{r}{\rho}\right)^{3+p^{\prime}}\int_{B_{\frac{\rho}{2}}}|p_{2}-(p_{2})_{B _{\frac{\rho}{2}}}|^{p^{\prime}}dx \tag{4.61}\] \[\leq C\left(\frac{r}{\rho}\right)^{3+p^{\prime}}\int_{B_{\frac{\rho}{ 2}}}|p_{1}-(p_{1})_{B_{\frac{\rho}{2}}}|^{p^{\prime}}dx+C\left(\frac{r}{\rho} \right)^{3+p^{\prime}}\int_{B_{\frac{\rho}{2}}}|\pi_{1}-(\pi_{1})_{B_{\frac{ \rho}{2}}}|^{p^{\prime}}dx\] \[\leq C\left(\frac{r}{\rho}\right)^{3+p^{\prime}}\int_{B_{\rho}}|u|^{2 p^{\prime}}dx+C\left(\frac{r}{\rho}\right)^{3+p^{\prime}}\int_{B_{\frac{\rho}{2}}}| \pi_{1}-(\pi_{1})_{B_{\frac{\rho}{2}}}|^{p^{\prime}}dx.\] Specially, for \(p^{\prime}=2\), (4.60) and (4.61) imply that \[\|\pi_{1}-(\pi_{1})_{B_{r}}\|_{L^{2}(B_{r})}\leq C\|u\|_{L^{4}(B_{\rho})}^{2}+C \left(\frac{r}{\rho}\right)^{\frac{5}{2}}\|\pi_{1}-(\pi_{1})_{B_{\rho}}\|_{L^{2 }(B_{\rho})}.\] Integration the above inequality with respect to \(t\) from \(-r^{2}\) to \(0\), there holds \[\|\pi_{1}-(\pi_{1})_{B_{r}}\|_{L^{1}L^{2}(Q_{r})}\leq C\|u\|_{L^{2}L^{4}(Q_{ \rho})}^{2}+C\left(\frac{r}{\rho}\right)^{\frac{5}{2}}\|\pi_{1}-(\pi_{1})_{B_{ \rho}}\|_{L^{1}L^{2}(Q_{\rho})}.\] Using Holder's inequality, (3.19) and (3.20), we have \[r^{-\frac{3}{2}}\|\pi_{1}-(\pi_{1})_{B_{r}}\|_{L^{1}L^{2}(Q_{r})} \leq C\left(\frac{\rho}{r}\right)^{\frac{3}{2}}\rho^{-\frac{3}{2}}\| \nabla\pi_{h}\|_{L^{2}L^{4}(Q_{\rho})}^{2}\] \[\quad+C\left(\frac{\rho}{r}\right)^{\frac{3}{2}}\rho^{-\frac{3}{2 }}\|v\|_{L^{2}L^{4}(Q_{\rho})}^{2}+C\frac{r}{\rho}I(\rho)^{\frac{2}{3}}\] \[\leq C\left(\frac{\rho}{r}\right)^{\frac{3}{2}}\|u\|_{L^{p}L^{q}(Q_{1 })}^{2}+C\left(\frac{\rho}{r}\right)^{\frac{3}{2}}I(\rho)+C\frac{r}{\rho}I( \rho)^{\frac{2}{3}},\] which means that \[r^{-\frac{9}{4}}\|\pi_{1}-(\pi_{1})_{B_{r}}\|_{L^{1}L^{2}(Q_{r})}^{\frac{3}{2} }\leq C\left(\frac{\rho}{r}\right)^{\frac{9}{4}}I(\rho)^{\frac{3}{2}}+C\left( \frac{r}{\rho}\right)^{\frac{9}{2}}I(\rho)+C\left(\frac{\rho}{r}\right)^{\frac {9}{4}}\|u\|_{L^{p}L^{q}(Q_{1})}^{3}. 
\tag{4.62}\]
Collecting the above estimates, we arrive at the decay estimate
\[I(\theta_{0}\rho)\leq\frac{1}{4}I(\rho)+CI(\rho)^{\frac{3}{2}}+C\varepsilon_{0}^{2} \tag{4.64}\]
for a suitably chosen \(\theta_{0}\in(0,1)\).

Next, we apply the local energy inequality with a cut-off function \(\phi\) which equals \(1\) in \(Q_{1/2}\). Then
\[\|v\|_{L^{\infty}L^{2}(Q_{\frac{1}{4}})}^{2}+\|\nabla v\|_{L^{2}L^{2}(Q_{\frac{1}{4}})}^{2} \leq C\|u\|_{L^{p}L^{q}(Q_{1})}^{\frac{2\alpha}{\alpha-1}}+C\|u\|_{L^{p}L^{q}(Q_{1})}^{4}\]
\[+C\|u\|_{L^{p}L^{q}(Q_{1})}^{2}+\frac{1}{4}\|\nabla u\|_{L^{2}L^{2}(Q_{\frac{3}{4}})}^{2}.\]
Using Proposition 3.1 again, there holds
\[\|v\|_{L^{\infty}L^{2}(Q_{\frac{1}{4}})}^{2}+\|\nabla v\|_{L^{2}L^{2}(Q_{\frac{1}{4}})}^{2}\leq C\left(\varepsilon_{0}^{2}+\varepsilon_{0}^{\frac{2\alpha}{\alpha-1}}+\varepsilon_{0}^{4}\right). \tag{4.66}\]
For the pressure part of \(I(\frac{1}{4})\), if \(q\geq 4\), noting that \(\|\pi_{1}-\bar{\pi}_{1}\|_{L^{1}L^{2}(Q_{\frac{1}{4}})}\leq C\|u\|_{L^{2}L^{4}(Q_{\frac{3}{4}})}^{2}\) due to (2.15), there holds
\[\|\pi_{1}-\bar{\pi}_{1}\|_{L^{1}L^{2}(Q_{\frac{1}{4}})}\leq C\|u\|_{L^{2}L^{4}(Q_{\frac{3}{4}})}^{2}\leq C\|u\|_{L^{p}L^{q}(Q_{\frac{3}{4}})}^{2}. \tag{4.67}\]
If \(\frac{9}{4}<q<4\), by the embedding inequality, we have
\[\|\pi_{1}-\bar{\pi}_{1}\|_{L^{1}L^{2}(Q_{\frac{1}{4}})}\leq C\|u\|_{L^{2}L^{4}(Q_{\frac{3}{4}})}^{2}\leq C\left(\|\nabla u\|_{L^{2}(Q_{\frac{3}{4}})}^{2}+\|u\|_{L^{p}L^{q}(Q_{\frac{3}{4}})}^{2}\right). \tag{4.68}\]
Combining (4.66), (4.67) and (4.68), using Proposition 3.1, there holds
\[I(\frac{1}{4})\leq C\left(\varepsilon_{0}^{2}+\varepsilon_{0}^{\frac{2\alpha}{\alpha-1}}+\varepsilon_{0}^{4}\right)\leq\varepsilon_{0}^{\frac{3}{2}}:=\varepsilon_{1}.\]
Without loss of generality, we set \(I(\rho_{0})\leq\varepsilon_{1}\) for some \(\rho_{0}>0\). Assume that for any \(s\in\mathbb{N}_{+}\), \(s<n\),
\[I(\theta_{0}^{s-1}\rho_{0})\leq\varepsilon_{1},\]
then for \(s=n\), by (4.64), there holds
\[I(\theta_{0}^{n}\rho_{0}) \leq \frac{1}{4}I(\theta_{0}^{n-1}\rho_{0})+CI(\theta_{0}^{n-1}\rho_{0})^{\frac{3}{2}}+C\varepsilon_{0}^{2}\]
\[\leq \left(\frac{1}{4}+C\varepsilon_{1}^{\frac{1}{2}}+C\varepsilon_{1}^{\frac{1}{3}}\right)\varepsilon_{1}.\]
Choosing \(\varepsilon_{1}\), which depends on \(\varepsilon_{0}\), small enough but fixed, such that \(\frac{1}{4}+C\varepsilon_{1}^{\frac{1}{2}}+C\varepsilon_{1}^{\frac{1}{3}}\leq 1\), we arrive at \(I(\theta_{0}^{n}\rho_{0})\leq\varepsilon_{1}\). By mathematical induction, for any \(n\in\mathbb{N}\),
\[I(\theta_{0}^{n}\rho_{0})\leq\varepsilon_{1}.\]
Then for any \(r\in(0,\frac{1}{4})\), there exists a constant \(n_{0}\) such that \(\theta_{0}^{n_{0}}\rho_{0}<r\leq\theta_{0}^{n_{0}-1}\rho_{0}\).
Then \[r^{-1}\|v\|_{L^{\infty}L^{2}(Q_{r})}^{2}+r^{-1}\|\nabla v\|_{L^{2 }L^{2}(Q_{r})}^{2}\] \[\leq \theta_{0}^{-n_{0}}\rho_{0}^{-1}\|v\|_{L^{\infty}L^{2}(Q_{\theta_ {0}^{n_{0}-1}}\rho_{0}^{-1})}^{2}+\theta_{0}^{-n_{0}}\rho_{0}^{-1}\|\nabla v\|_ {L^{\infty}L^{2}(Q_{\theta_{0}^{n_{0}-1}}\rho_{0}^{-1})}^{2}\] \[\leq C\theta_{0}I(\theta_{0}^{n_{0}-1}\rho_{0})\leq C\varepsilon_{1}.\] By translation invariance of Navier-Stokes equations, we obtain \[\sup_{z_{0}\in Q_{1/4}}\sup_{r\in(0,1/4)}\{r^{-1}\|v\|_{L^{\infty}L^{2}(Q_{r}) }^{2}+r^{-1}\|\nabla v\|_{L^{2}L^{2}(Q_{r})}^{2}\}\leq C\varepsilon_{1}. \tag{4.69}\] Next we prove the regularity. By triangle inequality, there holds \[r^{-2}\|u\|_{L^{3}(Q_{r})}^{3}\leq r^{-2}\|v\|_{L^{3}(Q_{r})}^{3}+r^{-2}\| \nabla\pi_{h}\|_{L^{3}(Q_{r})}^{3}.\] It is sufficient to estimate the term \(r^{-2}\|\nabla\pi_{h}\|_{L^{3}(Q_{r})}^{3}\), since (4.69) implies the smallness of the term \(r^{-2}\|v\|_{L^{3}(Q_{r})}^{3}\), and \[r^{-2}\|\nabla\pi_{h}\|_{L^{3}(Q_{r})}^{3} = r^{-2}\int_{I_{r}}\|\nabla\pi_{h}\|_{L^{3}(B_{r})}^{3}\leq r^{-2} \int_{I_{r}}r^{3}\|\nabla\pi_{h}\|_{L^{2}(B_{\frac{3}{2}})}^{3}\] \[\leq Cr^{-2}\int_{I_{\frac{3}{2}}}r^{3}\|u\|_{L^{2}(B_{\frac{3}{2}})} ^{3}\leq Cr\|u\|_{L^{\infty}L^{2}(Q_{\frac{3}{2}})}^{3}.\] Note that \(u\) is local suitable weak solution, which means that \(\|u\|_{L^{\infty}L^{2}(Q_{\frac{3}{2}})}^{3}<+\infty\). Then there exists \(r_{0}>0\) such that \(Cr_{0}\|u\|_{L^{\infty}L^{2}(Q_{\frac{3}{2}})}^{3}\leq C\varepsilon_{1}^{\frac {3}{2}}\), and we have \[r^{-2}\|u\|_{L^{3}(Q_{r})}^{3}\leq C\varepsilon_{1}^{\frac{3}{2}},\quad\forall \ 0<r\leq r_{0},\] which means that for all \(z_{0}\in Q_{\frac{r_{0}}{2}}\) by Wolf's result (1.5) or Wang-Wu-Zhou's result (1.8). ### Case II: \(p\geq 3\) At this time, there holds \(\frac{3}{2}<q\leq 9\), since \(1\leq\frac{2}{p}+\frac{3}{q}<2\). **Case of \(3\leq q\leq 9\).** It follows that \[\|u\|_{L^{3}L^{3}(Q_{\frac{1}{2}})}\leq C\|u\|_{L^{p}L^{q}(Q_{\frac{1}{2}})}\] which implies the regularity due to (1.5) or (1.8). **Case of \(\frac{3}{2}<q<3\).** It follows from Proposition 3.1 that \[\|u\|_{L^{p}L^{\frac{6p}{3^{p-2}}}(Q_{\frac{1}{2}})}+\|\nabla u\|_{L^{2}L^{2}( Q_{\frac{1}{2}})}\leq C\left(\|u\|_{L^{p}L^{q}(Q_{1})}+\|u\|_{L^{p}L^{q}(Q_{1})}^{2} +\|u\|_{L^{p}L^{q}(Q_{1})}^{\frac{\alpha}{\alpha-1}}\right)\] where \(\alpha=\frac{2}{\frac{2}{p}+\frac{3}{q}}\). Thus \[\|u\|_{L^{3}L^{3}(Q_{\frac{1}{2}})}\leq C\|u\|_{L^{3}L^{\frac{18}{5}}(Q_{\frac {1}{2}})}\leq C\left(\|u\|_{L^{p}L^{q}(Q_{\frac{1}{2}})}+\|\nabla u\|_{L^{2}L^ {2}(Q_{\frac{1}{2}})}\right).\] Apply Wolf's result again. The proof is complete. **Acknowledgments.** W. Wang was supported by NSFC under grant 12071054, National Support Program for Young Top-Notch Talents and by Dalian High-level Talent Innovation Project (Grant 2020RD09). D. Zhou was supported by NSFC under grant 12071113.
2302.13125
Non-Intrusive Driver Behavior Characterization From Road-Side Cameras
In this paper, we demonstrate a proof of concept for characterizing vehicular behavior using only the roadside cameras of the ITS system. The essential advantage of this method is that it can be implemented in the roadside infrastructure transparently and inexpensively and can have a global view of each vehicle's behavior without any involvement of or awareness by the individual vehicles or drivers. By using a setup that includes programmatically controlled robot cars (to simulate different types of vehicular behaviors) and an external video camera set up to capture and analyze the vehicular behavior, we show that the driver classification based on the external video analytics yields accuracies that are within 1-2\% of the accuracies of direct vehicle-based characterization. We also show that the residual errors primarily relate to gaps in correct object identification and tracking and thus can be further reduced with a more sophisticated setup. The characterization can be used to enhance both the safety and performance of the traffic flow, particularly in the mixed manual and automated vehicle scenarios that are expected to be common soon.
Pavana Pradeep Kumar, Krishna Kant, Amitangshu Pal
2023-02-25T17:22:49Z
http://arxiv.org/abs/2302.13125v1
# Non-Intrusive Driver Behavior Characterization

###### Abstract

In this paper, we demonstrate a proof of concept for characterizing vehicular behavior using only the roadside cameras of the ITS system. The essential advantage of this method is that it can be implemented in the roadside infrastructure transparently and inexpensively and can have a global view of each vehicle's behavior without any involvement of or awareness by the individual vehicles or drivers. By using a setup that includes programmatically controlled robot cars (to simulate different types of vehicular behaviors) and an external video camera set up to capture and analyze the vehicular behavior, we show that the driver classification based on the external video analytics yields accuracies that are within 1-2% of the accuracies of direct vehicle-based characterization. We also show that the residual errors primarily relate to gaps in correct object identification and tracking and thus can be further reduced with a more sophisticated setup. The characterization can be used to enhance both the safety and performance of the traffic flow, particularly in the mixed manual and automated vehicle scenarios that are expected to be common soon.

**Keywords:** Intelligent Transportation Systems, Reasoning, Event Logic, Smart Cities, Cyber-Physical Systems.

## I Introduction

Road accidents are responsible for \(\sim\)5 million severe injuries and \(\sim\)50K deaths annually in the USA ([https://www.bankrate.com/insurance/car/car-crash-statistics/](https://www.bankrate.com/insurance/car/car-crash-statistics/)). According to the National Highway Traffic Safety Administration (NHTSA), at least one of the following risky behaviors was evident in 45% of fatal crashes involving passenger vehicles: speeding, alcohol impairment, or not keeping a safe distance, etc. [1]. Intelligent Transportation Systems (ITS) aim to reduce accidents and make traffic flow smoother by introducing capabilities to monitor traffic and communicate with vehicles. Characterizing driver behavior is essential for improving traffic safety and performance. It should be noted that the term "driver" does not necessarily imply a manually operated vehicle. As the market for automated vehicles develops and becomes significant, it is anticipated that vehicles from different manufacturers will have distinct personalities, influenced in part by the design of the software and in part by the "personalization" that users will desire [2]. Some users may be comfortable with automated driving aggressively, while others prefer a more conservative driving style. This aspect also depends on familiarity with automated driving and driving conditions. Therefore, in this paper, driver behavior refers to vehicular behavior, although the type of feedback provided will still be different for manual vs. automated vehicles.

### _Vehicular and Driver Behavior Characterization_

There is a tremendous amount of work on vehicular behavior characterization, mainly regarding monitoring vehicular kinematic parameters. Vehicular behavior can be measured either from the sensors in the car through the OBD (on-board diagnostics) port or from a device carried/mounted in the car. Typical behavioral measurements include acceleration, braking, distance to the next car, speed and speed variability, lane change behavior, etc. [3].
In addition, there is a large body of work specifically targeting the behavior of the human driver in terms of direct actions (use of brake, accelerator, or steering wheel), alertness, and physical health (e.g., eye or head movements, facial expressions, lack of focus on the road, etc.) to determine if the driver is tired, drowsy, etc. [4]. Other sensors may also be used for such things as analyzing breathing patterns and breath smell (for alcohol), and may even use contact sensors such as those for measuring heart rate, brain signals, skin conductance, etc. With their increasing array of sensors and wearable devices such as smartwatches, smartphones can easily measure many vehicular and driver parameters. The measurements are generally used to train a classifier to categorize the driver. However, such methods have several inherent problems. First, the measurements require either the deployment of special gadgets in the vehicles or depend on the driver carrying or wearing devices with appropriate sensors. Using devices carried by a driver to characterize driver or vehicular behavior is impractical and has little value beyond a proof of concept. While the vehicles do monitor all of the kinematic parameters and thus could estimate the driver behavior, this has limitations since each vehicle only has a local view. The behavior of a group of vehicles traveling together or in close proximity is interdependent, and these interdependencies make it challenging to identify the cause-and-effect relationships among the vehicles. For example, a sudden slowing of a vehicle will cause the vehicle behind it to also slow suddenly. While the sudden slowing of the front vehicle may be undesired (e.g., aggressive driving), that of the vehicle behind is highly desired for maintaining safety. Another serious problem with these methods is the awareness of the drivers that their driving style or physical condition (e.g., drowsiness, drunkenness) is being monitored. This intrusiveness would surely change driver behavior; thus, the monitoring will be inaccurate. Furthermore, many drivers may refuse such monitoring or try to defeat it.

### _Our contributions:_

In this paper, we demonstrate that it is possible to characterize vehicular behavior very accurately using only the roadside cameras of the ITS system. To the best of our knowledge, this is the first such study; its key advantage is that it is entirely non-intrusive, inexpensive, and does not impact driving behavior in any way. Also, the global view can deduce the more obvious cause-and-effect relationships between adjacent vehicles (e.g., whether the car decelerated on its own or due to diminishing distance from the vehicle ahead). Furthermore, the minimal error rate of our video analytics system is attributable to instances where the vehicular ID is not tracked correctly or the camera is too far from the vehicle. These errors can be further minimized by more elaborate feature-based tracking of vehicles (e.g., by color, shape, etc.) and by having associations between multiple cameras. The monitoring can be used to provide advice to the vehicles in terms of enhancing safety and traffic performance. The scheme becomes even more effective with automated vehicles since they can help smooth out the traffic by behaving in a specific way.

### _Paper outline:_

The remainder of the paper is organized as follows. Section II discusses the related work. Section III presents our proposed framework for driver behavior characterization.
Experimental evaluation and results are summarized in section IV. Section V then concludes the paper. ## II Related work Different methods that have been studied for identifying driver behavior can be classified into broadly two categories: using (a) in-vehicle sensor or video data analysis and (b) analyzing driver's physiological data. In the following, we summarize them separately. **Sensor or video data analysis:** Authors in [5] have developed a drunk driving and alerting system using a mobile phone that can be installed inside a car; the phone can record the acceleration samples from the phone sensors and compare them with the drunk driving patterns. In [6], the authors have proposed an aggressive driving behavior detection system that uses data from multiple smartphone-based sensors (i.e., accelerometer, gyroscope, magnetometer, GPS, video) and recognizes abnormal driving behaviors using the Dynamic Time Warping (DTW) algorithm. Similarly, the authors in [7] have developed a driver behavior classification and sharing system based on vehicle-mounted acceleration samples. Authors in [8] have used a public dataset named UAH-DriveSet [9] to analyze different in-vehicle sensory data and classify driver behaviors using different machine learning based classification methods. In [10], the authors have studied driver fatigue detection using eye tracking, where distances between the intensity changes in the eye area are measured to determine whether the eyes are closed or not. Similar other studies [11, 12] have analyzed various in-vehicle video data like driver's facial expressions, eye or head movements, etc., for determining driver's behaviors. **Physiological data analysis:** Psycho-physical states of drivers using respiration rates and ECG signals can also be studied through wearable devices, which also determine drivers' stress or distraction levels. In [13], the authors have shown that significant electroencephalographic and psychological changes (like increased delta and theta activity, lower heart rate, changes in blank rate, etc.) occur during fatigue. In [14], the authors have used psychological features, such as EEG and ECG signals to classify the driver's behavior into four categories (alert, mild fatigue, deep fatigue, and drowsiness) using a support vector machine (SVM). Similar studies on physiological data analysis for driver's state detection are reported in [15]. The existing works for driver behavior detection require either installation of in-vehicle sensors and cameras or require on-body wearable sensors, which bring additional installation overhead. As opposed to the existing literature, our proposed framework characterizes driver behavior from roadside camera frames, making the solution inexpensive, non-intrusive, and easily deployable. ## III Framework for Driver Behavior Prediction ### _Categorizing Driver behavior_ Driver behavior is important from the safety perspective, but there is no standard way to characterize drivers. In this paper, we adopt a 3-way classification, where a driver is designated as safe, aggressive, and distracted [16]. Please note that our "safe" category includes any behavior that is not considered aggressive or distracted; therefore, there is no undetermined behavior. For this categorization, we define a small set of "micro-behaviors" and then characterize the driver based on the combination of micro-behaviors observed. 
The micro behaviors include: (a) _Weaving_, or driving alternatively towards one side of the lane and then the other; (b) _Sudden Steer_ where the driver makes an abrupt redirection when driving along a straight course; (c) _Hard Braking_; (d) _Lane Drifting_, or not keeping in the center of the lane while driving; (e) _Straddling_, or driving while staying close to one side of the lane; and (f) _Over speeding_, or driving significantly above the speed limit. By performing object detection and estimating driving parameters over traffic videos as well as in vehicular data, we observe these micro-behaviors and classify the driver's behavior into one of three categories. For example, an aggressive driver can be identified by a combination of the following behaviors: weaving through traffic, driving at a higher speed (speeding), following other vehicles too closely with rapid steering, and applying harsh braking. In this paper, we formulate the driver behavior detection problem as a boolean satisfiability problem so that the popular SMT (Satisfiability Modulo Theory) based tools can be used along with suitable theories. We make use of a popular framework called _Event Calculus_ (EC) that introduces the concept of Events, which are actions that occur at a specific point in time, and Fluents, which are entities whose state changes in response to the occurrence of an event or action. EC provides constructs to reason about situations, events, and changes in time, allowing a precise specification of time relationships between situations and events, which is essential for describing the various driving style patterns exhibited by drivers. In addition, concurrent actions can also be specified in EC. ### _System Architecture_ We assume that the camera deployment is such that all vehicles on the roadway are visible without excessive image distortion. Given the increasing processing power in smart cameras, each camera can do the object detection and tracking tasks discussed here rather than sending the raw video feed to the next level known as _Road-Side Edge Controller (REC)_. In our earlier work, we designed a lightweight object recognition and tracking algorithm called YLLO (You Look Less than Once) that can run in the cameras and avoid transmission of redundant frames to RECs [17]. The RECs receive video streams from multiple cameras along a road segment and use them to monitor driver behaviors and associated anomalies. Further processing, including perspective transformation and estimation of orientations and speeds of the objects, may be done by the camera itself or by the RECs. The REC can then build a spatio-temporal logic model of the situation that includes all "facts" of different driver behaviors and the conditions leading to near-miss accidents along with the supporting "theories" (i.e., Newton's laws, arithmetic, etc.) Fig. 1 depicts our overall architecture, which is comprised of two different frameworks. One framework is based on Deep Learning (DL) and reasoning that runs on roadside cameras receiving traffic videos as input. The other is based on logical reasoning that runs on individual vehicles receiving in-vehicle parameters as inputs. The initial stage of the roadside camera framework is a lightweight object detection and tracking model based on a convolutional neural network (CNN) model called YLLO that we have developed for video analysis [17]. YLLO performs processing of video sequences in a highly efficient manner. 
Fig. 1: Driver Behavior Characterization Framework.

The second stage for each detected object (e.g., vehicles, pedestrians, etc.) is a spatio-temporal logic-based reasoning system that captures the relative movements of the objects in real-time to identify various driver behaviors. The reasoning framework operating on individual vehicles consists of a formal specification of driver behavior used by an event recognition tool.

### _YLLO based Object Detection and Tracking_

YLLO is a lightweight object detection technique based on YOLOv6 and is optimized for continuous video streams by utilizing redundancy to identify the "only" essential frames. The previous version of YLLO as in [17] was based on YOLOv4; for this work, YLLO has been updated to a newer and more efficient version among all versions of YOLO, YOLOv6 [18]. YLLO is a three-stage process that begins with a scene change detection algorithm and progresses to object detection via YOLOv6. The Simple Online and Real-time Tracking with a Deep Association Metric (Deep-SORT) algorithm assigns a tracker to each detected object or multiple objects. YLLO decouples classification and regression tasks to eliminate redundant objects between the frames. Additionally, before sending frames to object detection, the scene change detection stage generates Color Difference Histograms (CDH) for edge orientations, where edge orientations are determined using the Laplacian-of-Gaussian edge detection framework.

### _Spatio-Temporal Reasoning_

In our work, we use an efficient dialect of the Event Calculus, termed "Event Calculus for run-time Reasoning" (RTEC) [19]. RTEC is an open-source implementation of the EC in Prolog and uses LTL (linear time logic) with integer time points. RTEC implements novel techniques for identifying complex events from a set of micro-behaviors and is scalable to large volumes of complex events. We can define micro-behaviors in RTEC as rules that define the event instances using the predicates "happens at" (hA) and "happens for" (hF). The _fluents_, which are time-varying properties, and the effects of events on _fluents_ are defined using the _inA_ and _tA_ predicates. The value of _fluents_ at any time point is defined using the _hoA_ and _hoF_ predicates. If \(F\) is a variable ranging over _fluents_, the term _F_=_V_ denotes that variable \(F\) has a value \(V\). There also exist Boolean fluents with values _true_ or _false_. Table I shows the predicates used in the RTEC tool.

The characterization of real-world behavior often involves fuzziness, whereas a Boolean logic framework requires an assertion to be either true or false. It is not straightforward to extend RTEC to fuzzy or probabilistic logic; therefore, we model the fuzziness indirectly by introducing weights. Recall that we classify driver behavior in terms of micro-behaviors, some of which are essential characteristics of certain driver behavior. For example, sudden braking can be considered a key characteristic of aggressive driving. Thus we consider certain micro-behavioral assertions as _hard_ in that they must hold, whereas others can be considered as _soft_ or optional. Let \(S_{k}\) denote the set of soft assertions for driver behavior \(k\). We define a weight, denoted as \(w_{ik}\), for each member \(i\) of the set \(S_{k}\). Then, the driver behavior \(k\) will be recognized as (a) all hard assertions (micro-behaviors) holding, and (b) \(\sum_{i\in S_{k}^{\prime}}w_{ik}>W_{k}\) for some threshold \(W_{k}\), where \(S_{k}^{\prime}\subseteq S_{k}\) denotes the subset of soft assertions that hold.
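For illustration, the sketch below shows how such a hard/soft check could be scored once the micro-behaviors for a vehicle have been recognized; the micro-behavior names, weights, and threshold are hypothetical placeholders, and in our framework this evaluation is actually carried out by the WPM2 solver described below.

```python
# Minimal sketch of the hard/soft micro-behavior scoring described above.
# Micro-behavior names, weights, and the threshold W_k are illustrative only;
# in the framework these assertions are evaluated by RTEC and a WPM2 solver.

def matches_behavior(observed, hard, soft_weights, threshold):
    """Return True if all hard assertions hold and the satisfied soft weights exceed the threshold.

    observed     : set of micro-behaviors detected for a vehicle in a time window
    hard         : set of micro-behaviors that must all hold (hard assertions)
    soft_weights : dict mapping optional micro-behaviors to weights w_ik
    threshold    : W_k
    """
    if not hard.issubset(observed):                 # (a) every hard assertion must hold
        return False
    score = sum(w for mb, w in soft_weights.items() if mb in observed)
    return score > threshold                        # (b) sum of satisfied soft weights

# Hypothetical profile for "aggressive" driving:
aggressive_hard = {"hard_braking"}
aggressive_soft = {"weaving": 0.4, "over_speeding": 0.4, "sudden_steer": 0.3}
observed = {"hard_braking", "weaving", "over_speeding"}
print(matches_behavior(observed, aggressive_hard, aggressive_soft, threshold=0.5))  # True
```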
Note that the weights need not be static but depend on various spatio-temporal factors and context (e.g., day vs. night time, roads with different speed limits, etc.). A weight change would require pausing all current condition evaluations, changing all weights that need to be changed simultaneously, and then restarting the evaluation. To handle hard/soft conditions, we can use an extension to the Boolean Satisfiability problem known as _Weighted Partial MaxSAT_ (WPM2). In WPM2, each clause is designated as either _hard_ or _soft_ with a given weight. We then have an optimization problem to find an assignment that satisfies all hard clauses and minimizes the total weight of soft clauses. The SAT core returned by the WPM2 solver confirms the presence of the driver behavior.

### _Defining Events and Fluents in RTEC_

In the second stage of the framework, the RTEC tool receives as input event calculus (EC) predicates representing time-stamped micro-behaviors detected on individual video frames or from the recorded in-vehicle parameters, as shown in Fig. 1. For example, the object's bounding box coordinates can define the appearance of a static object or multiple moving objects in each frame. We also have the object's angle/orientation and the direction in which they are moving, as well as lane drifting, which indicates if the vehicle is not staying in the center of the lane. Micro-behaviors such as acceleration, braking, lane change to the left or right, etc., are represented as events in EC that are defined along with their associated timestamps, which indicate the point in time at which the activity occurred. The _hA_ predicate establishes this type of input. For instance, _hA(hardBraking(id6), 60)_ indicates that an object (id6) engaged in hard or sudden braking at video frame 60, which is determined by checking whether the deceleration value goes beyond a predefined threshold. Some of the micro-behaviors are represented as fluents in the EC. We use the _inA_ and _tA_ predicates for expressing the conditions under which these fluents initiate and terminate a specific driver behavior described above. The micro-behaviors represented as EC events are defined mostly with the _hF_ predicate, which can also compute the associated intervals. For example, _hF(laneDrifting(id3) = true, [(0, 60), (210, 280)])_ indicates that object id3 was not keeping to the center of the lane during the intervals (0, 60) and (210, 280). A few examples of events and fluents defined along with their meaning are shown in Table II. After defining events and fluents, we must define an initiation and termination map for each defined fluent in the system, indicating which events initiate and terminate which fluents. The next step is to specify the relation between fluents and events in the form of rules. For example, the initiation and termination map for fluent _overSpeed(v)_ is shown in Definition 1. In the definition, _overSpeed_ and _speed_ are input events, and _lane1_ and _lane2_ are the input fluents. _th_ is a temporal predicate indicating a numerical threshold on driving parameters, and in this case it represents the user-specified speeding threshold. The ruleset defined states that _overSpeed(v)_ is a Boolean fluent, which is invoked when a _speed_ event is detected. Further, the vehicle is present in lane1 and not in lane2, which is detected by fluents _lane1_ and _lane2_ respectively, and the momentary speed of the vehicle is more than the user-specified speeding threshold. The fluent _overSpeed(v)_ is terminated when the vehicle's
speed is smaller than the speeding threshold. The same definition applies when the vehicle is located in _lane2_, as indicated by fluent _lane2_ and the negation of fluent _lane1_.

### _Characterizing Driver Behavior_

As discussed in III-A, each category of driver behavior is a combination of different types of micro-behaviors, wherein all micro-behaviors defining a driver's behavior must be satisfied. For instance, the micro-behaviors that represent aggressive driver behavior as RTEC events or fluents include events such as suddenSteer and weaving, as well as fluents such as laneChange, atLane1, atLane2, overSpeed, and hardBraking. The aggressive driving behavior is represented as a boolean fluent defined as shown in Definition 2. Similarly, the distracted driver behavior is a boolean fluent defined as a conjunction of events like laneDrifting and straddling, and fluents like laneChange, atLane1, atLane2, slowSpeed and normalBraking.

_in(aggressiveDriving(v) = true, T) \(\leftarrow\) hoA(atLane1(v), T) \(\land\) \(\lnot\) hoA(atLane2(v), T) \(\land\) hoA(laneChange(v), T) \(\land\) hoA(overSpeed(v), T) \(\land\) hoA(weaving(v), T) \(\land\) hA(suddenSteer(v), T) \(\land\) \(\lnot\) hoA(safeDriving(v), T) \(\land\) \(\lnot\) hoA(distractedDriving(v), T)._

Definition 2. Characterization of _Aggressive_ Driver Behavior

To express the dependencies when modeling the driver behavior, we define derived events, i.e., the events that occur due to the change in state or value of another fluent and/or the occurrence of another event (e.g., the effect of a normal or harsh braking on a vehicle's speed). These events indicate when specific actions occur in the traffic due to a combination of particular conditions. If **R** denotes the set of rules passed to the WPM2 solver, then before invoking the solver we find the relevant rules corresponding to derived events based on the current ongoing events [20]. We identify the rules or relations \(\textbf{R}^{*}\subseteq\textbf{R}\) that lead to derived events based on the current events, either directly or indirectly. The dependency is expressed via a dependency graph \(G\), where the vertices denote the rules/relations, and the (directed) edges denote the dependency between them. We then take the transitive closure of \(G\) (say \(G^{\prime}\)) using the Floyd-Warshall algorithm. Thus, in \(G^{\prime}\) an edge \(i\!\rightarrow\!j\) denotes that \(j\) is directly or indirectly dependent on \(i\). Finally, the rules \(\textbf{R}^{*}\) expressed in CNF are then passed on to the WPM2 solver [21].

## IV Experimental Evaluation

In this section, we evaluate our framework on our collected PiCarX dataset. The effectiveness of our framework is determined by the following metrics: (a) the accuracy and (b) the percentage of errors in driver behavior recognition. The experiments were performed on a computer with Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz, 32 GB RAM, and 1 TB SSD and SWI-Prolog 8.2.3.

### _Modeling Vehicular Driving_

The vehicular traffic on a roadway has been characterized by numerous models starting in the early 1950s. The models can be microscopic (i.e., model the behavior of each vehicle) or macroscopic (i.e., model the behavior of the traffic as a whole). Numerous microscopic (or "car-following") models exist, which are reviewed in a recent article [22] that also examines the incorporation of human factors into these models.
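The following is a minimal sketch of this rule-selection step; the rule names and dependency edges are hypothetical, and the actual rules are EC clauses handled in CNF by the solver.

```python
# Sketch of selecting the relevant rule subset R* via the transitive closure of the
# rule dependency graph, computed with Floyd-Warshall. Rule names/edges are hypothetical.

def transitive_closure(nodes, edges):
    """Boolean reachability: reach[i][j] is True if j depends (directly or indirectly) on i."""
    reach = {u: {v: (v in edges.get(u, set())) for v in nodes} for u in nodes}
    for k in nodes:                       # Floyd-Warshall over boolean reachability
        for i in nodes:
            for j in nodes:
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

nodes = ["speed", "overSpeed", "laneChange", "aggressiveDriving"]
edges = {"speed": {"overSpeed"},              # edge i -> j: rule j depends on rule i
         "overSpeed": {"aggressiveDriving"},
         "laneChange": {"aggressiveDriving"}}
reach = transitive_closure(nodes, edges)

current_events = {"speed", "laneChange"}      # events observed in the current window
relevant = {j for i in current_events for j in nodes if reach[i][j]} | current_events
print(relevant)   # rules reachable from the current events would be passed to the solver
```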
Most models are continuous time, continuous space type, and for a single lane, express acceleration of a vehicle as a function of its current speed, distance to and speed of next vehicle (and sometimes the previous vehicle as well), etc. Most models also introduce some random slowdowns to model human behavior and to break the strict vehicle following behavior. Generally, lane change behavior is tacked on to the single-lane models with a set of rules concerning when lane change can occur; however, such models quickly become very difficult to analyze mathematically. In our implementation, we used the so-called cellular automaton (CA) model, which simplifies the introduction of complex rules and simulation implementation by discretizing both time and space. In the CA model, a roadway is seen as a sequence of cells of some fixed size. At any point in time, a cell could be empty or occupied. That is, a cell can be occupied by only one vehicle, although it is possible to model large vehicles that occupy multiple consecutive cells. In each time step, a vehicle may move over some integral number of cells depending on its speed and the availability of the cells ahead. The CA model makes it easy to introduce complex rules that account for various situations, including the presence of signals or other traffic control mechanisms. Complex lane change rules can also be coded; for example, coding a lane change decision based on the number of free cells ahead and behind in both the source and target lanes. It is typical to assume that a lane change always occurs in a single step, and the discrete model is well suited for this kind of discontinuous change. CA model makes it easy to provide various forms of driving personalities to a vehicle in terms of vehicle following, lane change, safe/unsafe vehicle position in a cell, etc. ### _Experimental Setup and Dataset Collection_ One significant difficulty in conducting this work was the scarcity of real datasets that provide both the vehicular motion parameters and the external videos that we can analyze. This paper can be considered a proof of concept (PoC) of characterizing the vehicular behavior remotely and non-intrusively without any involvement of the vehicle/driver or deployment of any further instrumentation in the vehicles. In particular, this paper aims to study how accurately this can be done by comparing it against the ground truth. Unfortunately, it is impossible to conduct such a study with actual vehicles on the road because of safety concerns since we need the vehicles to perform unsafe maneuvers. Also, to compare the video monitoring results against the ground truth, we need to tap into the OBD and obtain detailed vehicular information. In order to get around these difficulties, we set up an entire infrastructure using the automated toy cars, known as "PiCarXs" [23] in an indoor basement environment. Each PiCarX carries a raspberry pi board for programmatically controlling the car. We assembled six such cars and used a logically centralized controller to independently control each car's behavior remotely over the WiFi link. We also set up a "roadside" external camera to record the videos of the cars independently and analyze those videos in real time to determine and classify the behavior of each PiCarX. Each PiCarX used the 2-lane extension of the basic Cellular Automata (CA) car-following model [24] and further updated the model to introduce different types of driver behaviors. 
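To make the update rules concrete, the sketch below implements one synchronous step of a single-lane, Nagel-Schreckenberg-style CA with a per-vehicle "personality" (desired speed and random-slowdown probability). The cell count, speed limits, and parameter values are illustrative assumptions, not the exact rules programmed into the PiCarX cars.

```python
import random

# Sketch of one synchronous update of a single-lane CA (Nagel-Schreckenberg style).
# Cell count, speed limits, and the per-driver parameters are illustrative only.

def ca_step(positions, speeds, profiles, n_cells):
    """positions/speeds: per-vehicle cell index and speed (cells per step);
    profiles: per-vehicle dict with desired speed 'v_max' and slowdown probability 'p_slow'."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_speed = positions[:], speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % n_cells   # free cells ahead (ring road)
        v = min(speeds[i] + 1, profiles[i]["v_max"])            # accelerate toward desired speed
        v = min(v, gap)                                         # do not run into the leader
        if v > 0 and random.random() < profiles[i]["p_slow"]:   # random slowdown ("human factor")
            v -= 1
        new_speed[i] = v
        new_pos[i] = (positions[i] + v) % n_cells               # move, wrapping around the ring
    return new_pos, new_speed

# Hypothetical profiles: an aggressive and a conservative vehicle on a 35-cell lane.
profiles = [{"v_max": 5, "p_slow": 0.05}, {"v_max": 3, "p_slow": 0.3}]
pos, spd = [0, 10], [1, 1]
for _ in range(10):
    pos, spd = ca_step(pos, spd, profiles, n_cells=35)
print(pos, spd)
```

Because the gaps are computed from the positions at the start of the step and each vehicle moves at most that many cells, the synchronous update cannot produce collisions.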
The software control of the vehicles can read their current speed and other parameters via appropriate sensors. We can then analyze the driver behavior for both the in-vehicle sensing units and the roadside camera units using a mixture of CNN models and logic-based behavior analysis. Finally, we validate our claim of analyzing driver behavior "only" from the roadside units by correlating the in-vehicle data and the roadside data. Thus, we collected such a dataset using our PiCarX setup. We captured High Definition Video Streams (HDVS) at 30 FPS with PiCarX cars imitating different driver behavioral patterns on the road, as discussed in section III-A. By varying speed limits, we also capture the driving traffic patterns on different road types, such as highways versus local roads. Along with capturing the video streams, we also record the eight in-vehicular driving parameters from all six robot cars, including vehicle orientation, acceleration, deceleration, braking, steering angle, lateral position, and lane change maneuvers to the left and right lanes. Fig. 2(a) and (b) show the front and side views of the PiCarX used in the experiments, respectively. The experimental setup, shown in Fig. 3, consists of two distinct lanes with a single direction of traffic flow. We use a tripod and a Sony FDR-AX33 camcorder to record the simulation videos. We conducted five video simulations in our controlled environment, with each video being around 1-2 minutes long and averaging around 10,000 video frames. In addition, to simulate the in-vehicle framework as discussed in section III, we have approximately 2700 seconds of time-series data for all of the required in-vehicle driving parameters discussed above. In addition to the simulated videos, we select approximately 48 real-world traffic videos from our other dataset, the TU_DAT dataset, to evaluate the effectiveness of our proposed work. The TU_DAT dataset was used in our previous work [25] to predict and resolve anomalous situations in CCTV traffic videos; it contains a diverse collection of accident videos collected in challenging environments. Table III shows the details of the data collected and the number of samples.

### _Implementation of Cellular Automata (CA) Model_

We implement the CA car-following model in each PiCarX and then extend the model to implement different driving behaviours. A classical CA model is a uniform lattice of cells, each representing an identical finite automaton with a state and a transition function; the transition function of each cell takes as input its own state and the states of a set of neighboring cells defined by a time- and space-invariant geometrical pattern. Starting from an initial condition, the CA evolves by repeatedly activating all transition functions simultaneously. We use a generic, multi-layered and complete cellular automaton simulation engine in Python called "Cellular Automata General Environment" (CAGE) [26]. The environment supports multilayered grids, which provide several ways of forming neighborhoods. It enables the definition of transition rules that can be formulated algorithmically to reflect real-time driver behavior. Furthermore, the rules can be specified at different spatial levels and can change as a function of space and time, making it suitable for use alongside a logical reasoning tool. In our two-lane experimental setup, we implement abstractions of the CA model, road topology, and neighborhood information using CAGE to represent the various driver behaviors and a two-lane car-following model.
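To make the car-following dynamics concrete, the sketch below implements one Nagel-Schreckenberg-style update step for a single lane in Python (acceleration, gap keeping, random slowdown, movement). This is only an illustration of the kind of rule encoded in such a CA; it is not the CAGE-based two-lane implementation used in our experiments, and the lane length, maximum speed, and slowdown probability are assumed, illustrative values.

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max=3, p_slow=0.2, rng=None):
    """One Nagel-Schreckenberg-style update of a single-lane ring road.
    pos: cell indices of the vehicles; vel: their current speeds (cells per step)."""
    rng = rng if rng is not None else np.random.default_rng()
    order = np.argsort(pos)                         # process vehicles in spatial order
    pos, vel = pos[order], vel[order]
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        # Free cells between vehicle i and the vehicle ahead (with ring wrap-around).
        gap = (pos[(i + 1) % n] - pos[i] - 1) % road_len
        new_vel[i] = min(vel[i] + 1, v_max)         # acceleration
        new_vel[i] = min(new_vel[i], gap)           # keep a safe gap (no collisions)
        if new_vel[i] > 0 and rng.random() < p_slow:
            new_vel[i] -= 1                         # random slowdown (human factor)
    new_pos = (pos + new_vel) % road_len            # synchronous movement
    return new_pos, new_vel

# Illustrative run: six vehicles on a 35-cell lane; all parameter values are assumed.
rng = np.random.default_rng(0)
pos = np.sort(rng.choice(35, size=6, replace=False))
vel = np.zeros(6, dtype=int)
for _ in range(10):
    pos, vel = nasch_step(pos, vel, road_len=35, rng=rng)
print(pos, vel)
```

In the actual setup, lane-change rules and behavior-specific parameters (e.g., for aggressive or distracted driving) are layered on top of this basic step as additional CAGE transition rules.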
An address in CAGE is a tuple of one or more integers representing the location of a cell within a given topology. Topologies determine the arrangement of cells in a network, and neighborhoods encapsulate the translation of addresses into a list of their neighbors, taking into account the topology they are connected with. The Map is the high-level class used by the automaton to perform operations on the cellular network; it is a combination of the Topology and Neighborhood classes. For the purposes of our experiments, we implemented a line map topology with a radial neighborhood for each lane, with each lane having a total of 35 cells.

### _PiCarX Object Detection_

As discussed in III-C, the roadside cameras or RECs use the YLLO-based object detection and tracking framework and a logical-reasoning-based method to classify the driver's behavior into one of three categories. The YLLO model running on the cameras must be able to detect and classify the robot cars, since we are using robot cars to simulate road traffic consisting of different driver behaviors. To train YLLO to recognize the robot cars, we constructed our own training and testing set by selecting positive samples from the recorded videos. For negative samples, we utilized frames from the toy car experiment videos used in our previous work [17]. Overall, we have a total of 18,550 samples to train the model. To reduce the workload of annotating the dataset, we used an auto-annotation tool [27], which is based on a semi-supervised architecture in which a model trained with a small amount of labeled data is used to produce the labels for the rest of the dataset.

Fig. 2: PiCarX used in the experiments.

Since YLLO is a CNN-based object detection model, it requires large amounts of data. Hence, we have used various augmentation techniques to reach sufficient data amounts, with most augmentations occurring at run-time using Keras built-in functions. Keras offers several techniques for performing image augmentation online, which means that the augmentation is done as each image is processed by the network. Consequently, multiple augmentations can be performed without the need to save each image separately on the computer. The augmentations used were flipping, translation, shear, and rotation. The trained model has an overall identification accuracy of 90.11%. Fig. 3 shows the PiCarX detection and tracking results of the trained YLLO model. Due to space limitations, the trained model's detailed results are omitted from this paper. We evaluate the performance of the proposed driver behavior detection system using both roadside cameras and an in-vehicle framework, and we evaluate the classification performance of the driver behavior using standard performance metrics such as precision, recall, and accuracy.

### _Results and Discussion_

Estimating various vehicular parameters, such as speed, vehicle orientation, etc., forms the basis of driver behavior analysis [28]. For this, we first need to calibrate the camera so that it is possible to correct the inherent perspective distortion in the images. The perspective effect relates 3D points in the road (world) coordinate system to 2D pixels on the image plane non-uniformly, assigning distinct informational content to different image pixels. The objective of Inverse Perspective Mapping (IPM) is to invert the perspective effect, thereby imposing a uniform distribution of information across the image plane.
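One common way to implement such an IPM correction is as a planar homography estimated from four points on the road plane. The following sketch shows a possible OpenCV implementation; the source points, the output resolution, and the file names are illustrative assumptions rather than the calibration actually used in our setup.

```python
import cv2
import numpy as np

def birdseye_warp(frame, src_pts, out_size=(400, 600)):
    """Warp a perspective (front/roadside) view to an approximate bird's-eye view.
    src_pts: four pixel coordinates of a rectangle on the road plane, ordered
    top-left, top-right, bottom-right, bottom-left."""
    w, h = out_size
    dst_pts = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))

# Hypothetical usage on a single extracted frame with manually picked road corners.
frame = cv2.imread("frame.jpg")
src = [(420, 310), (860, 310), (1180, 700), (60, 700)]
top_view = birdseye_warp(frame, src)
cv2.imwrite("frame_birdseye.jpg", top_view)
```

Distances in the warped image are approximately proportional to road-plane distances, which makes it easier to estimate quantities such as speed and lateral position from the roadside video.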
To map the front-view image smoothly into a bird's-eye view for videos captured with the PiCarX cars, we employ the IPM technique. The performance of the proposed roadside camera-based driver behavior analysis framework is evaluated by first detecting a set of micro-behaviors, which are then composed to classify the driver behavior into safe, distracted, and aggressive driving behaviors. We validate the proposed framework against the logic-based reasoning framework operating in the individual vehicles, which provides the actual values of driving parameters such as speed, orientation, distance from the car in front, distance from the left or right side of the lane, etc. Table IV shows the results; it can be seen that the roadside camera framework achieves an average precision and recall of 97.98% and 98.005%, respectively, and an average accuracy of 98.73%, averaged over five runs of the experiments. The difference between these performance metric values and those of the in-vehicle framework is quite small, ranging between 1-2 percent. In addition, we analyzed the errors that may arise when characterizing driver behavior via roadside cameras. In addition to the videos recorded using the PiCarX setup, we have evaluated the proposed mechanism using the TU_DAT dataset. Since this dataset is composed of CCTV traffic videos of anomalous situations from various parts of the world, the cameras' intrinsic and extrinsic parameters may be unknown or differ from one another due to mounting setups and camera types. Therefore, individual calibration of the captured videos is not possible. Table V shows the performance results of the proposed roadside camera framework on TU_DAT; it can be seen that the average precision and recall values are 90.694%, and the accuracy is approximately 94.5%. _It is important to note that the lower accuracy here is primarily due to the lack of perspective correction; if the camera positions and angles were known, we believe that the accuracies here would be similar to those obtained using PiCarX._ Table VI shows how the change in micro-behavior by a driver as a result of feedback can reduce the instances

Fig. 4: Error analysis.

Fig. 3: PiCarX object detection and tracking.
2302.05747
Individualized Treatment Allocation in Sequential Network Games
Designing individualized allocation of treatments so as to maximize the equilibrium welfare of interacting agents has many policy-relevant applications. Focusing on sequential decision games of interacting agents, this paper develops a method to obtain optimal treatment assignment rules that maximize a social welfare criterion by evaluating stationary distributions of outcomes. Stationary distributions in sequential decision games are given by Gibbs distributions, which are difficult to optimize with respect to a treatment allocation due to analytical and computational complexity. We apply a variational approximation to the stationary distribution and optimize the approximated equilibrium welfare with respect to treatment allocation using a greedy optimization algorithm. We characterize the performance of the variational approximation, deriving a performance guarantee for the greedy optimization algorithm via a welfare regret bound. We implement our proposed method in simulation exercises and an empirical application using the Indian microfinance data (Banerjee et al., 2013), and show it delivers significant welfare gains.
Toru Kitagawa, Guanyi Wang
2023-02-11T17:19:32Z
http://arxiv.org/abs/2302.05747v4
# Individualized Treatment Allocation in Sequential Network Games+ ###### Abstract Designing individualized allocation of treatments so as to maximize the equilibrium welfare of interacting agents has many policy-relevant applications. Focusing on sequential decision games of interacting agents, this paper develops a method to obtain optimal treatment assignment rules that maximize a social welfare criterion by evaluating stationary distributions of outcomes. Stationary distributions in sequential decision games are given by Gibbs distributions, which are difficult to optimize with respect to a treatment allocation due to analytical and computational complexity. We apply a variational approximation to the stationary distribution and optimize the approximated equilibrium welfare with respect to treatment allocation using a greedy optimization algorithm. We characterize the performance of the variational approximation, deriving a performance guarantee for the greedy optimization algorithm via a welfare regret bound. We establish the convergence rate of this bound. We demonstrate the performance of our proposed method in simulation exercises. **Keywords**: Treatment choice, Markov random field, Gibbs distribution, variational approximation, mean field games, graphical potential game. Introduction The question of how best to allocate treatment to units interacting in a network is relevant to many policy areas, including the provision of local public goods (Bramoulle and Kranton, 2007), the diffusion of microfinance (Banerjee et al. (2013); and Akbarpour et al. (2020)), and strategic immunization (Galeotti and Rogers (2013); and Kitagawa and Wang (2023)). Obtaining an optimal individualized allocation, however, is often infeasible due to analytical and computational challenges. As a consequence, practical counterfactual policy analysis in the presence of network spillovers is limited to simulating and comparing outcome distributions or welfare values across a few benchmark candidate policies. This leaves the magnitude of the potential welfare gains of an optimal individualized assignment policy unknown. Focusing on a class of social network models in which interacting agents play sequential decision games, this paper develops a method to obtain optimal treatment assignment rules that maximize a social welfare criterion. We consider an individualized allocation of binary treatments over agents who are heterogeneous in terms of their own observable characteristics, their network configurations, and their neighbors' observable characteristics. Each agent chooses a binary outcome so as to maximize their own utility. This choice depends upon the agent's own characteristics and treatment as well as their neighbors' characteristics, treatments and choices. The sequential decisions of randomly ordered agents induce a unique stationary distribution of choices (Mele, 2017). We specify the planner's welfare criterion to be the mean of the aggregate outcomes (i.e., the sum of the means of binary outcomes over all agents in the network) at the stationary state that is associated with a given treatment allocation. We aim to maximize the welfare evaluated at the stationary outcome distribution with respect to the individualized allocation of treatments. There are analytical and computational challenges to solving the maximization problem for optimal targeting. 
First, fixing an allocation of treatments, the sequential decision games induce a Markov random field (MRF) and the stationary outcome distribution has a Gibbs distribution representation. The analytical properties of the mean of the aggregate outcomes, however, are difficult to characterize. To approximate the joint distribution of outcomes, the literature on MRFs employs numerical methods such as Markov Chain Monte Carlo (MCMC) (Geman and Geman, 1984). If the size of the network is moderate to large, though, MCMC can be slow to converge. It is, therefore, infeasible to perform MCMC to evaluate the welfare at every candidate treatment assignment policy. Second, obtaining an optimal individualized assignment is a combinatorial optimization problem with respect to a binary vector whose dimension is equal to the number of agents in the network. A brute force search quickly becomes infeasible as the size of the network expands. We tackle these challenges by proposing methods for approximately solving the combinatorial optimization problem for individualized assignment. Our proposal is to perform a variational approximation of the stationary distribution of outcomes and to optimize the approximated equilibrium welfare with respect to an assignment vector by using a greedy optimization algorithm. The variational approximation step lessens the computational burden of running MCMC at each candidate policy. The greedy optimization step reduces the computational burden of the combinatorial optimization by assigning treatment sequentially to the agent who generates the largest welfare gain given previous assignments. Since our proposal involves approximation of the objective function and the heuristic method of greedy optimization, it is not guaranteed to lead to a global optimum. A novel contribution of this paper is that we derive a performance guarantee for our proposed method in terms of an analytical upper bound on the welfare loss relative to the globally optimal assignment. The upper bound on the welfare regret consists of two terms: the welfare loss due to variational approximation and the welfare loss due to greedy optimization. We show that, once scaled such that it can be interpreted as the per-person welfare loss, the first term of the upper bound on the welfare loss (originating from the variational approximation) vanishes asymptotically as the number of agents in the network increases. On the other hand, we show that the second term of the upper bound on the welfare loss (originating from the greedy optimization) does not generally converge to zero. To highlight this paper's unique contributions to the literature, we abstract from estimation of the structural parameters underlying the sequential decision game and assume that they are known. See Geyer and Thompson (1992), Snijders et al. (2002), Wainwright et al. (2008), Chatterjee and Diaconis (2013), Mele (2017), and Mele and Zhu (2022) for identification and estimation of these parameters. In practical terms, our proposed method is useful for computing an optimal assignment of treatment, with point estimates plugged in in place of the structural parameters. To assess the performance of our proposal, we perform extensive numerical studies. Given that the number of possible configurations increases exponentially with the size of the network, we are only able to apply the brute force method to search for the optimal allocation rule in a small network setting.
We find that our proposed method leads to a globally optimal solution in a small network setting under our assumptions. In a large network setting, we first examine the performance of using our method compared with _No treatment_ and with _random allocation_. We attain a welfare improvement of around \(50\%\) relative to a No treatment rule, and an improvement of around \(10\%\) relative to a random allocation rule. In addition, we evaluate the welfare performance gap of using variational approximation compared with MCMC to approximate the stationary distribution of outcome. Under our assumptions, the variational approximation performs as well as MCMC for all of the treatment allocation rules that we consider. The remainder of this paper is organized as follows. We first review the relevant literature in the remainder of this section. Section 2 details the sequential decision process and the stationary distribution of the outcome variable. Section 3 contains theoretical results relating to the implementation of a variational approximation and to the maximization of the variationally approximated outcome. Simulation results are shown in Section 4 whilst Section 5 concludes. All proofs and derivations are shown in Appendix A to B. ### Literature Review This paper intersects with several literatures in economics and econometrics, including graphical game analysis, Markov random fields and variational approximation, discrete optimization of non-submodular functions, and statistical treatment rules. Graphical game analysis has a long history in economics, see Rosenthal (1973), Kakade et al. (2003), Ballester et al. (2006), Roughgarden (2010), Kearns et al. (2013), Babichenko and Tamuz (2016), De Paula et al. (2018), and Parise and Asuman (2023). The most relevant paper to our work is Mele (2017) which studies strategic network formation. Mele (2017) formulates the network formation game as a potential game (Monderer and Shapley, 1996), and characterizes the stationary distribution of the network as an exponential distribution. We also formulate our game as a potential game and adopt a similar sequential decision process (Blume, 1993) to study the stationary distribution for our game. Jackson and Watts (2002) indicates that this sequential decision process is a specific equilibrium selection mechanism. Kline et al. (2021) discusses the difficulties and potential solutions of analyzing counterfactuals with multiple equilibria. Lee and Pakes (2009) suggests using the best response dynamics learning model (our sequential decision process) to perform counterfactual analysis with multiple equilibria. De Paula (2013) reviews the recent literature on the econometric analysis of games with multiple equilibria. Badev (2021) extends the setting in Mele (2017) to study how behavioral choices change the network formation. De Paula (2020) reviews recent works on network formation. Whilst Mele (2017) devotes considerable attention to characterizing the stationary distribution of a network formation game, the main focus of this paper is to develop a method for approximating optimal targeting in network games. Galeotti et al. (2020) also studies targeting an intervention in a network setting, but the utility specification and decision process in that paper differ from those in our setting. Those differences lead to a different equilibrium and a different identification strategy for the optimal intervention compared with our setting. 
This paper is also relevant to the literature on Markov random fields (MRF) and variational approximation. MRFs offer a way to represent the joint distribution of random variables as a collection of conditional distributions. MRFs have been used in a wide variety of fields, including in statistical physics (e.g., the Ising model; Ising, 1925) and in image processing (Wang et al., 2013). We model an individual's choice of outcomes as the maximization of a latent payoff function that depends upon a treatment allocation and their neighbors' choices, and derive an MRF representation of the joint distribution of outcomes. We use variational approximation as a computationally tractable approximation of the stationary outcome distribution. See Wainwright et al. (2008) for a comprehensive survey on variational approximation and MRF. Chatterjee and Dembo (2016) provides an approximation error bound for variational approximation applied to MRFs of binary outcomes. Mele and Zhu (2022) apply this method to estimate parameters in a network formation model. However, none of the aforementioned papers have considered how changing parameter values affects the mean value or the distribution of the MRF. This literature has not, to the best of our knowledge, studied how to obtain an optimal intervention in terms of a criterion function defined on the joint distribution of outcomes characterized through an MRF. We build a connection to the literature on providing theoretical performance guarantees for greedy algorithms. Nemhauser et al. (1978) provides a performance guarantee for a general greedy algorithm solving submodular maximization problems with a cardinality constraint. Many optimization problems, however, are not submodular (Krause et al., 2008), yet greedy algorithms usually still exhibit good empirical performance (Das and Kempe, 2011). Given this, there is considerable interest amongst researchers in solving non-submodular optimization problems using greedy algorithms. See Das and Kempe (2011), Bian et al. (2017), El Halabi and Jegelka (2020), and Jagalur-Mohan and Marzouk (2021). We use the result from Bian et al. (2017) to produce a performance guarantee for our treatment allocation problem by clarifying sufficient conditions for obtaining non-trivial bounds on the submodularity ratio and the curvature of our objective function. We are unaware of these recent advances in the literature on discrete optimization of non-submodular functions being applied elsewhere to the problem of optimal targeting in the presence of network spillovers. Although it does not introduce sampling uncertainty, this paper shares some motivation with the literature on statistical treatment rules, which was first introduced into econometrics by Manski (2004). Following the pioneering works of Savage (1951) and Hannan (1957), researchers often characterize the performance of decision rules using regret.1 See Dehejia (2005), Stoye (2009, 2012), Hirano and Porter (2009, 2020), Chamberlain (2011, 2020), Tetenov (2012), and Christensen et al. (2022) for decision theoretic analyses of statistical treatment rules. There is also a growing literature on studying individualized treatment assignment, including Kitagawa and Tetenov (2018), Athey and Wager (2021), Kasy and Sautmann (2021), Kitagawa et al. (2021), Mbakop and Tabord-Meehan (2021), Sun (2021), and Adjaho and Christensen (2022), among others. These works do not consider settings that allow for the network spillovers of treatments.
There are some recent works that introduce network spillovers into statistical treatment choice, such as Viviano (2019, 2020), Ananth (2020), Kitagawa and Wang (2023), and Munro et al. (2023). Viviano (2019) and Ananth (2020) assume the availability of network data from a randomised control trial (RCT) experiment. They do not model the behavior of units from a structural perspective. Viviano (2020) studies how to assign treatments over the social network in an experiment design setting. Munro et al. (2023) studies targeting analysis taking into account spillovers through the market equilibrium. Kitagawa and Wang (2023) considers the allocation of vaccines over an epidemiological network model (a Susceptible-Infected-Recovered, or SIR, network); it considers a simple two-period transition model and does not consider the long-run stationary distribution of health status over the network. In contrast, in this paper, we consider sequential decision games and aim to optimize the long-run equilibrium welfare by exploiting its MRF representation. In a different context, Kitagawa et al. (2022) applies variational approximation to a quasi-posterior distribution for individualized treatment assignment policies and studies welfare regret performance when assignment policies are drawn randomly from the variationally approximated posterior.

## 2 Model

### Setup

Let \(\mathcal{N}=\{1,2,...,N\}\) be the population. Each unit has a \(K\)-dimensional vector of observable characteristics that we denote by \(X_{i}\), \(i\in\mathcal{N}\). Assuming that the support of \(X_{i}\) is bounded, we normalize the measurements of \(X_{i}\) to be nonnegative, such that \(X_{i}\in\mathbb{R}_{+}^{K}\). Let \(\mathcal{X}=\{X_{1},...,X_{N}\}\) be a matrix that collects the characteristics of units in the population, and let \(\mathcal{X}^{N}\) denote the set of all possible matrices \(\mathcal{X}\). Let \(D=\{d_{1},...,d_{N}\}\) denote a vector of treatment allocation, where \(d_{i}\in\{0,1\}\), \(i\in\mathcal{N}\), indicates whether unit \(i\) is treated (\(d_{i}=1\)) or untreated (\(d_{i}=0\)). The social network is represented by an \(N\times N\) binary matrix that we denote by \(\{G_{ij}\}_{i,j\in\mathcal{N}}\), and that is fixed and exogenous in this work. \(G_{ij}=1\) indicates that units \(i\) and \(j\) are connected in the social network, whilst \(G_{ij}=0\) indicates that they are not. Let \(\mathcal{N}_{i}\) indicate the set of neighbors of unit \(i\). \(\overline{N}\) denotes the maximum number of edges for one unit in the network (i.e., \(\overline{N}=\max_{i}|\mathcal{N}_{i}|\)), and \(\underline{N}\) denotes the minimum number of edges for one unit in the network (i.e., \(\underline{N}=\min_{i}|\mathcal{N}_{i}|\)). As a convention, we assume there are no self-links (i.e., \(G_{ii}=0,\,\forall i\in\mathcal{N}\)). We further assume that the following property holds for the network structure:

**Assumption 1**.: _(**Undirected Link**) The adjacency matrix \(G\) is undirected, i.e., \(G_{ij}=G_{ji}.\)_

The symmetric property of interaction in Assumption 1 is a necessary condition for our interacted sequential decision game to be a proper potential game (Definition 2.1 below) that can yield a unique stationary outcome distribution. The size of the spillover between units \(i\) and \(j\) depends not only upon \(G_{ij}\) but also upon the treatment allocation and upon covariates, which are allowed to be asymmetric.
We, accordingly, have a directed weighted network structure for the spillovers. As we have previously mentioned, we consider a sequential decision game setting to derive the unique stationary outcome distribution. We now introduce the notation for our sequential decision game. Let \(Y_{i}^{t}\in\mathcal{Y}=\{0,1\}\) be unit \(i\)'s choice made at time \(t\), which we refer to as \(i\)'s outcome. Let \(Y^{t}\) be the collection of outcome variables \(\{Y_{1}^{t},...,Y_{N}^{t}\}\in\mathcal{Y}^{N}\) at time \(t\). We consider a discrete-time infinite-horizon setting. For each time period \(t\) in the decision process, we denote the realization of \(Y^{t}\) by \(y^{t}\in\{0,1\}^{N}\), and the realization of unit \(i\)'s outcome by \(y_{i}^{t}\). The outcome set that includes all of the current outcomes but \(y_{i}^{t}\), that is, \(y^{t}\setminus y_{i}^{t}\), is denoted by \(y_{-i}^{t}\). Let \(Y=\{Y_{i}\}_{i=1}^{N}\in\mathcal{Y}^{N}\) denote the collection of the outcome variables in equilibrium, which follows the stationary outcome distribution. The game, which we denote by \(\mathcal{G}\), comprises: * the aforementioned set of individuals that we label \(\mathcal{N}\), a social planner, and nature; * a set of actions \(Y^{t}\) that records the binary choice that is made by each individual in every time period \(t\) in which they are selected (by nature) to move, and a treatment choice \(D\) for each individual that is made by the social planner in the initial period upon observing \(\mathcal{X}\) and \(G\) but before \(Y^{1}\) is chosen; * a player function that selects a single individual to be active in each time period based upon whom nature indicates; * a sequence of histories over an infinite-horizon that is summarised by an initial treatment allocation and by the identity of the individual that is selected by nature in each time period alongside their corresponding action; * the preferences (utilities) of individuals \(\{U_{i}(y^{t},\mathcal{X},D,G;\boldsymbol{\theta})\}_{i=1}^{N}\), which depend upon both their own and others' actions (i.e., upon each individual's initial treatment allocation and the binary choices that they subsequently make whenever they are selected to do so by nature) and by the social planner's actions (i.e., upon the treatment choice that the social planner makes in the initial period), and that we imbue with certain properties specified in Section 2.5; * the individual selected by nature in each time period receives a pair of preference shocks (one for each of their two choices) before they make a decision. Each individual maximizes their utility at each time period in which they are selected by nature. The social planner chooses the initial treatment allocation to maximize an objective function, which we call the _planner's welfare_. ### Potential Game We consider pure-strategy Nash equilibrium as the solution concept of our game. Recall that the definition of a pure-strategy Nash equilibrium is a set of actions \(y^{*}=\{y^{*}_{1},...,y^{*}_{N}\}\) such that \[U_{i}(y^{*}_{i},y^{*}_{-i},\mathcal{X},D,G;\boldsymbol{\theta})\geq U_{i}(y^{ \prime}_{i},y^{*}_{-i},\mathcal{X},D,G;\boldsymbol{\theta}) \tag{1}\] for any \(y^{\prime}_{i}\in\mathcal{Y}\) and for all \(i\in\mathcal{N}\). This requires that no individual has a profitable deviation from her current decision when she is randomly selected by nature. To analyze the Nash equilibrium of our game, we characterize our game as a potential game. 
The concept of a potential game has been used to study strategic interaction since Rosenthal (1973). It provides a tool to analyze the Nash equilibria of (non) cooperative games in various settings (e.g., Jackson, 2010, and Bramoulle et al., 2014). We now formally define the potential game. **Definition 2.1**.: **(Potential Game**(Monderer and Shapley, 1996)) \(\mathcal{G}\) is a potential game if there exists a potential function \(\Phi:\mathcal{Y}^{N}\rightarrow\mathbb{R}\) such that for all \(i\in\mathcal{N}\) and for all \(y_{i},y^{\prime}_{i}\in\mathcal{Y}\)** \[U_{i}(y_{i},y_{-i})-U_{i}(y^{\prime}_{i},y_{-i})=\Phi(y_{i},y_{-i})-\Phi(y^{ \prime}_{i},y_{-i}). \tag{2}\] The change in potentials from any player's unilateral deviation matches the change in their payoffs. Nash equilibria, therefore, must be the local maximizers of potential. Monderer and Shapley (1996, SSTheorem 4.5) states that \[\frac{\partial U_{i}}{\partial y_{i}\partial y_{j}}=\frac{\partial U_{j}}{ \partial y_{j}\partial y_{i}} \tag{3}\] is a necessary and sufficient condition for a game featuring a twice continuously differentiable utility function to be a potential game. For the discrete outcome case, a condition2 - that we refer to as the _symmetry property_ - analogous to Eq.3 is a necessary and sufficient condition for the existence of a potential function. Chandrasekhar and Jackson (2014), and Mele (2017) also use a potential game framework to analyze Nash equilibria in a network game. We restrict our analysis to potential games equipped with a potential function \(\Phi(y,\mathcal{X},D,G;\boldsymbol{\theta})\). We later specify a functional form for the utility function that satisfies the symmetry property and provide an explicit functional form for the potential function in Section 2.5. In assuming that our game is a potential game, we guarantee that at least one pure strategy Nash equilibrium exists, as per Monderer and Shapley (1996). Footnote 2: Replacing the second-order derivative in Eq.3 with second-order differences. See Monderer and Shapley (1996, §Corollary 2.9) for further details. ### Sequential Decision Process The details of the sequential decision process are as follows. In the initial period, the social planner observes the connections in the social network and individuals' attributes, and decides the treatment allocation so as to maximize the planner's welfare. Then, at the beginning of every period \(t\), an individual \(i\) is randomly chosen from \(\mathcal{N}\) by nature. Unit \(i\) chooses an action (outcome) \(y_{i}^{t}\). The _selection process_ is a stochastic sequence \(O=(O^{t})_{t=1}^{\infty}\) with support \(\mathcal{N}\). Realizations of \(O^{t}\) indicate the unit that makes a decision in period \(t\); all other units maintain the same choice as in the last period. The probability of unit \(i\) being randomly chosen from \(\mathcal{N}\) at time \(t\) is given by: \[\Pr(O^{t}=i|y^{t-1},\mathcal{X},D,G)=\rho_{i}^{t}, \tag{4}\] where \(\sum_{i=1}^{N}\rho_{i}^{t}=1\) for all \(y\in\{0,1\}^{N}\). In the simplest case, \(\rho_{i}^{t}=1/N\) for all \(t\). The idea here is that only previously-made choices (outcome) factor into the decision of the unit that is selected by nature in period \(t\). Without this, it is not possible to provide a closed-form expression for the joint distribution of the outcome. We require that any individual can be selected and that this selection depends upon \(y_{-i}^{t-1}\) rather than upon \(y^{t-1}\). 
**Assumption 2**.: _(Decision Process) The probability of unit \(i\) being selected at time \(t\) does not depend upon \(y_{i}^{t-1}\), and each action has a positive probability of occurring:_ \[\rho_{i}^{t}=\Pr(O^{t}=i|y_{-i}^{t-1},\mathcal{X},D,G)>0\quad\forall i\in \mathcal{N}. \tag{5}\] Once unit \(i\) has been selected in period \(t\), they choose action \(y_{i}^{t}\) so to maximize their current utility. We assume that there is _complete information_, such that unit \(i\) can observe the attributes and treatment status of their neighbors. Before making their decision, unit \(i\) receives an idiosyncratic shock \(\varepsilon\). Then, unit \(i\) chooses \(Y_{i}^{t}=1\) if and only if: \[U(1,y_{-i}^{t-1},\mathcal{X},D,G;\mathbf{\theta})+\varepsilon_{1t}\geq U(0,y_{-i}^{ t-1},\mathcal{X},D,G;\mathbf{\theta})+\varepsilon_{0t}. \tag{6}\] Following the discrete choice literature (e.g., Brock and Durlauf, 2001; Train et al., 1987) and Mele (2017), we put the following assumption about the idiosyncratic shock. **Assumption 3**.: _(**Preference Shock**) \(\varepsilon_{1t}\) and \(\varepsilon_{0t}\) follow a Type 1 extreme value distribution and are independent and identically distributed among units and across time._ Under Assumption 3, the conditional probability of unit \(i\) choosing \(Y_{i}^{t}=1\) is given by: \[P(Y_{i}^{t}=1|Y_{-i}^{t-1}=y_{-i}^{t-1},\mathcal{X},D,G;\mathbf{\theta})=\frac{ \exp[U_{i}(1,y_{-i}^{t-1},\mathcal{X},D,G;\mathbf{\theta})]}{\sum_{y_{i}\in\{0,1\} }\exp[U_{i}(y_{i},y_{-i}^{t-1},\mathcal{X},D,G;\mathbf{\theta})]}. \tag{7}\] Therefore, the sequence \([y^{0},y^{1},...,y^{t}]\) evolves as a Markov chain such that: \[y_{i}^{t}=\begin{cases}y_{i}^{t-1}&\text{w/p }\;1-\rho_{i}^{t}\\ y&\text{w/p }\;\rho_{i}^{t}\cdot P(Y_{i}^{t}=y|Y_{-i}^{t-1}=y_{-i}^{t-1}), \end{cases}\;\;\;\;\;\;\forall i\in\mathcal{N}, \tag{8}\] where \(y\in\{0,1\}\). Under Assumption 1 to 3, this Markov chain is _irreducible_ and _aperiodic_,3 which has a unique stationary distribution. Note that in the special case when there is no idiosyncratic shock, the sequence will stay in one Nash equilibrium in the long run. Footnote 3: It is irreducible since every configuration could happen in a finite time given our assumption on the selection process. It is aperiodic since the selected unit has a positive probability to choose the same choice as in the last period. The individual decision process is a stochastic best response dynamic process (Blume, 1993). This sequential decision process generates a Markov Chain of decisions. Jackson and Watts (2002) shows that the sequential decision process plays the role of a stochastic equilibrium selection mechanism. Without this sequential structure, the model would be an incomplete model. Lee and Pakes (2009) performs counterfactual predictions of policy interventions in the presence of multiple equilibria, with best response dynamics playing the role of an equilibrium selection mechanism. ### Stationary Distribution Following Mele (2017, SSTheorem 1), the stationary joint distribution of the outcomes in our sequential decision game is given by: **Theorem 2.1**.: _Unique Stationary Distribution_ _(_Mele_,_ 2017_)__: Under Assumption 1 to 3, the interacted decision game has a unique stationary distribution:_ \[P[Y=y|\mathcal{X},D,G;\mathbf{\theta}]=\frac{\exp[\Phi(y,\mathcal{X},D,G;\mathbf{ \theta})]}{\sum_{\delta\in\{0,1\}^{N}}\exp[\Phi(\delta,\mathcal{X},D,G;\mathbf{ \theta})]}. 
\tag{9}\] Mele (2017) discusses the relationship between Nash equilibria and this stationary distribution. The set of Nash equilibria is the set of local maxima of the potential function. We also know that the probability of a given configuration increases with the value of the potential. Nash equilibria of the game must, therefore, be visited more often in the long run. Given this, a high proportion of the possible configurations generated by the joint distribution will correspond to Nash equilibria. Theorem 2.1 shows that, given the parametric specification of the distribution of unobservables (Assumption 3), the joint distribution of the outcomes is given by a Gibbs distribution characterized by the potentials. This result has a close connection to the MRF literature. Specifically, we can view the joint distribution of the outcomes in the stationary as a Markov random field (see, e.g., Bremaud, 2013): The random field \(\{Y_{i}\}_{i=1}^{N}\) is a collection of random variables on the state space \(\mathcal{Y}\). This random field is a Markov random field if for all \(i\in\mathcal{N}\) and \(y\in\mathcal{Y}^{N}\): \[P(Y_{i}=y_{i}|Y_{-i}=y_{-i})=P(Y_{i}=y_{i}|Y_{j\in\mathcal{N}_{i}}=y_{j\in \mathcal{N}_{i}}). \tag{10}\] Given the specification of our utility function, the conditional distribution of \(Y_{i}\) satisfies this Markov property. By connecting \(Y\) to MRF, the _Hammersley-Clifford Theorem_ (Besag, 1974; Hammersley and Clifford, 1971) establishes that the joint distribution of \(Y\) must follow a _Gibbs distribution_, which is consistent with the result of Theorem 2.1. The stationary distribution of the outcomes shown in Theorem 2.1 is structural in the sense that the specification of the potential function in the Gibbs distribution relies on the functional form specification of the latent payoff function of agents. An advantage of the current structural approach is that we are transparent about the assumptions that we impose on the behavior of agents, on the structure of social interaction, and on the equilibrium concept. The structural approach, accordingly, disciplines the class of joint distributions of observed outcomes to be analyzed. As an alternative to the structural approach, we can consider a reduced-form approach where we model the conditional distribution of the observed outcomes given the treatment vector. Maintaining the family of Gibbs distributions, the reduced-form approach corresponds to introducing a more flexible functional form for the potential functions without guaranteeing that it is supported as a Nash equilibrium of the potential game. Despite this potential issue, our approach of variational approximation and greedy optimization can be used to obtain an optimal targeting rule for a broad class of potential functions. ### Preference As in Mele (2017), Galeotti et al. (2020), and Sheng (2020), we specify the individual utility function as a linear quadratic function of choice. The deterministic component of the utility of player \(i\) of choosing \(y_{i}\) relative to \(y_{i}=0\) is given by: \[U_{i}(y,\mathcal{X},D,G;\alpha,\beta)=\alpha_{i}y_{i}+\sum_{j\neq i}^{N}\beta_ {ij}y_{i}y_{j}. 
\tag{11}\] Given a network \(G\), covariates \(\mathcal{X}=(X_{1},...,X_{N})\), and a treatment allocation \(D=(d_{1},...,d_{N})\), the coefficient \(\alpha_{i}\) on unit \(i\)'s choice depends upon their own covariates and treatment status as well as those of all of their neighbors; the coefficient \(\beta_{ij}\) on the quadratic term \(y_{i}y_{j}\) depends upon their own covariates and treatment status as well as those of their neighbor unit \(j\). Allowing for \(\alpha_{i}\) and \(\beta_{ij}\) to be unconstrained, this specification of the utility function is without loss of generality since choice is binary. The condition for the existence of a potential function (Eq.3), however, requires that \(\beta_{ij}=\beta_{ji}\) for all \(i\neq j\in\mathcal{N}\).4 This symmetry assumption on \(\beta_{ij}\) restricts the spillover effect of unit \(i\)'s choice on unit \(j\). The approach that is proposed in this paper to obtain an optimal treatment allocation can be implemented for any utility function specification as long as this symmetry condition is imposed. Nevertheless, to obtain a specific welfare performance guarantee for our method, we consider the following parametric specification of the utility functions in the remaining sections. Footnote 4: For a potential function to exist, after eliminating zero terms, we require that \(U_{i}(1,0,y_{-ij})-U_{i}(1,1,y_{-ij})+U_{j}(1,1,y_{-ij})-U_{j}(0,1,y_{-ij})=0\). This implies that \(-\beta_{ij}+\beta_{ji}=0\). \[U_{i}(y,\mathcal{X},D,G;\mathbf{\theta})=\Big{[}\theta_{0}+\theta_{1}d_{i}+X_{i}^{ \prime}\theta_{2}+X_{i}^{\prime}\theta_{3}d_{i}+A_{N}\sum_{j\in\mathcal{N}_{i} }\theta_{4}m_{ij}d_{j}\Big{]}y_{i}+A_{N}\sum_{j\in\mathcal{N}_{i}}m_{ij}( \theta_{5}+\theta_{6}d_{i}d_{j})y_{i}y_{j}, \tag{12}\] where \(m_{ij}=m(X_{i},X_{j})\) is a (bounded) real-valued function of personal characteristics. In the absence of binary treatments, this specification appears in Mele (2017), and Sheng (2020). \(m_{ij}\) measures the distance between unit \(i\)'s characteristics and unit \(j\)'s characteristics; the spillover effect is weighted by how similar two units appear. \(A_{N}\) is a term that governs the magnitude of spillovers. As \(A_{N}\) increases, unit \(i\)'s decision is more heavily influenced by their neighbor's choices and treatment status. The magnitude of \(A_{N}\) that is suitable for generating stochastic choice depends upon the size and the density of the network. We adopt the following notion of network density. **Definition 2.2**.: **Sparse Network**: the maximum number of links that a node can have is constant (i.e., independent of \(N\)). **Dense network**: the maximum number of links a node can have increases with \(N\).5 Footnote 5: Graham (2020) writes “_call a network dense if its size, or number of edges, is ‘close to’ \(N^{2}\) and sparse if its size is ‘close to’ \(N\)._” This is similar to our definition. For instance, to maintain comparability of the magnitude of own and spillover effects, a suitable scaling for \(A_{N}\) is \[A_{N}=\begin{cases}1&\text{for sparse networks}\\ \frac{1}{N}&\text{for dense networks}.\end{cases} \tag{13}\] Similar choices of \(A_{N}\) have been used in many settings. For example, Sheng (2020) chooses \(A_{N}=\frac{1}{N-2}\); Galeotti et al. (2020) chooses \(A_{N}=1\) but imposes an additional assumption on the coefficient. The utility that unit \(i\) derives from an action is the sum of the net benefits that they accrue from their own actions and from those of their neighbors. 
In this work, we assume that only direct neighbors are valuable and that units do not receive utility from contacts that are more than one link away. The total benefit of playing action \(Y_{i}=1\) has six components. When unit \(i\) chooses action \(Y_{i}=1\), they receive utility \(\theta_{0}\) from their own choice without treatment. They also receive additional utility \(\theta_{1}d_{i}\) depending upon their own treatment status. Their utility also has a heterogeneous treatment effect component \(X_{i}^{\prime}(\theta_{2}+\theta_{3}d_{i})\), which depends upon their personal characteristics \(X_{i}\). Units value treatment externalities; that is, treatment received by other units. Unit \(i\) receives additional utility \(\theta_{4}m_{ij}\) if their neighbor unit \(j\) receives treatment, no matter their own treatment status. Units also value choice spillovers. When unit \(i\) is deciding whether to play action \(1\), they observe unit \(j\)'s choice and attributes. If unit \(j\) is a neighbor of unit \(i\) that chooses action \(1\), then this provides \(\theta_{5}m_{ij}\) additional utility to unit \(i\). The final component corresponds to the choice spillovers from those neighbors who receive treatment. If both unit \(i\) and unit \(j\) receive treatment and both of them choose action \(1\), unit \(i\) receives additional utility \(\theta_{6}m_{ij}\) from the common treatment and choice.

**Example 1**.: _(Customer Purchase Decisions) Individual \(i\) makes a purchase decision \(Y_{i}\) (i.e., buy or not buy) for one product (e.g., a Dropbox subscription, an orange from Sainsbury's, an iPhone). In this example, the social planner is the company that is trying to maximize the total number of customers that purchase its products. Individuals' purchase decisions sequentially depend upon the purchase decisions of their friends or of their colleagues. The company observes individuals' friendships and then decides how to allocate discount offers to achieve its own targets (e.g., Richardson and Domingos, 2002)._

**Example 2**.: _(**Criminal Network**) In a criminal network, suspects are connected by a social network. Suspect \(i\) decides whether to commit a crime (\(Y_{i}=0\)) or not (\(Y_{i}=1\)). The social planner in this example is the government or a police force that is trying to minimize the total number of crimes in the long run. The decision that a suspect makes about whether to commit a crime is based upon whether they and their friends have been arrested before (\(d_{i}=1\) denotes that they have been arrested before and \(d_{i}=0\) denotes that they have not been arrested in the past). The social planner observes the criminal network and decides which suspects to arrest (e.g., Lee et al., 2021)._

To ensure that our game is a potential game, we impose an additional assumption on \(m_{ij}\). We assume that the following condition is satisfied.

**Assumption 4**.: _(**Non-negative, Bounded and Symmetric Property**) The function \(m_{ij}\) satisfies the following restrictions:_ \[m(X_{i},X_{j})=m(X_{j},X_{i}),\quad\forall i,j\in\mathcal{N}. \tag{14}\] \[m_{ij}\in[\underline{m},\overline{m}]\subset\mathbb{R}_{+},\quad\forall i,j\in \mathcal{N}. \tag{15}\]

Assumption 4 ensures that \(m_{ij}\) is non-negative, bounded, and symmetric for all \(i,j\in\mathcal{N}\). Researchers can freely choose any \(m_{ij}\) which satisfies the above assumption. The following proposition indicates that our decision game is a potential game.
**Proposition 2.1**.: _(**Potential Function**) Under Assumptions 1 and 4, the potential function \(\Phi(y,\mathcal{X},D,G;\mathbf{\theta})\) for \(U_{i}(y,\mathcal{X},D,G;\mathbf{\theta})\) specified in Eq.12 can be defined as:_ \[\begin{split}\Phi(y,\mathcal{X},D,G;\mathbf{\theta})&= \sum_{i=1}^{N}\left(\theta_{0}+\theta_{1}d_{i}+X_{i}^{\prime}(\theta_{2}+\theta _{3}d_{i})+A_{N}\sum_{j=1}^{N}\theta_{4}m_{ij}G_{ij}d_{j}\right)y_{i}\\ &\quad+\frac{A_{N}}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}m_{ij}G_{ij}y_{i}y_{j}(\theta_{5}+\theta_{6}d_{i}d_{j}),\end{split} \tag{16}\] _and our interacted decision game is a potential game._

Proof of Proposition 2.1 is provided in Appendix A.2. We can, however, easily verify that this specification satisfies the definition of a potential function (i.e., Eq.2). Notice that the potential function is not the summation of the utility function across all units; summing the utility function counts the interaction terms twice and violates Eq.2. By characterizing our game as a potential game, we can employ the stationary outcome distribution that we derived in Theorem 2.1 to evaluate the planner's expected welfare.

## 3 Treatment Allocation

The objective of the social planner is to select a treatment assignment \(D^{*}\in\{0,1\}^{N}\) that maximizes equilibrium mean outcomes subject to a capacity constraint that the number of individuals that are treated cannot exceed \(\kappa>0\): \[D^{*}=\operatorname*{arg\,max}_{D\in\{0,1\}^{N}}\sum_{i=1}^{N}\mathbb{E}_{P}[Y_{i}|\mathcal{X},D,G;\mathbf{\theta}], \tag{17}\] \[s.t.\quad\sum_{i=1}^{N}d_{i}\leq\kappa.\] From Theorem 2.1, the stationary joint distribution of \(Y\) depends on the treatment allocation \(D\). Fixing the parameters \(\mathbf{\theta}\), attributes \(\mathcal{X}\), and network \(G\), the social planner selects the joint distribution that maximizes equilibrium outcomes by manipulating treatment allocation rules. In this work, we assume that the structural parameters \(\mathbf{\theta}\) underlying the sequential decision game are given and abstract from uncertainty in parameter estimation. In general, there are two common estimation strategies used in the MRF literature:

* _Markov chain Monte Carlo (MCMC; Metropolis and Ulam, 1949)_: Geyer and Thompson (1992), Snijders et al. (2002), Mele (2017), Badev (2021).
* _Variational approximation_: Wainwright et al. (2008), Chatterjee and Diaconis (2013), Mele and Zhu (2022).

MCMC involves sampling from a large class of joint distributions and scales well with the dimensionality of the sample space (Bishop and Nasrabadi, 2006). An issue, however, is that the Markov chain generated by Metropolis or Gibbs sampling can take exponential time to mix (Bhamidi et al., 2008; Chatterjee and Diaconis, 2013). Variational approximation, which is optimization-based rather than sampling-based, is an attractive alternative to MCMC if a fast optimization algorithm is available. To approximate the Gibbs distribution, a fast iterative optimization algorithm is known (Wainwright et al., 2008), and this is what we employ as part of our algorithm (Algorithm 1 in Section 3.2).

### Welfare Approximation

We cannot directly maximize the equilibrium welfare; instead, we seek to maximize the approximated welfare. We now discuss what prevents us from maximizing the equilibrium welfare.
Recall that the objective function \(W(D)\) from Eq.17 is: \[\begin{split} W(D)&=\sum_{i=1}^{N}\mathbb{E}_{P}[Y_{i} |\mathcal{X},D,G]\\ &=\sum_{i=1}^{N}\sum_{y\in\{0,1\}^{N}}y_{i}P(Y=y|\mathcal{X},D,G) \\ &=\sum_{i=1}^{N}\sum_{y\in\{0,1\}^{N}}y_{i}\frac{\exp[\Phi(y, \mathcal{X},D,G;\mathbf{\theta})]}{\sum_{\delta\in\{0,1\}^{N}}\exp[\Phi(\delta, \mathcal{X},D,G;\mathbf{\theta})]}\\ &=\sum_{i=1}\sum_{y\in\{0,1\}^{N}}y_{i}\frac{\exp[w_{1}^{\prime} y+y^{\prime}w_{2}y]}{\sum_{\delta\in\{0,1\}^{N}}\exp[w_{1}^{\prime}\delta+ \delta^{\prime}w_{2}\delta]},\end{split} \tag{18}\] where \(w_{1}\) is a \(N\times 1\) weighting vector and \(w_{2}\) is a \(N\times N\) weighting matrix. The \(i\)-th element in \(w_{1}\) takes the value: \[w_{1}^{i}=\theta_{0}+\theta_{1}d_{i}+(\theta_{2}+\theta_{3}d_{i})X_{i}+A_{N} \sum_{j=1}^{N}\theta_{4}m_{ij}G_{ij}d_{j}. \tag{19}\] The \(i,j\)-th element in \(w_{2}\) takes the value: \[w_{2}^{ij}=\frac{A_{N}}{2}m_{ij}G_{ij}(\theta_{5}+\theta_{6}d_{i}d_{j}). \tag{20}\] We define the denominator in Eq.18 - the _partition function_ - as \(\mathcal{Z}\): \[\mathcal{Z}\coloneqq\sum_{\delta\in\{0,1\}^{N}}\exp[w_{1}^{\prime}\delta+ \delta^{\prime}w_{2}\delta]. \tag{21}\] Since the partition function \(\mathcal{Z}\) sums all possible configurations (of which there are \(2^{N}\)), it is infeasible to evaluate the expectation. _When \(N>276\), there are more configurations than atoms in the observable universe_(De Paula, 2020). Given this well-known problem, we seek to approximate the distribution \(P\) using a tractable distribution \(Q\). Defining \(\mu_{i}^{P}\coloneqq\mathbb{E}_{P}[Y_{i}|\mathcal{X},D,G]\) and \(\mu_{i}^{Q}\coloneqq\mathbb{E}_{Q}[Y_{i}|\mathcal{X},D,G]\), the objective func tion can be bounded from above by: \[\begin{split} W(D)&=\sum_{i=1}^{N}\mu_{i}^{P}\\ &\leq\sum_{i=1}^{N}|\mu_{i}^{P}-\mu_{i}^{Q}|+\sum_{i=1}^{N}\mu_{i}^ {Q}\\ &=2\mathbb{T}\mathbb{V}(P,Q)+\sum_{i=1}^{N}\mu_{i}^{Q}\\ &\leq\sqrt{2\mathbb{KL}(Q||P)}+\sum_{i=1}^{N}\mu_{i}^{Q}\quad \text{ (by Pinsker's Inequality)},\end{split} \tag{22}\] where \(\mathbb{T}\mathbb{V}(P,Q)\) is the total variation distance between the distributions \(P\) and \(Q\). The approximation error is, therefore, bounded by: \[\sum_{i=1}^{N}\mu_{i}^{P}-\sum_{i=1}^{N}\mu_{i}^{Q}\leq\sqrt{2\mathbb{KL}(Q||P)}. \tag{23}\] It is natural to choose a distribution \(Q\) that minimizes the upper bound \(\mathbb{KL}(Q||P)\) so as to reduce the approximation error. It is not, however, feasible to search over all tractable distributions to find \(Q\); we choose to work with an _independent Bernoulli distribution_. 
**Remark 3.1**.: Some social planners may target maximizing the expected utilitarian welfare (i.e., the summation of individual utilities) when choosing the optimal treatment allocation, in which case the objective function becomes: \[\begin{split} W_{U}(D)&=\sum_{i=1}^{N}\mathbb{E}_{P} [U_{i}(y,\mathcal{X},D,G;\mathbf{\theta})|\mathcal{X},D,G]\\ &=\sum_{i=1}^{N}\theta_{ij}^{1}\mathbb{E}_{P}[y_{i}|\mathcal{X},D,G]+\sum_{i=1}^{N}\sum_{j=1}^{N}\theta_{ij}^{2}\mathbb{E}_{P}[y_{i}y_{j}| \mathcal{X},D,G]\\ &=\sum_{i=1}^{N}\theta_{ij}^{1}\mu_{i}^{P}+\sum_{i=1}^{N}\sum_{j =1}^{N}\theta_{ij}^{2}\mu_{ij}^{P},\end{split} \tag{24}\] where \(\theta_{ij}^{1}=\theta_{0}+\theta_{1}d_{i}+X_{i}^{\prime}\theta_{2}+X_{i}^{ \prime}\theta_{3}d_{i}+A_{N}\sum_{j\in\mathcal{N}_{i}}\theta_{4}m_{ij}d_{j}\), \(\theta_{ij}^{2}=A_{N}\sum_{j\in\mathcal{N}_{i}}(\theta_{5}+\theta_{6}d_{i}d_{ j})m_{ij}\), and \(\mu_{ij}^{P}=\mathbb{E}_{P}[y_{i}y_{j}|\mathcal{X},D,G]\). This \(\mu_{ij}^{P}\) term leads to the bound on the objective function differing substantially from the one in Eq.22. Standard variational approximation does not apply in this setting. We leave analysis of this problem for future research. ### Mean Field Method Using an independent Bernoulli distribution to approximate the target distribution is called _naive mean field approximation_(Wainwright et al., 2008). This method can be viewed as a specific method in the general approach of _variational approximation_, which approximates a complicated probability distribution by a distribution belonging to a class of analytically tractable parametric distributions. In Eq.22, \(P\) corresponds to the target distribution to be approximated and \(Q\) corresponds to a simple parametric distribution approximating \(P\). We consider the class of independent Bernoulli distributions as a parametric family for \(Q\), since it delivers a feasible and fast optimization algorithm and the magnitude of its approximation error is already established in the literature. The probability mass function of an independent Bernoulli distribution \(Q\) is expressed as: \[Q(Y=y)=\prod_{i=1}^{N}(\mu_{i}^{Q})^{y_{i}}(1-\mu_{i}^{Q})^{1-y_{i}}. \tag{25}\] Let \(\mu^{Q}\) be an \(N\times 1\) vector that collects \(\{\mu_{i}^{Q}\}_{i=1}^{N}\). The Kullback-Leibler divergence between \(Q\) and \(P\) equals: \[\begin{split}\mathbb{KL}(Q||P)&=\mathbb{E}_{Q}\Big{[} \log\frac{Q(y)}{P(y)}\Big{]}\\ &=\mathbb{E}_{Q}\Big{[}\log\frac{Q(y)}{\exp[w_{1}^{\prime}y+y^{ \prime}w_{2}y-\log\mathcal{Z}]}\Big{]}\\ &=\mathbb{E}_{Q}[\log Q(y)-w_{1}^{\prime}y-y^{\prime}w_{2}y+\log \mathcal{Z}]\\ &=\log\mathcal{Z}-\Big{[}w_{1}^{\prime}\mu^{Q}+(\mu^{Q})^{\prime }w_{2}\mu^{Q}-\sum_{i=1}^{N}\big{[}\mu_{i}^{Q}\log(\mu_{i}^{Q})+(1-\mu_{i}^{Q })\log(1-\mu_{i}^{Q})\big{]}\Big{]}.\end{split} \tag{26}\] The last line holds since the diagonal entries of \(w_{2}\) are zero and \[\begin{split}\mathbb{E}_{Q}[y^{\prime}w_{2}y]&= \mathbb{E}_{Q}\Big{[}\sum_{i=1}^{N}\sum_{j\neq i}^{N}w_{2}^{ij}y_{i}y_{j}\Big{]} =\sum_{i=1}^{N}\sum_{j\neq i}^{N}w_{2}^{ij}\mathbb{E}_{Q}[y_{i}y_{j}]=\sum_{i =1}^{N}\sum_{j\neq i}^{N}w_{2}^{ij}\mathbb{E}_{Q}[y_{i}]\mathbb{E}_{Q}[y_{j}] \\ &=(\mu^{Q})^{\prime}w_{2}\mu^{Q}.\end{split} \tag{27}\] Recall \(\mathcal{Z}\) in Eq.18 sums over all possible configurations. \(\mathcal{Z}\) is, therefore, independent of \(Y\) (i.e., it is constant). 
We define \(\mathcal{A}(\mu^{Q},\mathcal{X},D,G)\) as: \[\mathcal{A}(\mu^{Q},\mathcal{X},D,G)\coloneqq w_{1}^{\prime}\mu^{Q}+(\mu^{Q})^ {\prime}w_{2}\mu^{Q}-\sum_{i=1}^{N}\big{[}\mu_{i}^{Q}\log(\mu_{i}^{Q})+(1-\mu_ {i}^{Q})\log(1-\mu_{i}^{Q})\big{]}. \tag{28}\] As such, minimizing \(\mathbb{KL}(Q||P)\) is equivalent to maximizing \(\mathcal{A}(\mu^{Q},\mathcal{X},D,G)\). We denote by \(\tilde{\mu}\) the result of the following optimization: \[\begin{split}\tilde{\mu}&=\operatorname*{arg\,sup}_ {\mu^{Q}}\mathcal{A}(\mu^{Q},\mathcal{X},D,G)\\ &=\operatorname*{arg\,sup}_{\mu^{Q}}w_{1}^{\prime}\mu^{Q}+(\mu^{ Q})^{\prime}w_{2}\mu^{Q}-\sum_{i=1}^{N}\big{[}\mu_{i}^{Q}\log(\mu_{i}^{Q})+(1-\mu_{i}^{ Q})\log(1-\mu_{i}^{Q})\big{]}.\end{split} \tag{29}\] Then the approximated distribution \(Q^{*}\) is expressed as: \[Q^{*}(Y=y)=\prod_{i=1}^{N}(\tilde{\mu}_{i})^{y_{i}}(1-\tilde{\mu}_{i})^{1-y_{i }}. \tag{30}\] The first order condition of Eq.29 is: \[\begin{split}\tilde{\mu}_{i}&=\frac{1}{1+\exp[-( \theta_{0}+\theta_{1}d_{i}+X_{i}^{\prime}(\theta_{2}+\theta_{3}d_{i})+A_{N} \theta_{4}\sum\limits_{j\neq i}m_{ij}G_{ij}d_{j}+A_{N}\sum\limits_{j\neq i}m_ {ij}G_{ij}(\theta_{5}+\theta_{6}d_{i}d_{j})\tilde{\mu}_{j})]}\\ &=\Lambda\Big{[}\theta_{0}+\theta_{1}d_{i}+X_{i}^{\prime}(\theta _{2}+\theta_{3}d_{i})+A_{N}\theta_{4}\sum\limits_{j\neq i}m_{ij}G_{ij}d_{j}+A_ {N}\sum\limits_{j\neq i}m_{ij}G_{ij}(\theta_{5}+\theta_{6}d_{i}d_{j})\tilde{ \mu}_{j}\Big{]}.\end{split} \tag{31}\] Given that the above objective function (Eq.29) is non-concave, there may exist multiple maximizers. In the following proposition, we show that this optimization problem does have a unique maximizer. **Proposition 3.1**.: _Unique Maximizer: Under Assumptions 1 to 4, the optimization problem defining \(\tilde{\mu}\) has a unique maximizer and the iteration procedure of Algorithm 1 converges to it._ Proof of Proposition 3.1 is provided in Appendix A.3. To obtain the global optimum, it is sufficient to solve the first-order condition (Eq.31). Finding a root of the first-order conditions is feasible and there exists a fast off-the-shelf iterative method to compute \(\tilde{\mu}\) (see Algorithm 1). Convergence of this algorithm has been extensively studied in the literature on variational approximation (Wainwright et al., 2008). Iteration in Algorithm 1 amounts to coordinate ascent of the mean field variational problem (Eq.29). Given that Eq.29 is a strictly concave function of \(\mu_{i}\) when all other coordinates \(\mu_{-i}\) are held fixed (Wainwright et al., 2008, SSChapter 5.3), the maximum is uniquely attained at every coordinate update. Bertsekas (2016, SSChapter 1.8) guarantees that \(\{\tilde{\mu}^{0},\tilde{\mu}^{1},...\}\) converges to a local optimum. Although local convergence is guaranteed, convergence to a global optimum is not; guaranteeing convergence to a global optimum requires additional conditions that we present in Assumption 5 below. In approximating an interacted joint distribution by a fully independent distribution it is conceivable that there should be some information loss. The following theorem shows, however, that the information loss due to variational approximation (measured in terms of the Kullback-Leibler divergence) converges to zero as the size of the network grows to infinity. **Theorem 3.1**.: _Approximation Error Bound: Let \(Q^{*}\) denote the independent Bernoulli distribution solving Eq.29. 
Under Assumptions 1 to 4, the Kullback-Leibler divergence of \(Q^{*}\) from \(P\) is bounded from above by:_ \[\mathbb{KL}(Q^{*}||P)\leq C_{1}A_{N}\overline{N}+C_{2}N+\mathcal{O}\left( \sqrt{A_{N}^{2}\overline{N}^{2}N}\right)+\mathcal{O}\left(\sqrt{A_{N}^{3} \overline{N}^{2}N^{2}}\right)+\mathcal{O}\left(\sqrt{A_{N}^{3}\overline{N}N^{ 3}}\right)+o(N), \tag{32}\] _where \(C_{1},C_{2}\) are known constants that depend only upon \(\boldsymbol{\theta}\) and \(\overline{m}\)._ This theorem follows as a direct corollary of Chatterjee and Dembo (2016, SSTheorem 1.6). Proof of Theorem 3.1 is provided in Appendix B.1. Theorem 3.1 shows that the upper bound on the approximation error depends upon the complexity of the network (i.e., upon \(\overline{N}\), which is the maximum number of links for one unit in the network, and upon the size of the network), upon attributes in the population, and upon the parameters of the individual utility function. Recall from Eq.23 and Eq.32 that the error due to approximating the welfare at \(P\) by the welfare at \(Q^{*}\) can be bounded by \(\sqrt{\mathbb{KL}(Q^{*}||P)}\leq\mathcal{O}(N^{3/4})\). If our objective is to maximize \(\frac{1}{N}\sum_{i=1}^{N}\mu_{i}^{P}\), Theorem 3.1 implies that this term can be bounded from above by \(\frac{1}{N}\sqrt{\mathbb{KL}(Q^{*}||P)}\leq\mathcal{O}(N^{-1/4})\), which converges to zero as \(N\) becomes large. This means that, as the size of the network becomes large, spillover effects become less important. ### Implementation In the last section, we discussed how to approximate the mean value of the outcome variable using the mean field method. In this section, we propose an algorithm to allocate treatment so as to maximize the approximated welfare and discuss its implementation. Suppose that the set of feasible allocations is subject to a capacity constraint, \(\sum_{i=1}^{N}d_{i}\leq\kappa\), where \(\kappa\in\mathbb{N}_{+}\) specifies the maximum number of units that can be treated. We denote the set of feasible allocations by \(\mathcal{D}_{\kappa}\equiv\{D\in\{0,1\}^{N}:\sum_{i=1}^{N}d_{i}\leq\kappa\}\), and the approximated welfare by: \[\tilde{W}(D)=\sum_{i=1}^{N}\tilde{\mu}_{i}. \tag{33}\] We seek to maximize the approximated welfare: \[\tilde{D}=\operatorname*{arg\,max}_{D\in\mathcal{D}_{\kappa}}\tilde{W}(D). \tag{34}\] As shown in the Eq.31, \(\{\tilde{\mu}_{i}\}_{i=1}^{N}\) is a large non-linear simultaneous equation system. The approximated mean value \(\tilde{\mu}_{i}\) of each unit \(i\) depends non-linearly upon the approximated mean value \(\tilde{\mu}_{j}\) and the treatment assignment \(d_{j}\) of her neighbor, unit \(j\). Hence, the optimization problem (Eq.34) becomes a complicated combinatorial optimization. We propose a greedy algorithm (Algorithm 2) to solve this problem heuristically. The idea of our greedy algorithm is to assign treatment to the unit that contributes most to the welfare objective, repeating this until the capacity constraint binds. Specifically, in each round, Algorithm 1 computes the marginal gain of receiving treatment for each untreated unit. We refer to the unit whose treatment induces the largest increase in the approximated welfare as the most influential unit in that round. We provide a theoretical performance guarantee for our greedy algorithm in Section 3.4. We also numerically examine the performance of our method in Section 4. 
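A compact sketch of the two building blocks just described is given below (our code, not the authors'). It assumes a scalar covariate \(X_{i}\), updates all coordinates synchronously from the previous iterate as in the pseudocode of Algorithm 1 shown below, stops on a sup-norm criterion rather than on the change in \(\mathcal{A}\), and, in each greedy round, simply re-solves the fixed point for every candidate unit; these are simplifications of ours, not the exact Algorithms 1 and 2.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_mu(d, X, G, M, theta, A_N, tol=1e-9, max_iter=10_000, seed=0):
    """Fixed-point iteration for mu_tilde (Eq. 31); X is a scalar covariate per unit here."""
    t0, t1, t2, t3, t4, t5, t6 = theta
    MG = M * G                                    # elementwise m_ij * G_ij (zero diagonal assumed)
    rng = np.random.default_rng(seed)
    mu = rng.uniform(size=len(d))
    base = t0 + t1 * d + X * (t2 + t3 * d) + A_N * t4 * (MG @ d)
    for _ in range(max_iter):
        coupling = A_N * (MG * (t5 + t6 * np.outer(d, d))) @ mu
        mu_new = logistic(base + coupling)
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new
        mu = mu_new
    return mu

def greedy_allocation(X, G, M, theta, A_N, kappa):
    """Greedy rule: repeatedly treat the untreated unit with the largest gain in sum_i mu_tilde_i."""
    N = G.shape[0]
    d = np.zeros(N)
    current = mean_field_mu(d, X, G, M, theta, A_N).sum()
    for _ in range(int(kappa)):
        gains = np.full(N, -np.inf)
        for i in np.where(d == 0)[0]:
            d_try = d.copy()
            d_try[i] = 1.0
            gains[i] = mean_field_mu(d_try, X, G, M, theta, A_N).sum() - current
        best = int(np.argmax(gains))              # most influential unit in this round
        d[best] = 1.0
        current += gains[best]
    return d
```

With a simulated network, this returns a feasible allocation of the kind evaluated in Section 4; each greedy round costs \(\mathcal{O}(N)\) fixed-point solves, as noted below.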
In Algorithm 2, we use a variational approximation method to compute \(\tilde{\mu}\) for each assignment rule and for each round (i.e., there are \(\mathcal{O}(N)\) operations in each round). Alternatively, MCMC can be used to simulate the mean value \(\mu\) of the unique stationary distribution (Eq.9) instead of computing the variationally approximated \(\tilde{\mu}\). Since MCMC may require exponential time for convergence (Chatterjee and Diaconis, 2013) though, simulating \(\mu\) is infeasible for a large network (i.e., MCMC needs to be run \(\mathcal{O}(\kappa N)\) times). In Section 4.2, we compare the welfare computed using these two methods for various treatment allocation rules. ``` Input: Weighted adjacency matrix \(G\), treatment allocation \(D\), covariates \(\mathcal{X}\), parameters \(\boldsymbol{\theta}\), and threshold \(\varrho\) Initialization: Draw \(\tilde{\mu}_{i}^{0}\sim U[0,1],\forall i\in\mathcal{N};\quad t=1\) for\(i\gets 1,...,N\)do \(\tilde{\mu}_{i}^{1}\leftarrow\Lambda\Big{[}\theta_{0}+\theta_{1}d_{i}+X_{i}^{ \prime}(\theta_{2}+\theta_{3}d_{i})+A_{N}\theta_{4}\sum\limits_{j\neq i}m_{ij} G_{ij}d_{j}+A_{N}\sum\limits_{j\neq i}m_{ij}G_{ij}(\theta_{5}+\theta_{6}d_{i}d_{j}) \tilde{\mu}_{j}^{0}\Big{]}\) end for while\(\mathcal{A}(\tilde{\mu}^{t},\mathcal{X},D,G)-\mathcal{A}(\tilde{\mu}^{t-1}, \mathcal{X},D,G)>\varrho\)do \(t\gets t+1\) for\(i\gets 1,...,N\)do \(\tilde{\mu}_{i}^{t}\leftarrow\Lambda\Big{[}\theta_{0}+\theta_{1}d_{i}+X_{i}^{ \prime}(\theta_{2}+\theta_{3}d_{i})+A_{N}\theta_{4}\sum\limits_{j\neq i}m_{ij} G_{ij}d_{j}+A_{N}\sum\limits_{j\neq i}m_{ij}G_{ij}(\theta_{5}+\theta_{6}d_{i}d_{j}) \tilde{\mu}_{j}^{t-1}\Big{]}\) end for end for Return \(\tilde{\mu}\leftarrow\tilde{\mu}^{t}\) ``` **Algorithm 1**Computing \(\tilde{\mu}\) ### Theoretical Analysis In this section, we focus on the theoretical properties of our proposed treatment allocation method. First, we study the convergence of Algorithm 1 to a global optimum. We provide a sufficient condition under which Algorithm 1 is a contraction mapping, which has a unique fixed point. Second, we analyze the regret of the treatment allocation rule computed using our greedy algorithm. **Assumption 5**.: _(Parameter Restriction) We assume that the following restriction on the parameters holds:_ \[A_{N}\overline{m}(|\theta_{5}|+|\theta_{6}|)\overline{N}\leq 4. \tag{35}\] Assumption 5 restricts the magnitude of the spillover effect from common actions with neighbors, \(y_{i}=y_{j}=1\). If this spillover effect is too strong, Algorithm 1 is not guaranteed to be a contraction mapping and convergence to a global optimum is not a given. Algorithm 1 is still guaranteed, however, to converge to a local optimum regardless of whether this condition holds or not. If a different procedure is available for optimizing Eq.29, we do not need to impose Assumption 5. Given knowledge of the parameter values, we can directly check if Assumption 5 holds or not in the given application. In the simulation exercise below, we examine the performance of our greedy algorithm when Assumption 5 is relaxed. **Proposition 3.2**.: _Global Optimum: Under Assumptions 1 to 5, Algorithm 1 is a contraction mapping for all \(\{d_{i}\}_{i=1}^{N}\in\{0,1\}^{N}\), for all \(\mathcal{X}\in\mathbb{R}^{N\times k}\), and for all \(G\in\{0,1\}^{N\times N}\)._ Proof of Proposition 3.2 is provided in Appendix A.4. Since \(\tilde{\mu}\in[0,1]^{N}\), we know that any sequence of iterations generated by Algorithm 1 must converge to a unique fixed point. 
Algorithm 1 must, therefore, yield the unique solution to the problem in Eq.29. Importantly, using iteration to solve the mean field approximation under a proper condition (Assumption 5) does not introduce any further error. We are now able to analyze the regret that is associated with our method. Given \(D^{*}=\arg\max_{D\in\mathcal{D}_{\kappa}}W(D)\) is the maximizer of \(W(D)\), then \(W(D^{*})\) denotes the maximum value of \(W(D)\). _Regret_ is the gap between the maximal equilibrium (oracle) welfare \(W(D^{*})\) and the equilibrium welfare attained at the treatment allocation rule computed using our greedy algorithm \(W(D_{G})\). We decompose regret into four terms: \[\begin{split} W(D^{*})-W(D_{G})&=\underbrace{W(D^{* })-\tilde{W}(D^{*})}_{\leq\sqrt{2\mathbb{KL}(Q^{*}||P)}}+\underbrace{\tilde{W} (D^{*})-\tilde{W}(\tilde{D})}_{\leq 0}+\underbrace{\tilde{W}(\tilde{D})- \tilde{W}(D_{G})}_{\text{Regret from greedy}}+\underbrace{\tilde{W}(D_{G})-W(D_ {G})}_{\leq\sqrt{2\mathbb{KL}(Q^{*}||P)}}\\ &\leq\sqrt{8\mathbb{KL}(Q^{*}||P)}+\tilde{W}(\tilde{D})-\tilde{W} (D_{G}).\end{split} \tag{36}\] The first term corresponds to the approximation error of using variational approximation; the second term comes from using the maximizer of the approximated equilibrium welfare \(\tilde{D}\); the third term comes from using our greedy algorithm instead of using the maximizer of the variationally approximated welfare; and the last component is again introduced by using the approximated equilibrium welfare \(\tilde{W}(D)\). Theorem 3.1 provides an upper bound on the approximation error \(\sqrt{8\mathbb{KL}(Q^{*}||P)}\): \[\begin{split}\sqrt{8\mathbb{KL}(Q^{*}||P)}&\leq \sqrt{8\left[C_{1}A_{N}\overline{N}+C_{2}N+\mathcal{O}\left(\sqrt{A_{N}^{2} \overline{N}^{2}N}\right)+\mathcal{O}\left(\sqrt{A_{N}^{3}\overline{N}^{2}N^ {2}}\right)+\mathcal{O}\left(\sqrt{A_{N}^{3}\overline{N}N^{3}}\right)\right] }\\ &\quad+o(N^{1/2}).\end{split} \tag{37}\] In Eq.37, the convergence rate of the upper bound on approximation error depends upon the network size \(N\), sparsity of the network \(\bar{N}\), and the choice of normalization \(A_{N}\). Depending upon how \(A_{N}\) changes with the size of the network (i.e., our choice of \(A_{N}\) in Eq.13), we have different convergence rates for the upper bound on the variational approximation error: \[\sqrt{8\mathbb{KL}(Q^{*}||P)}\leq\begin{cases}\mathcal{O}(N^{3/4})&\text{ for Sparse Network}\\ \mathcal{O}(N^{1/2})&\text{for Dense Network}.\end{cases} \tag{38}\] For a general objective function mapping \(\{0,1\}^{N}\) to \(\mathbb{R}\), however, there is no theoretical performance guarantee for the greedy algorithm, i.e., it is not known how much worse the greedy optimizer can be than the global optimum in terms of the value of the objective function. For a class of non-decreasing submodular functions on \(\mathcal{D}_{\kappa}\subset\{0,1\}^{N}\), Nemhauser et al. (1978) shows the existence of performance guarantees (\(1-1/e\)). Unfortunately, _submodularity_ does not generally hold for our problem (Eq.34). Other applications have faced the same issue. Relaxing the requirement of submodularity, Conforti and Cornuejols (1984) introduces the concept of _curvature_ to characterize a constant factor in the performance guarantee. Das and Kempe (2011) introduces the _submodularity ratio_ to define the closeness of a set function to submodularity. Bian et al. 
(2017) combines these two concepts (curvature and the submodularity ratio) to obtain a performance guarantee for the greedy algorithm for a large class of non-submodular functions. In what follows, we apply these techniques to the variationally approximated welfare. The definitions of submodularity, the submodularity ratio, and the curvature of a set function \(f\) are as follows. **Definition 3.1**.: **(Submodularity)**: A set function is a submodular function if: \[\sum_{k\in R\setminus S}[f(S\cup\{k\})-f(S)]\geq f(S\cup R)-f(S),\quad\forall S,R\subseteq\mathcal{N}. \tag{39}\] **Definition 3.2**.: **(Submodularity Ratio)** The submodularity ratio of a non-negative set function \(f(\cdot)\) is the largest \(\gamma\) such that \[\sum_{k\in R\setminus S}[f(S\cup\{k\})-f(S)]\geq\gamma[f(S\cup R)-f(S)],\quad \forall S,R\subseteq\mathcal{N}. \tag{40}\] **Definition 3.3**.: **(Curvature)** The curvature of a non-negative set function \(f(\cdot)\) is the smallest value of \(\xi\) such that \[f(R\cup\{k\})-f(R)\geq(1-\xi)[f(S\cup\{k\})-f(S)],\quad\forall S\subseteq R \subseteq\mathcal{N},\forall k\in\mathcal{N}\setminus R. \tag{41}\] The submodularity of a set function is analogous to concavity of a real function and implies that the function has diminishing returns. The marginal increase in the probability of choosing action \(1\) decreases with the number of treated units. The submodularity ratio captures how much greater the probability of choosing action \(1\) is from providing treatment to a group of units versus the combined benefit of treating each unit individually. Curvature can be interpreted as how close a set function is to being additive. We associate the set function \(f(\cdot)\) in the above definitions with the variationally approx imated welfare \(\tilde{W}(\cdot)\), which we view as a real-valued mapping of treatment allocation sets \(\mathcal{D}\subset\mathcal{N}\) (i.e., \(\mathcal{D}=\{i\in\mathcal{N}:d_{i}=1\}\)): \[\begin{split}\tilde{W}(\mathcal{D})&=\sum_{i\in \mathcal{D}}\Lambda\big{[}\theta_{0}+\theta_{1}+X_{i}^{\prime}(\theta_{2}+ \theta_{3})+A_{N}\theta_{5}\sum_{\begin{subarray}{c}j\neq i\\ j\in\mathcal{N}\end{subarray}}m_{ij}G_{ij}\tilde{\mu}_{j}+A_{N}\sum_{ \begin{subarray}{c}j\neq i\\ j\in\mathcal{D}\end{subarray}}m_{ij}G_{ij}(\theta_{4}+\theta_{6}\tilde{\mu}_{ j})\big{]}\\ &\quad+\sum_{k\in\mathcal{N}\setminus\mathcal{D}}\Lambda\big{[} \theta_{0}+X_{k}^{\prime}\theta_{2}+A_{N}\theta_{4}\sum_{\ell\in\mathcal{D}}m_ {k\ell}G_{k\ell}+A_{N}\theta_{5}\sum_{\begin{subarray}{c}\ell\neq k\\ \ell\in\mathcal{N}\end{subarray}}m_{k\ell}G_{k\ell}\tilde{\mu}_{\ell}\big{]}. \end{split} \tag{42}\] We characterize the submodularity ratio and curvature of \(\tilde{W}(\cdot)\) to obtain an analytical performance guarantee for our greedy algorithm. In addition, we restrict our analysis to settings of positive treatment and spillover effects by imposing the following assumption. **Assumption 6**.: _(Positive Treatment and Spillover Effects) We assume that \(\theta_{1},\theta_{3},\theta_{4},\theta_{5},\theta_{6}\geq 0\)._ Assumption 6 restricts the signs of own treatment and spillover effects and it ensures that \(\tilde{W}(\mathcal{D})\) is a non-decreasing set function of \(\mathcal{D}\). This assumption works in many applications, such as allocating vaccinations to increase social health, providing discounts to encourage purchase, and assigning tax auditing to encourage paying tax. 
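To build intuition for Definitions 3.2 and 3.3, the sketch below (ours; practical only for very small ground sets) estimates the submodularity ratio and curvature of a generic non-decreasing set function by enumerating subset pairs. In the paper's setting \(f\) would be the variationally approximated welfare \(\tilde{W}(\cdot)\); the toy function in the demo is only a stand-in.

```python
import itertools
import numpy as np

def submodularity_ratio_and_curvature(f, N):
    """Estimate the submodularity ratio gamma (Def. 3.2) and curvature xi (Def. 3.3)
    of a non-decreasing set function f on {0,...,N-1}. Both are capped to [0, 1];
    the double loop over subsets is feasible only for very small N."""
    ground = list(range(N))
    subsets = [frozenset(s) for r in range(N + 1) for s in itertools.combinations(ground, r)]
    val = {S: f(S) for S in subsets}
    gamma, xi = 1.0, 0.0
    for S in subsets:
        for R in subsets:
            joint = val[S | R] - val[S]
            singles = sum(val[S | {k}] - val[S] for k in R - S)
            if joint > 1e-12:                      # Def. 3.2: singles >= gamma * joint
                gamma = min(gamma, singles / joint)
            if S <= R:                             # Def. 3.3 requires S subset of R
                for k in set(ground) - R:
                    gain_R = val[R | {k}] - val[R]
                    gain_S = val[S | {k}] - val[S]
                    if gain_S > 1e-12:             # gain_R >= (1 - xi) * gain_S
                        xi = max(xi, 1.0 - gain_R / gain_S)
    return max(gamma, 0.0), min(xi, 1.0)

# toy monotone (not necessarily submodular) set function standing in for W_tilde
w = np.array([0.5, 1.0, 0.2, 0.8, 0.3])
toy_f = lambda S: float(sum(w[k] for k in S) ** 1.2)
print(submodularity_ratio_and_curvature(toy_f, len(w)))
```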
**Lemma 3.1**.: _Under Assumption 1 to 6, \(\tilde{W}(\mathcal{D})\) is a non-negative and non-decreasing set function._ Proof of Lemma 3.1 is provided in Appendix A.5. By showing that a set function is non-decreasing, its curvature \(\xi\) and its submodularity ratio \(\gamma\) must belong to \([0,1]\)(Bian et al., 2017). Having \(\xi\in[0,1]\) and \(\gamma\in[0,1]\) is not, however, enough to attain a nontrivial performance guarantee. For instance, if \(\gamma=0\), the lower bound in Theorem 3.2 equals 0, which is a trivial lower bound; if \(\xi=0\), then the lower bound equals \(\gamma\), which could be \(0\). To rule out these trivial cases, we impose the following assumption, which gives a sufficient condition to bound the submodularity ratio and curvature away from 0 and 1. **Assumption 7**.: _(Lower Bound on \(N\)) We assume that the sample size satisfies:_ \[N\geq\big{(}\theta_{3}\underline{N}\cdot\underline{m}+\theta_{1}\big{)}/4. \tag{43}\] Assumption 7 restricts the sample size of the network. Even in a dense network, \(\underline{N}\) (the minimum number of edges for one unit in the network) can be small and, accordingly, the requirement on the sample size. Assumption 7 easily holds as \(N\) grows. We are now able to provide a performance guarantee for our greedy algorithm. **Theorem 3.2**.: _Performance Guarantee for Greedy Algorithm: Under Assumptions 1 to 7, the curvature \(\xi\) of \(\tilde{W}(\mathcal{D})\) and the submodularity ratio \(\gamma\) of \(\tilde{W}(\mathcal{D})\) are in \((0,1)\). The greedy algorithm enjoys the following approximation guarantee for the problem in Eq.34:_ \[\tilde{W}(D_{G})\geq\frac{1}{\xi}(1-e^{-\xi\gamma})\tilde{W}(\tilde{D}), \tag{44}\] _where \(D_{G}\) is the treatment assignment rule that is obtained by Algorithm 2._ The second part of Theorem 3.2 is taken from (Bian et al., 2017, SSTheorem 1). Proof of Theorem 3.2 is provided in Appendix B.2. Theorem 3.2 indicates that there exists a performance guarantee that depends upon the unknown curvature and upon the submodularity ratio. The first part of Theorem 3.2 dictates that the performance guarantee is a non-trivial bound. It is infeasible to determine \(\xi\) and \(\gamma\) for \(\tilde{W}(\mathcal{D})\); it is, however, possible to derive an upper bound for \(\xi\) and a lower bound for \(\gamma\), which combined with Assumption 7, excludes triviality. Combining all of the previous results, we are able to use Bian et al. (2017, SSTheorem 1) to provide a non-trivial performance guarantee on \(\tilde{W}(\mathcal{D})\). We emphasize that if \(\xi=1\) and \(\gamma=1\), the performance guarantee in Theorem 3.2 coincides with the well-known performance guarantee constant of the greedy algorithm for submodular functions (i.e., \(1-1/e\) Nemhauser et al., 1978). If \(\xi<1\) or \(\gamma<1\), the performance guarantee is worse than \(1-1/e\). Via Theorem 3.2 we can obtain an upper bound on the regret from using our greedy algorithm: \[\tilde{W}(\tilde{D})-\tilde{W}(D_{G})\leq\Big{[}1-\frac{1}{\xi}(1-e^{-\xi \gamma})\Big{]}\tilde{W}(\tilde{D}). \tag{45}\] Plugging Eq.45 into Eq.36, we obtain our main theorem: **Theorem 3.3**.: _Regret Bound: Let \(D^{*}\) denote the maximizer of \(\tilde{W}(\mathcal{D})\) and \(D_{G}\) be the assignment vector obtained by Algorithm 2. 
Under Assumptions 1 to 7, given curvature \(\xi\) and submodularity ratio \(\gamma\), the regret is bounded from above by:_ \[W(D^{*})-W(D_{G}) \leq\sqrt{8\left[C_{1}A_{N}\overline{N}+C_{2}N+\mathcal{O}\left( \sqrt{A_{N}^{2}\overline{N}^{2}N}\right)+\mathcal{O}\left(\sqrt{A_{N}^{3} \overline{N}^{2}N^{2}}\right)+\mathcal{O}\left(\sqrt{A_{N}^{3}\overline{N}N^ {3}}\right)\right]} \tag{46}\] \[+\Big{[}1-\frac{1}{\xi}(1-e^{-\xi\gamma})\Big{]}\tilde{W}(\tilde {D})+o(N^{1/2}),\] _where \(C_{1},C_{2}\) are known constants that are defined in Theorem 3.1. Hence,_ \[W(D^{*})-W(D_{G})\leq\begin{cases}\mathcal{O}(N^{3/4})+\mathcal{O}(N)\Big{[}1- \frac{1}{\xi}(1-e^{-\xi\gamma})\Big{]}&\text{for a sparse network}\\ \mathcal{O}(N^{1/2})+\mathcal{O}(N)\Big{[}1-\frac{1}{\xi}(1-e^{-\xi\gamma}) \Big{]}&\text{for a dense network}.\end{cases} \tag{47}\] Theorem 3.3 is our key result. It characterizes the convergence rate of overall regret. Overall regret depends upon the network complexity, the network size, and the parameters of the utility function. If we examine the average equilibrium welfare, then the regret bound becomes: \[\frac{1}{N}(W(D^{*})-W(D_{G}))\leq\begin{cases}\mathcal{O}(N^{-1/4})+\frac{1}{N }\tilde{W}(\tilde{D})\Big{[}1-\frac{1}{\xi}(1-e^{-\xi\gamma})\Big{]}&\text{for a sparse network}\\ \mathcal{O}(N^{-1/2})+\frac{1}{N}\tilde{W}(\tilde{D})\Big{[}1-\frac{1}{\xi}(1 -e^{-\xi\gamma})\Big{]}&\text{for a dense network}.\end{cases} \tag{48}\] The first term is the approximation error and shrinks to zero as \(N\) goes to infinity. Given that \(\tilde{W}(\tilde{D})\) can be a function of \(N\), the regret that is associated with our greedy algorithm can converge to a constant. ## 4 Simulation Exercises In this section, we evaluate the performance of our greedy algorithm in simulation exercises. We use an Erdos-Renyi model to generate random social networks. For each choice of \(N\), we generate \(100\) networks with fixed density (i.e., \(0.3\) and \(0.6\))6 and use the average of the equilibrium welfare over these \(100\) networks to assess the performance of our method. For personal covariates \(\mathcal{X}\), we choose a binary variable that is generated from a Bernoulli distribution \(B(0.5)\). We specify \(m(X_{i},X_{j})\) as \(m_{ij}=|X_{i}-X_{j}|\). We report the equilibrium welfare as the per-person equilibrium average, \(\max_{D\in\mathcal{D}_{\kappa}}1/N\sum_{i=1}^{N}\mathbb{E}[Y_{i}|\mathcal{X}, D,G]\). In addition, we specify the tolerance level \(\varrho\) of Algorithm 1 as \(1.0\mathrm{E}-9\). The capacity constraint that we choose is \(\kappa=30\%N\). To evaluate the impact of Assumption 5 on the performance of our greedy algorithm, we choose two parameter sets. The first set of parameters satisfies Assumption 5 whilst the second set of parameters violates this condition. Table 1 summarizes the values of the parameters in our simulation. Footnote 6: Number of edges = density \(\times\frac{N(N-1)}{2}\). In the following sections, we compare our greedy algorithm with random allocation in a small network setting and in a large network setting. Random allocation assigns treatment to a fraction \(\kappa\) of units independently of personal characteristics and network structure. In the small network setting, we are able to compute the distribution of outcomes at every possible assignment vector, and use a brute force method to find an optimal treatment allocation. Using the welfare level at the optimal assignment as a benchmark, we can calculate the regret of our greedy algorithm. 
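The simulation design described above can be reproduced with a few lines (our sketch; function names are ours): a random graph with a fixed number of edges equal to density \(\times N(N-1)/2\), binary covariates \(X_{i}\sim B(0.5)\), pair weights \(m_{ij}=|X_{i}-X_{j}|\), and a capacity of \(\kappa=0.3N\).

```python
import numpy as np

def simulate_network(N, density, seed=0):
    """One draw of the simulation design: fixed-edge-count random graph, binary covariates,
    and pair weights m_ij = |X_i - X_j|."""
    rng = np.random.default_rng(seed)
    n_edges = int(round(density * N * (N - 1) / 2))
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    chosen = rng.choice(len(pairs), size=n_edges, replace=False)
    G = np.zeros((N, N))
    for idx in chosen:
        i, j = pairs[idx]
        G[i, j] = G[j, i] = 1.0
    X = rng.binomial(1, 0.5, size=N).astype(float)
    M = np.abs(X[:, None] - X[None, :])          # m_ij = |X_i - X_j|
    return G, X, M

# e.g. one network with N = 50 and density 0.3, capacity constraint kappa = 0.3 * N
G, X, M = simulate_network(N=50, density=0.3, seed=1)
kappa = int(0.3 * 50)
```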
Since the number of possible assignment vectors grows rapidly with the number of units, we cannot compute the regret in the large network analysis of Section 4.2 in this way. We instead assess the welfare performance of the greedy targeting rule in comparison to the welfare level of the No treatment rule. ### Small Network We consider \(N=5,7,9,11,13\) or \(15\) to be a small network setting in our simulation exercise. First, we consider all possible treatment allocations subject to the capacity constraint and perform brute force search to find an optimal assignment. For instance, when \(N=15\), the number of feasible assignment vectors is \(32,768\). We compute the joint distribution of outcomes at each possible treatment allocation by applying the joint probability mass function of the Gibbs distribution (Eq.9). Second, to assess the welfare loss from implementing the variational approximation, we evaluate the regret of a treatment assignment rule that is obtained by maximizing the variationally approximated welfare over every feasible treatment allocation meeting the capacity constraint (without greedy optimization). We label this method of obtaining the optimal treatment assignment as _brute force with variational approximation_ (BFVA). Table 2 records the main differences between the two aforementioned methods and the greedy targeting rule in terms of in-sample average welfare. From Table 2, we find that our greedy algorithm performs as well as the brute force method in a small network setting except when \(N=5\) (\(1\%\) gap for \(N=5\)). This indicates a good performance of our method. We find that the regret when \(N=5\) mainly comes from the approximation error of using a variational approximation. As we have shown in Theorem 3.1, the upper bound on the Kullback-Leibler divergence can be large when the sample size is small. This coincides with the empirical result. Our greedy algorithm can, however, achieve \begin{table} \begin{tabular}{l c c c c c c} \hline \hline _Parameters_ & \(\theta_{0}\) & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & \(\theta_{4}\) & \(\theta_{5}\) & \(\theta_{6}\) \\ \hline _Set_\(1\) & \(-2\) & \(0.5\) & \(0.1\) & \(0.6\) & \(0.7\) & \(0.8\) & \(0.9\) \\ _Set_\(2\) & \(-2\) & \(0.5\) & \(0.1\) & \(0.6\) & \(0.7\) & \(7\) & \(7\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the parameter values the same performance as BFVA, which means that using our greedy algorithm has a negligible effect upon regret. In Figure 1, we compare the regret from using our greedy algorithm to random allocation for parameter set \(1\) (Assumption 5 is satisfied). Here, random allocation means that we randomly draw 50 allocation rules that satisfy the capacity constraint and average the welfare that they generate. The left-hand graph presents this comparison for density equal to \(0.3\); the right-hand graph presents this comparison for density equal to \(0.6\). From Figure 1, we find that the performance gap between our greedy targeting rule and random allocation in terms of regret ranges from \(7\%\) to \(14\%\). Figure 2 indicates the results from using parameter set 2 (Assumption 5 is violated). Regret is greater than for parameter set 1, both when using our greedy algorithm and using random allocation. This indicates the appropriateness of our assumptions. 
As for parameter set 1, when the sample size is small the majority of regret comes from using variational approximation, with Algorithm 1 converging to a local optimum for each of the sample sizes that we consider. This result coincides with Proposition 3.2. We emphasize, however, that when \(N=7,9,11,13,15\), the regret from using our greedy algorithm is maintained within \(10\%\), which dominates the performance of random allocation. This indicates that the advantage to using our greedy method is maintained even when Assumption 5 does not hold. ### Large Network We now extend our simulation exercise to large network settings where \(N=50,100\) or \(150\). As previously mentioned, we can neither search over all possible allocation vectors nor compute the joint distribution over all possible vectors in a large network setting. To deal with these two problems, we first choose a baseline assignment rule - the No treatment rule - with which to compare the allocation rules that we compute. We evaluate the additional average welfare that we gain by providing treatment relative to the No treatment rule, rather than relative to the optimal assignment rule as we did for the small network setting. In Table 3, we summarize the average welfare for treatment assignment rules corresponding to greedy targeting, random allocation, and No treatment. Second, we use Gibbs sampling Figure 1: Comparison between the greedy algorithm and random allocation for the parameter set \(1\) (Left: density \(=0.3\) and Right: density \(=0.6\)) Figure 2: Comparison between the greedy algorithm and random allocation for the parameter set \(2\) (Left: density \(=0.3\) and Right: density \(=0.6\)) to approximate the joint distribution (Eq.9), iterating \(10,000\) times (burning period equal to \(5,000\)) for each class of treatment rule. Using Gibbs sampling, however, is not necessarily a feasible method to evaluate random allocation (and more generally) in a large network given its slow convergence. In the exercise, we use \(10\) random networks and \(10\) random draws, which takes approximately \(30\) hours to compute a result for random allocation.7 In contrast, it takes only \(20\) seconds to obtain a result for random allocation using variational approximation. Footnote 7: We use parallel processing on a computer with an 8 core Intel i7-10700 CPU and 32GB RAM. In Table 3, we compare the welfare delivered by Gibbs sampling with that delivered by variational approximation for the three aforementioned classes of treatment assignment rules. All the results in Table 3 are computed across \(100\) random networks, using the average of \(10\) random draws for random allocation, and with the capacity constraint set at \(0.3N\). Table 3 indicates that variational approximation constitutes a good approximation of the Gibbs distribution (Eq.9) under Assumption 5. This provides strong evidence in favour of using the variational approximation in our algorithm. Table 3 indicates that using our greedy algorithm leads to an increase in welfare of approximately \(10\%\) as compared with random allocation. Relative to No treatment, our greedy algorithm performs \(37\%\sim 55\%\) better than the random allocation. This result is robust to the network density. This suggests that the welfare gain from using our greedy algorithm carries over to the large network setting. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{_Average Welfare with MCMC_} & \multicolumn{3}{c}{_Average Welfare with VA_} \\ \cline{2-7} _Allocation Rule_ & \(N=50\) & \(N=100\) & \(N=150\) & \(N=50\) & \(N=100\) & \(N=150\) \\ \hline _Density_\(=0.3\) & & & & & & \\ **Greedy algorithm** & \(0.186\) & \(0.186\) & \(0.186\) & \(0.186\) & \(0.186\) & \(0.186\) \\ & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) \\ **Random allocation** & \(0.166\) & \(0.170\) & \(0.170\) & \(0.164\) & \(0.170\) & \(0.169\) \\ & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) \\ **No treatment rule** & \(0.126\) & \(0.127\) & \(0.127\) & \(0.126\) & \(0.127\) & \(0.127\) \\ & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) \\ **Greedy algorithm** & \(0.194\) & \(0.193\) & \(0.193\) & \(0.194\) & \(0.193\) & \(0.193\) \\ & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) \\ **Random allocation** & \(0.173\) & \(0.178\) & \(0.178\) & \(0.172\) & \(0.178\) & \(0.178\) \\ & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) \\ **No treatment rule** & \(0.128\) & \(0.129\) & \(0.129\) & \(0.127\) & \(0.129\) & \(0.129\) \\ & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) & (\(<0.01\)) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison between the average welfare computed using Gibbs sampling and variational approximation for parameter set \(1\) ## 5 Conclusion In this work, we have introduced a novel method to obtain individualized treatment allocation rules that maximize the equilibrium welfare in sequential network games. We have considered settings where the stationary joint distribution of outcomes follows a Gibbs distribution. To handle the analytical and computational challenge of analyzing the Gibbs distribution, we use variational approximation and maximize the approximated welfare criterion using a greedy maximization algorithm over treatment allocations. We have obtained bounds on the approximation error of the variational approximation and of the greedy maximization in terms of the equilibrium welfare. Moreover, we derive an upper bound on the convergence rate of the welfare regret bound. Using simulation, we have shown that our greedy algorithm performs as well as the globally optimal treatment allocation in a small network setting. In a large network setting with a given specification of parameter values, our greedy algorithm dominates random allocation and leads to a welfare improvement of around \(50\%\) compared with No treatment. We suggest that several questions remain open and that there are several ways in which our work can be extended. First, we have not considered parameter estimation in this work. A relevant question is how to incorporate the uncertainty from parameter estimation into our analysis of regret. In addition, we may want to perform inference for the welfare at the obtained assignment rule, taking into account the uncertainty of parameter estimates and a potential winner's bias (Andrews et al., 2020). 
Second, to validate the iteration method for \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{_Average Welfare with MCMC_} & \multicolumn{3}{c}{_Average Welfare with VA_} \\ \cline{2-7} _Allocation Rule_ & \(N=50\) & \(N=100\) & \(N=150\) & \(N=50\) & \(N=100\) & \(N=150\) \\ \hline \multicolumn{7}{l}{_Density \(=0.3\)_} \\ **greedy algorithm** & \(\begin{array}{c}0.227\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.218\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.215\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.237\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.228\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.225\\ (<0.01)\end{array}\) \\ **Random allocation** & \(\begin{array}{c}0.201\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.203\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.203\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.209\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.214\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.213\\ (<0.01)\end{array}\) \\ **No treatment rule** & \(\begin{array}{c}0.143\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.143\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.143\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.149\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.150\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.149\\ (<0.01)\end{array}\) \\ \(Density=0.6\) & & & & & & \\ **greedy algorithm** & \(\begin{array}{c}0.317\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.305\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.299\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.346\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.343\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.339\\ (<0.01)\end{array}\) \\ **Random allocation** & \(\begin{array}{c}0.287\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.293\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.292\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.321\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.334\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.339\\ (<0.01)\end{array}\) \\ **No treatment rule** & \(\begin{array}{c}0.171\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.171\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.170\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.207\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.208\\ (<0.01)\end{array}\) & \(\begin{array}{c}0.208\\ (<0.01)\end{array}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison between the average welfare computed using Gibbs sampling and variational approximation for parameter set \(2\) computing the variational approximation, we rely on assumptions on the spillover effect to guarantee convergence to an optimal variational approximation. Relaxing this assumption to allow for unconstrained parameter values remains a topic for future research. Third, we have used a naive mean field method in this work. As is mentioned in Wainwright et al. (2008), using a structural mean field method can improve the performance of an approximation and can lead to better welfare performance.
2308.10477
What absorbs the early TeV photons of GRB 221009A?
The tera-electronvolt (TeV) light curve of gamma-ray burst (GRB) 221009A shows an unprecedentedly rapid rise at the beginning epoch. This phenomenon could be due to the strong absorption of photons and electrons within the emitting region. As the external shock expands outwards and the radius increases, the volume of matter also increases, leading to a gradual decrease in the optical depth for TeV photons. We explore several possibilities for the physical origin of this peculiar behavior. We calculate the optical depth for TeV photons due to annihilation with lower energy photons in the external shock and scattering by electrons produced via cascading of the TeV emission. Even under aggressive assumptions, we find the optical depths for these processes are orders of magnitude too small to explain the observed light curve. Other sources of absorbers, such as electrons in the ejecta or external shock, also do not yield sufficient optical depths. Therefore, the origin of the early peculiar TeV light curve remains uncertain.
Jun-Yi Shen, Yuan-Chuan Zou, A. M. Chen, Duan-Yuan Gao
2023-08-21T05:24:10Z
http://arxiv.org/abs/2308.10477v2
# What absorbs the early TeV photons of GRB 221009A? ###### Abstract The tera-electronvolt (TeV) light curve of gamma-ray burst (GRB) 221009A shows an unprecedentedly rapid rise at the beginning epoch. This phenomenon could be due to the strong absorption of photons and electrons within the emitting region. As the external shock expands outwards and the radius increases, the volume of matter also increases, leading to a gradual decrease in the optical depth for TeV photons. We explore several possibilities for the physical origin of this peculiar behavior. We calculate the optical depth for TeV photons due to annihilation with lower energy photons in the external shock and scattering by electrons produced via cascading of the TeV emission. Even under aggressive assumptions, we find the optical depths for these processes are orders of magnitude too small to explain the observed light curve. Other sources of absorbers, such as electrons in the ejecta or external shock, also do not yield sufficient optical depths. Therefore, the origin of the early peculiar TeV light curve remains uncertain. keywords: (transients:) gamma-ray bursts - opacity ## 1 Introduction Gamma-ray bursts (GRBs) are transient sources resulting from highly energetic astrophysical events that emit a large number of high-energy photons, which travel vast distances to reach the observer. GRB emission can be divided into two phases: prompt emission and afterglow emission, each exhibiting distinct observational characteristics. In recent years, the afterglows of several GRBs have been detected at very high energies (VHE, i.e., \(E>0.1\) TeV), including GRBs 180720B (Abdalla et al., 2019), 190114C (MAGIC Collaboration et al., 2019), 190829A (H. E. S. S. Collaboration et al., 2021), 201015A (Blanch et al., 2020), and 201216C (Blanch et al., 2020). On 2022 October 9 at 13:16:59.99 UT (\(T_{0}\)), the Gamma-Ray Burst Monitor (GBM) onboard the Fermi satellite triggered on GRB 221009A (Lesage et al., 2023). Following the trigger, other high-energy detectors such as Konus-WIND, SRG/ART-XC, INTEGRAL, Insight-HXMT, GECAM-C, Swift, MAXI, and NICER also detected the prompt emission and afterglow emission at an early stage (Ripa et al., 2023; Rodi and Ubertini, 2023; An et al., 2023; Williams et al., 2023). With a redshift \(z\sim 0.151\), corresponding to a luminosity distance of \(2.1\times 10^{27}\) cm (Malesani et al., 2023), GRB 221009A stands out as an exceptionally high-luminosity event and is the brightest burst ever detected (Burns et al., 2023). The Large High Altitude Air Shower Observatory (LHAASO), situated in Daocheng, Sichuan Province, China (Cao et al., 2019), also reported the detection of the very early VHE afterglow of GRB 221009A, with more than 64,000 photons above 0.2 TeV observed within the first 3000 seconds (LHAASO Collaboration, 2023). Overall, the TeV light curve of GRB 221009A shows a four-segment shape, with a rapid initial rise, a slower rise up to the peak, a slow decay after the peak, and then a steep decay after a break. Each stage can be well fitted by a power-law function of time (\(f_{\nu}\propto t^{\alpha}\), where \(t=T-T_{\star}\), and \(T_{\star}\) is defined as \(T_{\star}=T_{0}+226\) s), indicating an external shock origin (LHAASO Collaboration, 2023). 
Although the observed TeV data can be basically explained by synchrotron-self-Compton emission in the external shock, the rapid rise at the very early stage (\(t\sim 0-4.85\) s), with a temporal slope of \(\alpha\approx 14.9\), is difficult to explain under the standard afterglow scenario, which predicts \(\alpha=4\) in a homogeneous medium or \(\alpha=1/2\) in a wind medium. There are several possible reasons that may explain the initial rapid rise of the TeV light curve. On one hand, it could be related to the dynamical evolution of the external shock: at the very early stage, the external shock driven by the outer ejecta could be energized by the inner ejecta, leading to an increase of the bulk Lorentz factor of the external shock and therefore a dramatic increase of the TeV flux (LHAASO Collaboration, 2023). Alternatively, the rapid rise could be due to strong absorption by photons and electrons within the emitting region. If we examine the light curve carefully, the TeV flux at around \(t\sim 2\) s is already high enough to lie on the back-extrapolation of the normal, slower rise seen during \(t\sim 4.85\) s to \(t\sim 18\) s. This indicates that the fast rise phase from \(t\sim 2\) s to \(t\sim 4.85\) s could be due to absorption of the TeV photons. In this paper, we explore this scenario as the physical origin of the early stage of the TeV light curve of GRB 221009A. We propose several possible explanations for this process. One assumption is that the optical depth for TeV photons decreases with time: initially, the external shock is optically thick for TeV photons, but as time progresses, the optical depth (\(\tau\)) decreases and the external shock becomes transparent. Alternatively, the TeV photons could be blocked by cascade-generated secondary electrons; as the external shock moves outwards, the electron density decreases, leading to the rapid increase of the TeV flux at the early stage. We also explore other potential processes that may absorb the TeV photons, such as annihilation with the hundred-keV afterglow photons or scattering by ambient particles. This paper is organized as follows: section 2 introduces the method for calculating the particle density of the external shock. In that section, we also describe some other processes that may absorb TeV photons, and we then use the observational data to calculate the optical depth. In section 3, we present our conclusion and discussion. ## 2 Possibilities of absorbing the early TeV photons A TeV photon can generally be attenuated by two processes: annihilation with a low-energy photon, or scattering by an electron. For annihilation, the low-energy photons may already have been observed, so we can constrain this process directly from the observations. For scattering, the electrons (and/or positrons) may have diverse origins. They could come from a cascade of the TeV photons: the cascade process generates a large number of electron-positron pairs in the external shock. Additionally, the external shock accelerates the interstellar medium (ISM) and sweeps these particles up, causing them to move together with the external shock. The electrons could also be ejected from the central engine: a study by Wang et al. (2023) proposed the possibility that the TeV emission comes from the internal shock, in which case the electrons in the internal shock may also absorb the TeV photons; we discuss this effect as well. These groups of particles may block the TeV photons. Finally, the electrons could come from the swept-up circum-burst medium. 
In this section, we will describe how to calculate the optical depth of TeV photons for several possibilities. Before going into the detailed origin of the low-energy photons or electrons, we first give the expressions for the optical depths. In the dense plasma created by the cascade process, TeV photons interact with these particles, potentially causing the matter of the external shock to become optically thick for TeV photons. The relevant interactions include \(\gamma\gamma\) pair production and Compton scattering, among other processes. The cross-section of Compton scattering \(\sigma_{\rm c}\) can be expressed, in the high-energy limit, by the Klein-Nishina formula (Section 2, Chapter 5 in You 1998; Klein & Nishina 1929): \[\sigma_{\rm c}=\frac{3\sigma_{\rm T}}{8}\Gamma^{-1}(\ln 2\Gamma+\frac{1}{2}), \tag{1}\] where \(\Gamma\) is \(\hbar\omega/m_{e}c^{2}\), and \(\omega\), \(\hbar\), \(c\), \(m_{e}\), and \(\sigma_{\rm T}\) are the frequency of the photon, the reduced Planck constant, the speed of light, the rest mass of the electron, and the Thomson scattering cross-section, respectively. The \(\gamma\gamma\) process cross-section is described by the following equation in the head-on collision approximation (Section 7, Chapter 5 in You 1998; Gould & Schreder 1967): \[\sigma_{\gamma\gamma}=\frac{3}{16}\sigma_{\rm T}(1-\beta^{2})\left[(3-\beta^{ 4})\ln\frac{1+\beta}{1-\beta}-2\beta(2-\beta^{2})\right], \tag{2}\] with \[\beta=\sqrt{1-\left(\frac{m_{e}c^{2}}{\hbar\omega_{0}}\right)^{2}}, \tag{3}\] where \(\omega_{0}\) is the common frequency of the two photons in their center-of-momentum frame. The \(\gamma\gamma\) interaction is important in the cascade process. The TeV photons come from the external shock. In this region, Kann et al. (2023) detected photons of several hundred keV, which have a high \(\gamma\gamma\) cross-section with TeV photons; the keV photons in this region could therefore absorb the TeV photons. The optical depth \(\tau\) is: \[\tau=\sigma nl, \tag{4}\] where \(\sigma\) is the cross-section, taken from Eq. (1) or Eq. (2) depending on the process, \(n\) is the number density of target particles, and \(l\) is the thickness of the region. From Eq. (4), we can determine whether the photon can escape from the region. In the following, we consider the optical depth for the TeV photons colliding with different possible targets. ### Absorbed by the TeV cascaded electron-positron pairs A promising and self-consistent scenario could be that the pairs cascaded from the TeV photons block the TeV photons themselves. At the very early time (\(<1\) s), the cascade has not fully started yet, and the TeV photons escaped. Later on (\(\sim 2-4.85\) s), the cascade started, and the cascaded electron-positron pairs blocked the TeV photons via the Compton scattering process; this would be the dip we observed in the TeV light curve. After that, as the radius of the TeV emitting region expanded, the optical depth dropped below unity, and the TeV light curve returned to that of a normal GRB afterglow in the optically thin case. The cascade process can be studied using Monte Carlo simulation (Mucke et al. 2000). Bottcher et al. (2013) developed a semi-analytical method to calculate the generation of cascade particles. Eq. (2) is important in the semi-analytical method and is used in Huang et al. (2021) and Bottcher et al. (2013). We roughly consider that only TeV photons are injected into the system. The escaping photon rate \(\dot{N}_{e}^{\rm esc}\) can be written as follows (Bottcher et al. 
2013): \[\dot{N}_{e}^{\rm esc}=(\dot{N}_{e}^{0}+\dot{N}_{e}^{\rm sec})\left[\frac{1-e^{ -\tau_{\gamma\gamma}(\epsilon)}}{\tau_{\gamma\gamma}(\epsilon)}\right], \tag{5}\] where \(\epsilon\) denotes the energy of the photon, \(\tau_{\gamma\gamma}(\epsilon)\) is the optical depth of photons due to \(\gamma\gamma\) absorption, \(\dot{N}_{e}^{0}\) represents the injection rate of TeV photons, and \(\dot{N}_{e}^{\rm sec}\) is the secondary photon component mainly arising from synchrotron radiation. By utilizing Eq. (5) and Eq. (2), as well as the expression for \(\dot{N}_{e}^{\rm sec}\), we can calculate the generation rate of electron-positron pairs by solving these equations. This approach simplifies the calculation. Before performing the detailed semi-analytical calculation, we can make two estimates of the number of cascaded pairs, i.e., a conservative estimate and an aggressive estimate, to obtain the lower and upper bounds. When the \(\gamma\gamma\) reaction occurs, energy conservation must be obeyed: the total energy of the two \(\gamma\) photons must be larger than \(2m_{e}c^{2}\). Since the external shock is moving with a bulk Lorentz factor \(\Gamma_{b}\sim 560\) (LHAASO Collaboration, 2023), while the TeV photons are detected in the observer's frame, the energy of a photon in the co-moving frame of the external shock is reduced by a factor of \(\Gamma_{b}^{-1}\). Therefore, in the observer's frame, the total energy of the two photons in the \(\gamma\gamma\) process must be larger than \(E_{m}\simeq 2m_{e}c^{2}\Gamma_{b}\sim 570\) MeV. A conservative estimate is that all the particles with energy less than \(E_{m}\) cannot continue the cascade reaction. Then, we can approximate that the TeV photons will cascade and generate \(N\) particles, given by: \[N=\frac{4\pi D_{\rm L}^{2}\int_{0}^{t}f^{\prime}_{t}dt^{\prime}}{E_{m}}. \tag{6}\] With this assumption, we can estimate the flux of blocked TeV photons: by extrapolating the flux measured during \(4\sim 10\) s back to \(1\sim 4.85\) s and taking the difference from the observed flux (data are from LHAASO Collaboration (2023)), we estimate the flux \(f^{\prime}_{t}\) of TeV photons involved in the \(\gamma\gamma\) process. Then we have the evolution of the optical depth: \[\tau(t)=\frac{D_{L}^{2}\int_{0}^{t}f^{\prime}_{t}dt^{\prime}}{E_{m}R^{2}}\sigma _{\rm c}. \tag{7}\] The shock radius \(R\) is given by: \[R=2c\Gamma_{b}^{2}t. \tag{8}\] For GRB 221009A, with a luminosity distance of \(D_{\rm L}=2.1\times 10^{27}\) cm and a flux of \(f_{t}^{\prime}\sim 10^{-6}\) erg cm\({}^{-2}\) s\({}^{-1}\), the \(\tau(t)\) is: \[\tau(t)\sim 1\times 10^{-6}\frac{f_{t,-6}^{\prime}D_{L,27}^{2}}{E_{m,2}\Gamma_{ b,2}^{4}t}\sim 2\times 10^{-9}t^{-1}, \tag{9}\] where \(Q_{i}=Q\times 10^{i}\), and \(E_{m}\) is in units of MeV. This is far below the value required for the region to be optically thick to TeV photons at \(t=4.85\) s, which is approximately \(\tau|_{(t=4.85~{}s)}\sim 1\). Therefore, the conservative estimate of the optical depth is not sufficient to block TeV photons. For an aggressive assumption, we choose \(E_{m}\) as 0.511 MeV, i.e., assuming the TeV photons are converted into pairs with no energy waste. In this case, the number density of electrons and positrons is highest; however, it is impossible for the cascade matter to be so dense. In the same way as Eq. (9), we have: \[\tau(t)\sim 1\times 10^{-6}t^{-1}. \tag{10}\] We can see that even with the most aggressive estimate, the number density of electrons is not enough to block the TeV photons. 
Therefore, without knowing the details of the cascade, we can see that the cascaded electron-positron pairs are not dense enough to block the early TeV photons. 
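The conservative estimate above can be checked numerically with the short script below (our sketch; the numerical constants are standard cgs values, and evaluating \(\sigma_{\rm c}\) at the comoving photon energy is our choice, since the text does not state it explicitly). It implements Eqs. (6)-(8) and reproduces the conclusion that \(\tau\ll 1\) at a few seconds.

```python
import numpy as np

SIGMA_T = 6.652e-25       # Thomson cross-section [cm^2]
M_E_C2 = 8.187e-7         # electron rest energy [erg]
MEV = 1.602e-6            # 1 MeV [erg]
C = 2.998e10              # speed of light [cm/s]

def sigma_kn(e_photon):
    """High-energy Klein-Nishina cross-section (Eq. 1); e_photon in erg."""
    g = e_photon / M_E_C2
    return 0.375 * SIGMA_T / g * (np.log(2.0 * g) + 0.5)

def tau_cascade(t, flux=1e-6, d_l=2.1e27, gamma_b=560.0, e_m_mev=570.0, e_tev=1.0):
    """Conservative cascade optical depth: N ~ 4*pi*D_L^2*flux*t/E_m pairs inside R = 2c*Gamma_b^2*t."""
    r = 2.0 * C * gamma_b**2 * t                                   # shock radius, Eq. (8)
    n_pairs = 4.0 * np.pi * d_l**2 * flux * t / (e_m_mev * MEV)    # Eq. (6)
    sigma_c = sigma_kn(e_tev * 1e6 * MEV / gamma_b)                # ~TeV photon boosted to the comoving frame
    return n_pairs * sigma_c / (4.0 * np.pi * r**2)                # Eq. (7)

for t in (1.0, 4.85):
    print(t, tau_cascade(t))    # of order 1e-9 to 1e-8, i.e. far below unity
```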
Therefore, without knowing the details of the cascade, we can see the cascaded electron-positron pairs are not dense enough to block the early TeV photons. ### Absorbed by electrons from the external shock The TeV photons are believed to have originated from the external shock. The electrons in the external shock may scatter the TeV photons. We check the optical depth in this case. The number density of electrons in external shock is \(\sim\Gamma_{b}n_{0}\), where \(n_{0}\) is the number density of circum-burst medium. The optical depth is \(\tau=2\sigma_{\rm c}n_{0}\Gamma_{b}t\), which is about \(10^{-16}\) for typical values. What is more, \(\tau\) is proportional to the time, which is contrary to the expectation. Therefore, these electrons cannot absorb the TeV photons. ### Absorbed by electrons from the ejecta Wang et al. (2023) studied the possibility of the prompt phase generating the TeV photons emission from the hadronic process. Zhang et al. (2023) considered the possibility that the very high energy photons are from the reverse shock of the external shock. In both scenarios, the ejecta from the central engine may radiate the TeV photons. Here, we assume the TeV is from internal shock as Wang et al. (2023) described. We consider the condition of whether the internal shock electrons can block the TeV photons. The number of electrons \(N_{\rm e}\) is: \[N_{\rm e}=\frac{\eta E_{\gamma,{\rm iso}}}{\Gamma_{b}m_{p}c^{2}}, \tag{11}\] where \(E_{\gamma,{\rm iso}}\) is the isotropic equivalent energy of the GRB prompt emission, \(m_{p}\) is the rest mass of the proton, and \(\eta\) is the ratio between the kinetic energy v.s. the gamma-ray energy, which is roughly 1. By taking \(E_{\gamma,{\rm iso}}\sim 1\times 10^{55}\) erg, and \(\Gamma_{b}\approx 560\) (LHAASO Collaboration 2023), we can calculate the electron number of internal shock, which is about \(2\times 10^{55}\). The optical depth \(\tau=N_{\rm e}\sigma_{\rm c}/[4\pi(2cT_{b}^{2}t)^{2}]\), which is \(\approx 3\times 10^{-6}\) at \(t=1\) s. Considering the optical depth decreases with time as \(t^{-2}\), it becomes even smaller at a later time. Therefore, in the internal shock scenario, the electrons from the ejecta cannot block the TeV photons. For the reverse shock scenario as suggested by Zhang et al. (2023), the condition is the same. The difference is the reversely shocked materials evolve with time, and consequently, the total number of the shocked electrons is increasing, i.e., the thickness \(t\) of the shocked region is increasing. However, in the calculation of the optical depth, \(l\) is canceled. ### Absorbed by external shock photons In this section, we discuss the TeV photons absorbed by keV photons from the external shock itself. According to the QED calculation, the \(\sigma_{\gamma\gamma}\sim\sigma_{\rm T}\) and becomes maximum, when in a head-on collision and \(\hbar\omega t\omega^{\prime}=(m_{e}c^{2})^{2}\) (refer to Section 7, Chapter 5 in You 1998). We can calculate the maximum cross-section condition with the formula above. In the co-moving frame, the frequencies of the two head-on photons will be reduced to \(\omega/\Gamma_{b}\) and \(\omega^{\prime}/\Gamma_{b}\) in the comoving frame. Taking \(\Gamma_{b}=560\), for the reaction with \(\sim 1\) TeV photons in the largest cross-section condition, the lower energetic photon is \(\hbar\omega^{\prime}\sim 0.08\) MeV, which is just in the X-ray band. 
From the discussion above, we can calculate the absorption of TeV photons by the keV photons of the external shock through the \(\gamma\gamma\) reaction channel. The \(\tau\) can be approximately calculated as: \[\tau(t)\approx\frac{D_{L}^{2}}{R^{2}}\int_{1}^{t}\int_{\omega_{\rm min}}^{ \omega_{\rm max}}\frac{f_{\omega,t}\sigma_{\gamma\gamma}}{\hbar\omega}{\rm d }\omega{\rm d}t^{\prime}. \tag{12}\] Kann et al. (2023) released the X-ray afterglow observations, which can be seen in Fig. 12. The flux density \(f_{\omega,t}\) at \(t=4\) s and \(\sim 0.08\) MeV is about \(10^{-25}\) erg cm\({}^{-2}\) s\({}^{-1}\) Hz\({}^{-1}\), considering \(\alpha\sim-1.3\) and \(\beta\sim-0.75\) for \(f_{\omega,t}\propto t^{\alpha}\omega^{\beta}\) (Kann et al. 2023). Using Eq. (2), we can calculate the cross-section and identify the conditions under which it is large. We adopt the energy range of 0.2-2000 keV, as this band has a large cross-section for the reaction. The result is \(\tau(t)|_{t=4.8}\sim 10^{-5}(t/4\,{\rm s})^{-0.3}\). This is also far too small, and the decrease is too slow. What is more, it cannot explain the very first TeV emission. ## 3 Conclusion and discussion We have examined several scenarios to explain the unprecedented rapid rise in the TeV light curve of GRB 221009A, including cascading of the TeV photons and absorption by external shock photons or electrons. The cascade process can be calculated semi-analytically or estimated roughly. However, we find that even if all of the TeV photons' energy is converted into the rest mass of electrons, the resulting pairs are still not dense enough to block the TeV photons. Additionally, we attempted to use the external-shock-accelerated electrons to block the TeV photons, but this was not successful either. We also checked whether the afterglow itself could explain this phenomenon, but the afterglow flux at hundreds of keV is not high enough to absorb the TeV photons. In conclusion, the early dip in the TeV light curve still remains to be explained. There are some other processes that may absorb the TeV emission. For example, the early TeV photons may also collide with the cosmic microwave background and/or the galactic infrared background. However, these backgrounds do not change over time, which is not consistent with the idea that only the early TeV photons cannot escape. The prompt MeV photons may also absorb the TeV photons. Although this is generally considered unlikely, since a full afterglow-like light curve has been observed, at early times the TeV emission radius can be small, which makes such absorption possible. We are not able to estimate this optical depth in the simple manner of section 2.4, as the MeV photons are believed to come from internal shocks, which is a different site from that of the TeV emission. For this possibility, one should carefully consider the geometric configuration of the MeV-TeV collision, as well as the temporal and spectral evolution of the prompt MeV emission. It could be that there was no absorption at all if we neglect the very first emission at around 1 s, which has quite a large uncertainty. In that case, the early light curve before 5 s can be taken as a fast rise. Such a fast rise might be explained by energy injection into the external shock. If this is the case, one should consider what kind of energy injection can power such a fast rise. ## Acknowledgements We thank Weilua Lei, Kai Wang, and Xiang-Yu Wang for helpful discussions, and the Yao'an station of Purple Mountain Observatory for its hospitality. The English is polished by ChatGPT. 
This work is supported by the National Key R&D Program of China (2022SKA0130100) and China Postdoctoral Science Foundation (2023T160410).
2306.06961
Kilonovae of binary neutron star mergers leading to short-lived remnant neutron star formation
We study kilonova emission from binary neutron star (BNS) mergers for the case that a remnant massive neutron star (MNS) forms and collapses to a black hole within $20$ ms after the onset of the merger (which we refer to as "a short-lived case") by consistently employing numerical-relativity and nucleosynthesis results. We find that such kilonovae are fainter and last shorter than those for BNSs resulting in the formation of long-lived ($\gg 1\,{\rm s}$) MNSs, in particular in the optical band. The resulting light curves are too faint and last for a too short duration to explain the kilonova observation for the BNS associated with GW170817, indicating that the merger remnant formed in GW170817 is unlikely to have collapsed to a black hole within a short period of time ($\sim 20$ ms) after the onset of the merger. Our present result implies that early observation is necessary to detect kilonovae associated with BNSs leading to short-lived MNS formation in particular for the optical blue band as well as that kilonovae could be hidden by the gamma-ray burst afterglow for nearly face-on observation. We provide a possible approximate scaling law for near-infrared light curves with the given reference time and magnitude when the decline power of the ${\it z}$-band magnitude, $d M_{\it z}/d{\rm log}_{10}t$, reaches $2.5$. This scaling law suggests that the ${\it HK}$-band follow-up observation should be at least $1$ mag deeper than that for the ${\it z}$-band reference magnitude and earlier than 4 times the reference time.
Kyohei Kawaguchi, Sho Fujibayashi, Nanae Domoto, Kenta Kiuchi, Masaru Shibata, Shinya Wanajo
2023-06-12T08:47:15Z
http://arxiv.org/abs/2306.06961v1
# Kilonovae of binary neutron star mergers leading to short-lived remnant neutron star formation ###### Abstract We study kilonova emission from binary neutron star (BNS) mergers for the case that a remnant massive neutron star (MNS) forms and collapses to a black hole within 20 ms after the onset of the merger (which we refer to as "a short-lived case") by consistently employing numerical-relativity and nucleosynthesis results. We find that such kilonovae are fainter and last shorter than those for BNSs resulting in the formation of long-lived (\(>\) 1 s) MNSs, in particular in the optical band. The resulting light curves are too faint and last for a too short duration to explain the kilonova observation for the BNS associated with GW170817, indicating that the merger remnant formed in GW170817 is unlikely to have collapsed to a black hole within a short period of time (\(\sim\) 20 ms) after the onset of the merger. Our present result implies that early observation is necessary to detect kilonovae associated with BNSs leading to short-lived MNS formation in particular for the optical blue band as well as that kilonovae could be hidden by the gamma-ray burst afterglow for nearly face-on observation. We provide a possible approximate scaling law for near-infrared light curves with the given reference time and magnitude when the decline power of the \(z\)-band magnitude, \(dM_{z}/d\mathrm{log}_{10}t\), reaches 2.5. This scaling law suggests that the \(HK\)-band follow-up observation should be at least 1 mag deeper than that for the \(z\)-band reference magnitude and earlier than 4 times the reference time. keywords: gravitational waves - stars: neutron - nucleosynthesis - radiative transfer - hydrodynamics ## 1 Introduction Binary neutron star (BNS) mergers are among the most efficient gravitational-wave emitters in the universe and the most important sources of multi-messenger high-energy astrophysical phenomena, such as gamma-ray bursts (GRB, Paczynski, 1991; Nakar, 2007; Berger, 2014; Abbott et al., 2017), kilonovae (Li & Paczynski, 1998; Kulkarni, 2005; Metzger et al., 2010; Kasen et al., 2013; Tanaka & Hotokezaka, 2013), and synchrotron flares (Nakar & Piran, 2011; Hotokezaka & Piran, 2015; Hotokezaka et al., 2018; Margalit & Piran, 2020). Furthermore, BNS mergers are considered to be important production sites of elements heavier than iron in the universe (Lattimer & Schramm, 1974; Eichler et al., 1989; Freiburghaus et al., 1999; Cowan et al., 2021). All these facts imply that BNS mergers are unmissable research subjects from an astronomical point of view. They are also among the unique systems in the universe in which the most extreme (strongly self-gravitating, high-density, and high-temperature) environments in the universe are realized. Hence, the multi-messenger observation of BNS mergers is also an indispensable tool to extend our knowledge of fundamental physics. Quantitative prediction of the merger dynamics and outcomes is crucial to correctly interpret the observed signals. Since the first simultaneous detection of gravitational waves and electromagnetic (EM) signals from a BNS (GW170817/AT2017gfo; Abbott et al., 2017), remarkable progress has been achieved in the theoretical understanding, particularly, in the studies based on numerical simulations. 
For example, recent numerical studies revealed the quantitative nature of mass ejection from BNS mergers, for which the processes can be broadly divided into two phases: At the onset of the merger, a fraction of neutron-rich matter is ejected by tidal force and collisional shock heating (e.g., Rosswog et al., 1999; Ruffert et al., 2001; Hotokezaka et al., 2013). After the merger, a massive neutron star (MNS) or a black hole (BH) surrounded by a strongly magnetized hot and dense accretion torus is formed (e.g., Price & Rosswog, 2006; Kiuchi et al., 2018, 2022). The magnetized central objects and accretion tori are considered to launch relativistic jets and outflows by magnetic pressure and tension, viscous heating due to magneto-hydrodynamical turbulence, and neutrino irradiation. Quantitative properties of the ejecta and the nucleosynthetic element abundances for each phase are studied by various groups together with their dependence on binary parameters, such as NS masses and NS equations of state (EoS, Hotokezaka et al., 2013; Bauswein et al., 2013; Wanajo et al., 2014; Sekiguchi et al., 2015; Foucart et al., 2016; Sekiguchi et al., 2016; Radice et al., 2016; Dietrich et al., 2017; Bovard et al., 2017; Kiuchi et al., 2018; Dessart et al., 2009; Metzger & Fernandez, 2014; Perego et al., 2014; Just et al., 2015; Wu et al., 2016; Siegel & Metzger, 2017; Shibata et al., 2017; Lippuner et al., 2017; Fujibayashi et al., 2018; Siegel & Metzger, 2018; Ruiz et al., 2018; Fernandez et al., 2019; Christie et al., 2019; Perego et al., 2019; Miller et al., 2019; Fujibayashi et al., 2020, 20; Bernuzzi et al., 2020; Ciolfi & Kalinani (2020); Nedora et al. (2021); Foucart et al. (2020); Fernandez et al. (2020); Mosta et al. (2020); Shibata et al. (2021); Curtis et al. (2022); Fujibayashi et al. (2023); Kiuchi et al. (2022); Foucart et al. (2022); Just et al. (2023); Curtis et al. (2023); see Shibata & Hotokezaka (2019) for a review). The light curve modeling of EM counterparts, particularly for kilonovae, are also developed in this decade by employing numerical-simulation-based/motivated ejecta profiles and by performing radiative transfer simulations with realistic heating rates and/or detailed opacity tables (e.g., Kasen et al. (2013, 2015); Barnes et al. (2016); Wollaeger et al. (2018); Tanaka et al. (2018); Wu et al. (2019); Kawaguchi et al. (2018); Hotokezaka & Nakar (2020); Kawaguchi et al. (2020); Korobkin et al. (2021); Bulla et al. (2021); Zhu et al. (2021); Barnes et al. (2021); Nativi et al. (2020); Kawaguchi et al. (2021); Wu et al. (2022); Just et al. (2022); Just et al. (2023). However, there are still various open questions remaining. For example, whether the remnant NS has gravitationally collapsed into a BH or not is still being an open question for GW170817 due to the lack of the detection of post-merger gravitational waves in GW170817 (Abbott et al., 2017). Such information is important, because it is connected to the underlying physics of the uncompre-handed NS EoS (e.g., Margalit & Metzger (2017); Rezzolla et al. (2018); Shibata et al. (2019)). While we expect that the observation of the EM counterparts can provide a great hint to address this issue, it is still unclear from what observational features we can know about the fate of the remnant. Focusing particularly on the kilonova emission, a general consensus has not been yet reached for the property and origin of the ejecta in GW170817 (e.g., Kasliwal et al. (2017); Cowperthwaite et al. (2017); Kasen et al. (2017); Villar et al. 
(2017); Waxman et al. (2018); Kawaguchi et al. (2018); Kawaguchi et al. (2020); Bulla (2019); Almualla et al. (2021); Kedia et al. (2023); Bulla (2023). Determination of the ejecta property is crucial for understanding the post-merger evolution of the system and whether BNS mergers could be the major production site of \(r\)-process elements in the universe. To address these questions, quantitative understanding of the relation between the initial condition and/or underlying physics, and EM signals is important. For this purpose, conducting a study based on numerical simulations consistently starting from the merger to the phase of EM emission is a useful approach to link the observables that should be related to each other. In particular, for the kilonova modeling, it is important to accurately determine the ejecta profile for the rest-mass density and compositions at the time of kilonova emission (\(>0.1\) d). Previous studies showed that the ejecta profile induces significant special dependence in radioactive heating as well as strong geometrical effects in radiative transfer, which have great impact on the resultant light curves (Kasen et al., 2015; Wollaeger et al., 2018; Kawaguchi et al., 2020; Bulla, 2019; Zhu et al., 2020; Darbha & Kasen, 2020; Korobkin et al., 2021; Almualla et al., 2021; Kedia et al., 2023). However, there are still limited number of studies which provide the end-to-end modeling from the merger to observational outputs following the hydrodynamics evolution of all the ejecta components up to the time of kilonova emission (Kawaguchi et al. (2021, 2022); Just et al. (2023); see, however, Rosswog et al. (2014); Grossman et al. (2014); Collins et al. (2023); Neuweiler et al. (2023) for the studies focusing on the dynamical ejecta components, and Fernandez et al. (2015, 2017); Foucart et al. (2021) in the context of BH-NS mergers). Given the situation that a number of BNS mergers will be observed in the next decades, the EM counterpart prediction based on the consistent simulations by taking the BNS diversity into account is an urgent task for correctly interpreting the observed data. In this paper, we study the kilonova light curves of BNS mergers for the case that a remnant MNS forms and subsequently collapses to a BH within 20 ms after the onset of the merger (which we refer to as "a short-lived case") consistently employing numerical-relativity (NR) results of Kiuchi et al. (2022); Fujibayashi et al. (2023). This paper is organized as follows: In Section 2, we describe the method employed in this study. In Section 3, we describe the BNS models we study in this work. In Section 4, we present the property of the ejecta obtained by the long-term hydrodynamics evolution and the kilonova light curves obtained by radiative-transfer simulations. Finally, we discuss the implication of this paper in Section 5. Throughout this paper, \(c\) denotes the speed of light. ## 2 Method Merger ejecta of a BNS are expected to be homologously expanding at the time of kilonova emission (\(\gtrsim 0.1\) d). To obtain the ejecta profile in the homologously expanding phase, we follow the same procedures as in the previous work (Kawaguchi et al., 2021, 2022); adopting the outflow data obtained by NR simulations as the inner boundary condition (Fujibayashi et al., 2023), the hydrodynamics evolution of merger ejecta is calculated by employing an axisymmetric relativistic hydrodynamics code developed in Kawaguchi et al. (2021, 2022). 
In the following, to distinguish between the present simulation and NR simulation, we refer to the present hydrodynamics simulations as the HD simulations. In the hydrodynamics code, relativistic hydrodynamics equations in the spherical coordinates are solved taking into account the effect of fixed-background gravity of a non-rotating BH metric in the isotropic coordinates. Radioactive-decay heating of heavy elements is also taken into account by referring to the nucleosynthesis results computed for each ejecta fluid element in the NR simulation (see Fujibayashi et al. (2023) for the details). We employ the ideal-gas EoS with the adiabatic index of \(\Gamma=4/3\). For the HD simulations, the uniform grid spacing with \(N_{\theta}\) grid points is prepared for the polar angle \(\theta\), while for the radial direction, the following non-uniform grid structure is employed; the \(j\)-th radial grid point is given by \[\ln r_{j}=\ln\left(\frac{r_{\rm out}}{r_{\rm in}}\right)\frac{j-1}{N_{r}}+\ln r _{\rm in},\ j=1\cdots N_{r}+1, \tag{1}\] where \(r_{\rm in}\) and \(r_{\rm out}\) denote the inner and outer radii of the computational domain, respectively, and \(N_{r}\) denotes the total number of the radial grid points. In the present work, we employ \((N_{r},N_{\theta})=(2048,256)\), and \(r_{\rm in}\) and \(r_{\rm ext}\) are initially set to be \(8,000\) km and \(10^{3}\)\(r_{\rm in}\), respectively. We employ the same time origin for the HD simulations as in the NR simulations for the post-merger evolution. To import the outflow data from the NR simulations of Fujibayashi et al. (2023) to the present HD simulations, the time-sequential hydrodynamics property of the outflow is extracted at \(r=r_{\rm in}\) in the NR simulations, and is used as the boundary condition at the inner radius, \(r=r_{\rm in}\), of the HD simulations. The NR simulation data are run out at \(t>5\) s, and after then, the HD simulation is continued by setting a very small floor-value, which is negligible for the ejecta dynamics, to the rest-mass density of the inner boundary. To follow the evolution of ejecta even after the high velocity edge of the outflow reaches the outer boundary of our HD simulation, the radial grid points are added to the outside of the original outer boundary, while at the same time the innermost radial grid points are removed so as to keep the total number of the radial grid points. By this prescription, the value of \(r_{\rm in}\) is increased in the late phase of the HD simulations. The outermost radial grids are added so that the location of the outer radial boundary, \(r_{\rm out}\), is always \(10^{3}r_{\rm in}\). We note that the total mass lost by removing the inner radial grids is always much smaller (\(\lesssim 10^{-4}\,M_{\odot}\)) than the post-merger ejecta mass. The light curves of kilonovae are calculated using a wavelength-dependent radiative transfer simulation code (Tanaka and Hotokezaka, 2013; Tanaka et al., 2017, 2018; Kawaguchi et al., 2020; Kawaguchi et al., 2021). In this code, the photon transfer is simulated by a Monte Carlo method for given ejecta profiles composed of the density, velocity, and element abundance under the assumption of the homologous expansion. The time-dependent thermalization efficiency is taken into account following an analytic formula derived by Barnes et al. (2016). The ionization and excitation states are determined under the assumption of the local thermodynamic equilibrium (LTE) by using the Saha's ionization and Boltzmann excitation equations. 
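To make the LTE assumption just mentioned concrete, the sketch below evaluates a single Saha ionization ratio. It is our own minimal illustration (the ionization potential, partition-function ratio, temperature, and electron density are placeholder values), not code from the radiative transfer calculations used in this work.

```python
import math

# CGS constants
K_B = 1.380649e-16      # Boltzmann constant [erg/K]
M_E = 9.1093837e-28     # electron mass [g]
H   = 6.62607015e-27    # Planck constant [erg s]
EV  = 1.602176634e-12   # 1 eV in erg

def saha_ratio(temp, n_e, chi_ev, g_ratio=1.0):
    """Return n_{i+1}/n_i for ionization potential chi_ev [eV] at
    temperature temp [K] and electron number density n_e [cm^-3],
    assuming LTE.  g_ratio stands in for 2*U_{i+1}/U_i."""
    thermal = (2.0 * math.pi * M_E * K_B * temp / H**2) ** 1.5  # [cm^-3]
    return g_ratio * thermal / n_e * math.exp(-chi_ev * EV / (K_B * temp))

# Placeholder example: an Sr-like species (first ionization potential ~5.7 eV)
# at roughly T ~ 5000 K and n_e ~ 1e8 cm^-3.
ratio = saha_ratio(temp=5.0e3, n_e=1.0e8, chi_ev=5.7)
print(f"n_II/n_I ~ {ratio:.3e}")   # >> 1, i.e. the species is predominantly singly ionized
```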
The impact of this assumption will be discussed in Appendix A. For the photon-matter interaction, bound-bound, bound-free, and free-free transitions and electron scattering are taken into account for the transfer of optical and infrared photons (Tanaka and Hotokezaka, 2013; Tanaka et al., 2017, 2018). The formalism of the expansion opacity (Friend and Castor, 1983; Eastman and Pinto, 1993; Kasen et al., 2006) and the new line list derived in Domoto et al. (2022) are employed for the bound-bound transitions. In this line list, the atomic data of VALD (Piskunov et al., 1995; Kupka et al., 1999; Ryabchikova et al., 2015) or Kurucz's database (Kurucz and Bell, 1995) are used for \(Z=20\)-29, while the results of atomic calculations from Tanaka et al. (2020) are used for \(Z=30\)-88. For Sr II, Y I, Y II, Zr I, Zr II, Ba II, La III, and Ce III, which are the ions producing strong lines, the line data are replaced with those calibrated with the atomic data of VALD and the NIST database (Kramida et al., 2021). The radiative transfer simulations are performed from \(t=0.1\,\mathrm{d}\) to \(30\,\mathrm{d}\), employing the density and internal energy profiles of the HD simulations at \(t=0.1\,\mathrm{d}\). The spatial distributions of the heating rate and element abundances are determined from the table obtained by the nucleosynthesis calculations, referring to the injection time and angle of the fluid elements. Note that the element abundances at \(t=1\,\mathrm{d}\) are used during the entire time evolution in the radiative transfer simulations to reduce the computational cost, but this simplified prescription gives only a minor systematic error on the resultant light curves, as illustrated in Kawaguchi et al. (2021).

## 3 Model

In this work, we employ the NR outflow profiles obtained in Fujibayashi et al. (2023) as the input for the HD simulations. The key quantities of each model are summarized in Table 1. The first four models listed in Table 1 are BNSs with the total gravitational mass (at infinite separation) of \(2.7\,M_{\odot}\) but with various mass ratios in the range of 0.8-1.0. We also study an unequal-mass BNS with a larger total gravitational mass (\(2.8\,M_{\odot}\)), which we refer to as SFHo-125155. The SFHo EoS (Steiner et al., 2013), supplemented by the Timmes (Helmholtz) EoS (Timmes and Swesty, 2000) for the low-density part, is employed. For all the models employing the SFHo EoS, a remnant MNS is formed after the merger, but it collapses to a BH within \(\approx 20\,\mathrm{ms}\). We note that these mass ranges of the BNSs with a short-lived remnant broadly cover the range of the mass estimation obtained by the gravitational-wave data analysis of GW170817 (Abbott et al., 2017, 2019). The BNS models which result in the formation of an MNS surviving for a long time (\(>1\,\mathrm{s}\); Fujibayashi et al., 2020; Shibata et al., 2021) are also shown in Table 1 for comparison purposes (see also Kawaguchi et al., 2021, 2022). The NR simulations are performed with a general-relativistic viscous neutrino-radiation hydrodynamics code with the dimensionless alpha viscous parameter of \(\alpha=0.04\) (Fujibayashi et al., 2020; Fujibayashi et al., 2020), except for MNS75a, for which a general-relativistic neutrino-radiation resistive-magnetohydrodynamics code is employed to take the magnetic dynamo effects into account (Shibata et al., 2021). The ejecta mass evaluated in the NR simulations is also listed in Table 1.
The total ejecta mass increases as the mass ratio of the BNS deviates from unity due to the increase in the torus mass, and hence, the ejecta mass of the post-merger component. Broadly speaking, the mass of the dynamical ejecta tends to decrease as the binary becomes more asymmetric (but not so monotonically). This reflects the fact that, for an asymmetric binary, the tidal-interaction-driven component dominates the dynamical ejecta rather than the collisional shock-driven component, of which the launching mechanism is more efficient in mass ejection than the former. The total ejecta mass of the BNS merger for which the remnant MNS collapses to a BH in a short time is an order of magnitude smaller than that for the BNS which results in the formation of an MNS surviving for a long time (\(>1\,\mathrm{s}\); Fujibayashi et al., 2020; Shibata et al., 2021). Note that for the latter case, the total ejecta mass is dominated by the post-merger ejecta. ## 4 Results ### Ejecta profiles For all the models, we find that the total internal energy of ejecta is smaller by \(\approx 4\) order of magnitudes than the total kinetic energy at \(t=0.1\,\mathrm{d}\) and that the mass-averaged deviation of the velocity field from that in the later homologous expanding phase (\(v^{r}=r/t\) with \(v^{r}\) being the radial velocity) is as small as \(10^{-3}\) at \(t=0.1\,\mathrm{d}\). This shows that the homologous expansion is well achieved for \(t\geq 0.1\,\mathrm{d}\). The total mass in the computational domain measured at \(t=0.1\,\mathrm{d}\), \(M_{\mathrm{eje}}^{\mathrm{HD}}\), is listed in Table 1. Note that the matter is in the homologously expanding phase at \(t=0.1\,\mathrm{d}\), and hence, \(M_{\mathrm{eje}}^{\mathrm{HD}}\) can be regarded as the total ejecta mass. It is found that \(M_{\mathrm{eje}}^{\mathrm{HD}}\) is slightly smaller than \(M_{\mathrm{eje}}^{\mathrm{NR}}\) for some of the models. This is a consequence of the fact that a fraction of the matter falls back across the inner boundary as the pressure support from the inner boundary vanishes when the outflow data run out. While a fraction of the matter can actually experience such fall-back due to the deceleration by the pressure from the precededely matter, our treatment of suddenly vanishing pressure support on the inner boundary at the run-out time of NR data may artificially increase the mass of the fall-back matter. Nevertheless, as found in our previous studies (Kawaguchi et al., 2021, 2022), the contribution of such marginally unbound matter to the kilonova emission is minor because it has only low velocity and has only a small contribution to the emission due to the long diffusion time scale. First, we focus on the BNS models of which the total mass is \(2.7\,M_{\odot}\) to see the effect of the binary mass ratio. Fig. 1 shows the rest-mass density profiles at \(t=0.1\,\mathrm{d}\) obtained by the HD simulations for models SFHo-135135, SFHo-130140, SFHo-125145, and SFHo-120150. The dynamical ejecta component located at \(x/ct\gtrsim 0.05\) or \(z/ct\gtrsim 0.15\) exhibits a broadly spherical morphology in the rest-mass density structure. On the other hand, the post-merger ejecta component, which is present in \(x/ct\lesssim 0.05\) and \(z/ct\lesssim 0.15\), exhibits a mildly prolate shape (see Fig. 2 for a clearer distinction between the dynamical and post-merger ejecta components). 
These \begin{table} \begin{tabular}{c c c c c c} \hline Model & EoS & \((m_{1}\,[\,M_{\odot}],\,m_{2}\,[\,M_{\odot}\,])\) & MNS evolution & \(M^{\rm NR}_{\rm cig}(M^{\rm NR}_{\rm dyn},\,M^{\rm NR}_{\rm pop})\)\([10^{-2}\,M_{\odot}]\) & \(M^{\rm HD}_{\rm exp}\)\([10^{-2}\,M_{\odot}\,]\) \\ \hline \hline SFHo-135135 & SFHo & \((1.35,1.35)\) & short-lived & \(1.0\,(0.73,\,0.25)\) & \(1.0\) \\ SFHo-130140 & SFHo & \((1.30,1.40)\) & short-lived & \(1.0\,(0.48,\,0.50)\) & \(0.9\) \\ SFHo-125145 & SFHo & \((1.25,1.45)\) & short-lived & \(1.2\,(0.64,\,0.60)\) & \(1.1\) \\ SFHo-120150 & SFHo & \((1.20,1.50)\) & short-lived & \(1.6\,(0.45,\,1.1)\) & \(1.5\) \\ SFHo-125155 & SFHo & \((1.25,1.55)\) & short-lived & \(1.5\,(0.95,\,0.55)\) & \(1.4\) \\ \hline DD2-135135 & DD2 & \((1.35,1.35)\) & long-lived & \(7.6\,(0.15,\,7.5)\) & \(6.5\) \\ MNS75a & DD2 & \((1.35,1.35)\) & long-lived with strong dynamo & \(9.4\,(0.15,\,9.3)\) & \(8.4\) \\ \hline \end{tabular} \end{table} Table 1: Key model parameters. The columns describe the model name, the EoS adopted, the masses of the NSs, type of the MNS evolution, the ejecta mass evaluated in the NR simulations (\(M^{\rm NR}_{\rm ege}\), \(M^{\rm NR}_{\rm dyn}\), and \(M^{\rm NR}_{\rm pop}\) denote the total, dynamical, and post-merger masses, respectively; see Fujibayashi et al. 2020; Shibata et al. 2021; Fujibayashi et al. 2023), and the ejecta mass evaluated in the HD simulations at \(t=0.1\), \(M^{\rm HD}_{\rm ege}\), respectively. “short-lived”, “long-lived”, and “long-lived” orbit strong dynamo” denote the cases for which the remnant MNS collapses to a BH within 20 ms, survives for \(\gg 1\) s, and survives for \(\gg 1\) s with significant magnetic dynamo effects, respectively. The values for \(M^{\rm NR}_{\rm ege}\) are calculated by integrating the mass flux at the sphere with radius 8,000 km over time in 2D NR simulations. We then subtract from \(M^{\rm NR}_{\rm ege}\) the mass of dynamical ejecta \(M^{\rm NR}_{\rm dyn}\), which is evaluated in the corresponding 3D NR simulations with the Bernoulli criterion to obtain the contribution of the post-merger ejecta \(M^{\rm NR}_{\rm popa}\). Figure 1: Rest-mass density profiles at \(t=0.1\) d obtained by the HD simulations. The top-left, top-right, bottom-left, and bottom-right panels display the results for models SFHo-135135, SFHo-130140, SFHo-125145, and SFHo-120150, respectively. The gray curves in each panel denote the contour lines of \(10^{-15}\), \(10^{-14}\), \(10^{-13}\), \(10^{-12}\), \(10^{-11}\), \(10^{-10}\), and \(10^{-9}\)\(g/{\rm cm}^{3}\) from outside. Figure 3: The same as Figures 1 and 2 but for SFHo-125155. Figure 2: The same as Fig. 1 but for the electron fraction, \(Y_{e}\). The value of \(Y_{e}\) is evaluated when the temperature of the fluid element decreases to \(T=5\,\)GK. Note that only the region of which the rest-mass density at \(t=0.1\) d is higher than \(10^{-14}\) g/cm\({}^{3}\) is shown. characteristics of the density profile are in broad agreement with the ejecta profile obtained in our previous studies (Kawaguchi et al., 2021, 2022), in which BNSs result in long-lived MNSs (with the lifetime of \(>1\,\)s). Taking a closer look, the dynamical ejecta show a relatively more prolate shape for an equal-mass BNS (SFHo-135135), while relatively more oblate shapes are seen for unequal mass cases (SFHo-125145 and SFHo-120150). 
This reflects the fact that the tidally driven component which spreads preferentially toward the equatorial direction dominates in the dynamical ejecta for an asymmetric binary over the collisional-shock-driven component which spreads in a more spherical manner. Fig. 2 shows the electron fraction (\(Y_{e}\)) profiles at \(t=0.1\,\)d for models SFHo-135135, SFHo-130140, SFHo-125145, and SFHo-120150. Here, the value of \(Y_{e}\) is evaluated when the temperature of the fluid element decreases to \(T=5\,\)GK (\(=5\times 10^{9}\,\)K). A clear boundary-like feature starting from \(x/ct\approx 0.05\) on the equatorial plane to \(z/ct\lesssim 0.15\) along the polar axis is seen for all the models. This corresponds to the boundary between the dynamical and post-merger ejecta components. The dynamical ejecta has a clear angular dependence in the \(Y_{e}\) profile. With \(\theta\) being the angle measured from the polar axis, the value of \(Y_{e}\) of the dynamical ejecta is higher than 0.3 for \(\theta\lesssim 45^{\circ}\)-\(60^{\circ}\), while it is lower than 0.3 for \(\theta\gtrsim 45^{\circ}\)-\(60^{\circ}\). This clearly reflects the difference in the mass ejection mechanism; the former is shock-heating-driven and the latter is tidally driven. The dynamical ejecta for unequal mass BNSs have relatively more extended distribution and lower \(Y_{e}\) values along the equatorial direction than those for the equal-mass case. This also reflects the fact that the tidally driven component dominates the dynamical ejecta and the ejecta experience a relatively small rise in temperature resulting from the shock heating for the unequal mass cases. On the other hand, the post-merger ejecta has only weak angular dependence in the \(Y_{e}\) value, which is always \(\gtrsim 0.3\). These profiles of \(Y_{e}\) are also in broad agreement with the previous results of BNS mergers that result in long-lived remnant MNSs (Kawaguchi et al., 2021, 2022) and the results of BNS mergers in which the remnant survives for a moderately long time (0.1-1 s) (Just et al., 2023). Fig. 3 shows the rest-mass density and electron fraction profiles for model SFHo-125155. The qualitative features of the rest-mass density and \(Y_{e}\) profiles for this model are the same as those for other models with the total mass of 2.7 \(M_{\odot}\), but the oblate shape and low-\(Y_{e}\) value region are more pronounced than those for the models shown in Figs. 1 and 2. This reflects the fact that SFHo-125155 has the largest dynamical ejecta mass dominated by the tidally driven component as the consequence of the large asymmetry in the NS masses. Figs. 1-3 illustrate that the profiles of the rest-mass density and electron fraction depend sensitively on the total mass and mass ratio of the binaries. In the following we will show that the light curve and the spectral evolution depend on these differences, although the type of the remnant (either a short-lived or long-lived neutron star is formed as a remnant) has more impact on the brightness of the kilonova light curve. ### Kilonova light curves The left panel of Fig. 4 shows the results of the bolometric light curves obtained by radiative-transfer simulations. The solid and dashed curves denote, respectively, the total and isotropically equivalent bolometric luminosities (the latter measured from the polar direction, \(0^{\circ}\leq\theta\leq 20^{\circ}\)). 
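To make the quantity plotted in Fig. 4 concrete, the snippet below illustrates one generic way an isotropically equivalent luminosity can be constructed from emission binned in viewing angle (the energy escaping per unit time into a latitudinal bin, rescaled to the full sphere). The numbers are made up for illustration; this is not the authors' post-processing code.

```python
import numpy as np

def iso_equivalent_luminosity(dE_dt_bin, theta_lo_deg, theta_hi_deg):
    """Rescale the luminosity escaping into one latitudinal bin
    (assumed axisymmetric, counting both hemispheres) to the value
    a distant observer in that bin would infer for an isotropic source."""
    th_lo, th_hi = np.radians([theta_lo_deg, theta_hi_deg])
    # Solid angle of the bin on both sides of the equator.
    d_omega = 2.0 * 2.0 * np.pi * (np.cos(th_lo) - np.cos(th_hi))
    return 4.0 * np.pi * dE_dt_bin / d_omega

# Hypothetical example: 6e39 erg/s escapes within 20 degrees of the polar axis
# (about 6% of the sky), giving an isotropic-equivalent value near 1e41 erg/s.
print(f"{iso_equivalent_luminosity(6.0e39, 0.0, 20.0):.2e} erg/s")
```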
For all the models, the bolometric light curves show approximately flat features with the luminosity of \(\sim 10^{41}\,\)erg/s for \(0.3\,\mathrm{d}\leq t\leq\)3-5 d, and decline rapidly after 3-5 d. As the ejecta mass increases, the epoch at which the bolometric light curve starts rapidly declining is delayed, and the luminosity after the decline becomes larger. This reflects the larger total optical depth and deposition energy for larger ejecta mass models. The right panel of Fig. 4 shows the ratios of the bolometric fluxes measured from the polar (\(0^{\circ}\leq\theta\leq 20^{\circ}\)) and equatorial directions (\(86^{\circ}\leq\theta\leq 90^{\circ}\)) to those of spherical average. The isotropically equivalent luminosities measured from the polar and equatorial directions are brighter and fainter by a factor of \(\approx 2\), respectively, at \(t\sim 1\,\)d due to the preferential diffusion of photons in the presence of optically thick dynamical ejecta around the equatorial plane (Kawaguchi et al., 2018; Kawaguchi et al., 2020). However, such effects become less significant in the late phase (\(\gtrsim 10\,\)d) as the optical depth of the ejecta decreases due to the expansion. The viewing-angle dependence of the bolometric light curves is sustained for a longer time scale as the binary becomes more asymmetric. This reflects the fact that the tidally driven component of dynamical ejecta has more mass and a lower value of \(Y_{e}\) for more asymmetric binaries, resulting in more opaque ejecta. None of the model light curves in the left panel of Fig. 4 can explain the observed brightness of the kilonova associated with GW170817 (AT2017gfo). The bolometric light curves are always below the observational data from 0.5 d to 17 d except for the last two data points in the plot. This is the case even if the enhancement of the brightness due to geometrical effects is taken into account (see the dashed curves in the left panel of Fig. 4, which denote the light curves measured from the polar direction). This is primarily due to the smallness of the ejecta mass, which leads to insufficient total radioactive deposition energy to explain the observation of AT2017gfo. Our results indicate that a BNS for which a remnant MNS collapses to a BH in a short time (\(t\lesssim 20\,\)ms) is unlikely to be the progenitor of GW170817. We note that our light curves are fainter than the results of Just et al. (2023), which considers the cases that a remnant MNS survives for a relatively longer time scale before it collapses to a BH (at \(t=0.1\)-1 s after the onset of the merger). This simply reflects the fact that the total ejecta mass is smaller for our present models. Fig. 5 shows the \(gzK\)-band light curves for all the models of the short-lived cases listed in Table 1. The obtained light curves show the broadly similar properties to those obtained by the observation of AT2017gfo as well as the previous studies for a kilonova with multiple ejecta components (e.g., Kasen et al., 2015; Wollaeger et al., 2018; Kawaguchi et al., 2018; Bulla, 2019); the optical emission lasts for a short time scale (\(\sim 1\,\)d), and the near-infrared (NIR) emission lasts for a longer time scale (\(\sim 10\,\)d). The emission becomes faint as the viewing angle measured from the axis of symmetry increases. This primarily reflects the spatial dependence of element abundances (see Figs. 2 and 3). 
The viewing-angle dependence is more pronounced for the emission at optical wavelengths (i.e., in the \(g\)-band) due to the so-called lanthanide-curtain effects in the presence of low-\(Y_{e}\) dynamical ejecta around the equatorial plane (Kasen et al., 2015; Wollaeger et al., 2018; Kawaguchi et al., 2020; Bulla, 2019; Zhu et al., 2020; Darbha and Kasen, 2020; Korobkin et al., 2021). Interestingly, the peak magnitudes in the NIR wavelengths (i.e., in the \(K\)-band) do not significantly differ among the models regardless of the difference in the ejecta mass. However, the time scale over which the emission sustains a brightness close to the peak becomes shorter as the total ejecta mass decreases. The light curves in the optical wavelengths observed from the polar direction also show similar shapes among the models except for the most asymmetric BNSs (SFHo-120150 and SFHo-125155), for which the \(g\)-band light curves are fainter by \(\geq 1\,\)mag than those for the other models. We find that the strong suppression of the optical emission for the most asymmetric BNS models is due to the fact that the polar regions are more polluted by the lanthanide elements. The difference in the brightness of the optical emission observed from the equatorial direction among the models simply reflects the difference in the dynamical ejecta mass (see Table 1). Our result implies that a follow-up observation earlier than in GW170817/AT2017gfo is needed to observe the kilonova emission in the optical band for the short-lived BNS formation. For example, for the hypothetical distance of 200 Mpc, the \(g\)-band emission can only be detected by an observation within 0.5-1 d with a sensitivity deeper than 22 mag, which requires telescopes larger than the 2 m class (Nissanke et al., 2013). Also, such a detection can be achieved only for the case that the event is face-on, but we should note that it could be hidden by the GRB afterglow emission. In the \(z\) band, the emission lasts for a longer time scale, but an observation within 1 d with 2 m- and 4 m-class telescopes, respectively, is still needed to find kilonovae for the case of \(\theta\leq 45^{\circ}\). The NIR follow-up observation by a telescope larger than the 4 m class, such as VISTA (Ackley et al., 2020), can detect the kilonova emission up to 5 d after the onset of the merger with the hypothetical distance of 200 Mpc and 100 Mpc for face-on and edge-on events, respectively. However, since the field of view of an NIR telescope is not as large as that of an optical one (Sutherland et al., 2015), the improvement in the source localization by the gravitational-wave observation is crucial.

### Comparison with different BNS models

Fig. 6 compares the \(gzK\)-band kilonova light curves for the BNS models for which the remnant MNS survives for a short time scale (SFHo-125145) and for a long time scale (DD2-135135, Fujibayashi et al. (2020c); Kawaguchi et al. (2022)), and for the case that significant magnetic dynamo effects are hypothetically present in a long-surviving remnant MNS (MNS75a, Shibata et al., 2021b; Kawaguchi et al., 2022). The time scale for the emission to rapidly decline is much shorter for the model with a short-lived remnant MNS than for the models associated with the formation of a long-lived MNS, simply because the ejecta mass for the short-lived MNS models is smaller by a factor of 5-10 than that for the latter cases.
The brightness at the peak is also high for the case with a long-lived MNS, and the difference is more significant in a shorter wavelength. As already mentioned, none of the merger models that result in a short-lived remnant MNS can explain the peak kilonova brightness of AT2017gfo observed in the \(gz\)-band, nor the brightness in the \(K\)-band in the late phase (\(\geq 5\) d). This is likely to be the case even if we consider a possible enhancement in the optical-band emission due to the modification in the ionization states by the non-LTE effects (see Appendix A). On the other hand, the kilonova model of a BNS that results in a long-surviving MNS (DD2-135135) reproduces the peak brightness in the optical wavelengths as well as the brightness and declining time scale in the NIR wavelengths, although a deviation from the observation is present in the optical wavelengths in the late phase (\(t\geq 2\) d)1. This suggests that the formation of a short-lived remnant MNS is unlikely the case for GW170817 and the formation of an MNS which survives for a longer time scale (\(\gtrsim 0.1\) s) is more likely from the viewpoint of kilonova light curves. However, for the case that the significant magnetic dynamo effects are present in the long-surviving remnant MNS (MNS75a), the kilonova emission will be significantly brighter than the observed data (see the light curves of MNS75a in Fig. 6). This suggests that the remnant MNS of GW170817 should have not survived for too long time (i.e., over the time scale of the dynamo magnetic-field amplification) if the magnetic dynamo effect played a significant role in the post-merger phase (see also the discussion below for the viewpoint of the nucleosynthesis yields). Figure 4: (Left panel) The bolometric light curves for all the models considered in this paper. The solid and dashed curves denote the total and isotropically equivalent bolometric luminosities (the latter measured from the polar direction, \(0^{\circ}\leq\theta\leq 20^{\circ}\)), respectively. The isotropically equivalent bolometric luminosity observed in AT2017gfo is shown by the filled circles adopting the data in Waxman et al. (2018) with the distance of 40 Mpc. (Right panel) the ratios of the bolometric fluxes measured from the polar (the dashed curves; \(0^{\circ}\leq\theta\leq 20^{\circ}\)) and equatorial directions (the dotted curves; \(86^{\circ}\leq\theta\leq 90^{\circ}\)) to those of spherical average. Figure 5: \(gz\):\(K\)-band light curves. The top, middle, and bottom panels denote the light curves observed from \(0^{\circ}\leq\theta\leq 20^{\circ},41^{\circ}\leq\theta\leq 46^{\circ}\), and \(86^{\circ}\leq\theta\leq 90^{\circ}\), respectively. The purple, green, and red curves denote the \(g\), \(z\), and \(K\)-band light curves, respectively. The data points denote the observation data of AT2017gfo taken from Villar et al. (2017) with the distance of 40 Mpc. Figure 6: Comparison of the \(gz\)-band light curves among the models in which remnant MNSs survive for a short time scale (the dashed curves; SFHo-125145) and for a long time scale (the solid curves; DD2-135135, Fujibayashi et al. (2020); Kawaguchi et al. (2022)), and for the case in which significant magnetic dynamo effects are present in a long-surviving remnant MNS (the dotted curves; MNS75a, Shibata et al. (2021); Kawaguchi et al. (2022)). The \(g\), \(z\), and \(K\)-band light curves are shown in the top, middle, and bottom panels, respectively. 
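As a side note on the detectability figures quoted in this section (e.g., a required depth of 22 mag at a hypothetical 200 Mpc), the apparent magnitudes follow from the standard distance modulus. The sketch below is our own illustration with an assumed absolute magnitude, not a value taken from the simulations.

```python
import math

PC_PER_MPC = 1.0e6

def apparent_mag(abs_mag, distance_mpc):
    """Apparent magnitude from absolute magnitude via the distance modulus
    m = M + 5 log10(d / 10 pc); cosmological corrections are ignored,
    which is adequate at ~100-200 Mpc."""
    d_pc = distance_mpc * PC_PER_MPC
    return abs_mag + 5.0 * math.log10(d_pc / 10.0)

# Assumed illustrative peak absolute magnitude of roughly -14.5 mag
# (a placeholder, not a number taken from the paper's results).
for d in (40.0, 100.0, 200.0):
    print(f"D = {d:5.0f} Mpc -> m ~ {apparent_mag(-14.5, d):.1f} mag")
```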
### Approximate scaling law of kilonova light curves While the peak brightness and the time scale of the emission differ among different BNS models and setups, Fig. 6 implies that the shapes of the light curves as well as their relative brightness among different wavelengths share similar behaviour among the models. To examine this idea, we compare the \(gzK\)-band light curves for various models and viewing angles with the time and magnitude of each light curve being scaled by those at a certain reference time. For this purpose, we chose the reference time for each light curve to be the decline time of the \(z\)-band emission, \(t_{\rm z,dec}\), defined as the time at which the decline power of the \(z\)-band magnitude, \(dM_{z}/d\rm log_{10}\it t\), reaches 2.5. Fig. 7 shows the reference time and \(z\)-band magnitude as functions of the viewing angle for various kilonova models (Kawaguchi et al., 2021, 2022). The reference time and magnitude largely vary among the models and viewing angles. As expected from Fig. 6, the reference time and magnitude tend to be earlier and fainter, respectively, for the short-lived cases than the long-lived cases. The viewing-angle dependence is more pronounced for the short-lived cases, which reflects the fact that the dynamical component has a larger fraction in the total ejecta compared to the long-lived cases. Fig. 8 compares the \(gzK\)-band light curves for various models and viewing angles, which are scaled with the reference time and \(z\)-band magnitude for each case. The \(g\)-band light curves show a large diversity among the models even after the scaling, for which we find no clear trend among the models and viewing-angles. On the other hand, although the reference time and magnitude largely vary among the models and viewing angles, the \(K\)-band light curves show relatively a less diversity after the scaling. In particular, the value of the \(K\)-band magnitude is always within \(\approx 1\) mag relative to the value of the reference \(z\)-band magnitude for \(0.6\leq t/t_{\rm z,dec}\leq 4\). We find that this is also the case for the \(H\) band. Hence, this suggests that the \(HK\)-band follow-up observation should be at least 1 mag deeper than the value of the \(z\)-band reference magnitude and earlier than 4 times the reference time. Once the kilonova candidate is found and the decline time is determined by the \(z\)-band observation in a few days after the event, this approximate scaling law can be used as a guideline for the NIR Figure 8: The \(gzK\)-band light curves for various models and viewing angles for which the time and magnitude are scaled by those at which the decline power of the \(z\)-band magnitude, \(dM_{z}/d\rm log_{10}\it t\), reaches 2.5. The solid, dashed, and dash-dotted curves denote the long-lived, short-lived, and long-lived dynamo cases, respectively, as in Fig. 7. The light curves observed from \(0^{\circ}\leq\theta\leq 20^{\circ}\), \(28^{\circ}\leq\theta\leq 35^{\circ},59^{\circ}\leq\theta\leq 64^{\circ}\), and \(86^{\circ}\leq\theta\leq 90^{\circ}\) are considered. The scaled observational data of AT2017gfo in the \(gzK\)-band taken from Villar et al. (2017) are also plotted by circles with error bars. Figure 7: The time (top) and AB absolute magnitude (bottom) at which the decline power of the \(z\)-band magnitude, \(dM_{z}/d\rm log_{10}\it t\), reaches to 2.5 as functions of the viewing angle. 
The solid, dashed, and dash-dotted curves denote the cases in which the remnant MNS survives for a long period (\(t\approx 1\)s; DD2-125 and DD2-135 in Fujibayashi et al., 2020; Kawaguchi et al., 2021, 2022), the remnant MNS collapses to a BH in a short time (\(t\lesssim 20\) ms; see Table 1), and the magnetic dynamo effects in the long-lived MNS are considered (MNS70a, MNS75a, and MNS80 in Shibata et al., 2021b; Kawaguchi et al., 2022). follow-up observation by letting us know how rapid and how deep the observation should be. For example, let us suppose the case for which an EM candidate is found in the \(z\) band and \(dM_{z}/d\log_{10}t\) reaches 2.5 with the \(z\)-band magnitude being 20 mag at 1.5 d after the gravitational-wave trigger. Then, our approximate scaling-law suggests that the follow-up observation deeper than 21 mag within 6 d is at least needed not to miss the peak brightness of the \(HK\)-band counterparts. Notably, the \(K\)-band emission tends to decline within \(t/t_{\rm z,dec}\approx 5\)-10 for the cases with a long-lived remnant MNS, while the \(K\)-band magnitude for the cases with a short-lived remnant MNS tends to keep the value close to the peak until a larger value of \(t/t_{\rm z,dec}\). The observational data of AT2017gfo in the \(gzK\)-band scaled in the same way tend to follow the trend of the cases with a long-lived remnant MNS, which also supports our hypothesis that the remnant MNS for GW170817 did not collapse to a BH within a short time (\(<20\) ms). ## 5 Discussions We found that the kilonova light curves of a BNS of which the remnant MNS survives for a short time are too faint and last for a too short duration to explain the brightness of the optical and NIR observation of GW170817/AT2017gfo. This is primarily due to the smallness of ejecta mass. Instead, kilonova models of a BNS which results in a long-surviving MNS (DD2-135135) are more consistent with the observation. This indicates that the remnant MNS of GW170817 might not have collapsed within a short time (\(\lesssim 20\) ms) but survived for a longer time (\(\gtrsim 0.1\) s). On the other hand, our previous study (Kawaguchi et al., 2022) indicated that, if the dynamo effects play a significant role for an efficient amplification of magnetic fields in a long-lived remnant MNS, the kilonova as well as the synchrotron emission stemming from the interaction between the ejecta fast tail and inter-stellar medium becomes too bright to be consistent with the EM observations associated with GW170817 (see also the discussion in Sarin et al. (2022)). Hence, the remnant MNS should have collapsed to a BH within the dynamo time scale of the magnetic-field growth, or the dynamo effect in the post-merger phase was subdominant. We find that the mass distribution of the ejecta in the polar region for the long-lived case is also compatible with the required property of the fast blue component, for which the origin is often discussed to be mysterious (e.g., Kasliwal et al., 2017; Cowperthwaite et al., 2017; Kasen et al., 2017; Villar et al., 2017; Waxman et al., 2018; Kawaguchi et al., 2018; Kawaguchi et al., 2020; Bulla, 2019; Almualla et al., 2021; Kedia et al., 2023; Bulla, 2023). Fig. 9 shows the isotropic equivalent ejecta mass, \(M_{\rm eje}^{\rm iso}(r^{\prime},\theta)\), for various models and latitudinal angles, which is defined by \[M_{\rm eje}^{\rm iso}(r^{\prime},\theta)=4\pi\int_{>r^{\prime}}\rho(r,\theta )r^{2}dr, \tag{2}\] where \(\rho\) denotes the rest-mass density. 
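Numerically, Eq. (2) is a cumulative radial integral along each latitudinal direction. The following sketch evaluates it with a simple trapezoidal rule on a sampled profile; the power-law density and its normalization are placeholders, not data from the HD simulations.

```python
import numpy as np

def iso_equivalent_mass(r, rho):
    """Cumulative M_iso(r') = 4*pi * integral_{r > r'} rho(r) r^2 dr,
    evaluated with the trapezoidal rule on a sampled radial profile
    (one latitudinal angle at a time)."""
    integrand = rho * r**2
    # Shell masses, then flip so that index i holds the mass beyond r[i].
    dm = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    m_beyond = np.concatenate([np.cumsum(dm[::-1])[::-1], [0.0]])
    return 4.0 * np.pi * m_beyond

# Stand-in profile: rho ~ r^-4 between 0.05*c*t and 0.4*c*t at t = 0.1 d.
t = 0.1 * 86400.0                        # [s]
c = 3.0e10                               # [cm/s]
r = np.linspace(0.05, 0.4, 400) * c * t  # [cm]
rho = 1.0e-12 * (r / r[0]) ** -4.0       # [g/cm^3], placeholder normalization
m_iso = iso_equivalent_mass(r, rho) / 1.989e33  # convert to solar masses
print(f"M_iso beyond 0.1c: {np.interp(0.1 * c * t, r, m_iso):.2e} Msun")
```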
For the case of long-lived MNS formation (DD2-135135), the polar value of \(M_{\rm eje}^{\rm iso}\) for \(v^{\prime}\gtrsim 0.2\,c\) is larger than \(10^{-2}\,M_{\odot}\). This matches the property of the ejecta required to explain the luminosity and photospheric velocity of the blue component in AT2017gfo (see also Just et al. 2023 for similar findings). Such a polar ejecta component originates from the dynamical ejecta component and the post-merger ejecta component, whose velocity is enhanced by neutrino radiation from the MNS. While a spectral analysis taking non-LTE effects into account is needed for a more quantitative argument, our finding suggests that the photospheric velocity of the blue component can be naturally explained by the setup obtained by NR simulations. Fig. 9 suggests that the diversity in the evolution of photospheric velocities reflects the different types of the MNS evolution. For the case of short-lived MNS formation (SFHo-125145), the value of \(M_{\rm eje}^{\rm iso}\) only reaches \(10^{-2}\,M_{\odot}\) for \(v_{r}<0.05\,c\), simply reflecting the smallness of the ejecta mass. This suggests that the photospheric velocity of the short-lived case is \(\lesssim 0.05\,c\) for \(t\gtrsim 1\) d. On the other hand, the result of MNS75a shows that an appreciable amount of ejecta is distributed in very high-velocity components. This is due to the acceleration of the ejecta in the presence of significant magnetic dynamo effects in the long-lived MNS, and a photospheric velocity of \(>0.8\,c\) is expected to be observed in the early phase of the emission for such a case. As described above, the BNS that results in a long-lived MNS is more likely the case for GW170817 than the BNS that results in a short-lived remnant MNS from the viewpoint of kilonova light curves. However, the calculated nucleosynthesis yields for such long-lived MNS cases (DD2-135135 and MNS75a in Fig. 10) exhibit overproduction of the nuclei between the first and second \(r\)-process abundance peaks (\(A\sim 80\)-130) when compared to the solar \(r\)-process abundances (see also Fujibayashi et al., 2020; Shibata et al., 2021b for the details, and Just et al., 2023 for similar results). This fact suggests that such long-lived MNSs should not be the major outcome of BNSs that merge in a Hubble time if the dominant sources of \(r\)-process elements are BNS mergers. This implies that GW170817 may not be a typical BNS merger in the universe. However, we should note that the total nucleosynthesis yields can be sensitive to the setups and physical ingredients of the numerical simulation. A recent work suggests that a more self-consistent magnetohydrodynamics treatment of angular momentum transfer could result in more production of elements heavier than the first \(r\)-process peak in the post-merger ejecta (Kiuchi et al., 2022). Hence, there may still be room for both the observation of GW170817 and the robustness of the solar abundance pattern (Cowan et al., 2021) to be explained by some configuration of a BNS, although we should keep in mind that the presence of an MNS which survives for a long time Figure 9: Isotropic equivalent ejecta mass for various models and latitudinal angles. The solid, dashed, and dash-dotted curves denote the long-lived, short-lived, and long-lived dynamo cases, respectively, as in Fig. 7. The purple, green, and orange curves denote the results for \(\theta=0^{\circ}\), \(30^{\circ}\), and \(90^{\circ}\), respectively.
scale (\(t>1\) s) with significant dynamo effects is unlikely the case of GW170817 as discussed above. For example, a BNS which results in a remnant MNS with significant dynamo effects but collapses to a BH at \(O(0.1)\) s can be a plausible model for interpreting GW170817 from this point of view. For the BNS resulting in a short-lived MNS, the kilonova emission lasts over a time scale appreciably shorter than that of GW170817/AT2017gfo, in particular for the optical band. This implies that for detecting kilonovae of this type, we need observation earlier than that for AT2017gfo. This is in particular the case for a large value of \(\theta\). It is also likely that the optical light curves could be more easily hidden by the afterglow light curves of GRBs for the small value of \(\theta\). Hence, the NIR light curves may be the primary target of the observation in the simultaneous detection of a GRB. In fact, the comparison of our model light curves with the observation of GRB130603B (Berger et al., 2013; Tanvir et al., 2013), with which a plausible kilonova candidate is associated, indicates that the \(r\)-band emission for the case of short-lived MNS formation (SFHo-125145) is likely hidden by the afterglow emission (see Fig. 11). The brightness in the \(H\) band for the case of short-lived MNS formation is also at most only comparable to that of the afterglow emission. Hence, the progenitor of GRB130603B was unlikely to be a BNS which results in the formation of a short-lived remnant MNS assuming the excess in the \(H\) band is due to the kilonova emission. This also indicates that GRB-associated kilonovae from BNS leading to short-lived MNS formation could be missed by being entirely hidden by the afterglows, which should result in a number of simultaneous detection of gravitational waves with short GRBs but lack of kilonovae in future. Indeed a statistical study shows that there are a substantial fraction of previous short GRBs that are not associated with kilonovae (Troja, 2023). As the brightness of AT2017gfo is known to be broadly comparable with the optical and NIR counterparts of GRB130603B (Rossi et al., 2020), the kilonova model light curves for the cases of long-lived MNS formation (DD2-135135 and MNS75a) are also consistent with the observation of GRB130603B; while the \(r\)-band emission is hidden by the afterglow emission, the \(H\) band emission for the long-lived cases is brighter than the afterglow emission, and is consistent with the observed excess. This suggests that the progenitor of GRB130603B is likely to be a BNS which results in the formation of a MNS that survives more than \(\sim 10\) ms. In Watson et al. (2019); Domoto et al. (2021, 2022); Gillanders et al. (2022), spectral features observed in the data of AT2017gfo are Figure 11: Comparison between the optical and NIR observation in GRB130603B and various kilonova models. The \(r\)- and \(H\)-band light curves in the observer frame are calculated by employing the redshift value of the source (\(z=0.356\), Thone et al., 2013; Cucchiara et al., 2013). The solid, dashed, and dash-dotted curves denote the long-lived, short-lived, and long-lived dynamo cases, respectively, as in Fig. 7. The gray dashed lines denote the GRB afterglow light curves. The observational data points for the \(r\)- and \(H\)-band magnitudes (circles) in the GRB130603B observation and the GRB afterglow model light curves are taken from Tanvir et al. (2013). 
Figure 10: Comparison of nucleosynthesis yields among the cases in which remnant MNSs survive for a short time (red; SFHo-125145) and for a long time (blue; DD2-135135, Fujibayashi et al., 2020), and for the case in which significant magnetic dynamo effects are present in a long-sruving remnant MNS (olive; MNS75a, Shibata et al., 2021). The \(r\)-process residuals to the solar system abundances (Lodders et al., 2009) are also shown by gray curves, which are scaled to match the abundance of \({}^{153}\)Eu for SFHo-125145 as well as that for DD2-135135. interpreted as the p-Cygni profiles by Sr (note, however, Perego et al. (2022); Tarumi et al. (2023) suggested that the spectral features could be also well interpreted by the absorption lines by He if non-LTE effects are considered). Recently, Sneppen et al. (2023) performed a more detailed analysis for those spectral features, and show that the Sr distribution of the ejecta should have nearly spherical morphology. Fig. 12 shows the Sr mass density profiles at \(t=1\) d for SFHo-135135 and DD2-135135 (Fujibayashi et al., 2020; Kawaguchi et al., 2022). The Sr distribution with the velocity larger than 0.15 \(c\) approximately exhibits a spherical morphology for SFHo-135135. On the other hand, the Sr distributions for DD2-135135 as well as the low-velocity part (\(<0.15\,c\)) for SFHo-135135 show mildly prolate shapes. These aspherical features, which are in broad agreement with the results of Just et al. (2023), are inconsistent with the implication of Sneppen et al. (2023). Detailed quantitative spectral analysis taking into account various uncertainties is nevertheless needed to clarify how severe the current tension from the observational implication is, which we leave it for a future task. ## Acknowledgements KK thanks Masaomi Tanaka and Eli Waxman for the valuable discussions. We also thank Kenta Hotokezaka for helpful discussions. Numerical computation was performed on Yukawa21 at Yukawa Institute for Theoretical Physics, Kyoto University and the Sakura, Cobra, Raven clusters at Max Planck Computing and Data Facility. The simulations were performed on Fugaku provided by RIKEN through the HPCI System Research Project (Project ID: hp220174, hp230084), and the Cray XC50 at CfCA of the National Astronomical Observatory of Japan. ND acknowledges support from Graduate Program on Physics for the Universe (GP-PU) at Tohoku University. This work was supported by Grant-in-Aid for Scientific Research (JP20H00158, JP21K13912, JP23H04900, 22KJ0317, 23H01772) of JSPS/MEXT.
2310.05556
WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions
Depth estimation models have shown promising performance on clear scenes but fail to generalize to adverse weather conditions due to illumination variations, weather particles, etc. In this paper, we propose WeatherDepth, a self-supervised robust depth estimation model with curriculum contrastive learning, to tackle performance degradation in complex weather conditions. Concretely, we first present a progressive curriculum learning scheme with three simple-to-complex curricula to gradually adapt the model from clear to relative adverse, and then to adverse weather scenes. It encourages the model to gradually grasp beneficial depth cues against the weather effect, yielding smoother and better domain adaption. Meanwhile, to prevent the model from forgetting previous curricula, we integrate contrastive learning into different curricula. By drawing reference knowledge from the previous course, our strategy establishes a depth consistency constraint between different courses toward robust depth estimation in diverse weather. Besides, to reduce manual intervention and better adapt to different models, we designed an adaptive curriculum scheduler to automatically search for the best timing for course switching. In the experiment, the proposed solution is proven to be easily incorporated into various architectures and demonstrates state-of-the-art (SoTA) performance on both synthetic and real weather datasets. Source code and data are available at \url{https://github.com/wangjiyuan9/WeatherDepth}.
Jiyuan Wang, Chunyu Lin, Lang Nie, Shujun Huang, Yao Zhao, Xing Pan, Rui Ai
2023-10-09T09:26:27Z
http://arxiv.org/abs/2310.05556v2
WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions ###### Abstract Depth estimation models have shown promising performance on clear scenes but fail to generalize to adverse weather conditions due to illumination variations, weather particles, etc. In this paper, we propose WeatherDepth, a self-supervised robust depth estimation model with curriculum contrastive learning, to tackle performance degradation in complex weather conditions. Concretely, we first present a progressive curriculum learning scheme with three simple-to-complex curricula to gradually adapt the model from clear to relative adverse, and then to adverse weather scenes. It encourages the model to gradually grasp beneficial depth cues against the weather effect, yielding smoother and better domain adaption. Meanwhile, to prevent the model from forgetting previous curricula, we integrate contrastive learning into different curricula. Drawn the reference knowledge from the previous course, our strategy establishes a depth consistency constraint between different courses towards robust depth estimation in diverse weather. Besides, to reduce manual intervention and better adapt to different models, we designed an adaptive curriculum scheduler to automatically search for the best timing for course switching. In the experiment, the proposed solution is proven to be easily incorporated into various architectures and demonstrates state-of-the-art (SoTA) performance on both synthetic and real weather datasets. ## I Introduction Depth estimation builds a bridge between 2D images and 3D scenes and has numerous potential applications such as 3D reconstruction [12], autonomous driving, etc. In recent years, due to the high costs of GT-depth collection from LiDARs and other sensors, researchers have turned to self-supervised solutions by exploiting photometric consistency between the depth-based reconstructed images and the target images. However, there is a sharp drop in depth precision when it comes to adverse weather conditions because weather particles spoil the consistency assumption and illumination variations produce an inevitable domain gap. Recent works tried to mitigate the performance degradation by restoring clear weather scenes [14], extracting features consistent with sunny conditions [29, 15], knowledge distillation from clear scenes [21, 8], etc. However these solutions do not account for the fact that weather comes in varying degrees and categories, and their data augmentation cannot reflect real situations well (Fig. 2), which hinders the potential of the estimation algorithm under weather conditions. In this paper, we propose a self-supervised robust depth estimation model (named WeatherDepth) to address the above issues through curriculum contrastive learning. On the one hand, we simulate the progressive advances from clear to relatively adverse, and then adverse weather scenes, building three simple-to-complex curricula with adverse weather to different degrees. Concretely, we first train a base model on sunny data with clear structures to satisfy photometric consistency. This allows the model to obtain better-generalizable local optima for pre-training on more complex scenarios [2]. Then we optimize the model on relative adverse weather images with light effects, ground snow and water, which share a part of common regions with the clear domain and inspire the model to gradually grasp the depth cues against the missing textures and contrasts. 
Finally, we train the model on adverse data with the addition of weather particles (e.g., raindrops), further boosting the capability of handling complex noise patterns and violations of self-supervised assumptions. On the other hand, predefined curriculum learning alone may lead to catastrophic forgetting [24] due to the substantial inter-domain difference in each stage. To this end, we embed lightweight contrastive learning designs in different curricula. Specifically, as shown in Fig. 3, we first establish **one** contrastive mode between two clear images with different traditional enhancements [11]. This forces the network to become more robust to this depth-irrelevant information and get prepared for the weather variation in later courses. Then, we build **three** more challenging contrastive modes between the sunny scene and randomly selected rainy/snowy/foggy weather scenes. It effectively prevents the network from solely focusing on resisting weather changes and completely biasing its domain to the new weather. In the last curriculum, we contrast three adverse weather conditions against three relative adverse weather conditions, constructing **nine** contrastive modes with the goal of improving the cross-weather robustness and relieving the problem of forgetting. These increasingly challenging contrastive modes (ranging from 1 to 3 to 9) formulate another curriculum learning process based on contrastive difficulty, which guides the training to converge more easily. Fig. 1: **Typical examples on real weather images.** Compared with Robust-Depth* (the SoTA robust depth estimation model under adverse weather), our WeatherDepth* produces more accurate results against (a) snowflakes, (b) raindrops on the lens, and (c) water surface reflections. Note both solutions adopt the same baseline model (MonoViT). Moreover, we propose an adaptive curriculum scheduler to automatically switch curricula. It reduces manual intervention and produces smoother course transitions. To train an expected model and shrink the domain gap to real weather conditions, we combine GAN and PBR techniques [22] to build the WeatherKITTI dataset with diverse categories and magnitudes of weather. Compared with existing augmented weather data [21, 13, 14], it renders more realistic weather scenes, as shown in Fig. 2. Finally, we incorporate the proposed curriculum contrastive learning scheme into three popular depth estimation baselines (PlaneDepth, WaveletMonodepth, and MonoViT) to evaluate its effectiveness. Experimental results show the proposed WeatherDepth models outperform the existing SoTA solutions on both synthetic and real weather datasets. To our knowledge, this is the first work to apply curriculum contrastive learning to depth estimation. To sum up, the main contributions are summarized as follows: * To adapt to adverse weather without forgetting previously learned knowledge, we propose a curriculum contrastive learning strategy with robust weather adaptation. It can be applied to various SoTA depth estimation schemes and could be extended to other simple-to-complex prediction tasks. * To reduce manual intervention and better adapt to different models, an adaptive curriculum scheduler is designed to automatically switch the course by searching for the best timing. Besides, we built the WeatherKITTI dataset to narrow the domain gap to real weather situations. * We conduct extensive experiments to prove the universality of our curriculum contrastive learning scheme on various architectures and its superior performance over the existing SoTA solutions.
## II Related work ### _Self-supervised Depth Estimation_ Since the pioneering work of Zhou et al. [31] showed that only using geometric constraints between consecutive frames can achieve excellent performance, researchers have continued to explore the cues and methods to train self-supervised models through videos [11, 25, 29] or stereo image pairs [19, 7, 10]. Afterward, methods including data augmentation[16], self-distillation[23, 1], indoor scenes aiding[27] etc. have been introduced to self-supervised models, pushing their inference performance closer to supervised models. Our model adopts both supervised training manners, monocular and stereo, to verify the scalability of our method. ### _Adverse Condition Depth Estimation_ Recently, the progress in depth prediction for typical scenes has opened up opportunities for tackling estimation in more challenging environments. Liu et al. [15] boost the nighttime monocular depth estimation (MDE) performance by using a Generative Adversarial Network (GAN) to render nighttime scenes and leveraging the pseudo-labels from daytime estimation to supervise the night-time training. Then, Zhao et al. [29] consider rainy nighttime additionally. In this work, to fully extract features from both scenes, they used two encoders trained separately on night and day image pairs and applied the consistency constraints at the feature and depth domains. The first MDE model under weather conditions was proposed in [21]. This work introduced a semi-augmented warp, which exploits the consistency between the clear frames while using the augmented prediction. Moreover, bi-directional contrast was incorporated in this work to improve the accuracy, although this doubles the training time. In [8], instead of using the KITTI dataset which only contains dry and sunny scenes, NuScenes and Oxford RobotCar datasets were adopted, for its real rainy and night scenarios. They first train a baseline on sunny scenes, then fix these net weights and transfer-train another network for weather scenes with day distill loss. Besides the above methods that combine data augmentation and various strategies, there are also other solutions [14] trying to estimate depth after removing the weather influence on the image. Our approach synthesizes the strengths of previous works, utilizing a single end-to-end encoder-decoder network architecture to build an efficient and effective solution. Fig. 2: **Comparison of simulated adverse weather.** The other augmentations in the third column are from previous weather depth estimation studies [21, 14, 13], which also adopt data augmentation. Obviously, our WeatherKITTI augmentation is significantly more natural than their result. ## III Method In this section, we elaborate on the key components and algorithms of the proposed method. ### _Preliminary_ The proposed WeatherDepth is built on self-supervised depth estimation. Given a target input image \(I\in\mathbb{R}^{C\times H\times W}\) and an auxiliary reference image \(I^{\prime}\) from the stereo pair or adjacent frames, the self-supervised model \(\mathcal{F}:I\to d\in\mathbb{R}^{H\times\hat{W}}\) is expected to predict the disparity map \(d\). With known baseline \(b\) and focal length \(f\), we compute the depth map \(D=bf/d\). 
Then we can warp \(I^{\prime}\) to the target view using the projected coordinates that are generated from \(D\), the relative camera pose \(T_{I^{\prime}\to I}\) (from a pose network or extrinsics), and the intrinsics \(K\): \[\hat{P}=I^{\prime}\langle\operatorname{Proj}(D,T_{I^{\prime}\to I},K)\rangle. \tag{1}\] The above equation describes such a warping process, in which \(\hat{P}\) denotes the warped image. The photometric reconstruction loss is then defined as: \[l_{ph}(d)=\alpha\frac{1-\operatorname{SSIM}(I,\hat{P})}{2}+\beta|I-\hat{P}|. \tag{2}\] Based on stereo geometry, \(l_{ph}\) equals 0 when \(d\) is perfectly predicted. By minimizing the above loss, we can obtain the desired depth estimation. In addition, our WeatherDepth also adopts the semi-augmented warping from [21]: \[\tilde{P}=I^{\prime}\left\langle\operatorname{Proj}\left(\tilde{D},T_{I^{\prime}\to I},K\right)\right\rangle, \tag{3}\] where \(\tilde{D}\) is the depth estimated from the augmented image, and \(\tilde{P}\) is our semi-augmented warp result that replaces \(\hat{P}\) when calculating \(l_{ph}\). We leverage the consistency between unaugmented images and the depth estimated from augmented images, avoiding the inconsistency between \(I^{\prime}_{aug}\) and \(I\) caused by weather variations. ### _Curriculum Selection_ Due to the diversity of lighting and noise patterns in adverse weather, training directly on adverse weather data can easily lead to underfitting. To address this issue, we designed three curricula to gradually adapt to the new data domain. In the curriculum design, we obey two principles: (1) the curriculum scenarios should follow the real-world weak-to-strong weather variation; (2) the curricula should be organized in a simple-to-complex order. To this end, we define our first curriculum as sunny scenes with slight adjustments to brightness, contrast, and saturation to make our model robust to these depth-invariant conditions. Then we simulate the relative adverse weather by incorporating ground water reflections, ground snow, and droplets on the lens in the second course, which are not fully considered by previous works [21, 8]. These effects not only create wrong depth cues (like Fig. 1 (b,c)) but also change the texture of the original scenes. In the last stage, we further introduce raindrops, fog-like rain [22], the veiling effect, and snow streaks [5], because these particles are only visible in extremely adverse weather. ### _Curriculum Contrastive Learning_ To prevent the problem of forgetting the previous curricula, we embed contrastive learning into our curriculum learning process in an efficient manner. As depicted in line 6 of Algorithm 1, in ContrastStep, we use the TrainStep model to directly infer the depth of \(I_{cst}\). Fig. 3: **WeatherDepth pipeline.** Through three progressive stages, our model-agnostic approach can estimate depth reliably under weather environments. Except for the last stage, we input the loss of the estimation model into the curriculum scheduler, in order to change the level properly. We input image pairs \(I_{aug}\) and \(I_{cst}\) to obtain depth maps \(D_{aug}\) and \(D_{cst}\). \(D_{cst}\) is detached as the contrastive target to compute the contrastive loss, which is weighted and backpropagated together with the original loss. Here \(I_{clr}\), \(I_{cst}\) and \(I_{aug}\) should have the same depth since they are different weather augmentations of the same image, and weather changes do not affect the scene itself.
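To make the objective concrete before turning to the contrastive loss, the following is a minimal PyTorch-style sketch of the photometric reconstruction loss in Eq. (2). It is an illustration rather than the authors' implementation: the projective warp of Eqs. (1)/(3) is assumed to have been computed elsewhere, and the 3x3 average-pooling SSIM, the weights `alpha`/`beta`, and the tensor shapes are common-practice assumptions.

```python
import torch
import torch.nn.functional as F


def ssim(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Simplified SSIM with a 3x3 average-pooling window."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp(num / den, 0.0, 1.0)


def photometric_loss(target: torch.Tensor, warped: torch.Tensor,
                     alpha: float = 0.85, beta: float = 0.15) -> torch.Tensor:
    """Eq. (2): weighted SSIM + L1 between the target image I and the warped image."""
    ssim_term = (1.0 - ssim(target, warped)) / 2.0   # structural dissimilarity
    l1_term = (target - warped).abs()                # per-pixel photometric error
    return (alpha * ssim_term + beta * l1_term).mean()


if __name__ == "__main__":
    I = torch.rand(2, 3, 192, 640)       # target image I
    P_hat = torch.rand(2, 3, 192, 640)   # warped reference; Proj(D, T, K) applied elsewhere
    print(photometric_loss(I, P_hat).item())
```

In the semi-augmented setting of Eq. (3), `P_hat` would simply be the warp \(\tilde{P}\) obtained from the depth of the augmented image, while the loss is still computed against the unaugmented target.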
Moreover, prediction will be more accurate in previous levels, for weather variations are less severe. Therefore, as shown in Fig. 3, we can contrast the depth results from different curriculum stages to obtain the contrastive loss: \[L_{\text{cst}}=\begin{cases}\log(|D_{aug}-D_{csr}|+1),&\text{if }S_{aug}>S_{csr}\\ \log(|D_{aug}-D_{csr}|+1),&\text{if }S_{csr}>S_{aug},\end{cases} \tag{4}\] where \(S_{aug}\) is the current curriculum stage, \(S_{csr}\) is the stage of the contrastive weather. \(D_{aug}\) and \(D_{csr}\) are the depth predictions of the training image and contrastive inference respectively. The underlining signifies that the gradients are cut off during backpropagation. Then the final loss is: \[L_{\text{backward}}=L_{\text{model}}+w_{curr}\cdot L_{\text{cst}}, \tag{5}\] where \(L_{\text{model}}\) is the self-supervised model loss, \(w_{curr}\) is the contrastive weight. Considering the model needs to adapt to weather changes when entering a new stage, we initialize it to a small value and update \(w_{curr}\) each epoch according to: \[w_{curr}=\begin{cases}w_{cat},&\text{if }r=0\\ \max(w_{max}w_{csr},\lambda w_{curr}),&\text{if }r\neq 0\text{ and }r\mid 2\\ w_{curr},&\text{others},\end{cases} \tag{6}\] where \(\lambda\) is a constant \(>1\). As the model gradually adapts to the curriculum stage, we want to steadily increase the consistency constraint. \(r\) is the number of epochs trained in the current stage. With the integration of contrast, our curriculum learning has considered both weather changes curricula and contrastive difficulty alterations. ### _Adaptive Curriculum Scheduler_ As mentioned in [24], curriculum learning paradigms typically consist of two key components: the Difficulty Measurer and the Training Scheduler. The former is used to assign a learning priority to each data/task, while the latter decides when to introduce hard data into training. In this work, our Difficulty Measurer is pre-defined, whose benefits have been elaborated in section I. However, for different baseline models, the curriculum scheduler needs to be designed separately. A pre-defined switching mode (switching time) for a certain model may not adapt well to other networks. To this end, we check whether the network has fitted well in the current stage based on the change of self-supervised loss(\(L_{model}\)), as shown in Fig. 3 and lines 10-20 in Algorithm 1. The contrastive loss is not included, because as stated before, the weight of contrastive learning itself varies across epochs, adding it would make the model extremely unstable. Actually, this strategy is inspired by early stopping methods[18], which effectively reduces training time and avoids overfitting at a certain stage. 
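For illustration, a possible rendering of the stage-aware contrastive loss of Eq. (4), the contrastive-weight schedule around Eq. (6), and the early-stopping check that drives the scheduler (Algorithm 1 below gives the full procedure). Because the underlines in Eq. (4) and the exact form of Eq. (6) are ambiguous in this rendering of the paper, the choice of which prediction is detached and the values `w_init` and `lam` are assumptions; the sketch follows the Fig. 3 description that the earlier-stage depth serves as the detached target, and `w_max = 10` is taken from the implementation details reported later.

```python
import torch


def contrastive_depth_loss(d_aug: torch.Tensor, d_cst: torch.Tensor,
                           stage_aug: int, stage_cst: int) -> torch.Tensor:
    """Stage-aware log-L1 consistency between two depth predictions (Eq. 4).

    Assumption: the prediction from the *earlier* (easier) stage is detached and
    used as the fixed target, matching the Fig. 3 note that D_cst is detached.
    """
    if stage_aug > stage_cst:            # contrastive image comes from an earlier stage
        target, pred = d_cst.detach(), d_aug
    else:                                # training image comes from the earlier stage
        target, pred = d_aug.detach(), d_cst
    return torch.log((pred - target).abs() + 1.0).mean()


def update_contrastive_weight(w_curr: float, r: int, w_init: float = 0.1,
                              w_max: float = 10.0, lam: float = 2.0) -> float:
    """One reading of the Eq. (6) schedule: reset the weight when a new stage
    starts (r == 0) and grow it every other epoch, capped at w_max."""
    if r == 0:
        return w_init
    if r % 2 == 0:
        return min(w_max, lam * w_curr)
    return w_curr


def should_switch_stage(epoch_losses: list, patience: int = 1,
                        threshold: float = 0.0) -> bool:
    """Early-stopping-style check of the adaptive scheduler: switch to the next
    curriculum level once the self-supervised loss has stopped improving for
    `patience` epochs. `epoch_losses` holds the per-epoch averages of the
    current stage only."""
    bad = sum(1 for prev, cur in zip(epoch_losses[:-1], epoch_losses[1:])
              if cur - prev > threshold)
    return bad >= patience
```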
``` 0: Clear data \(I_{L}^{clr},I_{R}^{clr}\), augmented data (only left) \(I_{i}^{aug}\), contrast data (only left) \(I_{i}^{est}\), curriculum patience \(P_{i}\) (\(i=1,2,3\),indicating the augmentation magnitude) 1: Let level \(l=1\), patience \(p=0\) 2:for each epoch do 3: Update contrastive weight \(w_{curr}\) 4:for each batch do 5:\(D^{aug}\),\(L_{model}\)=TrainStep(\(I_{i}^{aug},I_{L}^{clr},I_{R}^{clr}\)) 6:\(D^{est}\)=InferenceStep(\(I_{i}^{est}\)) 7:\(L_{csr}\)=ContrastStep(\(D^{aug},D^{est}\)) 8: recordloss.Append(\(L_{model}\)) 9:endfor 10: recordkey.Append(Average(recordloss)) 11:if recordkey[-1] - recordkey [-2] \(>\) threshold then 12:\(p=p+1\) 13:endif 14:if\(p\geq P_{i}\)then 15: Reset \(w_{curr}\) and \(p=0\) 16: Switch to next-level data 17:if\(l\) = maxlevel and \(P_{i}\geq 3\)then 18: Load best epoch for the level \(l\) 19:endif 20:\(l=l+1\) 21:endif 22:endfor ``` **Algorithm 1** Curriculum Scheduler for WeatherDepth ### _Method Scalability_ To prove our expansiveness, we choose three popular models with tremendous differences in architecture as our baselines. The reasons are summarised as follows: * **WaveletMonodepth[19]**: In this solution, wavelet transform is taken as the depth decoder, which trades off depth accuracy with speed. This is well suited for scenarios with different accuracy requirements. * **PlaneDepth[23]**: This model uses Laplace mixture models based on orthogonal planes to estimate depth. It predicts depth for each plane respectively and computes the final depth by summing it over the Laplace distribution probabilities. Besides, PlaneDepth achieves current **SoTA** performance on self-supervised depth estimation. * **MonoViT[30]**: Different from convolutional networks, MonoViT takes the transformer model as an encoder to improve image feature extraction. It represents the category of transformer-based models. ## IV Experiments ### _Dataset_ **WeatherKITTI Dataset**: Based on KITTI [9], we establish a weather-augmented dataset to enhance depth estimation models with generalization to real weather. It contains three kinds of most common weather types: rain, snow, and fog. As shown in Fig. 3, each weather type has two levels of magnitudes: relative adverse and adverse. The former includes light effects, ground snow and water, which are rendered by a CycleGAN model [32] trained on CADC [17], ACDC[20] and NuScenes[3] dataset. The latter further adds noticeable weather particles through physically-based rendering. Following SoTA weather rendering pipelines [22, 5], we generate adverse rains and snow masks. For foggy conditions, based on the atmospheric scattering model, we construct the second and third-stage foggy simulation augmentations under 150m and 75m visibility, respectively. Our rendered dataset covers all the images in the training and test scenes, totaling 284,304 (47,384\(\times\)6) images. To benchmark the robustness of models, we put the original KITTI and our weather-augmented KITTI together, named as the WeatherKITTI dataset. **DrivingStereo[26]**: To characterize the performance of our models in real-world conditions, we use this dataset that provides 500 real rainy and foggy data images. **CADC**: [17] is one of few snowy datasets, however, its data is in sequential order. Therefore, we sampled 1 in every 3 sequential images and obtained 510 images as test data. As shown in Fig. 1(a,b), this dataset contains real-world scenes with lens droplets, heavy snowfall, ground snow, etc. 
For depth GT generation, since LiDARs can be inaccurate under snowy conditions [28], we utilize the DROR[4] algorithm to filter out erroneous depths caused by snowflakes and generate the final depth GTs. In addition, invalid sky and ego-vehicle regions(without Lidar points) are removed, with a final image resolution of 1280\(\times\)540 pixels. ### _Implement Details_ WaveletMonodepth, PlaneDepth, and MonoViT are three kinds of typical depth estimation models, and their performances are currently the best. Hence, we adapt the proposed curriculum contrastive learning scheme into these three baselines to validate our generalization, named as WeatherDepth, WeatherDepth\({}^{*}\), WeatherDepth\({}^{\dagger}\) in Table I. Most hyperparameters(learning rate, optimizer, training image resolution, etc.) are the same as their origin implementation[30][23][19]. All models are trained on the WeatherKITTI dataset automatically with our scheduler. For stereo training, we use Eigen split [6]. As for monocular training, we follow Zhou split [31] with static scenes being removed. In our contrastive learning, we set \(P_{1}=P_{2}=1\) and \(w_{max}=10\) (all symbols are the same with Algorithm 1) for the three models. The model-specific minor modifications are shown in Table I. For all models, we strictly follow the original setup of [30, 23, 19] to train and evaluate our models. In particular, for MonoViT, we use monocular training for 30 epochs and test on \(640\times 192\) resolution. The contrast depth is not detached during MonoViT training. For PlaneDepth, we only adopt the first training stage [23], training for 60 epochs in a stereo manner and testing with \(1280\times 384\) resolution. For WaveletMonodepth, we train the Resnet-50 version for 30 epochs in a stereo manner and test with the resolution of \(1024\times 320\). In addition, to compare with Robust-Depth\({}^{*}\)[21] fairly, we only adopt the WeatherDepth\({}^{*}\) in comparative experiments as shown in Table II, because both Robust-Depth\({}^{*}\) and WeatherDepth\({}^{*}\) adopt the same baseline (MonoViT) and training/testing resolutions. As for WeatherDepth and WeatherDepth\({}^{\dagger}\), they adopt different resolutions as used in [23, 19] and different baselines. To prevent unfair comparison, we only report the performance gains with the proposed curriculum contrastive learning scheme, shown in Table III and Table IV. ### _WeatherKITTI Results_ We show detailed comparative experiments between our method and current SoTA models in Table II(a). The Eigen raw split [6] is used for evaluation following common practice, and we report the average results under 7 different weather conditions. In particular, Robust-Depth\({}^{*}\)[21]is the latest SoTA model that tries to tackle weather conditions like us, but our WeatherDepth\({}^{*}\) greatly outperforms it with the same baseline (MonoViT). These results sufficiently demonstrate that our method is able to handle weather variations and domain changes. As for the other two baselines, our WeatherDepth and WeatherDepth\({}^{\dagger}\) have also shown a significant gain in Table III (a) and Table IV (a), which implies that our strategy can be generalized to other typical depth estimation methods. #### Iv-D1 Rain We show the quantitative results on real rainy data from the DrivingStereo dataset in Table II (b). Our method is still more accurate than the existing solutions, which demonstrates our model can adapt to the variations of rainy scenes. 
In particular, our scheme reduces the errors from water reflections and lens droplets, as shown in Fig. 1(c). #### Iv-D2 Snow As depicted in Table II (c), our method reaches SoTA performance on the new CADC dataset. This further suggests that our method enables depth estimation models to ignore the erroneous depth cues (Fig. 1(a)(b)), thanks to the progressive introduction of ground snow and snowflakes, which is very challenging for depth estimation tasks. #### Iv-D3 Fog In Table II (d), we collect the results of the fog scene evaluation for our model and SoTA frameworks. Unfortunately, although our final model outperforms MonoViT itself, it falls slightly behind Robust-Depth. This is because the fog density in the DrivingStereo dataset is relatively light, while our model aims to address more adverse weather conditions (using more adverse fog augmentation). The domain adaptation of our model is biased after the second-stage fog augmentations. In parts (b-d) of Table III and Table IV, we notice a similar trend to that of Table II, which further demonstrates the generalization capacity of the proposed strategy on different models (WaveletMonodepth, MonoViT, and PlaneDepth) and on different real weather datasets (Rain, Snow, and Fog). ### _Ablation Study_ We have demonstrated the superiority of our method on synthetic and real adverse weather data. Next, we conduct experiments to validate the effectiveness of each component. For clarity, we only use WeatherDepth\({}^{*}\) in the ablation study. The other models show similar performance and are reported in the supplementary materials. #### Iv-E1 Learning Strategy We define the direct mixed training manner as "w/o CC", in which each weather condition has a \(1/n\) probability of being selected for training. As shown in Tables V(a-e), since mixed training attempts to estimate depth from erroneous depth cues introduced by weather variation, this strategy performs poorly on our WeatherKITTI dataset. Moreover, it degrades the performance across all three real weather scenes, further validating the effectiveness of our proposed curriculum contrastive learning.
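As a final illustration of the ablation above, the "w/o CC" baseline and the staged curriculum differ only in which pool of weather variants a training sample may be drawn from; the variant names below are hypothetical stand-ins for the WeatherKITTI augmentations, and whether earlier-stage data remain in later pools is likewise an assumption.

```python
import random
from typing import Optional

# Hypothetical variant names; stage 1 = clear, stage 2 = + relative adverse, stage 3 = all.
WEATHER_VARIANTS = ["clear", "rain_gan", "snow_gan", "fog_150m",
                    "rain_pbr", "snow_pbr", "fog_75m"]
CURRICULUM = {1: WEATHER_VARIANTS[:1],
              2: WEATHER_VARIANTS[:4],
              3: WEATHER_VARIANTS}


def sample_variant(level: Optional[int]) -> str:
    """'w/o CC' mixed training (level=None) samples uniformly from all variants;
    curriculum training restricts the pool to the current stage."""
    pool = WEATHER_VARIANTS if level is None else CURRICULUM[level]
    return random.choice(pool)
```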
2305.01586
An Alternative to WSSS? An Empirical Study of the Segment Anything Model (SAM) on Weakly-Supervised Semantic Segmentation Problems
The Segment Anything Model (SAM) has demonstrated exceptional performance and versatility, making it a promising tool for various related tasks. In this report, we explore the application of SAM in Weakly-Supervised Semantic Segmentation (WSSS). Particularly, we adapt SAM as the pseudo-label generation pipeline given only the image-level class labels. While we observed impressive results in most cases, we also identify certain limitations. Our study includes performance evaluations on PASCAL VOC and MS-COCO, where we achieved remarkable improvements over the latest state-of-the-art methods on both datasets. We anticipate that this report encourages further explorations of adopting SAM in WSSS, as well as wider real-world applications.
Weixuan Sun, Zheyuan Liu, Yanhao Zhang, Yiran Zhong, Nick Barnes
2023-05-02T16:35:19Z
http://arxiv.org/abs/2305.01586v2
# An Alternative to WSSS? ###### Abstract The Segment Anything Model (SAM) has demonstrated exceptional performance and versatility, making it a promising tool for various related tasks. In this report, we explore the application of SAM in Weakly-Supervised Semantic Segmentation (WSSS). Particularly, we adapt SAM as the pseudo-label generation pipeline given only the image-level class labels. While we observed impressive results in most cases, we also identify certain limitations. Our study includes performance evaluations on PASCAL VOC and MS-COCO, where we achieved remarkable improvements over the latest state-of-the-art methods on both datasets. We anticipate that this report encourages further explorations of adopting SAM in WSSS, as well as wider real-world applications. The code is available at [https://github.com/weixuansun/wsss_sam](https://github.com/weixuansun/wsss_sam). ## 1 Introduction Recent advancements in large foundation models have had a significant impact on the development of downstream deep-learning tasks. Large language models (LLMs) pre-trained on massive text corpus, e.g., ChatGPT1 have demonstrated exceptional performance on a variety of Natural Language Processing (NLP) tasks. Concurrently, multi-modal pre-trained models including CLIP [32] and BLIP [25] have been successfully applied to a range of vision and/or language tasks. The impact of these models is revolutionary in their respective domains. Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) The Segment Anything Model (SAM) [17] is a recent image segmentation model that has demonstrated outstanding performance in various applications. SAM owes its impressive performances to two factors. First, the model is trained on a large visual dataset named SA-1B containing over 1B masks of 11M images. This dataset is collected in a model-in-the-loop manner with multiple levels of human guidance. The rich training data contributes to its ability to perform zero-shot segmentation on unseen images of varied distributions. Second, SAM is designed and trained to accept versatile prompts as input. To date, it supports points (via users clicking the mouse) or bounding boxes as prompts. This greatly improves the user experience when interacting with the model. In this work, we focus on assessing the performance of SAM on image-level Weakly-Supervised Semantic Segmentation (WSSS). WSSS is initially proposed to reduce the label cost of fully supervised semantic segmentation [31]. Instead of exhaustively labeling every pixel within an image for its class, WSSS resorts to weaker, yet cheaper alternatives. Examples include only labeling spare points [3, 15], bounding boxes [8, 22, 35, 38], or scribbles [28, 41, 42] in an image, or using the image-level classification results as the label. The latter is the most common approach in recent work [13, 20, 39, 40, 45, 47, 49, 53], as such classification results can be readily obtained through any pre-trained vision backbones. Pseudo segmentation labels are then extracted from these weak labels and are used to train the specific segmentation model that targets a certain domain or industrial application. In this report, we aim to adapt SAM as the pseudo-label generation block in WSSS and examine its performance. We note that the training approach used for SAM includes pixel labeled images, which means that it cannot be regarded a weakly supervised approach. 
However, given the availability of a foundation approach for segmentation, we aim to compare the performances of traditional WSSS approaches with a foundation approach that is given the WSSS labels for the particular dataset. This is not a 'fair' comparison, but we believe it is a practically valuable comparison. Further, the Segment Anything paper states explicitly that at the start of the first training stage, "SAM was trained using common public segmentation datasets". The paper does not give further information so the training may have had exposure to fully labeled images that are similar to some among the standard datasets for WSSS. However, given that the SAM paper evaluates a zero-shot instance segmentation on COCO, we infer that it is unlikely that COCO data was used for training. We show quantitative and qualitative comparisons with state-of-the-art methods on image-level WSSS methods. The main observation is that SAM performs favorably in most unprompted settings, while it fails in certain cases due to the issue of semantic obscurity. It suggests that SAM could potentially supersede the existing WSSS process as a practical approach. However, note that the question remains on how to effectively and efficiently apply it in real-world applications. Further, as stated, SAM does not qualify as a weakly-supervised approach. ## 2 SAM for Weakly-Supervised Semantic Segmentation SAM is a powerful segmentation model comprised of a large image encoder, a prompt encoder, and a lightweight mask decoder. We refer readers to [17] on architectural details. The model currently supports point or bounding boxes as input prompts. We empirically confirm that it performs well with such types of prompts, highlighting its ability in handling weak supervision in these formats. However, to date, SAM does not support text as prompts -- the form of supervision used in image-level WSSS. To this end, we enlist Grounded-Segment-Anything2 for our experiments on WSSS, which is an implementation that combines off-the-shelf image grounding methods with SAM. Figure 1: SAM generated pseudo-labels compared to the ground-truth in PASCAL VOC. In most cases, SAM performs closely to the human annotations. Figure 2: We observe that in some cases SAM performs better than the human annotated ground-truth. Notably, SAM is able to capture crisp boundaries, more detailed structures and finer-grained semantic classes. Specifically, for every image, we concatenate the class labels in word(s) with full stops as the text prompt and generate grounded bounding boxes via Grounded-DINO [30]. Note that the text prompt is firstly tokenized into sub-words and grounding is then performed at the token level. Therefore, if a token (sub-word) is grounded in an image, we consider it as the grounding of its parent word. This approach enables us to map an arbitrary grounding back to one of the pre-defined class labels. For example, class "potted plant" is tokenized into four tokens: "pot", "#ted", "#pl" and "#ant", where the special character "#" signifies that the corresponding token is an appending sub-word of the preceding token. The bounding box is assigned to the class "potted plant" whenever any of the four tokens is grounded. Once the grounded bounding boxes are generated, we feed them into SAM to produce instance masks. Finally, we join the instance segmentation masks into semantic segmentation pseudo-labels.
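A hedged sketch of the pseudo-label generation pipeline described above. The `ground_boxes` and `sam_segment` callables are hypothetical wrappers standing in for Grounding-DINO and SAM (their actual APIs are not reproduced here), and the phrase-to-class matching is a simplified stand-in for the token-level mapping.

```python
from typing import Callable, Dict, List, Tuple

import numpy as np

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def build_pseudo_label(
    image: np.ndarray,
    image_classes: List[str],                                           # image-level labels
    class_to_id: Dict[str, int],
    ground_boxes: Callable[[np.ndarray, str], List[Tuple[str, Box]]],   # hypothetical grounding wrapper
    sam_segment: Callable[[np.ndarray, Box], np.ndarray],               # hypothetical SAM wrapper (bool mask)
) -> np.ndarray:
    """Image-level labels -> grounded boxes -> SAM instance masks -> semantic pseudo-label."""
    # 1. Class names joined with full stops form the text prompt.
    prompt = ". ".join(image_classes) + "."
    h, w = image.shape[:2]
    label = np.zeros((h, w), dtype=np.int64)  # 0 = background
    for phrase, box in ground_boxes(image, prompt):
        # 2. Map the grounded phrase back to its parent class (simplified sub-word matching).
        key = phrase.lower().replace(" ", "")
        parent = next((c for c in image_classes
                       if key in c.lower().replace(" ", "") or c.lower().replace(" ", "") in key),
                      None)
        if parent is None:
            continue
        # 3. Each grounded box prompts SAM for an instance mask.
        mask = sam_segment(image, box)
        # 4. Instance masks are joined into a single semantic pseudo-label map.
        label[mask] = class_to_id[parent]
    return label
```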
## 3 Experimental Results ### Experimental setup DatasetsWe apply SAM on two common WSSS datasets in our experiments, PASCAL VOC and MS-COCO. PASCAL VOC has 10,582 training and 1,449 validation images. MS-COCO is relatively larger-scaled, with 80k training and 40k validation images. Implementation DetailsWe use Grounded-DINO [30] to generate bounding boxes from text prompts with Swin-T as the grounding backbone. For SAM, we use the ViT-H model to extract pseudo segmentation masks from bounding boxes. For the final semantic segmentation network, we use Deeplab-v2 [4] with ResNet-101 [12] as backbone. For MS-COCO, the model is pre-trained on ImageNet1k [9]. For PASCAL VOC, the model is pre-trained on COCO ground-truth following the setting of [4]. state-of-the-art methods with clear margins. We note that the segmentation performances are approaching the fully-supervised results on PASCAL VOC. ### Ms-Coco We show pseudo-label and segmentation performance comparison on MS-COCO in Table 3 and Table 4. Likewise in the PASCAL VOC, The SAM-based method achieves an appreciable segmentation mIoU of 55.6, which surpasses existing methods immensely. Compared to PASCAL VOC, MS-COCO is a lager dataset with more semantic classes and complex images that include multiple objects, this encouraging result further demonstrates that SAM could handle annotations of complex scenes. ## 5 Discussion ### Why Don't We Directly Use SAM? One might question the need for a Weakly Supervised Semantic Segmentation (WSSS) pipeline given the impressive performance of SAM. However, we argue that a WSSS pipeline is still beneficial for certain application scenarios. For instance, in many narrow-domain use cases, only specific semantic classes are of interest, which renders the open-vocabulary setup unnecessary and even prone to introducing errors. In contrast, a WSSS pipeline can be trained to respond to only a selected set of classes, which is more favored. Moreover, industrial or mobile environments that are often resource-limited and time-sensitive cannot accommodate the use of SAM, with its considerable VRAM usage and low inference speed. In such cases, training a specific semantic segmentation network in the WSSS fashion for the downstream tasks remains meaningful. ### Pseudo Label Quality SAM can generate high-quality segmentation masks as shown in Fig. 1. In addition, we observe that the pseudo-labels generated via SAM are more accurate than the manual annotations in some cases. As shown in Fig. 2, human annotations often contain imprecise polygons and ignore fine-grained details. In contrast, SAM is able to capture crisp boundaries, more detailed structures, and finer-grained semantic classes. We conjecture the possibility that SAM can be used to guide, or even correct human annotations when building future datasets. \begin{table} \begin{tabular}{l l c c} \hline \hline Methods & Venue & w/ saliency & Val \\ \hline AuxSegNet [48] & ICCV2021 & ✓ & 33.9 \\ EPS [11] & CVPR2022 & ✓ & 35.7 \\ L2G [14] & CVPR2022 & ✓ & 44.2 \\ \hline Wang et al. [43] & IJCV2020 & & 27.7 \\ Ru et al. [34] & CVPR2022 & & 38.9 \\ SEAM [44] & CVPR2020 & & 31.9 \\ CONTA [51] & NeurIPS2020 & & 32.8 \\ CDA [36] & ICCV2021 & & 33.2 \\ Ru et al. 
[33] & IJCV2022 & & 36.2 \\ URN [26] & AAAI2022 & & 41.5 \\ MCTformer [49] & CVPR2022 & & 42.0 \\ OCR [7] & CVPR2023 & & 42.5 \\ ESOL [24] & NeurIPS2022 & & 42.6 \\ SIPE [5] & CVPR2022 & & 43.6 \\ RIB [19] & NeurIPS2020 & & 43.8 \\ CLIP-ES [29] & CVPR2023 & & 45.4 \\ \hline **SAM** & & & **55.6** \\ \hline \hline \end{tabular} \end{table} Table 4: Segmentation performance comparison of WSSS methods on MS COCO. w/ saliency: the method adopts extra saliency information. Best number is in bold. \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & w/ saliency & Pseudo \\ \hline AdvCAM (CVPR2021) [20] & & 35.8 \\ IRN (CVPR2018) [1] & & 42.4 \\ ReCAM (CVPR2022) [6] & ✓ & 46.3 \\ \hline **SAM** & & **66.8** \\ \hline \hline \end{tabular} \end{table} Table 3: Performances of pseudo segmentation labels on MS COCO _train_ set. SAM pseudo masks outperform previous methods by a significant margin. Best number is in bold. \begin{table} \begin{tabular}{l c c} \hline \hline Methods & w/ saliency & Val & Test \\ \hline NSRM [50] & CVPR2021 & ✓ & 70.4 & 70.2 \\ InferCam [39] & WACV2022 & ✓ & 70.8 & 71.8 \\ EDAM [46] & CVPR2021 & ✓ & 70.9 & 70.6 \\ EPS [23] & CVPR2021 & ✓ & 71.0 & 71.8 \\ DRS [16] & AAAI2021 & ✓ & 71.2 & 71.4 \\ L2G [14] & CVPR2022 & ✓ & 72.1 & 71.7 \\ Du et al. [10] & CVPR2022 & ✓ & 72.6 & 73.6 \\ \hline PSA [2] & CVPR2018 & & 61.7 & 63.7 \\ SEAM [44] & CVPR2020 & & 64.5 & 65.7 \\ CDA [36] & ICCV2021 & & 66.1 & 66.8 \\ ECS-Net [37] & ICCV2021 & & 66.6 & 67.6 \\ Du et al. [10] & CVPR2022 & & 67.7 & 67.4 \\ CPN [52] & ICCV2021 & & 67.8 & 68.5 \\ AdvCAM [20] & CVPR2021 & & 68.1 & 68.0 \\ Kweon et al. [18] & ICCV2021 & & 68.4 & 68.2 \\ ReCAM [6] & CVPR2022 & & 68.5 & 68.4 \\ SIPE [5] & CVPR2022 & & 68.8 & 69.7 \\ URN [26] & AAAI2022 & & 69.5 & 69.7 \\ ESOL [24] & NeurIPS2022 & & 69.9 & 69.3 \\ PMM [27] & ICCV2021 & & 70.0 & 70.5 \\ VWL-L [33] & IJCV2022 & & 70.6 & 70.7 \\ Lee et al. [21] & CVPR2022 & & 70.7 & 70.1 \\ MCTformer [49] & CVPR2022 & & 71.9 & 71.6 \\ OCR [7] & CVPR2023 & & 72.7 & 72.0 \\ CLIP-ES [29] & CVPR2023 & & 73.8 & 73.9 \\ \hline **SAM** & & & **77.2** & **77.1** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison of WSSS methods on PASCAL VOC 2012 _val_ and _test_ sets. w/ saliency: the method adopts extra saliency information. Best numbers are in bold. ### The Issue of Semantic Obscurity When generating pseudo-labels, we encounter the issue of "semantic obscurity", which arises from the subjectivity of human annotations. A typical example is shown in Fig. 3, where ground-truth annotations in PASCAL VOC for "dining table" always include all objects that are placed on the table -- "plate", "bowl", and "food" etc. However, the output of SAM stays true to the concept of "table" and contains nothing else. We suspect the issue is caused by the discrepancy between PASCAL VOC and the SAM SA-1B dataset, particularly the granularity of the annotations. Granted, one could argue that the PASCAL VOC dataset is flawed in this regard. Nevertheless, to assess the true performance of SAM on said dataset, we attempt to relieve this issue by including some assist words, such as "bowl", "plate", "food", "fruit", "glass", and "dishes" for the affected "dining table" class; and likewise, "halter" and "saddle" for the "horse" class. We derive these words through manual examination. Note that this mitigation strategy is by no means comprehensive, nor generalizable to datasets of larger scales (e.g., MS-COCO), as it might interfere with other classes. 
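The assist-word mitigation described above amounts to a small prompt-expansion table; the helper below is illustrative, with the word lists taken directly from the text.

```python
# Any grounding of an assist word is mapped back to its parent class before mask generation.
ASSIST_WORDS = {
    "dining table": ["bowl", "plate", "food", "fruit", "glass", "dishes"],
    "horse": ["halter", "saddle"],
}


def expand_prompt(image_classes):
    """Append assist words to the text prompt and remember each word's parent class."""
    words, parent_of = [], {}
    for cls in image_classes:
        words.append(cls)
        parent_of[cls] = cls
        for w in ASSIST_WORDS.get(cls, []):
            words.append(w)
            parent_of[w] = cls
    return ". ".join(words) + ".", parent_of


prompt, parent_of = expand_prompt(["dining table", "person"])
print(prompt)  # "dining table. bowl. plate. food. fruit. glass. dishes. person."
```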
A systematic approach to addressing the semantic obscurity issue would be to construct hierarchical-structured semantic classes with better prompts. We leave this to future work. ### Conclusion and Future Work This report presents a preliminary investigation into the application of SAM as a foundation model in Weakly Supervised Semantic Segmentation (WSSS). Results from experiments conducted on two datasets demonstrate that SAM can yield competitive performance without the need for model fine-tuning, while for existing WSSS methods, the classification re-training and pseudo-label generation are burdensome necessities. Notably, we are not aiming to achieve SOTA results in a fair comparison, as SAM is trained with massive amounts of data and a training process that includes fully labeled images. This research concentrates on the direct application of the SAM as a foundation model, which enables a streamlined and simplified approach to WSSS. Moving forward, a potential direction would be to adapt SAM into an automatic labeling tool for various real-world applications. Specifically, we shall explore the use of hierarchical-structured semantic classes and better prompts, which address the issue of semantic obscurity. We might also wish to investigate using SAM for segmenting "stuff" (i.e., non-object, background) classes such as "sky", "sea" and "road", in order to improve the overall scene understanding ability.
2310.02239
MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens
The effectiveness of Multimodal Large Language Models (MLLMs) demonstrates a profound capability in multimodal understanding. However, the simultaneous generation of images with coherent texts is still underdeveloped. Addressing this, we introduce a novel interleaved vision-and-language generation method, centered around the concept of ``generative vokens". These vokens serve as pivotal elements contributing to coherent image-text outputs. Our method is marked by a unique two-stage training strategy for description-free multimodal generation, which does not necessitate extensive descriptions of images. We integrate classifier-free guidance to enhance the alignment of generated images and texts, ensuring more seamless and contextually relevant multimodal interactions. Our model, MiniGPT-5, exhibits substantial improvement over the baseline models on multimodal generation datasets, including MMDialog and VIST. The human evaluation shows MiniGPT-5 is better than the baseline model on more than 56\% cases for multimodal generation, highlighting its efficacy across diverse benchmarks.
Kaizhi Zheng, Xuehai He, Xin Eric Wang
2023-10-03T17:49:04Z
http://arxiv.org/abs/2310.02239v3
# MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens ###### Abstract Large Language Models (LLMs) have garnered significant attention for their advancements in natural language processing, demonstrating unparalleled prowess in text comprehension and generation. Yet, the simultaneous generation of images with coherent textual narratives remains an evolving frontier. In response, we introduce an innovative interleaved vision-and-language generation technique anchored by the concept of "generative vokens", acting as the bridge for harmonized image-text outputs. Our approach is characterized by a distinctive two-staged training strategy focusing on description-free multimodal generation, where the training requires no comprehensive descriptions of images. To bolster model integrity, classifier-free guidance is incorporated, enhancing the effectiveness of tokens on image generation. Our model, MiniGPT-5, exhibits substantial improvement over the baseline Divter model on the MMDialog dataset and consistently delivers superior or comparable multimodal outputs in human evaluations on the VIST dataset, highlighting its efficacy across diverse benchmarks. ## 1 Introduction In the recent development of larger-scale vision-and-language models, multimodal feature integration is not just a evolving trend but a critical advancement shaping a wide array of applications, from multimodal dialogue agents to cutting-edge content creation tools. With the surge in research and development in this domain, vision-and-language models such as (Wu et al., 2023; Li et al., 2023; Tsimpoukelli et al., 2021; Alayrac et al., 2022) are on the brink of an era where they are expected to comprehend and generate both text and image content seamlessly. This multi-faceted ability is crucial, as it fosters enhanced interactions across various domains like virtual reality, media, and e-commerce. Essentially, the task is to enable models to coherently synthesize, recognize, and respond using both visual and textual modalities, harmonizing the information flow and creating cohesive narratives. However, as we tread the path towards blending textual and visual modalities and achieving the interleaved vision-and-language generation, as illustrated in 1, we recognize that it is driven by the pressing need for more integrated and fluid multimodal interactions in large language models. However, this journey is riddled with multiple challenges. First, while the current state-of-the-art Large Language Models (LLMs) (OpenAI, 2023; Chiang et al., 2023; Ouyang et al., 2022) excel in understanding text and processing text-image pairs, they falter in the nuanced art of generating images. Second, moving away from conventional tasks that benefited from exhaustive image descriptions, the emerging interleaved vision-and-language tasks (Sharma et al., 2018) lean heavily on topic-centric data, often skipping on thorough image descriptors (Huang et al., 2016). Even after being trained on massive datasets, it is challenging to align generated text with corresponding images. Lastly, as we push the boundaries with LLMs, the large memory requirements beckon us to devise more efficient strategies, especially in downstream tasks. Addressing these challenges, we present MiniGPT-5, an innovative interleaved vision-and-language generation technique anchored by the concept of "generative vokens". 
By amalgamating the Stable Diffusion mechanism with LLMs through special visual tokens (Tan & Bansal, 2020) - "generative vokens", we develop a new approach for multimodal generation. Meanwhile, our proposed two-stage training methodology underlines the importance of a description-free foundational stage, prepping the model to thrive even in data-scarce scenarios. Our generic stages, free from domain-specific annotations, make our solution distinct from existing works. To ensure that the generated text and images are in harmony, our dual-loss strategy comes into play, further enhanced by our innovative generative voken approach and classifier-free guidance. Our parameter-efficient fine-tuning strategy optimizes training efficiency and addresses memory constraints. Building on these techniques, our work signifies a transformative approach. As shown in Figure 2, using ViT (Vision Transformer) and Qformer (Li et al., 2023b) along with the large language models, we adapt multimodal inputs into generative vokens, seamlessly combined with the high-resolution Stable Diffusion 2.1 model (Rombach et al., 2022b) for context-aware image generation. Incorporating images as auxiliary input with instruction tuning approaches and pioneering both the text and image generation loss, we amplify the synergy between text and visuals. In summary, our contributions are primarily threefold: * We propose to use multimodal encoders representing a novel and generic technique that has proved more effective than LLM and also inversion to generative vokens, and combine it with Stable Diffusion to generate interleaved vision-and-language outputs (multimodal language model that can do multimodal generation). * We highlight a new two-staged training strategy for the description-free multimodal generation. The unimodal alignment stage harvests the high-quality text-aligned visual features from large text-image pairs. The multimodal learning stage ensures the visuals and text prompt can well coordinate for generation. The inclusion of classifier-free guidance during the training phase further refines generation quality. * Compared with other multimodal generation models, we achieved state-of-the-art performance on the CC3M dataset. We also established unprecedented benchmarks on prominent datasets, including VIST and MMDialog. Figure 1: MiniGPT-5 is a unified model for interleaved vision-and-language comprehension and generation. Besides the original multimodal comprehension and text generation abilities, MiniGPT-5 can provide appropriate, coherent multimodal outputs. Related Work Text-to-Image GenerationTo transform textual descriptions into their corresponding visual representations, text-to-image models (Reed et al., 2016; Dhariwal and Nichol, 2021; Saharia et al., 2022; Rombach et al., 2022b;a; Gu et al., 2023) employ complex architectures and sophisticated algorithms, bridging the gap between textual information and visual content. These models are adept at interpreting the semantics of input text and translating them into coherent and pertinent images. A notable recent contribution in this field is Stable Diffusion 2 (Rombach et al., 2022b), which employs a diffusion process to generate conditional image features and subsequently reconstructs images from these features. Our research aims to leverage this pre-trained model, enhancing its capabilities to accommodate both multimodal input and output. 
Multimodal Large Language ModelsAs Large Language Models (LLMs) become increasingly impactful and accessible, a growing body of research has emerged to extend these pretrained LLMs into the realm of multimodal comprehension tasks (Zhu et al., 2023; Li et al., 2023; Dai et al., 2023; OpenAI, 2023; Li et al., 2023; Alayrac et al., 2022). For example, to reproduce the impressive multimodal comprehension ability in GPT-4 (OpenAI, 2023), MiniGPIT-4 (Zhu et al., 2023) proposes a projection layer to align pretrained vision component of BLIP- (Li et al., 2023b) with an advanced open-source large language model, Vicuna (Chiang et al., 2023). In our work, we utilize the MiniGPT-4 as the base model and extend the model's capabilities to multimodal generation. Multimodal Generation with Large Language ModelsTo augment the LLM's capabilities in seamlessly integrating vision and language generation, recent studies have introduced a variety of innovative methods (Ge et al., 2023; Sun et al., 2021; Koh et al., 2023; Sun et al., 2023; Yu et al., 2023). For instance, CM3Leon (Yu et al., 2023) presents a retrieval-augmented, decoder-only architecture designed for both text-to-image and image-to-text applications. Similarly, Emu (Sun et al., 2023b) employs the pretrained EVA-CLIP (Sun et al., 2023a) model to convert images into one-dimensional features and fine-tunes the LLAMA (Touvron et al., 2023) model to generate cohesive text and image features through autoregressive techniques. On the other hand, both GILL (Koh et al., 2023) and SEED (Ge et al., 2023) explore the concept of mapping vokens into the text feature space of a pretrained Stable Diffusion model; GILL employs an encoder-decoder framework, while SEED utilizes a trainable Q-Former structure. In contrast to these approaches, our model takes a more direct route by aligning voken features with visual information. Additionally, we introduce several training strategies aimed at enhancing both image quality and contextual coherence. ## 3 Method In order to endow large language models with multimodal generation capabilities, we introduce a structured framework that integrates pretrained multimodal large language models and text-to-image generation models. To address the discrepancies across model domains, we introduce special visual tokens--termed "generative vokens"--that are able to direct training on raw images. Moreover, we advance a two-stage training method, coupled with a classifier-free guidance strategy, to further enhance the quality of generation. Subsequent sections will provide a detailed exploration of these elements. ### Multimodal Input Stage Recent advancements in multimodal large language models, such as MiniGPT-4, have primarily concentrated on multimodal comprehension, enabling the processing of images as sequential input. To expand their capabilities to multimodal generation, we introduce generative vokens designed for outputting visual features. Additionally, we employ cutting-edge, parameter-efficient fine-tuning techniques within the Large Language Model (LLM) framework for multimodal output learning. A more detailed introduction to these developments is provided in the following paragraphs. Multimodal Encoding:Each text token is embedded into a vector \(e_{\text{text}}\in\mathbf{R}^{d}\), while the pretrained visual encoder transforms each input image into the feature \(e_{\text{img}}\in\mathbf{R}^{32\times d}\). These embeddings are concatenated to create the input prompt features. 
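As a simple illustration of the multimodal encoding step just described, the sketch below concatenates the 32-token image feature (from the frozen ViT/Q-Former projection) with the text token embeddings to form the prompt sequence fed to the LLM; the ordering and the hidden size `d` are assumptions for illustration (in an interleaved prompt the image features are placed at the image's position).

```python
import torch


def build_prompt_embeddings(text_embeds: torch.Tensor,
                            image_embeds: torch.Tensor) -> torch.Tensor:
    """Concatenate image features and text token embeddings along the sequence axis.

    text_embeds:  (batch, n_text_tokens, d)
    image_embeds: (batch, 32, d) -- one 32-token feature per input image
    """
    return torch.cat([image_embeds, text_embeds], dim=1)


d = 4096  # assumed LLM hidden size
prompt = build_prompt_embeddings(torch.randn(1, 16, d), torch.randn(1, 32, d))
print(prompt.shape)  # torch.Size([1, 48, 4096])
```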
Adding Vokens in LLM:Since the original LLM's \(V\) vocabulary only includes the textual tokens, we need to construct a bridge between the LLM and the generative model. Therefore, we introduce a set of special tokens \(V_{\text{img}}=\{[\text{IMG}1],[\text{IMG}2],\dots,[\text{IMG}n]\}\) (default \(n=8\)) as generative vokens into the LLM's vocabulary \(V\). The LLM's output hidden state for these vokens is harnessed for subsequent image generation, and the positions of these vokens can represent the insertion of the interleaved images. With all pretrained weights \(\theta_{\text{pretrained}}\) in MiniGPT-4 fixed, the trainable parameters include extra input embedding \(\theta_{\text{token,input}}\) and output embedding \(\theta_{\text{token,output}}\). Parameter-Efficient Fine-Tuning (PEFT):Parameter-efficient fine-tuning (PEFT) (Houlsby et al., 2019; Hu et al., 2021; Li and Liang, 2021) is critical in training large language models (LLMs). Despite this, its application in multimodal settings remains largely unexplored. We use PEFT over the MiniGPT-4 (Zhu et al., 2023) encoder to train a model to understand instructions or prompts better, enhancing its performance in novel and even zero-shot tasks. More specifically, we tried prefix tuning (Li and Liang, 2021) and LoRA over the entire language encoder --Vicuna (Chiang et al., 2023) used in MiniGPT-4. Combined with the instruction tuning, it notably amplifies multimodal generation performance across various datasets, such as VIST and MMDialog. ### Multimodal Output Generation To accurately align the generative tokens with the generative model, we formulate a compact mapping module for dimension matching and incorporate several supervisory losses, including text space loss and latent diffusion model loss. The text space loss assists the model in learning the correct positioning of tokens, while the latent diffusion loss directly aligns the tokens with the appropriate visual features. Since the generative vokens' features are directly guided by images, our method does not need comprehensive descriptions of images, leading to description-free learning. Text Space Generation:We first jointly generate both text and vokens in the text space by following the casual language modeling. During the training, we append the vokens to the positions of ground truth images and train the model to predict vokens within text generation. Specifically, the generated tokens are represented as \(T=\{t_{1},t_{2},\dots,t_{m}\}\), where \(t_{i}\in V\cup V_{\text{img}}\), and the causal language modeling loss is defined as: \[L_{\text{text}}:=-\sum_{i=1}^{m}\log p(t_{i}|e_{\text{text}},e_{\text{img}}, t_{1},\dots,t_{i-1};\theta_{\text{pretrained}},\theta_{\text{token,input}},\theta_{\text{token,output}}),\text{ where }t_{i}\in V\cup V_{\text{img}} \tag{1}\] Figure 2: The overview structure of MiniGPT-5 pipeline. We leverage the pretrained multimodal large language model (MiniGPT-4) and text-to-image generation model (Stable Diffusion 2.1) to create a unified multimodal generation pipeline. The input image encoder includes a ViT, Gformer, and linear layer, pretrained by MiniGPT-4. The orange blocks include learnable parameters, while the blue blocks are fixed during training. More details can be find in Section 3. **Mapping Voken Features for Image Generation:** Next, we align the output hidden state \(h_{\text{voken}}\) with the text conditional feature space of the text-to-image generation model. 
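A minimal sketch, using Hugging Face-style APIs, of registering the n = 8 generative vokens in the LLM vocabulary as described above. The checkpoint path is a placeholder, and restricting gradients to only the newly added embedding rows (as the paper does, keeping all pretrained weights frozen) would require a gradient mask or hook, which is omitted here for brevity.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/vicuna-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Register [IMG1] ... [IMG8] as additional vocabulary items.
vokens = [f"[IMG{i}]" for i in range(1, 9)]
tokenizer.add_tokens(vokens, special_tokens=True)
model.resize_token_embeddings(len(tokenizer))  # adds new input/output embedding rows

# Freeze the pretrained weights; the (whole) embedding tables are re-enabled here,
# whereas in the paper only the newly added rows and the PEFT adapters are trained.
for p in model.parameters():
    p.requires_grad = False
model.get_input_embeddings().weight.requires_grad = True
model.get_output_embeddings().weight.requires_grad = True
```

The hidden states produced at the voken positions, \(h_{\text{voken}}\), are what the feature mapper described next consumes.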
To map the voken feature \(h_{\text{voken}}\) to a feasible image generation conditional feature \(e_{\text{text,encoder}}\in\mathbf{R}^{L\times\hat{d}}\) (where \(L\) is the maximum input length of the text-to-image generation model's text encoder, and \(\hat{d}\) is the dimension of that encoder's output features), we construct a feature mapper module, consisting of a two-layer MLP model \(\theta_{\text{MLP}}\), a four-layer encoder-decoder transformer model \(\theta_{\text{enc-dec}}\), and a learnable decoder feature sequence \(q\). The mapped feature \(\hat{h}_{\text{voken}}\) is then given by:

\[\hat{h}_{\text{voken}}:=\theta_{\text{enc-dec}}(\theta_{\text{MLP}}(h_{\text{voken}}),q)\in\mathbf{R}^{L\times\hat{d}} \tag{2}\]

**Image Generation with Latent Diffusion Model (LDM):** To generate appropriate images, the mapped feature \(\hat{h}_{\text{voken}}\) is used as a conditional input in the denoising process. Intuitively, \(\hat{h}_{\text{voken}}\) should represent the corresponding text features that guide the diffusion model to generate the ground-truth image. We employ the loss of the latent diffusion model (LDM) for guidance. During training, the ground-truth image is first converted to the latent feature \(z_{0}\) through the pretrained VAE. Then, we obtain the noisy latent feature \(z_{t}\) by adding noise \(\epsilon\) to \(z_{0}\). A pretrained U-Net model \(\epsilon_{\theta}\) is used to calculate the conditional LDM loss as:

\[L_{LDM}:=\mathbb{E}_{\epsilon\sim\mathcal{N}(0,1),t}\left[\left\lVert\epsilon-\epsilon_{\theta}\left(z_{t},t,\hat{h}_{\text{voken}}\right)\right\rVert_{2}^{2}\right] \tag{3}\]

This comprehensive approach ensures a coherent understanding and generation of both textual and visual elements, leveraging the capabilities of pretrained models, specialized tokens, and innovative training techniques.

### Training Strategy

Given the non-negligible domain shift between the text and image domains, we observe that direct training on a limited interleaved text-and-image dataset can result in misalignment and diminished image quality. Consequently, we adopt two distinct training strategies to mitigate this issue. The first strategy encompasses the incorporation of the classifier-free guidance (Ho and Salimans, 2022) technique, which amplifies the effectiveness of the generative vokens throughout the diffusion process. The second strategy unfolds in two stages: an initial pre-training stage focusing on coarse feature alignment, followed by a fine-tuning stage dedicated to intricate feature learning.

**Classifier-free Guidance (CFG):** To enhance the coherence between the generated text and images, we first leverage the idea of classifier-free guidance for multimodal generation. Classifier-free guidance was introduced for the text-to-image diffusion process: the generation model \(P_{\theta}\) achieves improved conditional results by training on both conditional and unconditional generation with conditioning dropout. In our context, the objective is to accentuate the trainable condition \(h_{\text{voken}}\), while the generation model is kept fixed. During training, we replace \(h_{\text{voken}}\) with zero features \(h_{0}\in\mathbf{0}^{n\times d}\) with a 10% probability, obtaining the unconditional feature \(\hat{h}_{0}=\theta_{\text{enc-dec}}(\theta_{\text{MLP}}(h_{0}),q)\).
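The following is a minimal PyTorch sketch of the feature mapper of Eq. (2) and the conditional LDM objective of Eq. (3). The sizes (number of vokens \(n\), LLM width \(d\), conditioning length \(L\) and width \(\hat{d}\)), the choice of activation, and the placeholder noising step are illustrative assumptions; the frozen U-Net, VAE, and noise scheduler are treated as black boxes rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n, d, L, d_hat = 8, 4096, 77, 1024   # assumed sizes, not the paper's exact config

class FeatureMapper(nn.Module):
    """Eq. (2): a two-layer MLP followed by a 4-layer encoder-decoder transformer
    whose decoder attends from a learnable query sequence q of length L."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d_hat), nn.GELU(), nn.Linear(d_hat, d_hat))
        self.enc_dec = nn.Transformer(d_model=d_hat, nhead=8, num_encoder_layers=4,
                                      num_decoder_layers=4, batch_first=True)
        self.q = nn.Parameter(torch.randn(1, L, d_hat))

    def forward(self, h_voken):              # (batch, n, d) -> (batch, L, d_hat)
        src = self.mlp(h_voken)
        return self.enc_dec(src, self.q.expand(h_voken.size(0), -1, -1))

def ldm_loss(unet, h_mapped, z0, t, noise):
    """Eq. (3): the frozen U-Net predicts the noise added to the clean latent z0,
    conditioned on the mapped voken features. `unet` is an assumed callable and
    the noising below is a placeholder for the real scheduler, which scales z0
    and noise according to the timestep t."""
    z_t = z0 + noise
    return F.mse_loss(unet(z_t, t, h_mapped), noise)

mapper = FeatureMapper()
print(mapper(torch.randn(2, n, d)).shape)    # torch.Size([2, 77, 1024])
```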
During inference, \(\hat{h}_{0}\) serves as negative prompting, and the refined denoising process is expressed as:

\[\log\widehat{\mathrm{P}_{\theta}}\left(\epsilon_{t}\mid z_{t+1},\hat{h}_{\text{voken}},\hat{h}_{0}\right)=\log\mathrm{P}_{\theta}\left(\epsilon_{t}\mid z_{t+1},\hat{h}_{0}\right)+\gamma\left(\log\mathrm{P}_{\theta}\left(\epsilon_{t}\mid z_{t+1},\hat{h}_{\text{voken}}\right)-\log\mathrm{P}_{\theta}\left(\epsilon_{t}\mid z_{t+1},\hat{h}_{0}\right)\right) \tag{4}\]

**Two-stage Training Strategy:** Recognizing the non-trivial domain shift between pure-text generation and text-image generation, we propose a two-stage training strategy: the Unimodal Alignment Stage (**UAS**) and the Multimodal Learning Stage (**MLS**). Initially, we align the voken features with the image generation features on single text-image pair datasets, such as CC3M, where each data sample contains only one text-image pair and the text is usually the caption of the image. During this stage, we use captions as the LLM input, enabling the LLM to generate vokens. Since these datasets include descriptive information for the images, we also introduce an auxiliary loss to aid voken alignment, minimizing the distance between the generative feature \(\hat{h}_{\text{voken}}\) and the caption feature from the text encoder \(\tau_{\theta}\) of the text-to-image generation model:

\[L_{\text{CAP}}:=\text{MSE}(\hat{h}_{\text{voken}},\tau_{\theta}(c)) \tag{5}\]

The unimodal alignment stage loss is expressed as \(L_{\text{UAS}}=\lambda_{1}*L_{\text{text}}+\lambda_{2}*L_{\text{LDM}}+\lambda_{3}*L_{\text{CAP}}\), with selected values \(\lambda_{1}=0.01,\lambda_{2}=1,\lambda_{3}=0.1\) chosen to rescale the loss terms into a similar numerical range.

After the unimodal alignment stage, the model is capable of generating images for single text descriptions but struggles with interleaved vision-and-language generation, which includes multiple text-image pairs and requires complicated reasoning for both text and image generation. To address this, in the multimodal learning stage, we further fine-tune our model with PEFT parameters on interleaved vision-and-language datasets, such as VIST, where each data sample consists of several steps, each containing a text-image pair, and the texts are sequentially related. During this stage, we construct three types of tasks from the dataset, encompassing (1) text-only generation: given the next image, generating the related text; (2) image-only generation: given the next text, generating the related image; and (3) multimodal generation: generating a text-image pair from the given context. The multimodal learning stage loss is given by \(L_{\text{MLS}}=\lambda_{1}*L_{\text{text}}+\lambda_{2}*L_{\text{LDM}}\). More implementation details can be found in appendix A.

## 4 Experiments

To assess the efficacy of our model, we conducted a series of evaluations across multiple benchmarks. These experiments aim to address several key questions: (1) Can our model generate plausible images and reasonable texts? (2) How does our model's performance stack up against other state-of-the-art models in both single-turn and multi-turn interleaved vision-and-language generation tasks? (3) What impact does the design of each module have on overall performance? In the subsequent subsections, we delve into the datasets and experimental settings used for these evaluations, followed by a comprehensive analysis of our model's performance. We use three datasets: CC3M (Sharma et al., 2018), VIST (Huang et al., 2016), and MMDialog (Feng et al., 2022).
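A compact sketch of how the conditioning dropout, the CFG combination of Eq. (4), and the stage-specific loss weighting could be wired together is given below. The guidance scale value and the function names are assumptions made for illustration; only the loss weights \(\lambda_{1}=0.01\), \(\lambda_{2}=1\), \(\lambda_{3}=0.1\) and the 10% dropout probability come from the text above.

```python
import torch

def maybe_drop_condition(h_voken: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """During training, replace the voken hidden states with zeros with
    probability p (conditioning dropout), so an unconditional feature is learned."""
    return torch.zeros_like(h_voken) if torch.rand(()).item() < p else h_voken

def cfg_noise_pred(unet, z_t, t, h_voken_mapped, h_null_mapped, gamma: float = 7.5):
    """Classifier-free guidance at inference time (Eq. 4): combine the conditional
    and 'negative' (zero-voken) noise predictions. `unet` is an assumed callable
    and gamma=7.5 is an assumed guidance scale, not a value from the paper."""
    eps_uncond = unet(z_t, t, h_null_mapped)
    eps_cond = unet(z_t, t, h_voken_mapped)
    return eps_uncond + gamma * (eps_cond - eps_uncond)

def stage_loss(l_text, l_ldm, l_cap=None, stage: str = "UAS"):
    """Loss weighting for the two training stages, using the weights stated in the text."""
    if stage == "UAS":                       # unimodal alignment stage
        return 0.01 * l_text + 1.0 * l_ldm + 0.1 * l_cap
    return 0.01 * l_text + 1.0 * l_ldm       # multimodal learning stage (MLS)
```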
More details about the datasets and data format can be found in appendix B.

### Experimental Settings

**Baselines:** For a comprehensive evaluation of our performance in multimodal generation, we conducted comparative analyses with several prominent baseline models: the Fine-tuned Unimodal Generation Model, GILL, and Divter.

* **Fine-tuned Unimodal Generation Model**: To facilitate fair comparisons in both image and text generation, we fine-tuned two separate models, Stable Diffusion 2.1 and MiniGPT-4, utilizing the VIST dataset. Within the Stable Diffusion 2.1 model, the U-Net parameters were unfrozen. For MiniGPT-4's LLM part, LoRA parameters were fine-tuned.
* **GILL** (Koh et al., 2023)1: GILL is a recent innovation that allows the LLM to generate vokens using a pre-trained text-to-image generation model for single-image generation. Unlike our method, which employs conditional generation loss guidance, GILL minimizes the Mean Squared Error (MSE) loss between the text-to-image text encoding feature and the voken features, similar to \(L_{CAP}\) in our approach. Since their method requires image descriptions for training, we compare with it only on the unimodal alignment stage. Footnote 1: To ensure fair comparisons, given the variations in the valid data within the CC3M dataset and the original use of Stable Diffusion 1.5 in GILL, we made adjustments. Specifically, we switched their text-to-image generation model to Stable Diffusion 2.1 and retrained it on our specific CC3M data, following the guidelines in their official implementation. ([https://github.com/kohjingyu/gill](https://github.com/kohjingyu/gill))
* **Divter** (Sun et al., 2021): Divter is a state-of-the-art conversational agent developed for multimodal dialogue contexts. It introduces a customized transformer structure for generating multimodal responses. Divter's methodology includes pretraining on a vast corpus of text-only dialogues and text-image pairs, followed by finetuning on a selected set of multimodal response data. The MMDialog dataset uses Divter as its baseline.

**Metrics:** To comprehensively assess model performance across the image, text, and multimodal dimensions, we employ a diverse set of metrics. For evaluating the quality and diversity of generated images, we utilize the Inception Score (IS) (Salimans et al., 2016) and the Fréchet Inception Distance (FID) (Heusel et al., 2017). Textual performance is gauged through metrics such as BLEU (Papineni et al., 2002), Rouge-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and Sentence-BERT (S-BERT) (Reimers and Gurevych, 2019) scores. On the multimodal front, we leverage CLIP-based metrics (Rombach et al., 2022b) to assess the congruence between generated content and ground truth. CLIP-I evaluates the similarity between generated and ground-truth images, while CLIP-T focuses on the congruence between generated images and ground-truth text. To address potential misalignments in multimodal generation, such as when the ground truth is text-only but the output is multimodal, we utilize MM-Relevance (Feng et al., 2022). This metric calculates the F1 score based on CLIP similarities, providing a nuanced evaluation of multimodal coherence. We also employ the Human Preference Score (HPS) v2 (Wu et al., 2023c) to assess the extent to which the generated images align with the input text prompts based on human preferences.
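For concreteness, the CLIP-based scores reduce to cosine similarities between embeddings, as in the hedged sketch below; it assumes embeddings produced by any CLIP image/text encoder and does not reproduce the exact evaluation scripts used in the experiments.

```python
import torch
import torch.nn.functional as F

def clip_i(gen_img_emb: torch.Tensor, gt_img_emb: torch.Tensor) -> torch.Tensor:
    """CLIP-I: cosine similarity between CLIP embeddings of generated and
    ground-truth images (embeddings from any CLIP image encoder)."""
    return F.cosine_similarity(gen_img_emb, gt_img_emb, dim=-1).mean()

def clip_t(gen_img_emb: torch.Tensor, gt_txt_emb: torch.Tensor) -> torch.Tensor:
    """CLIP-T: cosine similarity between the generated image's CLIP embedding
    and the CLIP embedding of the ground-truth text."""
    return F.cosine_similarity(gen_img_emb, gt_txt_emb, dim=-1).mean()
```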
Recognizing that the generated multimodal output might be meaningful yet differ from the ground truth, we also incorporate human evaluation to assess the model's performance. We examine the model's effectiveness from three perspectives: (1) Language Continuity - assessing if the produced text aligns seamlessly with the provided context, (2) Image Quality - evaluating the clarity and relevance of the generated image, and (3) Multimodal Coherence - determining if the combined text-image output is consistent with the initial context.

### Experimental Results

In this section, we quantitatively analyze our model's performance on different benchmarks for the different training stages. Qualitative examples can be found in Fig. 4.

#### 4.2.1 Multimodal Learning Stage

In this subsection, we present the performance of different models on the VIST (Huang et al., 2016) and MMDialog (Feng et al., 2022) datasets. Our evaluations span both vision (image-related metrics) and language (textual metrics) domains to showcase the versatility and robustness of the proposed models.

**VIST Final-Step Evaluation:** Our first set of experiments involves a single-step evaluation where, given the last step's prompt, the model aims to generate the corresponding image. Table 1 summarizes the results for this setting. MiniGPT-5 outperforms the fine-tuned SD 2 in all three settings, showing the benefits of the MiniGPT-5 pipeline. Notably, the MiniGPT-5 (LoRA) model consistently surpasses other variants in terms of the CLIP Score across multiple prompt types, especially when both image and text prompts are combined. On the other hand, the FID scores highlight the MiniGPT-5 (prefix) model's competitiveness, indicating a possible trade-off between image embedding quality (reflected by the CLIP Score) and the diversity and realism of the images (captured by the FID score). When compared to the model (MiniGPT-5 w/o UAS) that undergoes direct training on VIST without incorporating the unimodal alignment stage, it is evident that while the model retains the capability to generate meaningful images, there is a notable drop in image quality and coherence. This observation underscores the significance of our two-stage training strategy.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{No Context} & \multicolumn{3}{c}{Text Context} & \multicolumn{3}{c}{Image Context} & \multicolumn{3}{c}{Image-Text Context} \\ \cline{2-13} Model & CLIP-I (\(\uparrow\)) & IS (\(\uparrow\)) & FID (\(\downarrow\)) & CLIP-I (\(\uparrow\)) & IS (\(\uparrow\)) & FID (\(\downarrow\)) & CLIP-I (\(\uparrow\)) & IS (\(\uparrow\)) & FID (\(\downarrow\)) & CLIP-I (\(\uparrow\)) & IS (\(\uparrow\)) & FID (\(\downarrow\)) \\ \hline Zero-shot SD 2 & 0.57 & **23.62** & 61.26 & 0.59 & 23.24 & 62.60 & - & - & - & - & - \\ Fine-tuned SD 2 & 0.59 & -23.24 & **5.829** & 0.51 & 24.37 & **5.74** & **-** & **-** & **-** & **-** & **-** \\ MiniGPT-5 (Prefix) & 0.00 & -23.19 & 61.25 & 0.63 & **2.586** & 61.34 & - & 0.66 & **-** & **-** & **-** & **-** & **-** \\ MiniGPT-5 (LoRA) & **0.61** & 23.20 & 61.44 & **0.64** & 23.86 & 61.34 & - & 0.66 & **-** & **-** & **-** & **-** \\ MiniGPT-5 w/o UAS & 0.55 & 16.32 & 73.02 & 0.57 & 16.31 & 73.97 & 0.58 & 16.70 & 75.88 & 0.58 & 16.99 & 76.51 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance metrics for different models with various prompt types on VIST final step image generation. For ‘No Context’, only the current step’s text is provided.
The ‘Text Context’ uses all history texts, the ‘Image Context’ employs all preceding images, and ‘Image-Text Context’ provides a combination of both past images and texts.

**VIST Multi-Step Evaluation:** In a detailed and comprehensive evaluation, we systematically provide models with the prior history context and subsequently assess the generated images and narrations at each following step. Tables 2 and 3 outline the results of these experiments, encapsulating the performance in image and language metrics, respectively. The findings demonstrate that MiniGPT-5 is capable of generating coherent, high-quality images utilizing long-horizontal multimodal input prompts across all data, without compromising the original model's ability for multimodal comprehension. This accentuates the efficacy of our model in diverse settings.

**VIST Human Evaluation:** To assess the quality of multimodal generation, we test both our model and the baseline on the VIST validation set. For each task, given a preceding multimodal sequence, models are tasked with producing the subsequent scenario. To ensure a fair comparison, we employ the fine-tuned MiniGPT-4, which is exclusively trained to generate narrations without any vokens. Subsequently, these narrations are fed directly into Stable Diffusion 2 via the text-to-image pipeline. We select a random sample of 5,000 sequences, with each requiring evaluation by two workers. These evaluators are tasked with determining the superior multimodal output based on three criteria: Language Continuity, Image Quality, and Multimodal Coherence. This assessment is facilitated using Amazon Mechanical Turk (Crowston, 2012), with a representative example (Fig. 5) provided in the appendix. As depicted in Table 4, our model, MiniGPT-5, is found to generate more fitting text narrations in 57.18% of instances, deliver superior image quality in 52.06% of cases, and produce more coherent multimodal outputs in 57.62% of the scenarios. This data distinctly showcases its enhanced multimodal generation capabilities when compared to the two-stage baseline that employs narrations for text-to-image prompts without the inclusion of vokens.

#### 4.2.2 Unimodal Alignment Stage

In this stage, the model accepts image descriptions as input and produces corresponding images, mirroring typical text-to-image tasks but incorporating generative vokens. The results indicate that although our model achieves better generation in multi-turn scenarios, Stable Diffusion 2 achieves the best outcomes across all metrics for single-image generation. Since our model attempts to align with the pretrained text encoder of Stable Diffusion 2 in this stage, there is a slight gap in performance due to the limited amount of data. Comparing with the observations on the VIST dataset, we can conclude that MiniGPT-5 correctly extracts features from long-horizontal multimodal information rather than from a single text input alone. This indicates future directions on how to align LLMs with generative models efficiently. On the other hand, our model outperforms another state-of-the-art multimodal generation model, GILL, on all metrics. Our model generates more coherent and higher-quality images that closely resemble those produced by the pretrained Stable Diffusion model. To further evaluate the effectiveness of our design, we conducted several ablation studies; additional ablations on the number of vokens and CFG scales can be found in appendix C.

**Evaluation of Different Loss Guidance:** As described in Sec. 3.3, we introduced an auxiliary loss, denoted \(L_{CAP}\), for CC3M training.
To assess the impact of this loss and determine whether the caption loss alone can yield high-quality images as in GILL, we trained our model without the caption loss \(L_{CAP}\) (alignment between the mapped generative voken features and the caption features from the Stable Diffusion text encoder) and without the conditional latent diffusion loss \(L_{LDM}\) (alignment between the mapped generative voken features and the conditional features for the latent diffusion process of the ground-truth images), separately. The results, as shown in Table 6, indicate that the caption loss significantly aids in generating better images, and the conditional latent diffusion loss further enhances performance in terms of coherence and image quality.

**Evaluation of Classifier-Free Guidance (CFG):** To assess the effectiveness of the CFG strategy, we trained our model without CFG dropout. During inference, this model used the original CFG denoising process, which takes the empty-caption feature from Stable Diffusion 2's text encoder as the negative prompt feature. The results in Table 6 demonstrate that all metrics are worse without CFG, indicating that the CFG training strategy improves the image generation quality.

**Evaluation with Human Preference Score (HPS):** To better evaluate our model's effectiveness and its individual components, we employed the Human Preference Score v2 (HPSv2) (Wu et al., 2023b). Figure 3 presents the count of images generated by each model with the highest HPS. Notably, MiniGPT-5 consistently outshines its competitors, underscoring the significance of the losses and the classifier-free guidance technique implemented in our approach.

## 5 Conclusion

In this paper, we introduce MiniGPT-5, designed to augment the capabilities of LLMs for multimodal generation by aligning the LLM with a pre-trained text-to-image generation model. Our approach demonstrates substantial improvements, as evidenced by comprehensive experiments.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & MiniGPT-5 & Fine-tuned MiniGPT-4 + SD 2 & Tie \\ \hline Language Continuity (\%) & **57.18** & 28.51 & 14.31 \\ Image Quality (\%) & **52.06** & 35.98 & 11.96 \\ Multimodal Coherence (\%) & **57.62** & 23.24 & 19.14 \\ \hline \hline \end{tabular} \end{table} Table 4: VIST human evaluation on 5,000 samples for multimodal generation, covering the Language Continuity, Image Quality, and Multimodal Coherence aspects. The results indicate that, in more than 70% of cases, MiniGPT-5 is better than or on par with the two-stage baseline.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & IS (\(\uparrow\)) & BLEU-1 (\(\uparrow\)) & BLEU-2 (\(\uparrow\)) & Rouge-L (\(\uparrow\)) & MM-Relevance (\(\uparrow\)) \\ \hline Divter & **20.53** & 0.0944 & 0.0745 & 0.1119 & 0.62 \\ MiniGPT-5 & 19.63 & **0.2221** & **0.1546** & **0.1119** & **0.67** \\ \hline \hline \end{tabular} \end{table} Table 5: Multimodal generation results on the MMDialog test set. In order to compare with their baseline, we use the same metrics reported in Table 3 of the MMDialog paper (Feng et al., 2022).

Through this work, we aspire to set a new benchmark in multimodal generative models, opening doors to applications previously deemed challenging due to the disjointed nature of existing image and text synthesis paradigms.
2303.02301
Locally universal C*-algebras with computable presentations
The Kirchberg Embedding Problem (KEP) asks if every C*-algebra embeds into an ultrapower of the Cuntz algebra $\mathcal{O}_2$. In an effort to provide a negative solution to the KEP and motivated by the recent refutation of the Connes Embedding Problem, we establish two computability-theoretic consequences of a positive solution to KEP. Both of our results follow from the a priori weaker assumption that there exists a locally universal C*-algebra with a computable presentation.
Alec Fox, Isaac Goldbring, Bradd Hart
2023-03-04T02:52:24Z
http://arxiv.org/abs/2303.02301v1
# Locally universal \(\mathrm{C}^{*}\)-algebras with computable presentations

###### Abstract.

The Kirchberg Embedding Problem (KEP) asks if every \(\mathrm{C}^{*}\)-algebra embeds into an ultrapower of the Cuntz algebra \(\mathcal{O}_{2}\). In an effort to provide a negative solution to the KEP and motivated by the recent refutation of the Connes Embedding Problem, we establish two computability-theoretic consequences of a positive solution to KEP. Both of our results follow from the a priori weaker assumption that there exists a locally universal \(\mathrm{C}^{*}\)-algebra with a computable presentation.

Goldbring was partially supported by NSF grant DMS-2054477. Hart was funded by the NSERC.

## 1. Introduction

The recent landmark quantum complexity result known as \(\mathrm{MIP}^{*}=\mathrm{RE}\) [11] yielded a negative solution to a famous problem in the theory of von Neumann algebras, namely the **Connes Embedding Problem** (CEP). The CEP, posed in Connes' seminal paper [4], asks if every tracial von Neumann algebra embeds into a tracial ultrapower of the hyperfinite \(\mathrm{II}_{1}\) factor. The negative solution to the CEP can be used to give a negative solution to an analogous problem in the theory of \(\mathrm{C}^{*}\)-algebras known as the **Blackadar-Kirchberg Problem** (or **MF Problem**), which asked if every stably finite \(\mathrm{C}^{*}\)-algebra embeds into an ultrapower of the universal UHF algebra (see [9, Proposition 6.1]). The Blackadar-Kirchberg Problem can be viewed as the "finite" \(\mathrm{C}^{*}\)-algebra analog of CEP. In this paper, we consider the "infinite" \(\mathrm{C}^{*}\)-algebra analog of CEP known as the **Kirchberg Embedding Problem** (KEP). KEP asks if every \(\mathrm{C}^{*}\)-algebra embeds into an ultrapower of the Cuntz algebra \(\mathcal{O}_{2}\).1

Footnote 1: Incidentally, one might ask if there is an “infinite”, that is, type III, von Neumann algebra analog of CEP. For example, one might ask if every von Neumann algebra embeds with expectation into the Ocneanu ultrapower of the hyperfinite \(\mathrm{III}_{1}\) factor \(\mathcal{R}_{\infty}\). In [1], Ando, Haagerup, and Winslow prove that this question is equivalent to CEP itself (and thus has a negative answer).

The Kirchberg Embedding Problem was studied model theoretically by the second author and Sinclair in [10]. In that paper, KEP is shown to be equivalent to the statement that there is a \(\mathrm{C}^{*}\)-algebra that is both nuclear and **existentially closed**.
Our first result, proved in Section 2, shows that if there exists a locally universal \(\mathrm{C}^{*}\)-algebra with a computable presentation (as would follow from a positive solution to KEP), then every computably weakly stable presentation of a \(\mathrm{C}^{*}\)-algebra, a notion we introduce below, is in fact computable.

Our second result belongs to the area of quantum complexity theory. In [11], a particular quantum complexity class \(\mathrm{MIP}^{\mathrm{co}}\) is defined and it is shown that all of the languages in this class are coRE, that is, complements of recursively enumerable sets. It is left as an open question if the converse is true, namely, does \(\mathrm{MIP}^{\mathrm{co}}\) coincide with the class coRE? In the notation \(\mathrm{MIP}^{\mathrm{co}}\), the "co" stands for "commuting" and comes from the use of **commuting operator strategies** in its definition. In this paper, we define a relaxed version \(\mathrm{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) based on "almost-commuting" operator strategies and show that, assuming the existence of a locally universal \(\mathrm{C}^{*}\)-algebra with a computable presentation, all languages which belong to the corresponding quantum complexity class \(\mathrm{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) are actually recursively enumerable (and thus decidable!).

We end the introduction with some notation and terminology concerning ultraproducts of \(\mathrm{C}^{*}\)-algebras. Throughout this paper, all \(\mathrm{C}^{*}\)-algebras are assumed to be unital and all \(*\)-homomorphisms are assumed to preserve the unit. Moreover, \(\mathcal{U}\) always denotes a nonprincipal ultrafilter on \(\mathbb{N}\). Given a \(\mathrm{C}^{*}\)-algebra \(C\), its **ultrapower with respect to \(\mathcal{U}\)**, denoted \(\mathrm{C}^{\mathcal{U}}\), is the quotient of the Banach algebra \(\ell^{\infty}(\mathbb{N},C)\) of all uniformly norm-bounded sequences from \(C\) by the ideal of elements \((A_{m})_{m\in\mathbb{N}}\) for which \(\lim_{\mathcal{U}}\|A_{m}\|=0\). It is well-known that \(\mathrm{C}^{\mathcal{U}}\) is a \(\mathrm{C}^{*}\)-algebra once again. Given \((A_{m})_{m\in\mathbb{N}}\in\ell^{\infty}(\mathbb{N},C)\), we denote its coset in \(\mathrm{C}^{\mathcal{U}}\) by \((A_{m})_{\mathcal{U}}\).

The authors would like to thank William Slofstra, Thomas Vidick and Henry Yuen for useful discussions regarding this work.

## 2. Computable weak stability

Let \(\mathcal{G}\) be a set of noncommuting indeterminates, which we call **generators**. By a set of **relations** for \(\mathcal{G}\) we mean a set of relations of the form \(\|\mathrm{p}(x_{1},\ldots,x_{n})\|\leq a\), where \(\mathrm{p}\) is a \(*\)-polynomial in \(n\) noncommuting variables with no constant term, \(x_{1},\ldots,x_{n}\) are elements of \(\mathcal{G}\), and \(a\) is a nonnegative real number.
We also require that, for every generator \(x\in\mathcal{G}\), there is a relation of the form \(\|x\|\leq M\) in \(\mathcal{R}\). A **representation** of \((\mathcal{G},\mathcal{R})\) is a function \(j:\mathcal{G}\to A\), where \(A\) is a \(\mathrm{C}^{*}\)-algebra, such that \(\|\mathrm{p}(j(x_{1}),\ldots,j(x_{n}))\|\leq a\) for every relation \(\|\mathrm{p}(x_{1},\ldots,x_{n})\|\leq a\) in \(\mathcal{R}\). The **universal \(\mathrm{C}^{*}\)-algebra** of \((\mathcal{G},\mathcal{R})\) is a \(\mathrm{C}^{*}\)-algebra \(A\) along with a representation \(\iota:\mathcal{G}\to A\) of \((\mathcal{G},\mathcal{R})\) such that, for all other representations \(j:\mathcal{G}\to B\) of \((\mathcal{G},\mathcal{R})\), there is a unique *-homomorphism \(\varphi:A\to B\) such that \(\varphi(\iota(x))=j(x)\) for all \(x\in\mathcal{G}\). If the universal \(\mathrm{C}^{*}\)-algebra of \((\mathcal{G},\mathcal{R})\) exists, then it is unique up to isomorphism and will be denoted by \(\mathrm{C}^{*}\langle\mathcal{G}|\mathcal{R}\rangle\). Note that \(\mathrm{C}^{*}\langle\mathcal{G}|\mathcal{R}\rangle\) is generated by the image of the generators. If \(\mathcal{G}\) is a sequence \(\bar{x}\), then we may write \(\mathrm{C}^{*}\langle\bar{x}|\mathcal{R}\rangle\) instead of \(\mathrm{C}^{*}\langle\mathcal{G}|\mathcal{R}\rangle\). Given that we remain in the context of unital \(\mathrm{C}^{*}\)-algebras throughout this paper, we implicitly assume that we have a distinguished generator for the unit and include relations stating that it is a self-adjoint idempotent which acts as a multiplicative identity. A \(\mathrm{C}^{*}\)-algebra \(A\) is **finitely presented** if it is of the form \(A=C^{*}\langle\mathcal{G}|\mathcal{R}\rangle\) for finite sets \(\mathcal{G}\) and \(\mathcal{R}\).

Let \(C\) be a \(\mathrm{C}^{*}\)-algebra. A **presentation** of \(C\) is a pair \(C^{\dagger}:=(C,(a_{n})_{n\in\mathbb{N}})\), where \(\{a_{n}\ :\ n\in\mathbb{N}\}\) is a subset of \(C\) that generates \(C\) (as a \(\mathrm{C}^{*}\)-algebra). Elements of the sequence \((a_{n})_{n\in\mathbb{N}}\) are referred to as **special points** of the presentation, while elements of the form \(p(a_{i_{1}},\ldots,a_{i_{k}})\) for \(p\) a \(*\)-polynomial with coefficients from \(\mathbb{Q}(i)\) (a **rational polynomial**) are referred to as **rational points** of the presentation. We say that \(C^{\dagger}\) is a **computable presentation** of \(C\) if there is an algorithm such that, upon input of a rational point \(p\) of \(C^{\dagger}\) and \(k\in\mathbb{N}\), returns a rational number \(q\) such that \(|\|p\|-q|<2^{-k}\). If \(A^{\dagger}\) and \(C^{\dagger}\) are presentations of \(\mathrm{C}^{*}\)-algebras \(A\) and \(C\), we say a *-homomorphism \(\varphi:A\to C\) is a **computable map** from \(A^{\dagger}\) to \(C^{\dagger}\) if there is an algorithm such that, upon input of a rational point \(p\) of \(A^{\dagger}\) and \(k\in\mathbb{N}\), returns a rational point \(q\) of \(C^{\dagger}\) such that \(\|\varphi(p)-q\|<2^{-k}\).

The **standard presentation** of a universal \(\mathrm{C}^{*}\)-algebra \(C^{*}\langle\bar{x}|\mathcal{R}\rangle\) has \(\bar{x}\) as its distinguished generating set. A relation \(\|p(x_{1},\ldots,x_{n})\|\leq a\) is called **rational** if \(p\) is a rational polynomial and \(a\) is a nonnegative dyadic rational. A presentation \(A^{\dagger}\) of a \(\mathrm{C}^{*}\)-algebra \(A\) is called **c.e.** if it is the standard presentation \(C^{*}\langle\bar{x}|\mathcal{R}\rangle\) for a c.e. set of rational relations \(\mathcal{R}\).
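Operationally, a computable presentation is nothing more than a norm oracle on rational points. The sketch below fixes this as an interface in Python, together with a toy instance; the encoding of rational points and the name `norm_oracle` are assumptions made purely for illustration and are not part of the paper.

```python
from fractions import Fraction
from typing import Any, Callable

class ComputablePresentation:
    """A presentation C-dagger is *computable* when some algorithm, given a
    rational point p and k, returns a rational q with | ||p|| - q | < 2**-k.
    `norm_oracle` stands in for that algorithm; how rational points are encoded
    is left abstract."""
    def __init__(self, norm_oracle: Callable[[Any, int], Fraction]):
        self.norm_oracle = norm_oracle

    def approx_norm(self, p: Any, k: int) -> Fraction:
        return self.norm_oracle(p, k)

# Toy instance (an illustrative assumption): the one-dimensional C*-algebra with
# rational points encoded as Gaussian rationals (a, b) standing for a + bi;
# the norm |a + bi| is approximated by bisection to within 2**-k.
def abs_oracle(p, k):
    a, b = p
    target = a * a + b * b                 # |p|^2, an exact Fraction
    lo, hi = Fraction(0), target + 1
    while hi - lo > Fraction(1, 2 ** k):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid <= target else (lo, mid)
    return lo

toy = ComputablePresentation(abs_oracle)
print(toy.approx_norm((Fraction(3, 5), Fraction(4, 5)), 20))  # approximately 1
```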
When \(\bar{x}\) and \(\mathcal{R}\) are both finite, we say that \(A^{\dagger}\) is **finitely c.e.** We will need the following fact of the first-named author [7]:

**Fact 1**.: _If \(A\) is a simple C\({}^{*}\)-algebra, then any c.e. presentation \(A^{\dagger}\) of \(A\) is computable._

In particular, given that the standard presentation of \(\mathcal{O}_{2}\) is clearly (finitely) c.e., we conclude from the previous fact that it is a computable presentation. It is sensible to write a relation of the form \(\|p(x_{1},\ldots,x_{n})\|\leq 0\) in the more familiar form \(p(x_{1},\ldots,x_{n})=0\). Note also that in any c.e. presentation of a C\({}^{*}\)-algebra, we can replace an arbitrary relation of the form \(\|p(x_{1},\ldots,x_{n})\|\leq a\) by the two relations \(p(x_{1},\ldots,x_{n})-g=0\) and \(\|g\|\leq a\), where \(g\) is a new generator.

**Definition 2**.: A finitely presented C\({}^{*}\)-algebra \(A=C^{*}\langle\bar{x}\mid p_{i}(\bar{x})=0,\|x_{k}\|\leq C_{k}\rangle\) is **weakly stable** if for every \(\epsilon>0\) there is a \(\delta>0\) such that for all C\({}^{*}\)-algebras \(B\), if \(\overline{z}\in B\) satisfies \(\|p_{i}(\overline{z})\|\leq\delta\) for all \(i\) and \(\|z_{k}\|\leq C_{k}+\delta\) for all \(k\), then there exists a *-homomorphism \(\varphi:A\to B\) such that \(\|\varphi(x_{k})-z_{k}\|\leq\epsilon\) for all \(k\).

The following are all examples of weakly stable C\({}^{*}\)-algebras; see [2] for proofs.

**Examples 3**.: 1. \(C^{*}(\mathbb{F}_{n})\), the full universal C\({}^{*}\)-algebra of the free group on \(n\) generators. 2. \(M_{n}(\mathbb{C})\), or, more generally, any finite-dimensional \(C^{*}\)-algebra. 3. Cuntz algebras \(\mathcal{O}_{n}=C^{*}\langle s_{1},\ldots,s_{n}\mid s_{i}^{*}s_{j}=\delta_{ij}1,\sum s_{i}s_{i}^{*}=1\rangle\). 4. The Toeplitz algebra \(\mathcal{T}\), the universal \(C^{*}\)-algebra generated by an isometry.

Notice that in all of the above examples, the \(C^{*}\)-algebra is either nuclear or else not simple. In fact, we have:

**Proposition 4**.: _If KEP has a positive solution, then any simple weakly stable \(C^{*}\)-algebra is exact._

Proof.: Suppose that \(A\) is a simple, weakly stable \(C^{*}\)-algebra. By a positive solution to KEP, there is an embedding \(A\hookrightarrow\mathcal{O}_{2}^{\mathcal{U}}\). Since \(A\) is weakly stable (recall that \(A\) is finitely presented and hence there are only finitely many conditions to be satisfied), there is a *-homomorphism \(A\to\mathcal{O}_{2}\), which, since \(A\) is simple, must be an embedding. It follows that \(A\) is exact.

We now introduce a computable version of weak stability.

**Definition 5**.: A finitely c.e. presentation \(A^{\dagger}=C^{*}\langle\overline{x}\mid p_{i}(\overline{x})=0,\|x_{k}\|\leq C_{k}\rangle\) is **computably weakly stable** if there is an algorithm which, when given \(n\in\mathbb{N}\), returns \(m\in\mathbb{N}\) such that for all \(C^{*}\)-algebras \(B\), if \(\overline{z}\in B\) satisfies \(\|p_{i}(\overline{z})\|\leq 2^{-m}\) for all \(i\) and \(\|z_{k}\|\leq C_{k}+2^{-m}\) for all \(k\), then there exists a *-homomorphism \(\varphi:A\to B\) such that \(\|\varphi(x_{k})-z_{k}\|<2^{-n}\) for all \(k\).

While the definition as written depends on the choice of relations, we show in the following lemma that the notion is computably robust, so any finite set of rational relations that gives the same presentation will work.
**Lemma 6**.: _Let_ \[A^{\dagger}=C^{*}\langle x_{1},\ldots,x_{j}\mid p_{i}(\overline{x})=0,\|x_{k}\|\leq C_{k}\rangle\] _and_ \[A^{\#}=C^{*}\langle y_{1},\ldots,y_{\ell}\mid q_{i}(\overline{y})=0,\|y_{k}\|\leq D_{k}\rangle\] _be finitely c.e. presentations of a \(C^{*}\)-algebra \(A\). If there exists a computable isomorphism from \(A^{\#}\) to \(A^{\dagger}\) and \(A^{\dagger}\) is computably weakly stable, then so is \(A^{\#}\)._

Proof.: Fix a computable isomorphism \(\psi\) from \(A^{\#}\) to \(A^{\dagger}\). Let \(n\in\mathbb{N}\). For each \(k=1,\ldots,\ell\), we can compute a rational *-polynomial \(t_{k}(\overline{x})\) such that \(\|t_{k}(\overline{x})-\psi(y_{k})\|<2^{-(n+2)}\). Let \(d\) be such that for any \(C^{*}\)-algebra \(B\), if \(\|u_{i}-v_{i}\|_{B}<2^{-d}\) for \(i=1,\ldots,j\) then \(\|t_{k}(\overline{u})-t_{k}(\overline{v})\|_{B}<2^{-(n+2)}\) for all \(k\). Let \(s\) witness that \(A^{\dagger}\) is computably weakly stable on input \(d\). For \(m\in\mathbb{N}\), let \(W_{m}=C^{*}\langle w_{1},\ldots,w_{\ell}\mid\|q_{i}(\overline{w})\|\leq 2^{-m},\|w_{k}\|\leq D_{k}+2^{-m}\rangle\). By [7, Theorem 3.3], given \(m\in\mathbb{N}\) and a rational *-polynomial \(r(\overline{w})\) we can effectively enumerate a decreasing sequence of rationals that converges to \(\left\|r(\overline{w})\right\|_{W_{m}}\). Enumerate over all \(m\in\mathbb{N}\) and rational *-polynomials \(r_{1}(\overline{w}),\ldots,r_{j}(\overline{w})\) and accept if \(\left\|p_{i}(r_{1}(\overline{w}),\ldots,r_{j}(\overline{w}))\right\|_{W_{m}}<2^{-s}\) for all \(i\) and \(\left\|r_{k}(\overline{w})\right\|_{W_{m}}<C_{k}+2^{-s}\) for all \(k\) and \(\left\|t_{k}(r_{1}(\overline{w}),\ldots,r_{j}(\overline{w}))-w_{k}\right\|_{W_{m}}<2^{-(n+2)}\) for all \(k\). Note an acceptance must happen since there is an embedding from \(A\) into \(\prod_{\mathcal{U}}W_{m}\) which sends \(\psi(y_{k})\) to \(w_{k}\). Hence, for any \(\epsilon>0\) there exist rational *-polynomials \(r_{1}(\overline{w}),\ldots,r_{j}(\overline{w})\) in \(\prod_{\mathcal{U}}W_{m}\) such that \(\left\|p_{i}(r_{1}(\overline{w}),\ldots,r_{j}(\overline{w}))\right\|<\epsilon\) for all \(i\) and \(\left\|r_{k}(\overline{w})\right\|<C_{k}+\epsilon\) for all \(k\) and \(\left\|t_{k}(r_{1}(\overline{w}),\ldots,r_{j}(\overline{w}))-w_{k}\right\|<\epsilon\) for all \(k\). Let \(B\) be a \(C^{*}\)-algebra and \(\overline{z}\in B\) such that \(\left\|q_{i}(\overline{z})\right\|\leq 2^{-m}\) for all \(i\) and \(\left\|z_{k}\right\|\leq D_{k}+2^{-m}\) for all \(k\). Then \(\left\|p_{i}(r_{1}(\overline{z}),\ldots,r_{j}(\overline{z}))\right\|<2^{-s}\) for all \(i\) and \(\left\|r_{k}(\overline{z})\right\|<C_{k}+2^{-s}\) for all \(k\) and \(\left\|t_{k}(r_{1}(\overline{z}),\ldots,r_{j}(\overline{z}))-z_{k}\right\|<2^{-(n+2)}\) for all \(k\). By the choice of \(s\), there exists \(\varphi:A\to B\) such that \(\left\|\varphi(x_{k})-r_{k}(\overline{z})\right\|<2^{-d}\) for \(k=1,\ldots,j\).
Then for \(k=1,\ldots,\ell\),

\[\begin{split}\left\|\varphi(\psi(y_{k}))-z_{k}\right\|&\leq\left\|\varphi(\psi(y_{k}))-\varphi(t_{k}(\overline{x}))\right\|+\left\|\varphi(t_{k}(\overline{x}))-t_{k}(r_{1}(\overline{z}),\ldots,r_{j}(\overline{z}))\right\|+\left\|t_{k}(r_{1}(\overline{z}),\ldots,r_{j}(\overline{z}))-z_{k}\right\|\\ &\leq\left\|\psi(y_{k})-t_{k}(\overline{x})\right\|+\left\|t_{k}(\varphi(x_{1}),\ldots,\varphi(x_{j}))-t_{k}(r_{1}(\overline{z}),\ldots,r_{j}(\overline{z}))\right\|+\left\|t_{k}(r_{1}(\overline{z}),\ldots,r_{j}(\overline{z}))-z_{k}\right\|\\ &<2^{-n}.\end{split}\]

We next aim to show that all of the above examples of weakly stable \(C^{*}\)-algebras are actually computably weakly stable. The following result is folklore.

**Lemma 7**.: _Suppose that \(0<\delta<\epsilon<1\). Then for any unital \(C^{*}\)-algebra \(A\) and \(a\in A\), if \(\left\|a^{*}a-1\right\|\leq\delta\) and \(\left\|aa^{*}-1\right\|\leq\delta\), then there is a unitary \(u\in A\) such that \(\left\|a-u\right\|<\epsilon\)._

Non-effective versions of the following results can be found in [3] or [12].

**Lemma 8**.: _Suppose that \(0<\epsilon<1\) and \(0<\delta<\epsilon^{2}/8\). Then for any \(C^{*}\)-algebra \(A\) and \(a\in A\), if \(\left\|a\right\|\leq 2\), \(\left\|a-a^{*}\right\|\leq\delta\), and \(\left\|a-a^{2}\right\|\leq\delta\), then there is a projection \(p\in A\) such that \(\left\|a-p\right\|<\epsilon\)._

Proof.: Let \(x=(a+a^{*})/2\), so \(x\) is self-adjoint and \(\|a-x\|\leq\delta/2<\epsilon/2\). Also, by expanding, we see \(\|x-x^{2}\|\leq\|a-a^{2}\|+\|a\|\|a-a^{*}\|/2\leq 2\delta<\epsilon^{2}/4\). Hence the spectrum of \(x\), \(\sigma(x)\subseteq[-\epsilon/2,\epsilon/2]\cup[1-\epsilon/2,1+\epsilon/2]\). Let \(f\) be continuous on \(\sigma(x)\) such that \(f=0\) on \([-\epsilon/2,\epsilon/2]\) and \(f=1\) on \([1-\epsilon/2,1+\epsilon/2]\). Let \(p=f(x)\). Then \(p\) is a projection and \(\|a-p\|\leq\|a-x\|+\|x-f(x)\|<\epsilon\).

**Corollary 9**.: _If \(A_{1}^{\dagger}\) and \(A_{2}^{\dagger}\) are computably weakly stable, then so is \(A_{1}^{\dagger}\oplus A_{2}^{\dagger}\)._

**Lemma 10**.: _Suppose \(0<\epsilon<1\) and \(0<\delta<2^{-16}\epsilon^{8}\). Then for any \(C^{*}\)-algebra \(A\), \(a\in A\), and projections \(p_{1},p_{2}\in A\), if \(\|a^{*}a-p_{1}\|\leq\delta\) and \(\|aa^{*}-p_{2}\|\leq\delta\), then there is a partial isometry \(v\) such that \(\|a-v\|<\epsilon,v^{*}v=p_{1}\), and \(vv^{*}=p_{2}\)._

Proof.: Let \(b=p_{2}ap_{1}\). We have \(\|a(1-p_{1})\|^{2}=\|(1-p_{1})(a^{*}a-p_{1})(1-p_{1})\|\leq\delta\), and similarly \(\|(1-p_{2})a\|^{2}=\|(1-p_{2})aa^{*}(1-p_{2})\|\leq\delta\). Hence \(\|a-b\|\leq\|a(1-p_{1})\|+\|(1-p_{2})a\|\leq 2\sqrt{\delta}\). Furthermore, \[\begin{split}&\big\|(b^{*}b)^{2}-b^{*}b\big\|\\ &\leq\big\|(b^{*}b)^{2}-(a^{*}a)^{2}\big\|+\big\|(a^{*}a)^{2}-p_{1}^{2}\big\|+\|p_{1}-a^{*}a\|+\|a^{*}a-b^{*}b\|\\ &\leq 20\|b-a\|+4\|a^{*}a-p_{1}\|\\ &\leq 49\sqrt{\delta}.\end{split}\] Let \(\gamma=7\delta^{1/4}\). Then \(\sigma(b^{*}b)\) is a subset of \([-\gamma,\gamma]\cup[1-\gamma,1+\gamma]\). Let \(f\) be a continuous function on \(\sigma(b^{*}b)\) such that \(f=0\) on \([-\gamma,\gamma]\) and \(f(x)=\sqrt{x}\) for all \(x\in[1-\gamma,1+\gamma]\). Let \(w=bf(b^{*}b)\). Then \(w^{*}w=b^{*}bf(b^{*}b)^{2}\) and \(ww^{*}=bb^{*}f(bb^{*})^{2}\) are projections.
Also, \(\|b-w\|^{2}=\|(b^{*}-w^{*})(b-w)\|=\|b^{*}b-w^{*}w\|\leq\gamma\), so \(\|a-w\|\leq\|a-b\|+\|b-w\|\leq 2\sqrt{\delta}+\sqrt{\gamma}<\epsilon\). Note \(w^{*}w\leq p_{1}\) and \(ww^{*}\leq p_{2}\). Then \(p_{1}-w^{*}w\) is a projection with norm \[\|p_{1}-w^{*}w\|\leq\|p_{1}-a^{*}a\|+\|a^{*}a-b^{*}b\|+\|b^{*}b-w^{*}w\|<1\] and \(p_{2}-ww^{*}\) is a projection with norm \[\|p_{2}-ww^{*}\|\leq\|p_{2}-aa^{*}\|+\|aa^{*}-bb^{*}\|+\|bb^{*}-ww^{*}\|<1.\] Thus \(w^{*}w=p_{1}\) and \(ww^{*}=p_{2}\).

**Corollary 11**.: _If \(A^{\dagger}\) is computably weakly stable, then so is \(M_{n}(A^{\dagger})\)._

We now have that the following standard presentations are computably weakly stable:

**Examples 12**.: 1. \(C^{*}(\mathbb{F}_{n})\). 2. \(\mathbb{C}\). More generally, the universal \(C^{*}\)-algebra generated by \(n\) projections. 3. \(M_{n}(\mathbb{C})\). More generally, any finite-dimensional \(C^{*}\)-algebra. 4. Cuntz algebras \(\mathcal{O}_{n}\). 5. The Toeplitz algebra \(\mathcal{T}\).

For (1), apply Lemma 7; for (2), apply Lemma 8; for (3), apply Corollary 9 and Lemma 10; and for (4), apply Lemma 8 and Lemma 10. Note that the previous presentations are all actually computable. The following is the main result of this section:

**Theorem 13**.: _If there exists a locally universal \(C^{*}\)-algebra \(B\) with a computable presentation \(B^{\dagger}\), then every computably weakly stable presentation \(A^{\dagger}\) of a \(C^{*}\)-algebra \(A\) is computable._

Proof.: Since \(A^{\dagger}\) is c.e., by [7, Theorem 3.3] there is an effective procedure which, when given a rational point \(q\) of \(A^{\dagger}\), enumerates a decreasing sequence of rationals that converges to \(\|q\|\). It is enough to show we can do the same from below. Given a rational point \(q(\overline{x})\) of \(A^{\dagger}\), we proceed as follows. Enumerate through all \(j\in\mathbb{N}\). For each \(j\), determine \(n\in\mathbb{N}\) such that if \(\|x_{k}-y_{k}\|<2^{-n}\) for all \(k\) then \(\|q(\overline{x})-q(\overline{y})\|<2^{-j}\) in all \(C^{*}\)-algebras. Let \(m\in\mathbb{N}\) be given, upon input \(n\), by the effective procedure which witnesses that \(A^{\dagger}\) is computably weakly stable. Enumerate all tuples \(\overline{r}\) of rational points of \(B^{\dagger}\) such that \(\|p_{i}(\overline{r})\|<2^{-m}\) for all \(i\) and \(\|r_{k}\|<C_{k}+2^{-m}\) for all \(k\). Enumerate over all dyadic rationals \(d\). If ever \(\|q(\overline{r})\|>2^{-j}+d\) where \(d\) is greater than all previous outputs, then output \(d\). By the choice of \(m\), there exists a *-homomorphism \(\varphi:A\to B\) such that \(\|\varphi(x_{k})-r_{k}\|<2^{-n}\). Then \(\|q(\overline{x})\|\geq\|\varphi(q(\overline{x}))\|\geq\|q(\overline{r})\|-2^{-j}>d\). We show this sequence does converge to \(\|q\|\) from below. Indeed, let \(d<\|q(\overline{x})\|\) and let \(j\in\mathbb{N}\) be such that \(\|q(\overline{x})\|-d>2^{-j}\). Let \(n\) and \(m\) be as above. Since \(B\) is locally universal, for small \(\epsilon>0\) there exists \(\overline{z}\) in \(B\) such that \(\|p_{i}(\overline{z})\|<2^{-m}-\epsilon\) for all \(i\) and \(\|z_{k}\|<C_{k}+2^{-m}-\epsilon\) for all \(k\) and \(\|q(\overline{z})\|>d+2^{-j}+\epsilon\). So, there exist rational points \(\overline{r}\) of \(B^{\dagger}\) such that \(\|p_{i}(\overline{r})\|<2^{-m}\) for all \(i\) and \(\|r_{k}\|<C_{k}+2^{-m}\) for all \(k\) and \(\|q(\overline{r})\|>d+2^{-j}\).
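The proof of Theorem 13 is an explicit search procedure. The following schematic Python rendering makes the dovetailing explicit; every ingredient (the continuity and weak-stability moduli, the enumeration of rational tuples of \(B^{\dagger}\), and the norm oracle for \(B^{\dagger}\)) is passed in as an assumed black box, and the error bookkeeping below is only one convenient choice among many.

```python
from itertools import count

def lower_bounds_for_norm(q, relations, weak_stability_modulus,
                          rational_tuples_up_to, approx_norm_in_B,
                          continuity_modulus):
    """Schematic version of the search in the proof of Theorem 13.  Assumed black boxes:
      - continuity_modulus(q, j) -> n: 2**-n perturbations of the generators move
        q by less than 2**-j in every C*-algebra;
      - weak_stability_modulus(n) -> m: from computable weak stability of A-dagger;
      - rational_tuples_up_to(s): the finitely many tuples of rational points of
        B-dagger of "complexity" at most s (a dovetailing device);
      - approx_norm_in_B(expr, k): a rational within 2**-k of the norm in B;
      - relations: callables r -> expression whose norm must fall below 2**-m
        (the defining relations of A-dagger, including the norm bounds).
    Yields a nondecreasing sequence of rationals converging to ||q|| from below."""
    best = float("-inf")
    for s in count():
        for j in range(s + 1):
            n = continuity_modulus(q, j)
            m = weak_stability_modulus(n)
            for r in rational_tuples_up_to(s):
                # certify that r approximately satisfies the relations of A-dagger
                if not all(approx_norm_in_B(p(r), m + 2) < 2.0 ** -m - 2.0 ** -(m + 2)
                           for p in relations):
                    continue
                # some *-homomorphism A -> B then moves the generators to within
                # 2**-n of r, so ||q|| exceeds the certified value below
                d = approx_norm_in_B(q(r), j + 2) - 2.0 ** -j - 2.0 ** -(j + 2)
                if d > best:
                    best = d
                    yield best
```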
By Fact 1, we immediately obtain the following:

**Corollary 14**.: _If there exists a simple locally universal \(C^{*}\)-algebra \(B\) with a c.e. presentation \(B^{\dagger}\) (in particular, if KEP holds), then every computably weakly stable presentation \(A^{\dagger}\) of a \(C^{*}\)-algebra \(A\) is computable._

## 3. A quantum complexity result

A **nonlocal game with \(n\) questions and \(k\) answers** is a pair \(\mathfrak{G}=(\pi,D)\), where \(\pi\) is a probability distribution on \([n]\times[n]\) and \(D:[n]\times[n]\times[k]\times[k]\to\{0,1\}\) is called the **decision predicate for the game**. Here, \([n]:=\{1,\ldots,n\}\) and analogously for \([k]\). We also refer to the pair \((n,k)\) as the **dimensions** of \(\mathfrak{G}\). We view two players, henceforth referred to as Alice and Bob, playing \(\mathfrak{G}\) as follows: a pair of questions \((x,y)\in[n]\times[n]\) is randomly chosen according to \(\pi\) and then Alice and Bob somehow respond with a pair of answers \((a,b)\in[k]\times[k]\); they win the game if \(D(x,y,a,b)=1\) and otherwise they lose the game. In order to describe their strategies for playing \(\mathfrak{G}\), we need the notion of POVMs. Recall that a **positive operator-valued measure** or **POVM** on a Hilbert space \(\mathcal{H}\) is a finite collection \(A_{1},\ldots,A_{k}\) of positive operators on \(\mathcal{H}\) such that \(A_{1}+\cdots+A_{k}=I\). We refer to \(k\) as the **length** of the POVM. More generally, one can use the same definition to define a POVM in any \(C^{*}\)-algebra. For each \(k\), let \(\varphi_{k}(X)\) denote the formula \[\max\left(\max_{1\leq i\leq k}\inf_{Z_{i}}\|Z_{i}^{*}Z_{i}-X_{i}\|,\|\sum_{i=1}^{k}X_{i}-I\|\right)\] in the \(k\) variables \(X=(X_{1},\ldots,X_{k})\). The following lemma is easy but will be used throughout the paper:

**Lemma 15**.: _For each \(\epsilon>0\) and \(k\geq 1\), there is \(\delta>0\) such that: for any \(C^{*}\)-algebra \(C\) and any elements \(A_{1},\ldots,A_{k}\) from the unit ball of \(C\), if \(\varphi_{k}(A_{1},\ldots,A_{k})<\delta\), then there is a POVM \(B_{1},\ldots,B_{k}\) in \(C\) such that \(\max_{1\leq i\leq k}\|A_{i}-B_{i}\|<\epsilon\)._

We use POVMs to define strategies for nonlocal games. First suppose that \(\mathfrak{G}\) is a nonlocal game with dimensions \((n,k)\) and \(C\) is a \(C^{*}\)-algebra. A \(\mathfrak{G}\)**-measurement in \(C\)** is a tuple \(A:=(A^{x})_{x\in[n]}\) of POVMs in \(C\), each of which has length \(k\). Of course, the notion of a \(\mathfrak{G}\)-measurement in \(C\) only depends on the dimensions of the nonlocal game, but the terminology will prove useful in the sequel. Thus, corresponding to each possible question and answer pair \((x,a)\in[n]\times[k]\), we will have a positive element \(A^{x}_{a}\in C\), and for each \(x\in[n]\), we have \(\sum_{a\in[k]}A^{x}_{a}=I\). A \(\mathfrak{G}\)**-strategy in \(C\)** is a tuple \(\sigma:=(A,B,\phi)\), where \(A\) and \(B\) are \(\mathfrak{G}\)-measurements in \(C\) and \(\phi\in S(C)\) is a state on \(C\).
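Numerically, the quantity \(\varphi_{k}(X)\) can be estimated for a tuple of matrices as in the sketch below; taking the positive part of the Hermitian part of each \(X_{i}\) as the witness \(Z_{i}^{*}Z_{i}\) only gives an upper bound on the infimum, which suffices for checking that a tuple is close to a POVM in the sense of Lemma 15. This is an illustration for the finite-dimensional case and is not part of the paper.

```python
import numpy as np

def povm_defect(X):
    """An upper bound on phi_k(X) for a tuple X of square matrices.  The witness
    Z_i*Z_i is taken to be the positive part of the Hermitian part of X_i, which
    bounds the infimum in the formula; the second term is ||sum_i X_i - I||."""
    dim = X[0].shape[0]
    positivity = []
    for A in X:
        H = (A + A.conj().T) / 2
        w, V = np.linalg.eigh(H)
        P = V @ np.diag(np.clip(w, 0, None)) @ V.conj().T   # positive part of H
        positivity.append(np.linalg.norm(A - P, 2))
    return max(max(positivity), np.linalg.norm(sum(X) - np.eye(dim), 2))

# Example: a genuine length-2 POVM has defect 0 (up to rounding).
P1 = np.diag([1.0, 0.0]); P2 = np.eye(2) - P1
print(povm_defect([P1, P2]))   # ~0.0
```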
Given a \(\mathfrak{G}\)-strategy \(\sigma\) in \(C\) as above, we define the corresponding correlation matrix \(p_{\sigma}\in[0,1]^{n^{2}k^{2}}\) by \(p_{\sigma}(a,b|x,y)=\phi(A^{x}_{a}\bullet B^{y}_{b})\), where, for any \(A,B\in C\), we define \(A\bullet B\coloneqq\frac{1}{2}(A^{1/2}BA^{1/2}+B^{1/2}AB^{1/2})\).2 The intuition behind this definition is that if Alice and Bob play \(\mathfrak{G}\) according to the strategy \(\sigma\), then upon receiving the question pair \((x,y)\), they both measure their portion of the state \(\phi\) using their POVMs \(A^{x}\) and \(B^{y}\); since we are not assuming that these measurements commute, we take the average of the results obtained from when Alice measures first and from when Bob measures first. Consequently, \(p_{\sigma}(a,b|x,y)\) is the probability that they answer the question pair \((x,y)\) with the answer pair \((a,b)\) when using the strategy \(\sigma\). Of course, if each \(A_{a}^{x}\) and \(B_{b}^{y}\) commute, the above definition degenerates to the usual situation of calculating \(p_{\sigma}(a,b|x,y)=\phi(A_{a}^{x}B_{b}^{y})\) and we call the strategy \(\sigma\)**commuting**. Fix a positive real number \(\delta>0\). We call the strategy \(\sigma\)\(\delta\)**-op-almost commuting** if \(\sum_{a,b\in[k]}\|[A_{a}^{x},B_{b}^{y}]\|<\delta\) for all \((x,y)\in[n]\times[n]\). Note that the notion of a \(\delta\)-op-commuting strategy depends only on the pair of \(\mathfrak{G}\)-measurements, whence it makes sense to say that a pair of such measurements is a \(\delta\)**-op-almost commuting pair**. Given a \(\mathfrak{G}\)-strategy \(\sigma\) in \(C\), we define the **value of \(\mathfrak{G}\) when playing according to \(\sigma\)** to be the expected value Alice and Bob have of winning the game when playing according to \(\sigma\), that is, \[\operatorname{val}(\mathfrak{G},\sigma)\coloneqq\sum_{x,y}\pi(x,y)\sum_{a,b}D (x,y,a,b)p_{\sigma}(a,b|x,y).\] In the sequel, it will behoove us to define, for every pair \((A,B)\) of \(\mathfrak{G}\)-measurements in \(C\), the element \[\mathfrak{G}(A,B)\coloneqq\sum_{x,y}\pi(x,y)\sum_{a,b}D(x,y,a,b)(A_{a}^{x} \bullet B_{b}^{y}).\] With this notation, we have \(\operatorname{val}(\mathfrak{G},\sigma)=\phi(\mathfrak{G}(A,B))\). If one considers the supremum of \(\operatorname{val}(\mathfrak{G},\sigma)\) as \(\sigma\) ranges over all commuting \(\mathfrak{G}\)-strategies in \(\mathcal{B}(\mathcal{H})\), one obtains the **commuting value of \(\mathfrak{G}\)**, denoted \(\operatorname{val}^{\operatorname{co}}(\mathfrak{G})\). Similarly, given \(\delta>0\), we define the \(\delta,\operatorname{op}\)**-commuting value of \(\mathfrak{G}\)**, denoted \(\operatorname{val}^{\operatorname{co}}_{\delta,\operatorname{op}}(\mathfrak{G})\), to be the supremum of \(\operatorname{val}(\mathfrak{G},\sigma)\) as \(\sigma\) ranges over all \(\delta\)-op-almost commuting \(\mathfrak{G}\)-strategies in \(\mathcal{B}(\mathcal{H})\). Recall that a **language** (in the sense of complexity theory) is simply a subset of \(2^{<N}\), that is, is a set of finite sequences of bits. **Definition 16**.: We say that a language \(L\) belongs to \(\operatorname{MIP}^{\operatorname{co}}\) if there is an efficient mapping from sequences of bits \(z\) to nonlocal games \(\mathfrak{G}_{z}\) such that: * If \(z\in L\), then \(\operatorname{val}^{\operatorname{co}}(\mathfrak{G}_{z})=1\). * If \(z\notin L\), then \(\operatorname{val}^{\operatorname{co}}(\mathfrak{G}_{z})\leq\frac{1}{2}\). 
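For finite-dimensional strategies, the correlation and the game value can be computed directly from the definitions above, as in the following sketch; the state is represented by a density matrix \(\rho\) with \(\phi(T)=\operatorname{Tr}(\rho T)\), which is an assumption appropriate only to the matrix setting and made purely for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

def bullet(A, B):
    """A . B = (A^{1/2} B A^{1/2} + B^{1/2} A B^{1/2}) / 2 for positive matrices."""
    rA, rB = sqrtm(A), sqrtm(B)
    return (rA @ B @ rA + rB @ A @ rB) / 2

def game_value(pi, D, A, B, rho):
    """val(G, sigma) for matrix POVMs A[x][a], B[y][b], question distribution
    pi[x, y], decision predicate D[x, y, a, b] (numpy arrays), and a state given
    by a density matrix rho via phi(T) = Tr(rho T).  Finite dimensions only;
    this merely unwinds the displayed formula."""
    n, k = pi.shape[0], len(A[0])
    val = 0.0
    for x in range(n):
        for y in range(n):
            for a in range(k):
                for b in range(k):
                    p = np.real(np.trace(rho @ bullet(A[x][a], B[y][b])))
                    val += pi[x, y] * D[x, y, a, b] * p
    return val
```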
In [13], it was shown that if \(L\) belongs to \(\operatorname{MIP}^{\operatorname{co}}\), then \(L\) belongs to the complexity class coRE of sets whose complement is c.e. and it was asked if this inclusion is in fact an equality. The main result of this section shows that the analogous question has a negative answer for a suitably defined almost-commuting version of the class \(\operatorname{MIP}^{\operatorname{co}}\): **Definition 17**.: Fix a computable function \(\delta:\mathbb{N}\to[0,1]\). We say that a language \(L\) belongs to \(\operatorname{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) if there is an efficient mapping from sequences of bits \(z\) to nonlocal games \(\mathfrak{G}_{z}\) such that: * If \(z\in L\), then \(\operatorname{val}^{\mathrm{co}}_{\delta(|z|),\mathrm{op}}(\mathfrak{G}_{z})=1\). * If \(z\notin L\), then \(\operatorname{val}^{\mathrm{co}}_{\delta(|z|),\mathrm{op}}(\mathfrak{G}_{z}) \leq\frac{1}{2}\). **Theorem 18**.: _If there is a locally universal \(C^{*}\)-algebra \(C\) that has a computable presentation \((\)in particular, if KEP holds\()\), then for every computable function \(\delta:\mathbb{N}\to[0,1]\), every language in \(\operatorname{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) is recursively enumerable._ Proof.: Suppose that \(C\) is a locally universal \(C^{*}\)-algebra with a computable presentation \(C^{\#}\). Fix a computable function \(\delta:\mathbb{N}\to[0,1]\) and suppose that \(L\) belongs to \(\operatorname{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\). Here is the algorithm that shows that \(L\) is recursively enumerable. Suppose that one inputs the sequence of bits \(z\). Set \(\mathfrak{G}:=\mathfrak{G}_{z}\). Start enumerating pairs of \(\delta(|z|)\)-almost commuting \(\mathfrak{G}\)-measurements \((A,B)\) in \(C\) that consist only of rational points of \(C^{\#}\); this is possible since the presentation is computable. If \((A,B)\) is such a \(\delta(|z|)\)-op-almost commuting pair, approximate \(\|\mathfrak{G}(A,B)\|\) to within error \(\frac{1}{4}\). If this approximation exceeds \(\frac{1}{2}\), then declare that \(z\in L\). Here is why the algorithm works. We first show that if \(z\in L\), then the algorithm will tell us so. As above, set \(\mathfrak{G}:=\mathfrak{G}_{z}\). Fix \(\epsilon>0\) small enough and let \(\sigma:=(A,B,\phi)\) be a \(\delta(|z|)\)-op-almost commuting \(\mathfrak{G}\)-strategy in \(\mathcal{B}(\mathcal{H})\) such that \(\operatorname{val}(\mathfrak{G},\sigma)>1-\epsilon\). It follows that \(\|\mathfrak{G}(A,B)\|>1-\epsilon\). Let \(D\) be the \(C^{*}\)-algebra generated by the coordinates of \(A\) and \(B\) and consider an embedding of \(D\) into \(C^{\omega}\). It follows from Lemma 15 that there are \(\delta(|z|)\)-op-almost commuting \(\mathfrak{G}\)-measurements \(\bar{A}\) and \(\bar{B}\) in \(C\) for which \(\|\mathfrak{G}(\bar{A},\bar{B})\|>1-2\epsilon\). Without loss of generality, one can assume that the coordinates of \(\bar{A}\) and \(\bar{B}\) are rational points of \(C^{\#}\). If \(\epsilon\) is small enough, then approximating \(\|\mathfrak{G}(\bar{A},\bar{B})\|\) to within error \(\frac{1}{4}\) will exceed \(\frac{1}{2}\). Thus, our algorithm will eventually tell us that \(z\in L\). We now check that the algorithm makes no mistakes, that is, if the algorithm tells us that \(z\in L\), then in fact \(z\) does belong to \(L\). 
If the algorithm tells us that \((A,B)\) is a \(\delta(|z|)\)-op-almost commuting pair of \(\mathfrak{G}\)-measurements in \(C\) for which \(\|\mathfrak{G}(A,B)\|>\frac{1}{2}\), then there will be some state \(\phi\) on \(C\) such that \(\phi(\mathfrak{G}(A,B))>\frac{1}{2}\). Setting \(\sigma:=(A,B,\phi)\), we see that \(\operatorname{val}^{\mathrm{co}}_{\delta(|z|),\mathrm{op}}(\mathfrak{G})>\frac{1}{2}\). Consequently, \(z\in L\), as desired. We note two things about Theorem 18. First, as is the case for \(\operatorname{MIP}^{\mathrm{co}}\), one can show that every language in \(\operatorname{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) is coRE (regardless of the truth of KEP), whence the theorem implies that every language in \(\operatorname{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) is actually decidable provided that KEP holds. Second, the class \(\operatorname{MIP}^{\mathrm{co}}_{\delta,\mathrm{op}}\) differs from the class \(\operatorname{MIP}^{\mathrm{co}}_{\delta}\) introduced by Coudron and Slofstra in [5]; in particular, every language in their class is decidable without any KEP assumption.
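The enumeration argument in the proof of Theorem 18 can be summarized by the following Python-style sketch; the callables `game_for`, `enumerate_pairs`, `almost_commuting`, and `approx_norm` are hypothetical stand-ins for the effective operations that the computable presentation \(C^{\#}\) is assumed to provide, and are not specified here.

```python
from itertools import count

def semidecide_membership(z, game_for, enumerate_pairs, almost_commuting, approx_norm):
    """Sketch of the Theorem 18 procedure: halts (returning True) once it finds a
    delta(|z|)-op-almost commuting pair of G-measurements, built from rational
    points of C#, whose game operator has norm (approximately) exceeding 1/2."""
    G = game_for(z)                        # the nonlocal game G_z
    for m in count():                      # dovetail over all rational candidate pairs
        A, B = enumerate_pairs(G, m)       # m-th candidate pair of G-measurements in C
        if not almost_commuting(A, B, G):
            continue
        if approx_norm(G, A, B, error=0.25) > 0.5:
            return True                    # certified: z is in L
    # if z is not in L, the loop never returns -- membership is only semi-decided
```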
2307.04287
Generalizing Graph ODE for Learning Complex System Dynamics across Environments
Learning multi-agent system dynamics has been extensively studied for various real-world applications, such as molecular dynamics in biology. Most of the existing models are built to learn single system dynamics from observed historical data and predict the future trajectory. In practice, however, we might observe multiple systems that are generated across different environments, which differ in latent exogenous factors such as temperature and gravity. One simple solution is to learn multiple environment-specific models, but it fails to exploit the potential commonalities among the dynamics across environments and offers poor prediction results where per-environment data is sparse or limited. Here, we present GG-ODE (Generalized Graph Ordinary Differential Equations), a machine learning framework for learning continuous multi-agent system dynamics across environments. Our model learns system dynamics using neural ordinary differential equations (ODE) parameterized by Graph Neural Networks (GNNs) to capture the continuous interaction among agents. We achieve the model generalization by assuming the dynamics across different environments are governed by common physics laws that can be captured via learning a shared ODE function. The distinct latent exogenous factors learned for each environment are incorporated into the ODE function to account for their differences. To improve model performance, we additionally design two regularization losses to (1) enforce the orthogonality between the learned initial states and exogenous factors via mutual information minimization; and (2) reduce the temporal variance of learned exogenous factors within the same system via contrastive learning. Experiments over various physical simulations show that our model can accurately predict system dynamics, especially in the long range, and can generalize well to new systems with few observations.
Zijie Huang, Yizhou Sun, Wei Wang
2023-07-10T00:29:25Z
http://arxiv.org/abs/2307.04287v1
# Generalizing Graph ODE for Learning Complex System Dynamics across Environments ###### Abstract. Learning multi-agent system dynamics has been extensively studied for various real-world applications, such as molecular dynamics in biology, multi-body system in physics, and particle dynamics in material science. Most of the existing models are built to learn single system dynamics, which learn the dynamics from observed historical data and predict the future trajectory. In practice, however, we might observe multiple systems that are generated across different environments, which differ in latent exogenous factors such as temperature and gravity. One simple solution is to learn multiple environment-specific models, but it fails to exploit the potential commonalities among the dynamics across environments and offers poor prediction results where per-environment data is sparse or limited. Here, we present GG-ODE (**G**eneralized **G**raph **O**rdinary **D**ifferential **E**quations), a machine learning framework for learning continuous multi-agent system dynamics across environments. Our model learns system dynamics using neural ordinary differential equations (ODE) parameterized by Graph Neural Networks (GNNs) to capture the continuous interaction among agents. We achieve the model generalization by assuming the dynamics across different environments are governed by common physics laws that can be captured via learning a shared ODE function. The distinct latent exogenous factors learned for each environment are incorporated into the ODE function to account for their differences. To improve model performance, we additionally design two regularization losses to (1) enforce the orthogonality between the learned initial states and exogenous factors via mutual information minimization; and (2) reduce the temporal variance of learned exogenous factors within the same system via contrastive learning. Experiments over various physical simulations show that our model can accurately predict system dynamics, especially in the long range, and can generalize well to new systems with few observations. Graph Neural Networks; Neural ODE; Dynamical Systems; Representation Learning + Footnote †: ccs: Information systems — Physical data models; Computing methodologies — Spatial and physical reasoning.
## 1. Introduction Recently, researchers have proposed to model continuous multi-agent system dynamics with graph-based neural ODE models learned in a data-driven way (Kang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). These Graph-ODE methods have demonstrated the power of capturing long-range dynamics, and are capable of learning from irregularly-sampled partial observations (Kang et al., 2017). They usually assume all the data are generated from one single system, and the goal is to learn the system dynamics from historical trajectories to predict the future. In practice, however, we might observe data that are generated from multiple systems, which can differ in their environments. For example, we may observe particle trajectories from systems with different temperatures, which we call exogenous factors.
These exogenous factors can span a wide range of settings such as particle mass, gravity, and temperature (Beng et al., 2016; Wang et al., 2019; Wang et al., 2019) across environments. One simple solution is to learn multiple environment-specific models, but it can fail to exploit the potential commonalities across environments and make accurate predictions for environments with sparse or zero observations. In many useful contexts, the dynamics in multiple environments share some similarities, yet remain distinct, as reflected by the (substantial) differences in the observed trajectories. For example, consider the movements of water particles within multiple containers of varying shapes: the trajectories are driven by both the shared pair-wise physical interaction among particles (i.e. fluid dynamics) and the different shapes of the containers, where collisions can happen when particles hit the boundaries. Also, the computational cost of training multiple environment-specific models would be huge. More challengingly, the exogenous factors within each environment can be latent: we may only know that the water trajectories come from different containers, without knowing the exact shape of each container. Therefore, how to learn a single efficient model that can generalize across environments by considering both their commonalities and the distinct effect of per-environment latent exogenous factors remains unsolved. Such a model, if developed, may help us predict dynamics for systems under new environments with very few observed trajectories. Inspired by these observations, in this paper, we propose Generalized Graph ODE (GG-ODE), a general-purpose continuous neural simulator that learns multi-agent system dynamics across environments. Our key idea is to assume the dynamics across environments are governed by common physics laws that can be captured via learning a shared ODE function. We introduce into the ODE function a learnable vector representing the distinct latent exogenous factors for each environment to account for their differences. We learn the representations of the latent exogenous factors from systems' historical trajectories through an encoder by optimizing the prediction goal. In this way, different environments share the same ODE function framework while incorporating environment-specific factors in the ODE function to distinguish them. However, there are two main challenges in learning such latent exogenous factor representations. Firstly, since both the latent initial states for agents and the latent exogenous factors are learned from the historical trajectory data, how can we differentiate them to guarantee they have different semantic meanings? Secondly, when inferring from different time windows of the same trajectory, how can we guarantee the learned exogenous factors describe the same environment? Towards the first challenge, we enforce the orthogonality between the initial state encoder and the exogenous factor encoder via mutual information minimization. For the second challenge, we reduce the variance of learned exogenous factors within the same environment via a contrastive learning loss. We train our model in a multi-task learning paradigm where we mix the training data from multiple systems with different environments. In this way, the model is expected to adapt quickly to unseen systems given only a few data points.
We conduct extensive experiments over a wide range of physical systems, which show that our GG-ODE is able to accurately predict system dynamics, especially in the long range. The main contributions of this paper are summarized as follows: * We investigate the problem of learning continuous multi-agent system dynamics across environments. We propose a novel framework, known as GG-ODE, which describes the dynamics for each system with a shared ODE function and an environment-specific vector for the latent exogenous factors to capture the commonalities and discrepancies across environments respectively. * We design two regularization losses to guide the learning process of the latent exogenous factors, which is crucial for making precise predictions in the future. * Extensive experiments verify the effectiveness of GG-ODE in accurately predicting system dynamics, especially in long-range prediction tasks. GG-ODE also generalizes well to unseen or low-resource systems that have very few training samples. ## 2. Problem Definition We aim to build a neural simulator to learn continuous multi-agent system dynamics automatically from data that can be generalized across environments. Throughout this paper, we use boldface uppercase letters to denote matrices or vectors, and regular lowercase letters to represent the values of variables. We consider a multi-agent dynamical system of \(N\) interacting agents as an evolving interaction graph \(\mathbf{G}^{t}=\{\mathcal{V},\mathcal{E}^{t}\}\), where nodes are agents and edges are interactions between agents that can change over time. For each dynamical system, we denote \(e\in E\) as the environment from which the data is acquired. We denote \(\mathbf{X}^{t,e}\in\mathcal{X}\) as the feature matrix for all \(N\) agents and \(\mathbf{x}^{t,e}_{i}\) as the feature vector of agent \(i\) at time \(t\) under environment \(e\). The edges between agents are assigned if two agents are within a connectivity radius \(R\) based on their current locations \(\mathbf{p}^{t,e}_{i}\), which is part of the node feature vector, i.e. \(\mathbf{p}^{t,e}_{i}\in\mathbf{x}^{t,e}_{i}\). They reflect the local interactions of agents and the radius is kept constant over time (Wang et al., 2019). Our model input consists of the trajectories of \(N\) agents over \(K\) timestamps \(\mathbf{X}^{t_{1:K},e}=\{\mathbf{X}^{t_{1},e},\mathbf{X}^{t_{2},e},\ldots,\mathbf{X}^{t_{K},e}\}\), where the timestamps \(t_{1},t_{2}\cdots t_{K}\) can have non-uniform intervals and be of any continuous values. Our goal is to learn a generalized simulator \(s_{\theta}:\mathbf{X}^{t_{1:K},e}\to\mathbf{Y}^{t_{K+1:T},e}\) that predicts node dynamics in the future for any environment \(e\). Here \(\mathbf{Y}^{t,e}\in\mathcal{Y}\) represents the targeted node dynamic information at time \(t\), and can be a subset of the input features. We use \(\mathbf{y}^{t,e}_{i}\) to denote the targeted node dynamic vector of agent \(i\) at time \(t\) under environment \(e\). ## 3. Preliminaries and Related Work ### Dynamical System Simulations with Graph Neural Networks (GNNs) Graph Neural Networks (GNNs) are a class of neural networks that operate on graph-structured data by passing local messages (Kipf and Welling, 2015; Kipf and Welling, 2016; Kipf and Welling, 2017; Kipf and Welling, 2018; Kipf and Welling, 2019).
They have been extensively employed in various applications such as node classification (Kipf and Welling, 2019; Kipf and Welling, 2019), link prediction (Kipf and Welling, 2019; Kipf and Welling, 2019), and recommendation systems (Kipf and Welling, 2019; Kipf and Welling, 2019; Kipf and Welling, 2019). By viewing each agent as a node and interactions among agents as edges, GNNs have been shown to be effective at approximating pair-wise node interactions and have achieved accurate predictions for multi-agent dynamical systems (Kipf and Welling, 2019; Kipf and Welling, 2019; Kipf and Welling, 2019). The majority of existing studies propose discrete GNN-based simulators, which take the node features at time \(t\) as input to predict the node features at time \(t+1\). To further capture the long-term temporal dependency for predicting future trajectories, some work utilizes recurrent neural networks such as RNNs, LSTMs, or self-attention mechanisms to make predictions at time \(t+1\) based on the historical trajectory sequence within a time window (Kipf and Welling, 2019; Kipf and Welling, 2019; Kipf and Welling, 2019). However, they all restrict themselves to learning a one-step state transition function. Therefore, when successively applying these one-step simulators to their previous predictions in order to generate the rollout trajectories, error accumulates and impairs the prediction accuracy, especially for long-range prediction. Also, when applying most discrete GNNs to learn over multiple systems under different dynamical laws (environments), they usually retrain the GNNs individually for each specific system environment (Kipf and Welling, 2019; Kipf and Welling, 2019), which yields a large computational cost. ### Ordinary Differential Equations (ODEs) for Multi-agent Dynamical Systems The dynamic nature of a multi-agent system can be captured by a series of nonlinear first-order ordinary differential equations (ODEs), which describe the co-evolution of states for a set of \(N\) dependent variables (agents) over continuous time \(t\in\mathbb{R}\) as (Bergman et al., 2017; Kipf and Welling, 2019): \(\dot{\mathbf{z}}_{i}^{t}:=\frac{d\mathbf{z}_{i}^{t}}{dt}=g\left(\mathbf{z}_{1}^{t},\mathbf{z}_{2}^{t}\cdots\mathbf{z}_{N}^{t}\right).\) Here \(\mathbf{z}_{i}^{t}\in\mathbb{R}^{d}\) denotes the state variable for agent \(i\) at timestamp \(t\) and \(g\) denotes the ODE function that drives the system forward. Given the initial states \(\mathbf{z}_{1}^{0},\cdots,\mathbf{z}_{N}^{0}\) for all agents and the ODE function \(g\), any black-box numerical ODE solver such as Runge-Kutta (Kutta, 1999) can solve the ODE initial-value problem (IVP), of which the solution \(\mathbf{z}_{i}^{T}\) can be evaluated at any desired time as shown in Eqn 1. \[\mathbf{z}_{i}^{T}=\mathbf{z}_{i}^{0}+\int_{t=0}^{T}g\left(\mathbf{z}_{1}^{t},\mathbf{z}_{2}^{t}\cdots\mathbf{z}_{N}^{t}\right)dt \tag{1}\] Traditionally, the ODE function \(g\) is usually hand-crafted based on some domain knowledge, such as in robot motion control (Kipf and Welling, 2019) and fluid dynamics (Kipf and Welling, 2019), which is hard to specify without deep knowledge of the underlying principles. Even if the exact ODE functions are given, they are usually hard to scale as they require complicated numerical integration (Kipf and Welling, 2019; Kipf and Welling, 2019). Some recent studies (Kipf and Welling, 2019; Kipf and Welling, 2019; Kipf and Welling, 2019) propose to parameterize it with a neural network and learn it in a data-driven way.
They combine the expressive power of neural networks with the principled modeling of ODEs for dynamical systems, which has achieved promising results in various applications (Kipf and Welling, 2019; Kipf and Welling, 2019; Kipf and Welling, 2019). ### GraphODE for Dynamical Systems To model the complex interplay among agents in a dynamical system, researchers have recently proposed to combine ODEs with GNNs, which has been shown to achieve superior performance in long-range predictions (Kipf and Welling, 2019; Kipf and Welling, 2019; Kipf and Welling, 2019). In (Kipf and Welling, 2019), an encoder-processor-decoder architecture is proposed, where an encoder first computes the latent initial states for all agents individually based on their first observations. Then an ODE function parameterized by a GNN predicts the latent trajectories starting from the learned initial states. Finally, a decoder extracts the predicted dynamic features based on a decoding function that takes the predicted latent states as input. Later on, a Graph-ODE framework was proposed (Kipf and Welling, 2019; Kipf and Welling, 2019) which follows the structure of the variational autoencoder (Kipf and Welling, 2019). They assume an approximated posterior distribution over the latent initial state for each agent, which is learned based on the whole historical trajectories instead of a single point as in (Kipf and Welling, 2019). The encoder computes the approximated posterior distributions for all agents simultaneously, considering their mutual influence, and then samples the initial states from them. Compared with (Kipf and Welling, 2019), they are able to achieve better prediction performance, especially in the long range, and are also capable of handling the dynamic evolution of graph structures (Kipf and Welling, 2019), which are assumed to be static in (Kipf and Welling, 2019). We follow a similar framework to this line but aim at generalizing GraphODE to model multiple systems across environments. ## 4. Method In this section, we present Generalized Graph ODE (GG-ODE) for learning complex system dynamics across environments. As depicted in Figure 1, GG-ODE consists of four main components that are trained jointly: (1) an initial state encoder for inferring the latent initial states for all agents simultaneously; (2) an environment encoder which learns the latent representations for exogenous factors; (3) a generative model defined by a GNN-based ODE function that is shared across environments for modeling the continuous interaction among agents in the latent space, where the distinct latent exogenous factors learned for each environment are incorporated into the ODE function to account for their discrepancies; and (4) a decoder that extracts the predicted dynamic features based on a decoding function. We now introduce each component in detail. ### Initial State Encoder Given the observed trajectories \(X^{t_{1:K},e}\), the initial state encoder computes a posterior distribution of the latent initial state \(q_{\phi}\left(\mathbf{z}_{i}^{0,e}\mid X^{t_{1:K},e}\right)\) for each agent, from which \(\mathbf{z}_{i}^{0,e}\) is sampled. The latent initial state \(\mathbf{z}_{i}^{0,e}\) for each agent determines the starting point for the predicted trajectory.
We assume the prior distribution \(p(\mathbf{z}_{i}^{0,e})\) is a standard normal distribution, and use a Kullback-Leibler divergence term in the loss function to regularize the learned distributions, which distinguishes the VAE from other autoencoder frameworks (Kipf and Welling, 2019; Kipf and Welling, 2019). In multi-agent dynamical systems, agents are highly coupled and influence each other. Instead of learning such distributions separately for each agent, such as using an RNN (Kipf and Welling, 2019) to encode the temporal pattern of each individual trajectory, we compute the posterior distributions for all agents simultaneously (similar to (Kipf and Welling, 2019)). Specifically, we fuse all trajectories as a whole into a temporal graph to consider both the temporal patterns of individual agents and the mutual interaction among them, where each node is an observation of an agent at a specific timestamp. Two types of edges are constructed: (1) spatial edges, which connect observations of interacting agents at the same timestamp and are added if the Euclidean distance between the agents' positions \(r^{t,e}_{ij}=||\mathbf{p}^{t,e}_{i}-\mathbf{p}^{t,e}_{j}||_{2}\) is within a (small) connectivity radius \(R\); and (2) temporal edges that preserve the autoregressive nature of each trajectory, defined between two consecutive observations of the same agent. Note that spatial edges are bidirectional while temporal edges are directional to preserve the autoregressive nature of each trajectory, as shown in Figure 1. Based on the constructed temporal graph, we learn the latent initial states for all agents through a two-step procedure: (1) dynamic node representation learning, which learns the representation \(\mathbf{h}^{t,e}_{i}\) for each observation node whose feature vector is \(\mathbf{x}^{t,e}_{i}\); (2) sequence representation learning, which summarizes each observation sequence (trajectory) into a fixed-dimensional vector through a self-attention mechanism. #### 4.1.1. Dynamic Node Representation Learning We first conduct dynamic node representation learning on the temporal graph through an attention-based spatial-temporal GNN defined as follows: \[\begin{split}\mathbf{h}^{l+1,(t,e)}_{j}=\mathbf{h}^{l,(t,e)}_{j}+\sigma\left(\sum_{i(t^{\prime},e)\in\mathcal{N}_{j(t,e)}}\alpha^{l,(t^{\prime},e)\to j(t,e)}_{i}\times\mathbf{W}_{o}\widehat{\mathbf{h}}^{l,(t^{\prime},e)}_{i}\right)\\ \alpha^{l,(t^{\prime},e)\to j(t,e)}_{i}=\left(\mathbf{W}_{k}\widehat{\mathbf{h}}^{l,(t^{\prime},e)}_{i}\right)^{T}\left(\mathbf{W}_{q}\mathbf{h}^{l,(t,e)}_{j}\right)\cdot\frac{1}{\sqrt{d}}\\ \widehat{\mathbf{h}}^{l,(t^{\prime},e)}_{i}=\mathbf{h}^{l,(t^{\prime},e)}_{i}+\mathbf{TE}(t^{\prime}-t)\end{split} \tag{2}\] \[\mathbf{TE}(\Delta t)_{2i}=\sin\left(\frac{\Delta t}{10000^{2i/d}}\right),\ \mathbf{TE}(\Delta t)_{2i+1}=\cos\left(\frac{\Delta t}{10000^{2i/d}}\right)\] where \(\sigma(\cdot)\) is a non-linear activation function and \(d\) is the dimension of the node embeddings. The node representation is computed as a weighted summation over its neighbors plus a residual connection, where the attention score is a transformer-based (Wang et al., 2017) dot-product of node representations computed with the value, key, and query projection matrices \(\mathbf{W}_{o},\mathbf{W}_{k},\mathbf{W}_{q}\). The learned attention scores are normalized via softmax across all neighbors.
Here \(\mathbf{h}^{l,(t,e)}_{j}\) is the representation of agent \(j\) at time \(t\) in the \(l\)-th layer. \(\mathbf{h}^{l,(t^{\prime},e)}_{i}\) is the general representation of a neighbor which is connected either by a temporal edge (where \(t^{\prime}<t\) and \(i=j\)) or a spatial edge (where \(t=t^{\prime}\) and \(i\neq j\)) to the observation \(\mathbf{h}^{l,(t,e)}_{j}\). We add a temporal encoding (Wang et al., 2017; Wang et al., 2017) to each neighborhood node representation in order to distinguish messages delivered via spatial and temporal edges. Finally, we stack \(L\) layers to get the final representation for each observation node as \(\mathbf{h}^{t,e}_{i}=\mathbf{h}^{L,(t,e)}_{i}\). #### 4.1.2. Sequence Representation Learning We then employ a self-attention mechanism to generate the sequence representation \(\mathbf{m}^{e}_{i}\) for each agent, which is used to compute the mean \(\mathbf{\mu}^{0,e}_{i}\) and variance \(\mathbf{\sigma}^{0,e}_{i}\) of the approximated posterior distribution of the agent's initial state. Compared with recurrent models such as RNNs and LSTMs (Wang et al., 2017), it offers better parallelization for accelerating training and, at the same time, alleviates the vanishing/exploding gradient problem brought by long sequences (Wang et al., 2017). Figure 1. The overall framework of GG-ODE consists of four modules. First, an initial state encoder computes the latent initial states for all agents simultaneously by constructing a temporal graph from the input trajectories. Additionally, an environment encoder computes the latent representations for exogenous factors that are distinct for each environment. Then, the generative model defined by a GNN-based ODE function calls the solver to output the predicted latent states for agents in the future, where the learned exogenous factors are incorporated into the ODE function. Finally, a decoder generates the predicted dynamics for each agent based on the decoding likelihood determined by the latent states. Two regularization terms are added to preserve the orthogonality of the two encoders and the time-invariant property of the environment encoder. We follow (Kirshner, 2017) and compute the sequence representation \(\mathbf{m}_{i}^{e}\) as a weighted sum of observations for agent \(i\): \[\mathbf{m}_{i}^{e}=\frac{1}{K}\sum_{t}\sigma\left((\mathbf{a}_{i}^{e})^{T}\widehat{\mathbf{h}}_{i}^{t,e}\right)\widehat{\mathbf{h}}_{i}^{t,e},\ \mathbf{a}_{i}^{e}=\tanh\left(\left(\frac{1}{K}\sum_{t}\widehat{\mathbf{h}}_{i}^{t,e}\right)\mathbf{W}_{a}\right), \tag{3}\] where \(\mathbf{a}_{i}^{e}\) is the average of the observation representations with a nonlinear transformation \(\mathbf{W}_{a}\) and \(\widehat{\mathbf{h}}_{i}^{t,e}=\mathbf{h}_{i}^{t,e}+\text{TE}(t)\). \(K\) is the number of observations for each trajectory. Then the initial state is drawn from the approximated posterior distribution as: \[q_{\phi}\left(\mathbf{z}_{i}^{0,e}\mid X^{t_{1:K},e}\right)=\mathcal{N}\left(\mathbf{\mu}_{i}^{0,e},\mathbf{\sigma}_{i}^{0,e}\right),\ \mathbf{\mu}_{i}^{0,e},\ \mathbf{\sigma}_{i}^{0,e}=f_{\text{trans}}\left(\mathbf{m}_{i}^{e}\right)\] \[\mathbf{z}_{i}^{0,e}\sim p\left(\mathbf{z}_{i}^{0,e}\right)\approx q_{\phi}\left(\mathbf{z}_{i}^{0,e}\mid X^{t_{1:K},e}\right) \tag{4}\] where \(f_{\text{trans}}\) is a simple Multilayer Perceptron (MLP) whose output vector is equally split into two halves to represent the mean and variance respectively.
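For concreteness, the following PyTorch sketch (ours, not from the paper) implements an attention pooling in the spirit of Eqn (3) and the reparameterized sampling of the initial state in Eqn (4); the choice of sigmoid for \(\sigma\), the tensor shapes, and the module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InitialStatePooling(nn.Module):
    """Pools per-observation embeddings h_hat (Eqn 3) and samples z_i^0 (Eqn 4)."""
    def __init__(self, d, d_latent):
        super().__init__()
        self.W_a = nn.Linear(d, d, bias=False)
        self.f_trans = nn.Linear(d, 2 * d_latent)   # outputs [mu, log sigma^2]

    def forward(self, h_hat):                        # h_hat: [N, K, d] temporally-encoded embeddings
        a = torch.tanh(self.W_a(h_hat.mean(dim=1)))                  # per-agent query a_i: [N, d]
        scores = torch.sigmoid((h_hat * a.unsqueeze(1)).sum(-1, keepdim=True))  # sigma(a_i^T h_hat): [N, K, 1]
        m = (scores * h_hat).mean(dim=1)                             # sequence representation m_i: [N, d]
        mu, logvar = self.f_trans(m).chunk(2, dim=-1)                # Eqn 4: mean and (log-)variance
        z0 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)     # reparameterized sample of z_i^0
        return z0, mu, logvar

# toy usage: 5 agents, 20 observations each, embedding dim 64, latent dim 16
z0, mu, logvar = InitialStatePooling(64, 16)(torch.randn(5, 20, 64))
```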
### Environment Encoder The dynamic nature of a multi-agent system can be largely affected by exogenous factors from its environment such as gravity, temperature, etc. These exogenous factors can span a wide range of settings and are sometimes latent and not observable. To make our model generalize across environments, we design an environment encoder to learn the effect of the exogenous factors automatically from data to account for the discrepancies across environments. Specifically, we use the environment encoder to learn the representations of exogenous factors from observed trajectories and then incorporate the learned vector into the ODE function, which is shared across environments and defines how the system evolves over time. In this way, we use a shared ODE function framework to capture the commonalities across environments while preserving the differences among them with the environment-specific latent representation, to improve model generalization performance. It also allows us to learn the exogenous factors of an unseen environment based on only its leading observations. We now introduce the environment encoder in detail. The exogenous factors influence all agents within a system. On the one hand, they influence the self-evolution of each individual agent. For example, temperature would affect the velocities of agents. On the other hand, they influence the pairwise interaction among agents. For example, temperature would also change the energy when two particles collide with each other. The environment encoder \(f_{\text{enc}}^{\text{env}}\) therefore learns the latent representation of exogenous factors \(\mathbf{u}^{e}\) by jointly considering the trajectories of all agents, i.e. \(f_{\text{enc}}^{\text{env}}:X^{t_{1:K},e}\to\mathbf{u}^{e}\). Specifically, we learn an environment-specific latent vector from the aforementioned temporal graph in Sec 4.1 that is constructed from the observed trajectories. The temporal graph contains both the information of each individual trajectory and the mutual interaction among agents through temporal and spatial edges. To summarize the whole temporal graph into a vector \(\mathbf{u}^{e}\), we attend over the sequence representation \(\mathbf{m}_{i}^{e}\) for each trajectory introduced in Sec 4.1 as: \[\mathbf{u}^{e}=\frac{1}{N}\sum_{i}\sigma\left((\mathbf{b}^{e})^{T}\mathbf{m}_{i}^{e}\right)\mathbf{m}_{i}^{e},\ \mathbf{b}^{e}=\tanh\left(\left(\frac{1}{N}\sum_{i}\mathbf{m}_{i}^{e}\right)\mathbf{W}_{b}\right), \tag{5}\] where \(\mathbf{W}_{b}\) is a transformation matrix and the attention weight is computed based on the average sequence representation with a nonlinear transformation, similarly to Eqn (3). Note that we use different parameters to compute the sequence representation \(\mathbf{m}_{i}^{e}\) as opposed to the initial state encoder. The reason is that the semantic meanings of the two sequence representations are different: one is for the latent initial states and the other is for the exogenous factors. #### 4.2.1. Time Invariance A desired property of the learned representation for exogenous factors \(\mathbf{u}^{e}\) is that it should be time-invariant with respect to the input trajectory time window. In other words, for the same environment, if we chunk the whole trajectories into several pieces, the inferred representations should be similar to each other, as they are describing the same environment.
To achieve this, we design a contrastive learning loss to guide the learning process of the exogenous factors. As shown in Figure 2, we force the learned exogenous factor representations to be similar if they are generated based on trajectories from the same environment (positive pairs), and to be apart from each other if they are from different environments (negative pairs). Specifically, we define the contrastive learning loss as follows: \[\mathcal{L}_{\text{contra}}=-\log\frac{\exp\left(\operatorname{sim}\left(f_{\text{enc}}^{\text{env}}\left(X^{t_{1}:t_{2},e}\right),f_{\text{enc}}^{\text{env}}\left(X^{t_{3}:t_{4},e}\right)\right)/\tau\right)}{\sum_{e^{\prime}\neq e}\exp\left(\operatorname{sim}\left(f_{\text{enc}}^{\text{env}}\left(X^{t_{1}:t_{2},e}\right),f_{\text{enc}}^{\text{env}}\left(X^{t_{3}:t_{4},e^{\prime}}\right)\right)/\tau\right)} \tag{6}\] where \(\tau\) is a temperature scalar and \(\operatorname{sim}(\cdot,\cdot)\) is the cosine similarity between two vectors. Note that the lengths of the observation sequences can vary. The detailed generation process for positive and negative pairs can be found in Appendix A.3.2. Figure 2. Temporal properties of the environment encoder. We use the contrastive learning loss to force the latent exogenous factors learned from different windows within the same environment to be close to each other, and those from different environments to be apart from each other. #### 4.2.2. Orthogonality GG-ODE features two encoders that take as input the observed trajectories \(X^{t_{1:K},e}\) for learning the latent initial states and the latent exogenous factors respectively. As they are designed for different purposes but are both learned from the same input, we disentangle the learned representations from them via a regularization loss defined via mutual information minimization. Mutual information measures the dependency between two random variables \(X,Z\) (Wang et al., 2017). Since we are not interested in the exact value of the mutual information, a lower bound derived from the Jensen-Shannon divergence (Jensen, 1958) can be formulated as \[I_{\text{JSD}}(X,Z)=E_{P_{XZ}}[-\operatorname{sp}(-M(x,z))]-E_{P_{X}P_{Z}}[\operatorname{sp}(M(x,z))], \tag{7}\] where \(P_{X}P_{Z}\) is the product of the marginal distributions and \(P_{XZ}\) is the joint distribution. \(\operatorname{sp}(w)=\log(1+e^{w})\) and \(M\) is a discriminator modeled by a neural network that computes a score measuring their mutual information. According to recent literature (Jensen, 1958; Wang et al., 2017; Wang et al., 2017), the sample pairs (positive pairs) \((x,z)\) drawn from the joint distribution \(P_{XZ}\) are different representations of the same data sample, and the sample pairs (negative pairs) drawn from \(P_{X}P_{Z}\) are different representations from different data samples. We therefore attempt to minimize the mutual information between the two encoders as follows \[\mathcal{L}_{\text{MI}}=\mathbb{E}_{e\in E}\left[-\operatorname{sp}(-\Psi(\mathbf{z}_{i}^{0,e},\mathbf{u}^{e}))\right]-\mathbb{E}_{e,e^{\prime}\in E,\,e^{\prime}\neq e}\left[\operatorname{sp}(\Psi(\mathbf{z}_{i}^{0,e},\mathbf{u}^{e^{\prime}}))\right] \tag{8}\] where \(\Psi\) is an MLP-based discriminator. Specifically, we force the latent initial states \(\mathbf{z}_{i}^{0,e}\) of all agents from environment \(e\) to be dissimilar to the learned exogenous factors \(\mathbf{u}^{e}\), and construct negative pairs by replacing the learned exogenous factors with those from another environment, \(\mathbf{u}^{e^{\prime}}\). The generation process for positive and negative pairs can be found in Appendix A.3.2.
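A minimal sketch (ours, with assumed tensor shapes) of the contrastive objective of Eqn (6): the embeddings of two time windows from the same environment form the positive pair, while embeddings from other environments serve as negatives.

```python
import torch
import torch.nn.functional as F

def contrastive_env_loss(u_anchor, u_positive, u_negatives, tau=0.1):
    """Eqn (6)-style loss for one environment.
    u_anchor, u_positive: [d] exogenous-factor embeddings from two time windows of the
    same environment; u_negatives: [M, d] embeddings from M other environments."""
    pos = F.cosine_similarity(u_anchor, u_positive, dim=0) / tau
    neg = F.cosine_similarity(u_anchor.unsqueeze(0), u_negatives, dim=1) / tau
    # -log( exp(pos) / sum_j exp(neg_j) ), with the sum taken over negatives only
    return -(pos - torch.logsumexp(neg, dim=0))

# toy usage: 32-dim embeddings, 8 negative environments
loss = contrastive_env_loss(torch.randn(32), torch.randn(32), torch.randn(8, 32))
```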
### ODE Generative Model and Decoder #### 4.3.1. ODE Generative Model After describing the initial state encoder and the environment encoder, we now define the ODE function that drives the system to move forward. The future trajectory of each agent can be determined by two important factors: the potential influence received from its neighbors in the interaction graph and the self-evolution of each agent. For example, in the n-body system, the position of each agent can be affected both by the force from its connected neighbors and its current velocity which can be inferred from its historical trajectories. Therefore, our ODE function consists of two parts: a GNN that captures the continuous interaction among agents and the self-evolution of the node itself. One issue here is how can we decide the neighbors for each agent in the ODE function as the interaction graph is evolving, the neighbors for each agent are dynamically changing based on their current positions, which are implicitly encoded in their latent state representations \(\mathbf{z}_{i}^{t,e},\mathbf{z}_{j}^{t,e}\). We propose to first decode the latent node representations \(\mathbf{z}_{i}^{t,e},\mathbf{z}_{j}^{t,e}\) with a decoding function \(f_{\text{dec}}\) to obtain their predicted positions \(\mathbf{p}_{i}^{t,e},\mathbf{p}_{j}^{t,e}\) at current timestamp. Then we determine their connectivity based on whether their Euclidean distance \(\mathbf{r}_{ij}^{t,e}=||\mathbf{p}_{i}^{t,e}-\mathbf{p}_{j}^{t,e}||_{2}\) is within the predefined radius \(R\). This can be computed efficiently by using a multi-dimensional index structure such as the \(k\)-\(d\) tree. The decoding function \(f_{\text{dec}}\) is the same one that we will use in the decoder. To incorporate the influence of exogenous factors, we further incorporate \(\mathbf{u}^{e}\) into the general ODE function to improve model generalization ability as: \[\frac{d\mathbf{z}_{i}^{t,e}}{dt}=g\left(\mathbf{z}_{1}^{t,e},\mathbf{z}_{2}^{ t,e}\cdots\mathbf{z}_{N}^{t,e}\right)=\sum_{j\in\mathcal{N}_{i}}f_{\text{GNN}}( \mathbf{z}_{i}^{t,e},\mathbf{\overline{z}}_{j}^{t,e})+f_{\text{self}}(\mathbf{\overline{ z}}_{i}^{t,e})\] \[\mathbf{\overline{z}}_{i}^{t,e}=f_{\text{env}}(\mathbf{z}_{i}^{t,e}||\mathbf{ u}^{e}) \tag{9}\] where \(||\) denotes concatenation and \(f_{\text{GNN}}\) can be any GNN that conducts message passing among agents. \(f_{\text{self}},f_{\text{env}}\) are implemented as two MLPs respectively. In this way, we learn the effect of latent exogenous factors from data without supervision where the latent representation \(\mathbf{u}^{e}\) is trained end-to-end by optimizing the prediction loss. #### 4.3.2. Decoder Given the ODE function \(g\) and agents' initial states \(\mathbf{z}_{i}^{0,e}\) for \(i=1,2\cdots N\), the latent trajectories for all agents are determined, which can be solved via any black-box ODE solver. Finally, a decoder generates the predicted dynamic features based on the decoding probability \(p(\mathbf{y}_{i}^{t,e}|\mathbf{z}_{i}^{t,e})\) computed from the decoding function \(f_{\text{dec}}\) as shown in Eqn 10. We implement \(f_{\text{dec}}\) as a simple two-layer MLP with nonlinear activation. It outputs the mean of the normal distribution \(p(\mathbf{y}_{i}^{t,e}|\mathbf{z}_{i}^{t,e})\), which we treat as the predicted value for each agent. 
\[\mathbf{z}_{i}^{t_{1},e}\cdots\mathbf{z}_{i}^{t_{T},e}=\text{ODESolve}(g,[\mathbf{z}_{1}^{0,e},\mathbf{z}_{2}^{0,e}\cdots\mathbf{z}_{N}^{0,e}],(t_{1}\cdots t_{T}))\] \[\mathbf{y}_{i}^{t,e}\sim p(\mathbf{y}_{i}^{t,e}|\mathbf{z}_{i}^{t,e})=f_{\text{dec}}(\mathbf{z}_{i}^{t,e}) \tag{10}\] ### Training We now introduce the overall training procedure of GG-ODE. For each training sample, we split it into two halves along the time axis, where we condition on the first half \([t_{1},t_{K}]\) in order to predict dynamics in the second half \([t_{K+1},t_{T}]\). Given the observed trajectories \(X^{t_{1:K},e}\), we first run the initial state encoder to compute the latent initial state \(\mathbf{z}_{i}^{0,e}\) for each agent, which is sampled from the approximated posterior distribution \(q_{\phi}\left(\mathbf{z}_{i}^{0,e}\mid X^{t_{1:K},e}\right)\). We then generate the latent representations of exogenous factors \(\mathbf{u}^{e}\) for the environment \(e\) via the environment encoder. Next, we run the ODE generative model that incorporates the latent exogenous factors to compute the latent states for all agents in the future. Finally, the decoder outputs the predicted dynamics for each agent. We jointly train the encoders, ODE generative model, and decoder in an end-to-end manner. The loss function consists of three parts: (1) the evidence lower bound (ELBO), which is the sum of the reconstruction loss for node trajectories and the KL divergence term that regularizes the inferred latent initial states for all agents. We use \(Z^{0,e}\) to denote the latent initial state matrix of all \(N\) agents. The standard VAE framework is trained to maximize the ELBO, so we take its negative as the ELBO loss; (2) the contrastive learning loss for preserving the time-invariance property of the learned exogenous factors; (3) the mutual information loss that disentangles the learned representations from the two encoders. \(\lambda_{1},\lambda_{2}\) are two hyperparameters for balancing the three terms. We summarize the whole procedure in Appendix A.4. \[\mathcal{L}=\mathcal{L}_{\text{ELBO}}+\lambda_{1}\mathcal{L}_{\text{contra}}+\lambda_{2}\mathcal{L}_{\text{MI}} \tag{11}\] \[\mathcal{L}_{\text{ELBO}}(\theta,\phi)=-\mathbb{E}_{Z^{0,e}\sim\prod_{i=1}^{N}q_{\phi}\left(\mathbf{z}_{i}^{0,e}\mid X^{t_{1:K},e}\right)}\left[\log p_{\theta}(Y^{t_{K+1:T},e})\right]+\text{KL}\left[\prod_{i=1}^{N}q_{\phi}(\mathbf{z}_{i}^{0,e}\mid X^{t_{1:K},e})\,\|\,p(Z^{0,e})\right] \tag{12}\] ## 5. Experiments ### Experiment Setup #### 5.1.1. Datasets We illustrate the performance of our model across two physical simulations that exhibit different system dynamics over time: (1) The Water dataset (Narayan et al., 2017), which describes the fluid dynamics of water within a container. Containers can have different shapes and numbers of ramps with random positions inside them, which we view as different environments. The dataset is simulated using the material point method (MPM), which is suitable for simulating the behavior of interacting, deformable materials such as solids, liquids, and gases 2. For each data sample, the number of particles can vary, but the trajectory length is kept fixed at 600. The input node features are 2-D positions of particles, and we calculate the velocities and accelerations as additional node features using finite differences of these positions.
The total number of data samples (trajectories) is 1200 and the number of environments is 68, where each environment can have multiple data samples with different particle initializations such as positions, velocities, and accelerations. (2) The Lennard-Jones potential dataset (Kolmogorov, 1954), which describes the soft repulsive and attractive interactions between simple atoms and molecules 3. We generate data samples with different temperatures, which could affect the potential energy preserved within the whole system thus affecting the dynamics. We view temperatures as different environments. The total number of data samples (trajectories) is 6500 and the number of environments is 65. Under each environment, we generate 100 trajectories with different initializations. The trajectory lengths are kept the same as 100. The number of particles is 1000 for all data samples. More details about datasets can be found in Appendix A.1. Footnote 2: [https://en.wikipedia.org/wiki/Material_point_method](https://en.wikipedia.org/wiki/Material_point_method) Footnote 3: [https://en.wikipedia.org/wiki/Lemnard-Jones_potential](https://en.wikipedia.org/wiki/Lemnard-Jones_potential) #### 5.1.2. Task Evaluation and Data Split We predict trajectory rollouts across varying lengths and use Mean Square Error (MSE) as the evaluation metric. **Task Evaluation.** The trajectory prediction task is conducted under two settings: (1) Transductive setting, where we evaluate the test sequences whose environments are seen during training; (2) Inductive setting, where we evaluate the test sequences whose environments are not observed during training. It helps to test the model's generalization ability to brand-new systems. **Data Split.** We train our model in a sequence-to-sequence setting where we split the trajectory of each training sample into two parts \([t_{1},t_{K}]\) and \([t_{K+1},t_{T}]\). We condition on the first part of observations to predict the second part. To conduct data split, we first randomly select 20% environments whose trajectories are all used to construct the testing set \(X_{\text{test}}^{\text{Induct}}\) in the inductive setting. For the remaining trajectories that cover the 80% environments, we randomly split them into three partitions: 80% for the training set \(X_{\text{train}}\), 10% for the validation set \(X_{\text{val}}\) and 10% for the testing set in the transductive setting \(X_{\text{test}}^{\text{trans}}\). In other words, we have two test sets for the inductive and transductive settings respectively, one training set and one validation set. To fully utilize the data points within each trajectory, we generate training and validation samples by splitting each trajectory into several chunks that can overlap with each other, using a sliding window. The sliding window has three hyperparameters: the observation length and prediction length for each sample, and the interval between two consecutive chunks (samples). Specifically, for the Water dataset, we set the observation length as 50 and the prediction length as 150. We obtain samples from each trajectory by using a sliding window of size 200 and setting the sliding interval as 50. For the Lennard-Jones potential dataset, we set the observation length as 20, the prediction length as 50, and the interval as 10. The procedure is summarized in Appendix A.1.1. 
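As an illustration of the sliding-window construction described above, a minimal sketch (with the Water-dataset settings as defaults) could look as follows; the function name and array layout are our own assumptions, not the authors' code.

```python
import numpy as np

def make_chunks(trajectory, obs_len=50, pred_len=150, interval=50):
    """Split one trajectory [T, N, D] into overlapping (observation, target) samples."""
    window = obs_len + pred_len                               # e.g. a 200-step sliding window for Water
    samples = []
    for start in range(0, len(trajectory) - window + 1, interval):
        chunk = trajectory[start:start + window]
        samples.append((chunk[:obs_len], chunk[obs_len:]))    # condition on the first part, predict the second
    return samples

# toy usage: a 600-step trajectory of 10 particles with 2-D positions
traj = np.random.randn(600, 10, 2)
samples = make_chunks(traj)
print(len(samples), samples[0][0].shape, samples[0][1].shape)  # 9 samples, (50, 10, 2), (150, 10, 2)
```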
During evaluations for both settings, we ask the model to roll out over the whole trajectories without further splitting, whose prediction lengths are larger than the ones during training. The observation lengths during testing are set as 20 for the Lennard-Jones potential dataset and 50 for the Water dataset across the two settings. ### Baselines We compare both discrete neural models as well as continuous neural models where they do not have special treatment for modeling the influence from different environments. For discrete ones we choose: NRI (Kolmogorov, 1954) which is a discrete GNN model that uses VAE to infer the interaction type among pairs of agents and is trained via one-step predictions; GNS (Narayan et al., 2017), a discrete GNN model that uses multiple rounds of message passing to predict every single step; LSTM (Narayan et al., 2017), a classic recurrent neural network (RNN) that learns the dynamics of each agent independently. For the continuous models, we compare with NDCN (Narayan et al., 2017) and Social ODE (Narayan et al., 2017), two ODE-based methods that follow the encoder-processor-decoder structure with GNN as the ODE function. The initial state for each agent is drawn from a single data point instead of a leading sequence. CG-ODE (K power when making long-term predictions. This may be due to the fact that GG-ODE is a continuous model trained in a sequence-to-sequence paradigm whereas discrete GNN methods are only trained to make a fixed-step prediction. Another continuous model NDCN only conditions a single data point to make predictions for the whole trajectory in the future, resulting in suboptimal performance. Finally, we can see that GG-ODE has a larger performance gain over existing methods in the inductive setting than in the transductive setting, which shows its generalization ability to fast adapt to other unseen systems with a few data points. Figure 3 visualizes the prediction results under the transductive setting for the Water dataset. #### 5.3.1. Ablation Studies To further analyze the rationality behind our model design, we conduct an ablation study by considering three model variants: (1) We remove the contrastive learning loss which forces the learned exogenous factors to satisfy the time invariance property, denoted as \(-w/o\mathcal{L}_{\text{contra}}\); (2) We remove the mutual information minimization loss which reduces the variance of the learned exogenous factors from the same environment, denoted as \(-w/o\mathcal{L}_{MI}\). 
(3) We share the parameters of the two encoders for computing the latent representation \(\mathbf{m}_{i}^{e}\) for each observation sequence in the temporal graph, denoted as shared encoders. As shown in Table 1, all three variants have inferior performance compared to GG-ODE, verifying the rationality of the three key designs. Notably, when making long-range predictions, removing \(\mathcal{L}_{MI}\) would cause more harm to the model than removing \(\mathcal{L}_{\text{contra}}\). This can be understood as follows: the latent initial states are more important for making short-term predictions, while the disentangled latent initial states and exogenous factors are both important for making long-range predictions.

Table 1. Mean Square Error (MSE) of rollout trajectories with varying prediction lengths, reported at 30%, 60% and 100% rollout. For the Lennard-Jones potential dataset, transductive MSE is in units of \(10^{-2}\) and inductive MSE in units of \(10^{-1}\); for the Water dataset, transductive MSE is in units of \(10^{-3}\) and inductive MSE in units of \(10^{-2}\). The transductive setting evaluates the testing sequences whose environments are seen during training. The inductive setting evaluates new systems with unseen environments during training. The best results are bold-faced.

| Method | LJ Trans. 30% | LJ Trans. 60% | LJ Trans. 100% | LJ Induct. 30% | LJ Induct. 60% | LJ Induct. 100% | Water Trans. 30% | Water Trans. 60% | Water Trans. 100% | Water Induct. 30% | Water Induct. 60% | Water Induct. 100% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LSTM | 6.73 | 20.69 | 31.88 | 1.64 | 8.82 | 18.01 | 4.87 | 23.09 | 30.44 | 1.01 | 6.72 | 14.79 |
| NRI | 5.83 | 17.99 | 28.18 | 1.33 | 4.34 | 13.97 | 3.87 | 19.64 | 26.34 | 0.83 | 3.84 | 10.59 |
| NDCN | 5.99 | 17.54 | 27.06 | 1.35 | 4.27 | 12.37 | 3.95 | 18.76 | 24.33 | 0.85 | 3.79 | 10.11 |
| CG-ODE | 5.43 | 17.01 | 26.01 | 1.32 | 4.25 | 12.03 | 3.41 | 18.13 | 23.62 | 0.80 | 3.64 | 9.91 |
| SocialODE | 5.62 | 17.23 | 26.89 | 1.34 | 4.26 | 12.44 | 3.68 | 18.42 | 23.77 | 0.84 | 3.70 | 10.01 |
| GNS | **5.03** | 16.28 | 25.44 | 1.28 | 4.23 | 11.88 | **3.17** | 17.88 | 23.14 | 0.76 | 3.45 | 9.78 |
| GG-ODE | 5.18 | **16.03** | **24.97** | **1.10** | **3.98** | **10.77** | 3.20 | **16.94** | **22.58** | **0.63** | **3.11** | **8.02** |
| -w/o \(\mathcal{L}_{\text{contra}}\) | 5.32 | 17.03 | 26.53 | 1.30 | 4.25 | 12.13 | 3.32 | 18.03 | 23.01 | 0.75 | 3.58 | 10.03 |
| -w/o \(\mathcal{L}_{MI}\) | 5.45 | 17.25 | 26.11 | 1.32 | 4.11 | 11.76 | 3.43 | 18.32 | 22.95 | 0.78 | 3.51 | 9.88 |
| shared encoders | 5.66 | 17.44 | 26.79 | 1.33 | 4.46 | 12.22 | 3.55 | 18.57 | 23.55 | 0.81 | 3.66 | 10.08 |

Figure 3. Visualization of the transductive prediction results for the Water dataset. Black lines are ramps within the container. The length of the observation sequence is set as 20. GNS makes less accurate predictions compared with GG-ODE.

#### 5.3.2. Hyperparameter Study We study the effect of \(\lambda_{1}/\lambda_{2}\), the hyperparameters balancing the two regularization terms that guide the learning of the two encoders, on predictions over different horizons. As illustrated in Figure 4, the optimal ratios for making 30%, 60%, and 100% rollout predictions are 2, 1, and 0.5 respectively, under both the transductive and inductive settings.
They indicate that the exogenous factors modeling plays a more important role in facilitating long-term predictions, which is consistent with the prediction errors illustrated in Table 1 when comparing \(-\)w/o\(\mathcal{L}_{MI}\) with \(-\)w/o\(\mathcal{L}_{\text{contra}}\). However, overly elevating \(\mathcal{L}_{MI}\) would also harm the model performance, as the time invariance property achieved by \(\mathcal{L}_{\text{contra}}\) is also important to guarantee the correctness of the learned latent initial states, which determines the starting point of the predicted trajectories in the future. #### 5.3.3. Sensitivity Analysis GG-ODE can take arbitrary observation lengths to make trajectory predictions, as opposed to existing baselines that only condition on observations with fixed lengths. It allows the model to fully utilize all the information in the past. We then study the effect of observation lengths on making predictions in different horizons. As shown in Figure 5, the optimal observation lengths for predicting the rollouts with 20, 40, and 50 steps are 20, 25, 35 in the inductive setting, and 15, 25, 30 in the transductive setting. When predicting long-range trajectories, our model typically requires a longer observation sequence to get more accurate results. Also, for making predictions at the same lengths, the inductive setting requires a longer observation length compared with the transductive setting. ### Case Study We conduct a case study to examine the learned representations of the latent exogenous factors on the Lennard-Jones potential dataset. We first randomly choose one data sample for each of the 65 temperatures and visualize the learned representations of exogenous factors. As shown in Figure 6 (a), the representations of higher temperatures are closer to each other on the right half of the figure, whereas the lower temperatures are mostly distributed on the left half. Among the 65 temperatures, 20% of them are not seen during training which we circled in black. We can see those unseen temperatures are also properly distributed, indicating the great generalization ability of our model. We next plot the representations for all data samples under temperatures 2.5 and 3.5 respectively as shown in Figure 6 (b). We can see that the learned representations are clustered within the two temperatures, indicating our contrastive learning loss is indeed beneficial to guide the learning process of exogenous factors. ## 6. Conclusion In this paper, we investigate the problem of learning the dynamics of continuous interacting systems across environments. We model system dynamics in a continuous fashion through graph neural ordinary differential equations. To achieve model generalization, we learn a shared ODE function that captures the commonalities of the dynamics among environments while design an environment encoder that learns environment-specific representations for Figure 4. Effect of \(\lambda_{1}/\lambda_{2}\) on the Lennard-Jones potential dataset. Best results are circled in red for each setting. Figure 5. Effect of observation length on the Lennard-Jones potential dataset. Figure 6. T-SNE visualization of the learned exogenous factors on the Lennard-Jones potential dataset. (a) We randomly pick one data sample per temperature, where temperatures tested in the inductive setting are circled in black. (b) Visualization of data samples from two temperatures. exogenous factors automatically from observed trajectories. 
To disentangle the representations from the initial state encoder and the environment encoder, we propose a regularization loss via mutual information minimization to guide the learning process. We additionally design a contrastive learning loss to reduce the variance of learned exogenous factors across time windows under the same environment. The proposed model is able to achieve accurate predictions for varying physical systems under different environments, especially for long-term predictions. There are some limitations, though. Our current model only learns one static environment-specific variable to achieve model generalization. However, the environment itself can change over time (e.g., the temperature). How to capture the dynamic influence of such evolving environments remains challenging. ###### Acknowledgements. This work was partially supported by NSF 1829071, 2031187, 2106859, 2119643, 2200274, 2211557, 1937599, 2303037, NASA, research awards from Amazon, Cisco, NEC, and DARPA \(\#\)HR00112290103, DARPA \(\#\)HR0011260656. We would like to thank Mathieu Bauchy, Han Liu and Abhijeet Gangan for their help with the dataset generation procedure and for valuable discussions throughout the project.
2303.01608
Discrete-time quantum walk dispersion control through long-range correlations
We investigate the evolution dynamics of inhomogeneous discrete-time one-dimensional quantum walks displaying long-range correlations in both space and time. The associated quantum coin operators are built to exhibit a random inhomogeneity distribution of long-range correlations embedded in the time evolution protocol through a fractional Brownian motion with spectrum following a power-law behavior, $S(k)\sim 1/k^{\nu}$. The power-law correlated disorder encoded in the phases of the quantum coin is shown to give rise to a wide variety of spreading patterns of the qubit states, from localized to subdiffusive, diffusive, and superdiffusive (including ballistic) behavior, depending on the relative strength of the parameters driving the correlation degree. Dispersion control is then possible in one-dimensional discrete-time quantum walks by suitably tuning the long-range correlation properties assigned to the inhomogeneous quantum coin operator.
A. R. C. Buarque, F. S. Passos, W. S. Dias, E. P. Raposo
2023-03-02T22:07:13Z
http://arxiv.org/abs/2303.01608v1
# Discrete-time quantum walk dispersion control through long-range correlations ###### Abstract We investigate the evolution dynamics of inhomogeneous discrete-time one-dimensional quantum walks displaying long-range correlations in both space and time. The associated quantum coin operators are built to exhibit a random inhomogeneity distribution of long-range correlations embedded in the time evolution protocol through a fractional Brownian motion with spectrum following a power-law behavior, \(S(k)\sim 1/k^{\nu}\). The power-law correlated disorder encoded in the phases of the quantum coin is shown to give rise to a wide variety of spreading patterns of the qubit states, from localized to subdiffusive, diffusive, and superdiffusive (including ballistic) behavior, depending on the relative strength of the parameters driving the correlation degree. Dispersion control is then possible in one-dimensional discrete-time quantum walks by suitably tunning the long-range correlation properties assigned to the inhomogeneous quantum coin operator. ## I Introduction Discrete-time quantum walks (DTQWs) [1; 2; 3; 4] have become a central topic of research in recent years in part due to the characteristic of faster propagation over time when compared to classical random walks. Originally introduced by Aharonov, Davidovich, and Zagury [5] as a generalization of the standard (classical) random walks, DTQWs readily drew considerable attention for exhibiting anomalous dynamics with dispersion increasing linearly (i.e., ballistically) with time, in contrast with the square-root time dynamics of their classical (Brownian) counterpart. Over the last three decades, DTQWs have found numerous applications [1; 2; 3; 4] in emerging quantum technologies and complex systems, across various fields, including physics, mathematics, engineering, computer science, and biology. In particular, in physics DTQWs have been utilized in simulations of complex physical systems and for investigating diverse relevant topics such as quantum computation [6; 7], quantum algorithms [8; 9], quantum entanglement [10], cybersecurity [11; 12], strongly correlated phenomena [13; 14], topological phenomena [15; 16], nonlinear dynamics [17; 18], interacting systems [17; 18], decoherence properties [19; 20], and complex disordered systems [21; 22; 23; 24]. In parallel with the various theoretical proposals, DTQWs have been experimentally demonstrated in a number of platforms, e.g., trapped ions [25], trapped atoms [26], photons in waveguides [27], light using optical devices [28], and even in nuclear magnetic resonance systems [29; 30] and superconducting qubits [31]. These experimental demonstrations highlight the versatility and wide-ranging applications of DTQWs. On the other hand, a major difficulty in practical implementations is to preserve quantum coherence for long periods of time, due to the high sensitivity of the quantum states and decoherence process induced by interaction with the environment [32; 33]. Recently, disordered DTQWs have garnered significant attention due to their rich dynamical properties [23; 24; 25; 26; 27; 28; 29; 34; 35; 36; 37; 38; 39]. We remark, for example, that disorder in DTQWs can lead to Anderson localization, characterized by exponentially localized eigenstates of a quantum particle [1; 2]. Conversely, studies have also shown that DTQWs with temporal inhomogeneity may exhibit diffusive dynamics similar to classical random walks [20]. 
In addition, Ahlbrecht and coauthors have investigated [34] the influence of random quantum coins with spatial and temporal dependence that act simultaneously on the dynamics of a DTQW, and reported the presence of diffusive dynamics in the long-time limit. More recently, Mendes and coauthors [40] have studied numerically the effects of static long-range correlations applied to the conditional displacement operator in a Hadamard quantum walk, characterizing a localized-delocalized state transition with onset of ballistic dispersion controlled by the parameter that adjusts the degree of correlation. In DTQWs the insertion of random inhomogeneities in the time evolution operator not only affects the transport properties, but also changes the coin-position entanglement features [35; 36; 21; 37]. In [35] the effects of different types of disorder on the generation of coin-position entanglement have been investigated, with static disorder shown not to be a very efficient mechanism to generate quantum entanglement. In contrast, for any initial condition, dynamic and fluctuating disorder leads to maximally entangled states asymptotically in time. Recently, we have shown [36] that static inhomogeneities with aperiodic correlations can also give rise to maximally entangled states. In general, understanding the time evolution of DTQWs in the presence of disorder sources such as noise, fluctuations, and random inhomogeneity is crucial for practical implementations, as it enables greater control over the dynamics in the presence of environmental interactions. In this work, we investigate the role of random inhomogeneities with long-range spatial and temporal correlations on the quantum walk dynamics. We study the time evolution of an initial qubit state following the DTQW protocol, with long-range (power-law) correlations displaying space-time dependence (static and dynamic inhomogeneities) encoded in the phases of the quantum coin. Interestingly, depending on the relative strength of the correlation parameters, quite diverse dynamics arise in the quantum system, ranging from localized to superdiffusive (including ballistic) behavior. Overall, our findings advance the understanding of the interplay between inhomogeneous correlations and the resulting dynamics of DTQWs, bringing not only theoretical insight but possibly practical relevance as well. The article is organized as follows. In Section II we introduce the model and describe the general formalism. Results and discussion are presented in Section III. Lastly, final remarks and conclusions are left to Section IV. ## II Model and formalism We consider a quantum random walker propagating in a one-dimensional (1D) lattice with \(N\) sites, discrete positions indexed by integers \(n\) (\(=1,2,...,N\)), and long-range spatial-temporal correlated inhomogeneities introduced as described below. The walker is a qubit with internal degree of freedom spanned by a two-level system that defines the basis of the so-called coin space [41]: \(\mathcal{H}^{\mathcal{C}}\equiv\{|\uparrow\rangle=(1,0)^{T},\,|\downarrow\rangle=(0,1)^{T}\}\), where \(T\) denotes transpose. The qubit state \(|\Psi\rangle\) belongs to a Hilbert space set by the tensor product of two spaces, \(\mathcal{H}=\mathcal{H}^{\mathcal{P}}\otimes\mathcal{H}^{\mathcal{C}}\), with \(\mathcal{H}^{\mathcal{P}}\) assigned to the position space consisting of states \(\{|n\rangle\}\).
The generic initial (\(t=0\)) state of the quantum walker is written as the superposition \[|\Psi_{0}\rangle\equiv|\Psi(t=0)\rangle=\sum_{n}\left(a_{n,t=0}|\uparrow \rangle+b_{n,t=0}|\downarrow\rangle\right)\otimes|n\rangle, \tag{1}\] with normalization \(\sum_{n}\left(|a_{n,t=0}|^{2}+|b_{n,t=0}|^{2}\right)=1\). The system evolution with discrete time \(t\) depends on both internal and spatial degrees of freedom, which are respectively driven by the unitary operators \(\hat{C}\) (quantum coin) and \(\hat{S}\) (conditional displacement operator). In a general description, one can express [41] a single-site quantum coin as an arbitrary unitary SU(2) matrix on the basis of the coin space, \[\hat{C}(q,\theta,\phi)=\left(\begin{array}{cc}\sqrt{q}&\sqrt{1-q}e^{i\theta }\\ \sqrt{1-q}e^{i\phi}&-\sqrt{q}e^{i(\theta+\phi)}\end{array}\right), \tag{2}\] where the angles \(0\leq\theta\leq 2\pi\) and \(0\leq\phi\leq 2\pi\) control the relative phase between the two coin states, while the parameter \(q\in[0,1]\) drives the spatial bias of the quantum coin. For example, for \(q=1/2\) and \(\theta=\phi=0\) one has a fair quantum coin that chooses both possible directions in the 1D lattice (left or right) with equal probability (Hadamard coin) [41]. In this work, we set \(q=1/2\) and consider the stochastic evolution of the random phases \(\theta\) and \(\phi\) as defined below. On the other hand, the conditional displacement operator, \[\hat{S}=\sum_{n}\left(|\uparrow\rangle\langle\uparrow|\otimes|n+1\rangle \langle n|+|\downarrow\rangle\langle\downarrow|\otimes|n-1\rangle\langle n|\right), \tag{3}\] does not alter the walker's internal state, but moves it from position \(n\) to \(n+1\) (\(n-1\)) if the internal state is \(|\uparrow\rangle\) (\(|\downarrow\rangle\)). The system evolution with discrete time \(t\) from the initial state \(|\Psi_{0}\rangle\), Eq. (1), is thus obtained through \(|\Psi(t)\rangle=(\hat{U})^{t}|\Psi_{0}\rangle\), so that the time evolution operator \(\hat{U}=\hat{S}\hat{C}\) describes the simultaneous action on the quantum walker of both quantum coin and conditional displacement operators. In order to introduce random inhomogeneity effects and spatial and temporal long-range correlations in the DTQW model, we first describe a general procedure to generate a set \(\{\tilde{V}_{n}\}\) of random variables with heterogeneous distributions and long-range correlations. We start by considering a large number \(M\gg 1\) of generic random variables \(\tilde{V}_{j}\) given by the sum, \[\tilde{V}_{j}=\sum_{k=1}^{M/2}\left[\left(\frac{2\pi}{M}\right)^{(1-\nu)} \frac{1}{k^{\nu}}\right]^{1/2}\cos\left(\frac{2\pi jk}{M}+\mu_{k}\right), \tag{4}\] where \(j=1,2,...,M\), with \(\mu_{k}\) denoting \(M/2\) independent random phases uniformly distributed in the interval \([0,2\pi)\). We note that \(\tilde{V}_{j}\) corresponds [40; 42] to the trace of a fractional Brownian motion, with the sequence of values \(\{\tilde{V}_{j}\}\) displaying asymptotic power-law spectrum in the form \(S(k)\sim 1/k^{\nu}\). The parameter \(\nu\geq 0\) controls the degree of correlation of the set of variables \(\tilde{V}_{j}\). Indeed, for \(\nu=0\) the values \(\{\tilde{V}_{j}\}\) are essentially uncorrelated, whereas for \(\nu>0\) they present long-range correlations even for large \(M\). Also, Eq. (4) yields a \(j\)-dependent probability distribution for each variable \(\tilde{V}_{j}\), therefore leading to a statistically inhomogeneous (or heterogeneous) set of random variables \(\tilde{V}_{j}\). 
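For concreteness, Eq. (4) can be sampled numerically as in the minimal sketch below; the function and variable names are ours, not the authors'. The bounded transform introduced next, Eq. (5), then maps these values onto the interval \([0,2\pi)\) used for the coin phases.

```python
import numpy as np

def fbm_trace(M, nu, seed=0):
    """Random sequence V~_j of Eq. (4): a fractional-Brownian-motion-like trace with spectrum S(k) ~ 1/k**nu."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, M + 1)[:, None]                  # j = 1, ..., M
    k = np.arange(1, M // 2 + 1)[None, :]             # k = 1, ..., M/2
    mu = rng.uniform(0.0, 2.0 * np.pi, size=M // 2)   # independent random phases mu_k
    amp = np.sqrt((2.0 * np.pi / M) ** (1.0 - nu) / k ** nu)
    return (amp * np.cos(2.0 * np.pi * j * k / M + mu)).sum(axis=1)

V_uncorr = fbm_trace(M=200, nu=0.0)   # essentially uncorrelated values
V_corr = fbm_trace(M=200, nu=2.0)     # long-range correlated, much smoother profile
```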
From the averages \(\overline{\cos\mu_{k}}=\overline{\sin\mu_{k}}=0\), we notice that the first and second moments read \(\widetilde{V}_{j}=0\) and \(\overline{\tilde{V}_{j}^{2}}=1\), independent of \(M\) and \(\nu\). By next considering \[V_{j}=\pi\left[\tanh(\tilde{V}_{j})+1\right], \tag{5}\] we constrain the normalized variables \(V_{j}\) to lie within the range \([0,2\pi)\). We remark that transformation (5) does not change the asymptotic power-law behavior of the correlations of the sequence [40]. Figure 1 shows profiles of single realizations of the generic variables \(V_{j}\) generated from Eqs. (4) and (5), for \(M=200\) and three representative values of the power-law exponent \(\nu\). In Fig. 1(a) we the uncorrelated case, with uniformly distributed random \(V_{j}\)-values. On the other hand, for increasingly positive values of \(\nu\), Figs. 1(b)-(c) indicate that the patterns of \(V_{j}\)-values gradually smooth out, resembling the profile of the trace of a fractional Brownian motion with power-law spectrum \(S(k)\sim 1/k^{\nu}\)[42]. At this point, the connection with the DTQW model can be built. We aim to introduce temporal and spatial inhomogeneities in the quantum coin operator \(\hat{C}\), Eq. (2). By fixing \(q=1/2\), two degrees of freedom are left, associated with the phases \(\theta\) and \(\phi\). One possible choice to yield time and space dependence in the coin operator is to set \(\theta\to\theta_{t}\) and \(\phi\to\phi_{n}\), respectively, so that \(\hat{C}(q,\theta,\phi)\to\hat{C}(1/2,\theta_{t},\phi_{n})\) in Eq. (2). Now, Eqs. (4) and (5) can be used to generate long-range correlated random sequences of both coin phases. For example, by assigning the general index \(j\) to the discrete time \(t\), we set \(V_{j}\to\theta_{t}\) along with \(M\to T\) (maximum time considered) and the power-law exponent \(\nu\to\alpha_{t}\). Conversely, by associating \(j\) with the lattice site \(n\), we take \(V_{j}\to\phi_{n}\) along with \(M\to N\) and \(\nu\to\beta_{s}\). It is also important to mention that each sequence \(\{\theta_{t},\phi_{n}\}\), for fixed \(\alpha_{t}\geq 0\) and \(\beta_{t}\geq 0\), is generated using statistically independent sets \(\{\mu_{k}\}\). This procedure ultimately leads to long-range correlations with temporal inhomogeneities in the \(\theta\)-phase (driven by \(\alpha_{t}\)) and spatial inhomogeneities in the \(\phi\)-phase (driven by \(\beta_{s}\)). The time evolution protocol is described as follows. The state of the qubit after \(t\) discrete time steps can be expressed as \[|\Psi(t)\rangle=(\hat{U})^{t}|\Psi_{0}\rangle=\sum_{n}\left(\psi_{t,n}^{\uparrow }|\uparrow\rangle+\psi_{t,n}^{\downarrow}|\downarrow\rangle\right), \tag{6}\] where \(\psi_{t,n}^{\uparrow}\) and \(\psi_{t,n}^{\downarrow}\) are the probability amplitudes of obtaining the internal states \(|\uparrow\rangle\) and \(|\downarrow\rangle\) at the position \(n\) in time \(t\). With the use of Eqs. (1)-(3), together with \(\hat{C}(q,\theta,\phi)\to\hat{C}(1/2,\theta_{t},\phi_{n})\), these amplitudes are given in terms of their values in the preceding time and neighbor sites, along with the coin phases \(\{\theta_{t},\phi_{n}\}\), by the following recurrence relations, \[\psi_{t,n}^{\uparrow} =\frac{1}{\sqrt{2}}\left(\psi_{t-1,n+1}^{\uparrow}+e^{i\theta_{ t}}\psi_{t-1,n+1}^{\downarrow}\right),\] \[\psi_{t,n}^{\downarrow} =\frac{1}{\sqrt{2}}\left(e^{i\phi_{n}}\psi_{t-1,n-1}^{\uparrow}- e^{i(\theta_{t}+\phi_{n})}\psi_{t-1,n-1}^{\downarrow}\right). 
\tag{7}\] These equations are then iterated numerically, giving rise to the quantum state (6) in the subsequent time, and so on. The dynamical analysis of the DTQW can be performed through the study of the propagation of the qubit wave packet. From the probability of finding the quantum walker at the site \(n\) in time \(t\), \(P_{n}(t)=|\langle\Psi(t)|\uparrow,n\rangle|^{2}+|\langle\Psi(t)|\downarrow,n \rangle|^{2}\), we obtain its mean position as a function of time, \(\overline{n}(t)=\sum_{n}nP_{n}(t)\), and the associated dispersion, \[\sigma(t)=\sqrt{\sum_{n}\left[n-\overline{n}(t)\right]^{2}P_{n}(t)}. \tag{8}\] In general terms, the asymptotic relation \(\sigma(t)\sim t^{H}\) between the dispersion and Hurst exponent \(H\) quantifies key aspects of the walker's dynamics [43]. For instance, normal diffusion of either classical (Brownian) or quantum random walkers is characterized by the Hurst exponent \(H=1/2\), with statistics governed by the central limit theorem (CLT). On the other hand, anomalous diffusion presents \(H\neq 1/2\), for example, subdiffusion (\(0<H<1/2\)) and superdiffusion (\(H>1/2\)) processes, including the ballistic (\(H=1\)) and even superballistic (\(H>1\)) cases. In particular, the interest in anomalous superdiffusive processes has grown considerably in the last decades [43], as superdiffusivity has been increasingly reported in many distinct contexts, usually related to the generalized CLT, Levy distributions, and extreme event statistics, from anomalous quantum transport and quantum work [44; 45; 46], to photons propagating in random lasers [47; 48; 49; 50], and efficient random searches [51; 52; 53], to name a few. Figure 1: Single realizations of the distribution of values of the generic random variables \(V_{j}\) as a function of \(j\) (\(=1,2,...,M\)), with \(M=200\), generated from Eqs. (4) and (5) for three representative values of the power-law exponent: (a) \(\nu=0\), (b) \(\nu=1.0\), and (c) \(\nu=2.0\). When \(\nu=0\), we obtain the uncorrelated case, with a uniformly random distribution of \(V_{j}\)-values. For increasing \(\nu>0\) the sequences \(\{V_{j}\}\) smooth out towards the profile of a fractional Brownian motion with power-law spectrum \(S(k)\sim 1/k^{\nu}\). Variables \(V_{j}\) are used in this work to generate distributions of long-range correlated inhomogeneous coin phases \(\theta_{t}\) and \(\phi_{n}\), by taking \(j\to t\) and \(j\to n\), respectively. Our results presented in the next section characterize diverse dynamical regimes displayed by long-range correlated inhomogeneous DTQWs. ## III Results and discussion Results were obtained following the numerical time evolution protocol (Eqs. (6) and (7)) applied to a quantum walker initially located at the nearly symmetric position \(n_{0}=N/2\), with the initial state \(\ket{\Psi_{0}}\) displaying equiprobable internal states \(\ket{\uparrow}\) and \(\ket{\downarrow}\), \[\ket{\Psi_{0}}=\frac{1}{\sqrt{2}}(\ket{\uparrow}+i\ket{\downarrow})\otimes \ket{n_{0}}. \tag{9}\] This means that the initial probability \(P_{n}(t=0)\) of finding the walker at sites \(n\neq N/2\) is null. For \(t>0\), the recursive application of the conditional displacement operator, Eq. (3), can yield a non-null \(P_{n}(t)\) for \(n\neq N/2\), as also seen from the probability amplitudes, Eq. (7). 
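A bare-bones numerical implementation of this evolution protocol, iterating the recurrence relations (7) from the initial state (9) and accumulating the dispersion of Eq. (8), could look as follows. It assumes the phase sequences \(\theta_{t}\) and \(\phi_{n}\) have already been drawn according to Eqs. (4)-(5), uses open boundaries (so \(T<N/2\) should be respected, as in the text), and is a sketch rather than the authors' code.

```python
import numpy as np

def dtqw_dispersion(theta, phi):
    """Iterate the recurrence relations (7) from the initial state (9); return sigma(t) of Eq. (8)."""
    theta, phi = np.asarray(theta), np.asarray(phi)
    T, N = len(theta), len(phi)
    up = np.zeros((T + 1, N), dtype=complex)          # psi_up amplitudes
    dn = np.zeros((T + 1, N), dtype=complex)          # psi_down amplitudes
    n0 = N // 2
    up[0, n0], dn[0, n0] = 1 / np.sqrt(2), 1j / np.sqrt(2)   # |Psi_0>, Eq. (9)
    n = np.arange(N)
    sigma = np.zeros(T + 1)
    for t in range(1, T + 1):
        up[t, :-1] = (up[t - 1, 1:] + np.exp(1j * theta[t - 1]) * dn[t - 1, 1:]) / np.sqrt(2)
        dn[t, 1:] = (np.exp(1j * phi[1:]) * up[t - 1, :-1]
                     - np.exp(1j * (theta[t - 1] + phi[1:])) * dn[t - 1, :-1]) / np.sqrt(2)
        P = np.abs(up[t]) ** 2 + np.abs(dn[t]) ** 2    # P_n(t)
        mean_n = (n * P).sum()
        sigma[t] = np.sqrt(((n - mean_n) ** 2 * P).sum())   # Eq. (8)
    return sigma

# quick demonstration with uncorrelated phases (alpha_t = beta_s = 0), T < N/2 to avoid edge effects
rng = np.random.default_rng(0)
sigma = dtqw_dispersion(theta=rng.uniform(0, 2 * np.pi, 400), phi=rng.uniform(0, 2 * np.pi, 1000))
print(sigma[-1])
```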
However, due to the stochastic character of the sequence \(\{\theta_{t},\phi_{n}\}\), the walker's dynamics is strongly dependent on the choice of parameters \(\{\alpha_{t},\beta_{s}\}\) that drive the degree of temporal and spatial correlations. Accordingly, the range of sites around the starting point with considerable probability of finding the walker in a given time is also greatly influenced by the dispersion properties (e.g., diffusive, superdiffusive, ballistic, etc.; see below). In most cases, we consider a maximum time \(t=T\) not too large, so that the boundaries of the 1D lattice at \(n=1\) and \(n=N\) are not accessed, thus avoiding edge effects. Also, for each choice \(\{\alpha_{t},\beta_{s}\}\) averages were taken over 5000 independent random realizations of the sequence \(\{\theta_{t},\phi_{n}\}\). We start with the time evolution of the probability \(P_{n}(t)\) in a 1D lattice with \(N=1000\) sites, for a maximum time \(T=N/2\). Figure 2 shows \(P_{n}(t)\) for four relevant choices of \(\{\alpha_{t},\beta_{s}\}\). In each case, the upper panel displays the evolution profile of the probability in the space-time plane, while the lower panel presents snapshots of \(P_{n}(t)\) at \(t=50,200,500\).

Figure 2: Dynamic evolution of the probability \(P_{n}(t)\) of finding the quantum walker at the site \(n\) in time \(t\), in a 1D lattice with \(N=1000\) sites and for a maximum time \(T=N/2\). The upper panel shows \(P_{n}(t)\) in the space-time plane, whereas the lower panel displays snapshots at \(t=50,200,500\). We consider four relevant choices of the parameters \(\{\alpha_{t},\beta_{s}\}\) that drive the temporal and spatial degrees of correlation and inhomogeneity in the phases \(\{\theta,\phi\}\) of the quantum coin: (a)-(b) \(\alpha_{t}=\beta_{s}=0\) (uncorrelated inhomogeneities in both space and time); (c)-(d) \(\alpha_{t}=4\) and \(\beta_{s}=0\) (strong long-range temporal correlation and uncorrelated spatial inhomogeneity); (e)-(f) \(\alpha_{t}=0\) and \(\beta_{s}=4\) (uncorrelated temporal inhomogeneity and strong spatial correlation); (g)-(h) \(\alpha_{t}=\beta_{s}=4\) (strong long-range correlated inhomogeneities in both space and time). Different dynamical scenarios emerge by adjusting the combination of these parameters (see text).

For uncorrelated inhomogeneities (\(\alpha_{t}=\beta_{s}=0\), Figs. 2(a)-2(b)), the coin phases \(\{\theta,\phi\}\) are randomly distributed in time and space, respectively. We observe in Fig. 2(a) that the dynamics of the wave packet profile indicates a low degree of mobility of the quantum walker, which spreads slowly over the 1D lattice. The probability snapshot shows that the qubit wave packet acquires a Gaussian profile (blue line curve in Fig. 2(b)) that resembles the propagation of a classical (Brownian) random walker (see also the dispersion results below). On the other hand, for \(\alpha_{t}=4\) (strong long-range temporal correlation) and \(\beta_{s}=0\) (uncorrelated spatial inhomogeneity), Figs. 2(c)-2(d) show that the probability profile of the qubit wave packet remains trapped around the initial position \(n_{0}=N/2=500\), in a picture consistent with a localized quantum state (see below). In contrast, by setting \(\alpha_{t}=0\) (uncorrelated temporal inhomogeneity) and \(\beta_{s}=4\) (strong spatial correlation) in Figs. 2(e)-2(f), a probability function with Gaussian profile is recovered, including a spread pattern in Fig. 2(e) similar to that observed in the fully uncorrelated case, Fig. 2(a).
These findings suggest that the time-inhomogeneity aspects in the coin phases seem to prevail over the spatial fluctuations in what concerns the quantum walker's dynamics. At last, when both phases are tunned in the strong correlation regime, \(\alpha_{t}=\beta_{s}=4\), the above scenarios change drastically, as seen in Figs. 2(g)-2(h). The qubit wave packet spreads rather fast, with much stronger dispersive character. The probability profile exhibits two nearly symmetric peaks that decay monotonically with time, see Fig. 2(h), thus suggesting a delocalization of the quantum walker, with two most probable positions equidistant from the starting point. Figure 3 presents results of the normalized dispersion, \(\sigma(t)/N\), given in Eq. (8), for the same parameter choices of Fig. 2. In addition to the lattice with \(N=1000\) sites, we also show data for \(N=2000,4000,8000\), with a larger maximum time, \(T=5N\). We notice that the results in Fig. 3 are fully consistent with those of Fig. 2. For example, after an initial transient, the uncorrelated case (\(\alpha_{t}=\beta_{s}=0\)), shown in Fig. 3(a), displays for all \(N\) an asymptotic scaling relation \(\sigma(t)\sim t^{0.5}\), which is consistent with the Brownian-like Hurst exponent \(H=1/2\), typical of the normal dynamics of a classical random walker driven by a Gaussian probability function and CLT. A similar picture with \(H=1/2\) is also observed in Fig. 3(c) for the case with uncorrelated temporal inhomogeneity and strong long-range spatial correlation, \(\alpha_{t}=0\) and \(\beta_{s}=4\), in agreement with Figs. 2(e)-2(f). These results indicate that in the absence of temporal correlations, \(\alpha_{t}=0\), the qubit wave packet exhibits diffusive behavior, regardless of the value of \(\beta_{s}\). Interestingly, the localized scenario of Figs. 2(c)-2(d), with strong temporal correlation and uncorrelated spatial inhomogeneity, \(\alpha_{t}=4\) and \(\beta_{s}=0\), reveals an asymptotic saturation of the dispersion, \(\sigma\sim t^{0}\), for long times and all \(N\). The saturation value of \(\sigma\) depends on the size \(N\) of the 1D lattice. We thus notice that the disorder effects in the form of the joint action of strong long-range temporal correlation and lack of spatial correlation in the quantum coin cause the qubit states to display an Anderson-like localization behavior. In this case, the random scattering of the qubit wave packet, driven by the stochastic sequence \(\{\theta_{t},\phi_{n}\}\), promotes a significant spatial confining of the quantum walker around the initial position. On the other hand, the fast spread of the wave packet shown Figs. 2(g)-2(h) for long-range spatial and temporal correlations, \(\alpha_{t}=\beta_{s}=4\), yields a superdiffusive (ballistic) dynamic behavior, \(\sigma(t)\sim t^{1.0}\) for all \(N\), with the dispersion increasing linearly with time and Hurst exponent \(H=1\), up to times \(t\sim N/2\) that mark the onset of the reaching of the lattice boundaries (this is, in fact, the only result in which the walker hits the borders). In order to offer a complementary analysis, Fig. 4 and Fig. 5 display the dispersion \(\overline{\sigma}\) averaged over the last 100 time steps (i.e., from \(t=T-100\) to \(t=T\), with \(T=N/2\)), as a function of the lattice size \(N\), up to a much larger value, \(N=32000\). 
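The exponents quoted here (the Hurst exponent \(H\) in \(\sigma(t)\sim t^{H}\), and the size-scaling exponent discussed next) are typically extracted from a straight-line fit in log-log scale; the snippet below is a generic illustration of that procedure, not the authors' analysis code.

```python
import numpy as np

def scaling_exponent(x, y):
    """Least-squares slope of log(y) versus log(x), i.e. the exponent in y ~ x**exponent."""
    slope, _ = np.polyfit(np.log(x), np.log(y), 1)
    return slope

# toy check: a ballistic-like signal sigma(t) ~ t gives an exponent close to H = 1
t = np.arange(10, 1000, dtype=float)
sigma = 0.5 * t * (1.0 + 0.01 * np.random.default_rng(0).normal(size=t.size))
print(scaling_exponent(t, sigma))
```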
In a way similar to the Hurst exponent, we define the relation \(\overline{\sigma}\sim N^{\gamma}\), where the Figure 3: Normalized dispersion, \(\sigma(t)/N\), as a function of the normalized discrete time, \(t/N\), for the same parameter choices \(\{\alpha_{t},\beta_{s}\}\) of Fig. 2, lattice sizes \(N=1000,2000,4000,8000,\) and maximum time \(T=5N\). The asymptotic behavior \(\sigma(t)\sim t^{H}\), where \(H\) is the Hurst exponent, defines the localized (\(H\sim 0\)), diffusive (\(H=1/2\)), and superdiffusive (ballistic) (\(H=1\)) dynamics of the quantum walker, consistently with the \(P_{n}(t)\) results of Fig. 2. exponent \(\gamma\) quantifies the scaling with the system size \(N\) of the width of the qubit wave packet in the asymptotic long-time regime. We thus can see in more detail how the competition between long-range correlations in time and space can lead to a quite rich dynamics of the quantum walker, as progressively larger lattices are considered. Figure 4(a) shows the average dispersion \(\overline{\sigma}\) when the inhomogeneity is uncorrelated in time, \(\alpha_{t}=0\), for various degrees of spatial correlation. In this case, the quantum walker exhibits diffusive dynamics (\(\gamma=1/2\)) for all \(\beta_{s}\)-values, confirming the previous results of Fig. 2 and Fig. 3. On the other hand, intermediate long-range temporal correlations, \(\alpha_{t}=2.0\) in Fig. 4(b), induce a rich spectrum of dynamic regimes, from subdiffusive (\(0<\gamma<1/2\)) to ballistic (\(\gamma=1\)) as \(\beta_{s}\) increases. These regimes are also present for strong long-range temporal correlations, \(\alpha_{t}=4.0\) in Fig. 4(c), with the addition of a localized behavior (\(\gamma\sim 0\)) in the spatially uncorrelated case, \(\beta_{s}=0\). On the other hand, Fig. 5 is the counterpart of Fig. 4, but with the parameter \(\beta_{s}\) fixed at the values \(\beta_{s}=0,2.0,4.0\), whereas \(\alpha_{t}\) varies in the range \(\alpha_{t}\in[0,4.0]\). Essentially, the same dynamic regimes of Fig. 4 are also seen in Fig. 5, as the degree of temporal correlation is increased for fixed \(\beta_{s}\). The values of the scale exponent \(\gamma\) as a function of \(\{\alpha_{t},\beta_{s}\}\) are plotted in Fig. 6. Apart from small fluctuations, we note a roughly monotonic trend of \(\gamma\) to either Figure 4: Long-time average dispersion \(\overline{\sigma}\) as a function of the system size \(N\). Parameter \(\alpha_{t}\) that controls the temporal correlations is fixed at the values (a) \(\alpha_{t}=0\), (b) \(\alpha_{t}=2.0\), and (c) \(\alpha_{t}=4.0\), while parameter \(\beta_{s}\) driving the spatial correlations varies in the range \([0,4.0]\). The scaling behavior \(\overline{\sigma}\sim N^{\gamma}\) characterizes a variety of dynamical regimes, in agreement with Figs. 2-3. (a) In the uncorrelated temporal case, \(\alpha_{t}=0\), the dynamics is independent of the spatial correlation degree, exhibiting diffusive behavior (\(\gamma=1/2\)) for all \(\beta_{s}\). (b) For an intermediate degree of temporal correlation, \(\alpha_{t}=2.0\), the dynamics ranges from subdiffusive (\(0<\gamma<1/2\)) to superdiffusive (\(\gamma>1/2\)), and up to ballistic (\(\gamma=1.0\)). (c) For strong temporal correlations, \(\alpha_{t}=4.0\), all regimes can be accessed, from localized (\(\gamma\sim 0\)) to ballistic (\(\gamma=1.0\)), as \(\beta_{s}\) increases from \(\beta_{s}=0\) to \(\beta_{s}=4.0\). 
Figure 5: Long-time average dispersion \(\overline{\sigma}\) as a function of the system size \(N\), depicting the counterpart of Fig. 4, but with \(\beta_{s}\) fixed at the values (a) \(\beta_{s}=0\), (b) \(\beta_{s}=2.0\), and (c) \(\beta_{s}=4.0\), while \(\alpha_{t}\) varies in the range \([0,4.0]\). The scaling behavior \(\overline{\sigma}\sim N^{\gamma}\) characterizes diverse dynamic regimes identified by the \(\gamma\)-value, consistent with Figs. 2-4. increase, decrease, or remain constant, when \(\alpha_{t}\) or \(\beta_{s}\) are fixed. Instances of diffusive behavior (\(\gamma=1/2\)) are found for small \(\alpha_{t}\) irrespective of the \(\beta_{s}\)-value (see, e.g., the nearly horizontal lines for \(\alpha_{t}\lesssim 0.5\) in Fig. 6(b)). On the other hand, for \(\alpha_{t}\gtrsim 1\) the dynamic regime is strongly dependent on the spatial correlation degree, as seen in Fig. 6(a), displaying a crossover from subdiffusive behavior for \(\beta_{s}\lesssim 1.0\) to superdiffusive dynamics when \(\beta_{s}\gtrsim 1.0\). In particular, ballistic behavior (\(\gamma=1\)) sets in only when both sources of inhomogeneities exhibit strong enough long-range correlations, i.e., for \(\alpha_{t}\gtrsim 2.0\) and \(\beta_{s}\gtrsim 2.0\). Lastly, the localized regime (\(\gamma\sim 0\)) of the quantum walker's wave function occupies a smaller fraction of the parameter space, being restricted to the range \(\beta_{s}\lesssim 0.5\) with strong temporal correlations (\(\alpha_{t}=4.0\) in Fig. 6(b)). To end this section, we perform the complete mapping of the dynamics of inhomogeneous DTQWs with the coin phases displaying different degrees of long-range space-time correlations. Figure 7 presents a phase diagram in the parameter space \(\{\alpha_{t},\beta_{s}\}\), with the different labels and colors identifying the various dynamical regimes obtained from the exponent \(\gamma\): (L, gray) localized (large \(\alpha_{t}\) and small \(\beta_{s}\)); (SBD, orange) subdiffusive (intermediate to large \(\alpha_{t}\) and small \(\beta_{s}\)); (D, blue) diffusive (small \(\alpha_{t}\) and any \(\beta_{s}\), and intermediate to large \(\alpha_{t}\) and intermediate \(\beta_{s}\)); (SPD, green) superdiffusive (intermediate to large \(\alpha_{t}\) and intermediate \(\beta_{s}\), and intermediate \(\alpha_{t}\) and intermediate to large \(\beta_{s}\)); and (B, yellow) ballistic (large \(\alpha_{t}\) and \(\beta_{s}\)). ## IV Conclusions The importance of studying disorder effects on discrete-time quantum walks (DTQWs) can be hardly overstated. Actually, advances in the understanding of this issue can impact on practical implementations of associated qubit states, as it enables greater control over the system dynamics in the presence of environmental interactions. In this work, we have investigated the role of temporal and spatial random inhomogeneities in the phases of the quantum coin operator. We have considered time and space correlations in the stochastic sequences of coin phases, described by a fractional Brownian motion with power-law spectrum, \(S(k)\sim 1/k^{\nu}\), where the exponent \(\nu\) drives the long-range character of temporal and spatial correlations, \(\nu\to\alpha_{t}\) and \(\nu\to\beta_{s}\), respectively. 
Figure 7: Phase diagram with an overview of the dynamic behaviors of inhomogeneous DTQWs with the coin phases displaying different degrees of long-range space-time correlations, obtained from the exponent \(\gamma\): (L, gray) localized; (SBD, orange) subdiffusive; (D, blue) diffusive; (SPD, green) superdiffusive; and (B, yellow) ballistic. Figure 6: Exponent \(\gamma\) obtained from the scaling relation \(\overline{\sigma}\!\sim\!N^{\gamma}\) of the average dispersion with the system size, for \(N=2000,4000,8000,16000,32000\). The degree of temporal and spatial long-range correlations, set by \(\alpha_{t}\) and \(\beta_{s}\), respectively, drives the dynamics of the quantum walker in the localized (\(\gamma\sim 0\)), subdiffusive (\(0<\gamma<1/2\)), diffusive (\(\gamma=1/2\)), superdiffusive (\(\gamma>1/2\)), and ballistic (\(\gamma=1\)) regimes (see text). A suitable tunning of the degree of such correlations leads to several dynamic regimes in DTQWs. For instance, strong temporal correlations and weak spatial correlations give rise to an Anderson-like localized behavior of the quantum walker. Moreover, subdiffusive, diffusive, superdiffusive, and ballistic dynamics can be also found, e.g., with the latter arising in the presence of both spatial and temporal strong long-range correlations. We have presented a phase diagram with a mapping of these dynamical regimes in the \(\{\alpha_{t},\beta_{s}\}\) parameter space. DTQWs may serve as a versatile platform to approach diverse phenomena, from quantum computation to cybersecurity, to name a few. From these findings, we have shown that it is possible to control the degree of dynamic spreading of the qubit wave packet by properly adjusting the long-range correlation properties assigned to the inhomogeneous quantum coin operator. We hope our work can stimulate further theoretical and experimental research to advance the understanding of the interplay between disorder effects on inhomogeneous correlations and the resulting dynamics of DTQWs. ###### Acknowledgements. This work was partially supported by the Brazilian agencies CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico) and FACEPE (Fundacao de Amparo a Ciencia e Tecnologia do Estado de Pernambuco).
2310.01730
Analysis of Local Anisotropy Fluctuations in Compact Objects
Mathematical modeling within the framework of the general theory of relativity has been used to explain the behavior and structure of massive objects as neutron stars, quasars, black holes, pulsars and white dwarfs and requires finding the exact solutions of the Einstein-Maxwell system. In this paper we study the effects induced by fluctuations of local anisotropy in a new family of anisotropic solutions depending on a parameter, with a specific value that provides a radial pressure having the same functional dependence on the radial coordinate as the Schwarzschild solution. It is shown the effect the functional dependence on the radial coordinate has in the occurrence of cracking within the sphere when anisotropy fluctuations are allowed.
Manuel Malaver, Maria Esculpi
2023-10-03T01:44:59Z
http://arxiv.org/abs/2310.01730v1
# Analysis of Local Anisotropy Fluctuations in Compact Objects ###### Abstract Mathematical modeling within the framework of the general theory of relativity has been used to explain the behavior and structure of massive objects as neutron stars, quasars, black holes, pulsars and white dwarfs and requires finding the exact solutions of the Einstein-Maxwell system. In this paper we study the effects induced by fluctuations of local anisotropy in a new family of anisotropic solutions depending on a parameter \(\alpha\), whose value \(\alpha\)=2 provides a radial pressure having the same functional dependence on the radial coordinate as the Schwarzschild solution. It is shown the effect the functional dependence on the radial coordinate has in the occurrence of cracking within the sphere when anisotropy fluctuations are allowed. **Keywords:** fluctuations, local anisotropy, cracking, radial pressure, Schwarzschild solution ## 1 Introduction The study of the relation between compacts objects and the gravitational collapse is one of the most fundamental and important factors in astrophysics and has attracted much researchers and scientists due to formulation of the general theory of relativity. In the construction of the first theoretical models of relativistic stars, some works are important such as Schwarzschild [1], Tolman [2], Oppenheimer and Volkoff [13]. Schwarzschild [1] found exact solutions to the Einstein's Field Equations and Tolman [2] proposed a method in order to obtain explicit solutions of static spheres of fluid in terms of known analytical functions. Oppenheimer and Volkoff [3] have deployed Tolman's solutions in order to investigate about gravitational balance of neutron stars. It is noticed that Chandrasekhar's contributions [4] in modelling for production of white dwarfs under relativistic effects and research of Baade and Zwicky [5] establish the concept of neutron stars as relativistic star of very dense matter. Many researchers have used a great variety of mathematical techniques to try in order to obtain solutions of the Einstein-Maxwell field equations since it has been demonstrated by Bowers y Liang [6], Ruderman [7], Canuto [8], Komathiraj and Maharaj [9], Cosenza [10], Esculpi [11] and Malaver [12-22]. These investigations show that the system of Einstein-Maxwell equations plays an important role to describe ultracompacts objects. In the formulism of realistic model of super dense stars, it is also important to include the pressure anisotropy. Bowers and Liang [4] extensively discuss the effect of pressure anisotropy in general relativity. At a density of the order of \(10^{15}\,g/cm^{3}\) nuclear matter may be anisotropic when its interactions need to be treated relativistically [7]. In massive objects the radial pressure may differ from the tangential. In 1933, Lemaitre [23] established that in stellar models consisting of spherically symmetric distribution of matter the stress tensor may be locally anisotropic. From theoretical work [24-33] realistic stellar models it has been suggested that superdense matter may be anisotropic, at least in some density ranges. The existence of anisotropy within a star can be explained by the presence of a solid core, phase transitions, a type III super fluid [34], a pion condensation [35] or another physical phenomenon by the presence of an electrical field [36]. Bowers and Liang [4] generalized the equation of hydrostatic equilibrium for the case of local anisotropy. 
Bondi [31] have shown that for anisotropic fluids there exist a surface redshift bound, if either the strong or dominant energy condition are considered to hold within the star. Esculpi et al [11] have obtained a new family of anisotropic solutions with uniform energy density, where solutions depend on two parameters that can be adjusted to improve gravitational redshift. Also the assumption of local anisotropy has been used to study problems related to various relativistic compact objects [37-47]. In order to describe the behavior of an anisotropic fluid distribution when it exits the dynamic equilibrium, Herrera [48] and Di Prisco et al. [49,50] propose the concept of cracking, which implies the appearance of different radial forces within the system. We say that there are cracking whenever inward in the inner part of the sphere for all values of the radial coordinate. Otherwise, when the force is directed outward inside and changes direction in the outermost regions of the star, then there is an inversion [48-50]. Herrera [48] established that the appearance of a cracking is induced by the local anisotropy of a fluid distribution, whereas in the case of a perfect fluid outside equilibrium, the configuration tends to expand or collapse. Chan et al [51] studied the role of local anisotropy over dynamic instability and found that small anisotropies can drastically change the evolution of a system. Di Prisco et al [50] studied the role of local anisotropy fluctuations and determined that these fluctuations are a crucial factor for cracking. Abreu et al [41] considered a particular type of perturbation in which the difference in sound velocities is taken into account, where \({v^{2}}_{sr}\) and \({v^{2}}_{s\perp}\) represent the velocity of radial and tangential sound, respectively and found that regions where \({v^{2}}_{sr}\,{<}{v^{2}}_{s\perp}\), within a matter distribution, no cracking will occur and it could be considered as stable. Manjarres [52] analyzes what happens in charged spheres when the charge is perturbed with energy density and anisotropy, and finds that these perturbations can lead to the appearance of cracking. Malaver [53] finds that when a slow adiabatic contraction is performed on an anisotropic sphere model, which depends on an adjustable parameter and a coefficient that measures anisotropy, this model turns out to be unstable on the surface of the sphere so instability could occur in the outer layers. The aim of this research is to study the behavior of anisotropic fluid solutions found by Esculpi et al [11] against density variations and local anisotropy and to determine the factors that favor the appearance of cracking. The results shall be compared with previous results for similar anisotropic solutions. We have used the method suggested by Herrera [48] and Di Prisco et al [50] in the study of cracking for compact and anisotropic objects with constant density in which radial pressure is only a function of the radial coordinate. In our case radial pressure is written as the product of a function that depends on the anisotropy factor and a function of the radial coordinate, which as in Herrera's work [48] remains constant against the variation of energy density and anisotropy. Comparing the behavior of the family of solutions for different values of the parameter \(\alpha\), which defines the functional form of the pressure with the radial coordinate, the influence of the model on the appearance of cracking is verified. 
The paper is structured as follows: in the next section, Sect. 2, the interior solutions of the Einstein-Maxwell field equations for an anisotropic fluid are presented. In Sect. 3, the occurrence of cracking is calculated when local anisotropy fluctuations occur for an anisotropic star model with uniform energy density. Sect. 4 discusses and concludes the work. ## 2 The Einstein Field Equations We consider a spherically symmetric four-dimensional spacetime whose line element, in the Schwarzschild coordinates [1, 2], is given by: \[ds^{2}=e^{\nu}dt^{2}-e^{\lambda}dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}) \tag{1}\] with a static distribution of matter consisting of a non-Pascalian hydrodynamic fluid, whose energy-momentum tensor is given by the expression \[T^{\mu\nu}=(\rho+P_{t})U^{\mu}U^{\nu}-P_{t}g^{\mu\nu}+(P_{r}-P_{t})\chi^{\mu}\chi^{\nu} \tag{2}\] Einstein's field equations are: \[8\pi T^{0}{}_{0}=1/r^{2}-e^{-\lambda}(1/r^{2}-\lambda^{\prime}/r) \tag{3}\] \[8\pi T^{1}{}_{1}=1/r^{2}-e^{-\lambda}(1/r^{2}+\nu^{\prime}/r) \tag{4}\] \[8\pi T^{2}{}_{2}=8\pi T^{3}{}_{3}=-\frac{1}{4}e^{-\lambda}\left(2\nu^{\prime\prime}+\nu^{\prime\,2}-\lambda^{\prime}\nu^{\prime}+2(\nu^{\prime}-\lambda^{\prime})/r\right) \tag{5}\] and from equation (3) we obtain \[e^{-\lambda}=1-2m/r\ \ \ \ {\rm where}\ \ \ \ m(r)=\int 4\pi\rho r^{2}dr \tag{6}\] Using equations (4) and (5) we obtain the generalized Tolman-Oppenheimer-Volkoff equation [3] for hydrostatic equilibrium in the presence of tangential pressure: \[\frac{dP_{r}}{dr}=-(\rho+P_{r})\,\frac{4\pi P_{r}r^{3}+m(r)}{r^{2}\,(1-2m(r)/r)}+2\frac{(P_{t}-P_{r})}{r} \tag{7}\] ## 3 Analysis of fluctuations in anisotropic stars This section determines the appearance of cracking when fluctuations in local anisotropy occur for an anisotropic star model with uniform energy density proposed by Esculpi et al [11]. The procedure suggested by Herrera [48] and Di Prisco et al [49, 50] has been used in the study of cracking for compact and anisotropic objects. There are a large number of physical processes that give rise to deviations from the local isotropy of the fluid, such as exotic phase transitions involving the appearance of an anisotropic phase during the gravitational collapse process [51]. The existence of solid nuclei and the presence of superfluids may give rise to local anisotropy [34]. Also, the overlap of two perfect fluids can be described as an anisotropic fluid [54]. Eq. (7) can be written as \[R=\frac{dP_{r}}{dr}+\frac{4\pi P_{r}^{2}r}{1-2m/r}+\frac{P_{r}m}{r^{2}\,(1-2m/r)}+\frac{4\pi\rho P_{r}r}{1-2m/r}+\frac{\rho m}{r^{2}\,(1-2m/r)}-\frac{2(P_{t}-P_{r})}{r} \tag{8}\] where \(R\) defines the total radial force on each fluid element. If the system under study is taken out of equilibrium by some perturbation, a total radial force \(R\) appears, which can lead to cracking or inversions [48, 49]. Given an equation of state for the hydrodynamic variables and suitable junction conditions, it is possible to obtain a solution for Einstein's field equations.
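As a numerical illustration of Eqs. (7)-(8), the short sketch below evaluates the total radial force \(R\) as the residual of the hydrostatic equilibrium equation for given radial profiles of \(\rho\), \(P_{r}\) and \(P_{t}\), in geometrized units (\(G=c=1\)); the function name, the trapezoidal mass integration and the toy profiles are our own assumptions and not part of the paper.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def radial_force(r, rho, P_r, P_t):
    """Evaluate R of Eq. (8) as the residual of the hydrostatic equation (7), with G = c = 1."""
    m = cumulative_trapezoid(4.0 * np.pi * rho * r**2, r, initial=0.0)   # m(r) = int 4*pi*rho*r^2 dr
    grav = (rho + P_r) * (4.0 * np.pi * P_r * r**3 + m) / (r**2 * (1.0 - 2.0 * m / r))
    return np.gradient(P_r, r) + grav - 2.0 * (P_t - P_r) / r            # vanishes identically in equilibrium

# toy usage: an ad hoc uniform-density, isotropic profile (not a solution of Eq. (7))
r = np.linspace(1e-3, 1.0, 500)
rho = 1e-3 * np.ones_like(r)
P = 1e-4 * (1.0 - r**2)
print(radial_force(r, rho, P, P)[:3])   # nonzero values: this toy profile is not in equilibrium
```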
A distribution of matter with uniform energy density \(\rho\), contained in a sphere of radius \(a\), is considered, and a relationship between the radial and tangential pressures is proposed which generalizes the solution proposed by Dev and Gleiser [35] as follows: \[\rho=\rho_{0},\quad r\leq a \tag{9}\] \[P_{t}-P_{r}=\frac{2\pi Cr^{2}\,({P_{r}}^{2}+\alpha P_{r}\rho+\rho^{2})}{3(1-\frac{2m(r)}{r})} \tag{10}\] where \(C\) is the anisotropy factor and \(\alpha\) is a new parameter that varies the relationship between the radial and tangential pressures for a given value of the anisotropy parameter. Equation (10) can be substituted into the equation for hydrostatic equilibrium and solved, taking into account the possible values of the discriminant \[\Delta={\rho_{0}}^{2}\left[(4-\alpha C)^{2}-4(3-C)(1-C)\right] \tag{11}\] For values \(\Delta>0\), the radial pressure within the star is given by: \[P_{r}=\rho_{0}\left[\frac{1-C}{\beta+\Gamma}\right]\left[\frac{\left(1-2m/r\right)^{\Gamma/2}-\left(1-2M/R\right)^{\Gamma/2}}{\left(1-2M/R\right)^{\Gamma/2}-\left(\frac{\beta-\Gamma}{\beta+\Gamma}\right)\left(1-2m/r\right)^{\Gamma/2}}\right] \tag{12}\] For the analysis of cracking we use the solution shown in equation (12), which represents a new exact solution for anisotropic stars with uniform density, where \(C\) is the anisotropy constant, \(\beta=2\left(1-\frac{\alpha C}{4}\right)\), \(\Gamma=\left[\beta^{2}-(3-C)(1-C)\right]^{1/2}\) and \(\alpha\) is a parameter that measures the degree of anisotropy. For \(\alpha=2\) an expression is obtained for the radial pressure which has the same functional dependence on the radial coordinate as the Schwarzschild solution. The following dimensionless variables are now introduced: \[\mu=1-2M/a\quad\text{and}\quad x=r/a \tag{13}\] The expression (12) can then be written in the form: \[P_{r}=\rho_{0}f(C)\varphi(x) \tag{14}\] where \[\varphi(x)=\frac{\left[1-(1-\mu)x^{2}\right]^{\Gamma/2}-\mu^{\Gamma/2}}{\mu^{\Gamma/2}-\left[\frac{\beta-\Gamma}{\beta+\Gamma}\right]\left[1-(1-\mu)x^{2}\right]^{\Gamma/2}} \tag{15}\] \[f(C)=\frac{1-C}{\beta+\Gamma} \tag{16}\] The system is now perturbed according to the scheme established by Herrera [48] and Di Prisco et al [49, 50], in which the density and the anisotropy are perturbed while the radial dependence is invariant, i.e.: \[\widetilde{P}_{r}=\widetilde{\rho}_{0}\widetilde{f}(\widetilde{C})\varphi(x) \tag{17}\] \[\widetilde{C}=C+\delta C \tag{18}\] \[\widetilde{\rho}_{0}=\rho_{0}+\delta\rho_{0} \tag{19}\] where it has been considered that \[\gamma=\widetilde{\rho}_{0}/\rho_{0} \tag{20}\] The tilde denotes the perturbed quantities. 
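As a purely illustrative aid (not part of the original derivation), the dimensionless solution of Eqs. (13)-(16) can be evaluated numerically as in the following minimal Python sketch; the values of \(\mu\), \(C\) and \(\alpha\) are example choices of the kind used in the figures discussed below, and the function name is ours.

```python
import numpy as np

def radial_pressure_profile(x, mu=0.2, C=0.73, alpha=1.5):
    """Dimensionless radial pressure P_r / rho_0 from Eqs. (14)-(16).

    x     : r/a, dimensionless radial coordinate in [0, 1]
    mu    : 1 - 2M/a, surface gravitational potential
    C     : anisotropy factor
    alpha : parameter controlling the radial/tangential pressure relation
    """
    beta = 2.0 * (1.0 - alpha * C / 4.0)
    Gamma = np.sqrt(beta**2 - (3.0 - C) * (1.0 - C))       # exponent Gamma (assumes Delta > 0)
    f_C = (1.0 - C) / (beta + Gamma)                       # Eq. (16)
    y = (1.0 - (1.0 - mu) * np.asarray(x) ** 2) ** (Gamma / 2.0)
    phi = (y - mu ** (Gamma / 2.0)) / (
        mu ** (Gamma / 2.0) - ((beta - Gamma) / (beta + Gamma)) * y
    )                                                      # Eq. (15)
    return f_C * phi                                       # P_r / rho_0, Eq. (14)

x = np.linspace(0.0, 1.0, 101)
p = radial_pressure_profile(x)
print(f"P_r(0)/rho_0 = {p[0]:.4f}, P_r(1)/rho_0 = {p[-1]:.4f}")  # pressure vanishes at the surface
```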
From equations (12), (13), (14) and (15) the expression for \(R\) takes the form: \[R=\rho_{0}\,\frac{f(C)}{a}\frac{d\varphi(x)}{dx}+\frac{x}{a}\frac{(1-\mu)\rho_{0}}{2}\frac{\left[f(C)^{2}\varphi(x)^{2}\left(3-C\right)+(1-C)+f(C)\varphi(x)(4-\alpha C)\right]}{\left[1-(1-\mu)x^{2}\right]} \tag{21}\] In order to analyze the perturbed force \(\widetilde{R}\), the following dimensionless function is introduced: \[\widetilde{\widetilde{R}}=a\widetilde{R}/\rho_{0} \tag{22}\] and the expression for \(\widetilde{\widetilde{R}}\) is: \[\widetilde{\widetilde{R}}=\widetilde{f}(\widetilde{C})\frac{d\varphi(x)}{dx}+\gamma^{2}x\frac{(1-\mu)}{2}\frac{\left[\widetilde{f}^{2}\varphi(x)^{2}(3-\widetilde{C})+(1-\widetilde{C})+\widetilde{f}\varphi(x)(4-\alpha\widetilde{C})\right]}{\left[1-(1-\mu)\gamma x^{2}\right]} \tag{23}\] The effect of the perturbations is then measured, to first order, by \[\delta\widetilde{\widetilde{R}}=\frac{\partial\widetilde{\widetilde{R}}}{\partial\gamma}\bigg{|}_{\gamma=1,\,\widetilde{C}=C}\delta\gamma+\frac{\partial\widetilde{\widetilde{R}}}{\partial\widetilde{C}}\bigg{|}_{\gamma=1,\,\widetilde{C}=C}\delta\widetilde{C} \tag{24}\] Figure 1 shows how the radial force varies with the radius of the star for an anisotropy factor value \(C=0.73\) and different values of \(\alpha\), keeping the gravitational potential value fixed and equal to \(\mu=0.2\), and where it has been considered that \(P_{r}\geq 0\), \(\Delta>0\). It is shown that when \(\alpha\) increases the radial force \(\delta\widetilde{\widetilde{R}}\) decreases; for values of \(\alpha<2\), sign changes occur and the cracking appears in regions close to the surface of the star as \(\alpha\) increases, which corresponds to the presence of cracking for these values of \(\alpha\). In this model, as in that of Bowers and Liang [6], cracking occurs for a low value of \(\mu\) [51], that is, for more compact configurations. An analogous behavior is presented in Figure 2 for an anisotropy factor of \(C=0.45\) and the same value of the gravitational potential. In both figures it is observed that an increase of \(C\) causes a decrease in the radial force \(\delta\widetilde{\widetilde{R}}\), contrary to what occurs in the model of Bowers and Liang [6], in which a decrease of \(h\), where \(h=1-2C\) is the parameter that measures anisotropy, increases the radial force, as shown in Figure 3. For the models considered, small fluctuations in the values of \(C\) and \(h\), that is, changes in the local anisotropy of the fluid, can cause the appearance of cracking. Figure 1: \(\delta\widetilde{\widetilde{R}}\) as a function of \(x\) for \(\mu\)=0.2, \(C\)=0.73 and different values of the parameter \(\alpha\). The line with two alternating dots corresponds to \(\alpha\)=0.25, the dash line corresponds to \(\alpha\)=0.5, the short-dash line corresponds to \(\alpha\)=1.0 and the solid line is for \(\alpha\)=1.5. Figure 2: \(\delta\widetilde{\widetilde{R}}\) as a function of \(x\) for \(\mu\)=0.2, \(C\)=0.45 and different values of the parameter \(\alpha\). The line with two alternating dots corresponds to \(\alpha\)=0.25, the dash line corresponds to \(\alpha\)=0.5, the short-dash line corresponds to \(\alpha\)=1.0 and the solid line is for \(\alpha\)=1.5. Figure 3: \(\delta\widetilde{\widetilde{R}}\) as a function of \(x\) for \(\mu\)=0.2 for the Bowers and Liang model [6]. The solid line and the long-dash line correspond to \(C\)=0.45; \(h\)=0.1 and \(C\)=0.73; \(h\)=-0.46, respectively. ## 4 Conclusions A cracking analysis for a new anisotropic star model with uniform energy density has been presented in this paper. 
For this model, with anisotropy parameter values \(C\)=0.45 and \(C\)=0.73, cracking occurs near the surface of the sphere as the values of \(\alpha\) increase, and the anisotropy is an important parameter in determining the conditions for cracking when the gravitational potential is modified. It is interesting to highlight the marked dependence of the cracking response on the type of model. Variations in the anisotropy are expected to allow for cracking under certain conditions. A modification of the parameter \(\alpha\) can generate different expressions for the radial pressure, which in turn changes the response to cracking. Since each value of \(\alpha\) changes the exponent \(\Gamma\), the parameter \(\alpha\) defines the different types of model, as shown in Figure 4. It is observed that \(\Gamma\) decreases as \(\alpha\) increases, taking values smaller than one for values of \(\alpha\) greater than 2 and greater than one for \(\alpha<2\). For \(\alpha=2\), the exponent acquires the constant value \(\Gamma=1\), regardless of the value of \(C\). If \(\alpha>2\), the exponent \(\Gamma/2\) decreases when the value of \(C\) increases, keeping the value of \(\alpha\) constant. If \(\alpha<2\), \(\Gamma/2\) decreases when \(C\) decreases for a fixed value of \(\alpha\). For \(\alpha<2\) the presence of cracking is observed. Figure 4: Exponent \(\Gamma\)/2 as a function of the parameter \(\alpha\) for different values of the anisotropy factor C. The solid line, long-dash line, dash-dot line, spaced dots and short-dash line correspond to the values of C = 0.15, 0.25, 0.35, 0.55 and 0.75, respectively. The appearance of cracking, associated with fluctuations of the local anisotropy, depends on the functional dependence of the pressure on the radial coordinate. Indeed, a modification of the parameter \(\alpha\) can generate different expressions for the radial pressure, which in turn changes the response to cracking, but such cracking always occurs in regions close to the surface of the sphere. This behavior is to be expected according to Malaver [53], who finds that when a slow adiabatic contraction is made in the model of anisotropic star with uniform density proposed by Esculpi et al [11], the model turns out to be less stable in the outer layers than the Bowers and Liang solution [6], so it is likely to present instability in the outer layers.
2303.06167
Overwriting Pretrained Bias with Finetuning Data
Transfer learning is beneficial by allowing the expressive features of models pretrained on large-scale datasets to be finetuned for the target task of smaller, more domain-specific datasets. However, there is a concern that these pretrained models may come with their own biases which would propagate into the finetuned model. In this work, we investigate bias when conceptualized as both spurious correlations between the target task and a sensitive attribute as well as underrepresentation of a particular group in the dataset. Under both notions of bias, we find that (1) models finetuned on top of pretrained models can indeed inherit their biases, but (2) this bias can be corrected for through relatively minor interventions to the finetuning dataset, and often with a negligible impact to performance. Our findings imply that careful curation of the finetuning dataset is important for reducing biases on a downstream task, and doing so can even compensate for bias in the pretrained model.
Angelina Wang, Olga Russakovsky
2023-03-10T19:10:58Z
http://arxiv.org/abs/2303.06167v2
# Overcoming Bias in Pretrained Models by Manipulating the Finetuning Dataset ###### Abstract Transfer learning is beneficial by allowing the expressive features of models pretrained on large-scale datasets to be finetuned for the target task of smaller, more domain-specific datasets. However, there is a concern that these pretrained models may come with their own biases which would propagate into the finetuned model. In this work, we investigate bias when conceptualized as both spurious correlations between the target task and a sensitive attribute as well as underrepresentation of a particular group in the dataset. Under both notions of bias, we find that (1) models finetuned on top of pretrained models can indeed inherit their biases, but (2) this bias can be corrected for through relatively minor interventions to the finetuning dataset, and often with a negligible impact to performance. Our findings imply that careful curation of the finetuning dataset is important for reducing biases on a downstream task, and doing so can even compensate for bias in the pretrained model. ## 1 Introduction The current paradigm in machine learning typically involves using an off-the-shelf pretrained model that has been trained on a large-scale dataset, and then finetuning it on a smaller, application-specific dataset. This transfer learning is especially common for high-dimensional data like images and language [14, 11, 62]. However, large-scale datasets have been criticized for their biases [38, 59, 4, 9], which leaves open the concern that models pretrained on such datasets may carry biases over into the finetuned model. On the other hand, pretraining has been shown to confer benefits in model robustness and uncertainty estimation [25], so there is also the potential that pretrained models can reduce downstream biases by being more regularized or resistant to spurious correlations [5]. In our work, we investigate the implications of bias in pretrained models for the downstream finetuning task, and provide actionable insights on how to counteract this. We have been deliberately vague thus far on what we mean by "bias." In this work, we operationalize bias in two ways based on what has been found thus far to be problematic in image features: _spurious correlations_ between a sensitive attribute and target task [64, 60, 51, 61] and reduced performance from _underrepresentation_[6, 10, 50]. The topic of whether pretrained biases matter in finetuning is often assumed to be obvious, with contradictory arguments containing intuitively plausible explanations on both sides of the debate: that it does matter because the pretrained model brings biased features [44], or it does not because finetuning data will overwrite any pretrained biases [7, 15, 56]. Due to this uncertainty, it is not clear how to react to biases found in the features of pretrained models [55, 52, 17]. Of course, there is not a singular binary answer, as much is dependent upon the particulars of the training task. However, we bring much-needed clarity to the space for computer vision tasks, and give advice about bias transference from using pretrained models. In this work we study the two notions of bias, _spurious correlations_ and _underrepresentation_, by finetuning a variety of different pretrained models (Fig. 1). For each notion of bias, we first show our results on the CelebA dataset [32]. 
Then for spurious correlations we investigate the more complex COCO dataset [31] using real-world popular pretrained models (e.g., **MoCo**[22], **SimCLR**[8]). For underrepresentation, we look to the Dollar Street dataset [41] using pretrained models of our own design to test for specific hypotheses. On both forms of bias we find the following: 1) models finetuned on top of pretrained models can inherit their biases (for spurious correlations, this is especially true if the correlation level is high, the salience of the bias signal is high relative to the true task signal, and/or the number of finetuning samples is low); 2) this bias can be relatively easily corrected for by curating the distribution of the finetuning dataset, _with a negligible impact to performance_. For example on CelebA, we find that by manipulating the strength of the spurious correlation in the finetuning dataset from 20% to 30%, we can retain the same high performance from using a biased pretrained model, but cut by almost half the amount of bias. The implications for this are significant: practitioners can use the pretrained model that lends the best performance in most cases so long as they appropriately curate the finetuning dataset, and thus get the best of both worlds in terms of performance and fairness. This means that significant consideration and effort needs to be spent on the curation of finetuning datasets, in a way that may not necessarily reflect the distribution of the test set, in order to "correct" for the biases of the pretrained model.1 Footnote 1: Code at [https://github.com/princetonvisualai/overcoming-pretraining-bias](https://github.com/princetonvisualai/overcoming-pretraining-bias). Figure 1: We explore bias transference from pretrained to finetuned models in two forms in this work: spurious correlations and underrepresentation. We find that intervening on the finetuning data allows us to overcome bias from pretraining, often without compromising on performance. ## 2 Related Work Rather than exploring all viable algorithmic bias mitigation strategies, we focus specifically on studying the impact of the finetuning dataset's composition. **Benefits of Pretraining.** Pretrained features can be learned through methods like unsupervised [8, 22], self-supervised [35, 12], or supervised learning [36]. While there are indisputable time- and compute-saving benefits to transfer learning, He et al. [23] showed that sometimes models trained from scratch can match the performance of models finetuned from pretrained weights. However, even in those cases, Hendrycks et al. [25] finds that the finetuned model has superior robustness and uncertainty estimates. In our work, we try to understand whether pretrained models confer any other such benefits or harms in terms of bias. **Identifying Pretraining Bias.** Prior works have sought to measure the fairness of image features [17, 55, 52], the concern being that any biases in image features may propagate into predictions based on such features. Goyal et al. [16] report that pretrained features trained without supervision are more robust and fair than those with supervision, and Sirotkin et al. [52] find that within self-supervised models, contrastive losses lead to more biases--however, neither of these works measure bias on a downstream task. So, while the pretrained features may be biased, it still does not tell us the implications for a downstream task. 
Prior work in NLP has measured the correlation of "intrinsic" metrics like biases in the word embedding to "extrinsic" metrics like biases on the downstream task which uses these word embeddings, finding little correlation between these metrics [15, 7]. Other work in NLP has found that the distribution of the finetuning dataset matters more than the pretraining dataset in terms of bias [56, 53]. We find similar results in our study on pretrained image models, and give advice on how to correct for this. There is also work beginning to study the multimodal pretraining biases in the vision language domain [54]. **Bias Transferrence in Vision.** Salman et al. [44] similarly study bias transference, but focus on an operationalization of bias akin to backdoor attacks [18]. We both find that biases in pretrained models can propagate to finetuned models, but in our work we take this a step further by productively demonstrating that relatively minor interventions on the finetuning dataset can counteract this. While they find that in certain settings the biases in pretrained models persist even when the downstream dataset does not contain such biases, this different conclusion is likely due to their freezing of model layers. Kirichenko et al. [27] find that re-training just the last layer of a network can help models overcome spurious correlations. This matches the setting of one of our two conceptualizations of bias, and we add new findings when bias is underrepresentation. While their claim is stronger because they only retrain the last layer whereas we retrain the entire network, the setting we consider is different because in our scenarios the available number of finetuning samples is relatively small, e.g., 128 or 1024 on CelebA, whereas in their comparable experiments they have access to the entire CelebA dataset (130,216). ## 3 Preliminaries In this work, we will be conceptualizing "bias" in its two most studied forms in computer vision: _spurious correlations_ and _underrepresentation_. Each of our experimental setups will consist first of a _pretraining_ step on the upstream task and then a _finetuning_ step on the downstream task. The _pretraining_ step can include simply instantiating a randomly initialized model (**Scratch**), instantiating the model weights to be that of an existing pretrained model (e.g., **SimCLR**[8], **MoCo**[22]), or pretraining our own model (e.g., **Gendered**). All pretrained models will be denoted in bold text. In the _finetuning_ step we train all layers of our model on the downstream task. We do not experiment with any freezing of layers, and thus our results serve as the upper bound for the effect that finetuning can have. ### Datasets We use six datasets in this work. They serve as training data for our pretrained model, the task for our downstream finetuning, or both. We use gender as our sensitive attribute for our analysis on spurious correlations, and use the terms Men and Women when referring to the annotated gender groups. Much fairness work is limited by the availability of sensitive attribute annotations [1], and our annotations generally treat gender as binary, a schema which harmfully alienates and erases different communities [47]. 
Our datasets are: CelebA [32] with 39 attributes such as Brown Hair and labels for Male; FairFace [29] with racial attributes such as White and labels for Male; COCO [31] with 80 objects such as Toothbrush2 and gender labels derived from the presence of certain words in the captions [64, 63], with most images coming from the Global North [10]; ImageNet [42] with 200 objects such as Fence and most images coming from the Global North [10]; Dollar Street [41, 10] with 135 objects such as Books, of which 15 map onto COCO object labels, and is more geographically diverse than, e.g., COCO and ImageNet; GeoDE [40] with 40 objects such as Bicycle and is deliberately collected to be geographically diverse. Footnote 2: Even though COCO is most often a multi-label task, in some experiments we perform binary prediction on one label at a time to better isolate and manipulate the correlation of that particular object. ### Implementation Details We use a ResNet50 [24] with SGD descent for all of our experiments. The images are resized to be 224 by 224, normalized to ImageNet's mean pixel values, and randomly flipped horizontally for data augmentation. The particular hyperparameters chosen for finetuning are impactful [30], so we perform grid search for the lowest validation loss across the following hyperparameters: learning rate in \(\{.1,.05,.01,.005,.001\}\) and weight decay in \(\{0.,.0001,.0005,.001\}\).3 We train five random runs of every model to generate 95% confidence intervals. When we need discrete predictions, i.e., \(\{0,1\}\), as opposed to continuous ones, i.e., \([0,1]\), we pick the classification threshold for each label to be well-calibrated such that the percentage of predicted positive labels is the same as the true percentage. Footnote 3: For similar settings, e.g., same task with a different dataset distribution, we reuse the same hyperparameters in order to save compute. ## 4 Bias as Spurious Correlation We begin our analysis by considering bias in the form of spurious correlations between the target label and a sensitive attribute which is predictive on the training set but not necessarily so on the test set [64, 60, 51, 61, 43]. We first demonstrate that finetuned models can inherit bias of this form from pretrained models on the CelebA dataset (Sec. 4.1). In other words, when the presence of a target label (e.g., Eyebags) may be erroneously learned to be correlated with gender, such that the classifier over-predicts Eyebags, i.e., a false positive, on images with Women. We thus measure the bias of spurious correlations as false positive rate (FPR) difference between Women and Men.4 We show that pretrained bias is especially likely to affect the downstream task when the finetuning dataset has a high correlation level, low salience relative to the sensitive attribute, and/or a low number of finetuning samples. Footnote 4: Though often called equal opportunity, we will stay away from this term to not detract from its namesake [21], which is focused on allocational harms, when our application likely leads to representational harms [3]. Then we examine bias mitigation through finetuning on different distributions of the downstream dataset (Sec. 4.2). We show on two domains (CelebA attribute classification [32] and COCO object recognition [31]) that finetuning dataset interventions can mitigate much of the bias while retaining the performance of a biased pretrained model. 
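As a hedged illustration of the bias metric and threshold calibration described above, assuming binary target labels and score outputs for a single attribute (the function names and the toy data are ours, not taken from the paper's released code), a minimal sketch could look as follows.

```python
import numpy as np

def calibrated_threshold(scores, labels):
    """Threshold such that the predicted positive rate matches the true positive label rate."""
    pos_rate = labels.mean()
    # keeping the top pos_rate fraction of scores as positives
    return np.quantile(scores, 1.0 - pos_rate)

def fpr(scores, labels, threshold):
    """False positive rate: fraction of true negatives predicted positive."""
    preds = scores >= threshold
    return preds[labels == 0].mean()

def fpr_gender_difference(scores, labels, is_woman):
    """Bias measure: FPR(Women) - FPR(Men) at a single calibrated threshold."""
    t = calibrated_threshold(scores, labels)
    return fpr(scores[is_woman], labels[is_woman], t) - fpr(scores[~is_woman], labels[~is_woman], t)

# toy example with random scores, only to show the call signature
rng = np.random.default_rng(0)
scores = rng.random(1000)
labels = rng.integers(0, 2, 1000)
is_woman = rng.integers(0, 2, 1000).astype(bool)
print(fpr_gender_difference(scores, labels, is_woman))
```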
### Finetuned models inherit spurious correlations from pretrained models Initial Look.To start off, we want a set of pretrained models of differing biases to finetune on a set of downstream tasks where we can measure bias (i.e., FPR difference between Women and Men) in order to assess whether it makes a difference which pretrained model was used. For our downstream tasks, we perform binary classification on 11 different CelebA attributes.5 To understand whether the bias of a pretrained model (i.e., learned correlation between gender and target attribute) will affect the bias on a finetuning task (i.e., applied correlation of gender and target attribute during inference), we compare a pretrained model of our own design with high bias, called **Gendered**, to three of lesser bias, called **Control**. We then finetune all pretrained models and compare their downstream FPR gender difference. We create **Gendered** by training a model from scratch to classify gender (Men or Women) on the CelebA dataset.6 As our **Control** pretrained models, we create three "less biased" ones that have been trained to classify the attributes of Wearing Hat, Bangs, and Chubby. These have been sparsely sampled to be of different saliences (how discriminable an attribute is).7 Footnote 5: We selected the 11 attributes as follows: taking gender to be our sensitive attribute, we are left with 39 in CelebA. Following from Ramaswamy et al. [39], we winnow this down to the 26 that do not suffer from severe data imbalance (i.e., positive label rate between 5-95%), then the 20 that are consistently labeled (e.g., Black Hair and not Big Nose), then the 14 that look the same between people of different genders (e.g., Eyeglasses and not Young). From here, we use three as the training data for our **Control** models, and are left with 11. Footnote 6: We do not condone the use of gender predictors [20]. These models merely represent our conception of the kinds of image features that would be present in a model which has severe gendered correlations. Footnote 7: We use three **Control** models of different saliences because, as we will show later, salience is a relevant factor in the transference of bias, and we want to ensure robust comparisons to a reasonable control condition. To understand whether finetuning on the **Gendered** model will cause more downstream bias than on a **Control** model, we compare the ratio of the FPR difference between the finetuned **Gendered** model and the most biased of the three finetuned **Control** models. As transfer learning is often done when there is insufficient data in the downstream domain, we do this on our 11 attributes using 1024 finetuning ing samples. In Fig. 2 we see that for 7 of our 11 attributes, finetuning on the **Gendered** model is no more biased than finetuning on any of the **Control** models; however for 4 of the attributes (Earrings, Brown Hair, Blond Hair, Eyebags), the pretrained model impacts downstream bias. To understand what it is that differentiates the scenarios where pretrained bias does matter (4 of the 11 attributes) and where it does not (7 of the 11 attributes), we consider the relevance of three factors: (1) correlation level, (2) salience, and (3) number of finetuning samples. We show CelebA results, with COCO results in the Supplementary. 
**(1) Correlation level and (2) Salience.** Correlation level is the strength of the correlation between gender and the target attribute, and captures how useful it is for the upstream task's spurious correlation to be retained for the downstream task; salience is like the notion of discriminability, and captures whether an attribute is easier to learn in the downstream task than the spurious gender signal from the upstream task.8 For quantifying these two concepts we adopt the methods from prior work [64, 39]. We calculate correlation level by \(\frac{N(\text{Women, attribute})}{N(\text{Women, attribute})+N(\text{Men, attribute})}\)[39] where \(N(\text{Women, attribute})\) indicates the number of images which are labeled as Women and have the attribute. Salience is measured relative to gender in this dataset, and positive values indicate gender is more salient than the downstream attribute (e.g., Mouth Slightly Open is not that salient), whereas negative values indicate it is less so (e.g., Eyeglasses is very salient). This value is calculated by constructing a setting where gender and the attribute are exactly correlated, and the salience score is derived from which of the gender or the attribute is learned better, details in Supplementary. Footnote 8: The notion of salience is related to simplicity bias [49, 57, 2, 34] in that neural networks may be more inclined to learn certain predictive patterns due to their being more salient or simple than others. For any given attribute of a particular salience, we can artificially manipulate its correlation level by selecting different subsets of the data. We do so by setting all attributes to a correlation level of 80%, and perform a mixed-effect regression analysis on our 22 observations (all 11 attributes of naturally different saliences at both their natural correlation level and at 80%), with parameter estimations adjusted by the group random effects for each attribute.9 Footnote 9: We normalize correlation level to be between [0, 1], and calculate the FPR difference sign depending on which group the correlation is with. We find the effect size for correlation level to be 2.19 (95% CI [.53, 3.85]) and salience to be 1.30 (95% CI [.27, 2.34]), both with \(p<.05\). The positive coefficient for correlation level indicates that the stronger the correlation between the target task and gender (i.e., the more it benefits a model to rely on a gendered correlation), the more it will be affected by a biased pretrained model. The positive coefficient for salience indicates that attributes of lower salience are more likely to be affected by a biased pretrained model, likely because the gender visual cue is easier to spuriously rely on compared to the true downstream task cue. Having seen that both correlation level and salience are able to explain part of why bias is transferred, we next investigate the effect of the number of finetuning examples. **(3) Finetuning number.** We analyze the area under the ROC curve (AUC) and FPR difference of three pretrained models (**Scratch**, **Gendered**, **Control**) when finetuned on increasing amounts of data: \([2^{4},2^{5},2^{6},...,2^{12},2^{13},\text{full}=130\text{k}]\). To pick the comparison **Control** from our three, we select the one with the highest AUC at \(2^{10}\) for each attribute. In Fig. 
3 we present results on two attributes which were more biased when finetuned on the **Gendered** model compared to the **Control**: Earrings which had the least bias and Eyebags which had the most. Up until the full finetuning dataset is used, the **Gendered** and **Control** models which have received pretraining are able to achieve better performance than the **Scratch** model, which is in turn more fair. While with sufficient finetuning samples all three models converge to the same levels of performance and bias, we may not always have a large number of finetuning samples. Earrings, which has higher salience and a lower correlation level than Eyebags, requires a smaller number of finetuning samples for it not to matter in terms of FPR difference whether the **Gendered** or **Control** pretrained model was used (around 512 compared to around 4096). When the fairnesses converge, it becomes negligible which pretrained model was used. This is a powerful result, as it indicates we can retain potential performance gains of a pretrained model, without needing to inherit its potential biases. While using a **Scratch** model would have alleviated concerns about the bias of a pretrained model, it suffers too drastically in performance in the low data regime to always be feasible. In the rest of this work, we study how we might be able to retain the performance gains of different pretrained models while not inheriting their harmful biases. Figure 2: Across 11 CelebA attributes, the FPR gender difference of each model finetuned on a different base. The attributes are sorted by the ratio of the FPR difference between the **Gendered** model and highest **Control** model. The dotted line indicates the cutoff between the seven attributes that do not have a higher FPR difference from biased pretraining and the four attributes that do. Figure 3: On two attributes from CelebA, we track the AUC and FPR difference for three pretrained bases (**Gendered**, **Control**, and **Scratch**) on increasing numbers of finetuning samples. The **Gendered** model typically has a higher magnitude FPR difference than the **Control** model, until the two converge with sufficient finetuning samples (gray box). Earrings, which we saw to be less susceptible to bias from a pretrained model in Fig. 2 and is of higher salience and lower correlation level, achieves this at around 512 images, while Eyebags takes until around 4096. ### Bias from spurious correlations can be corrected for in finetuning We have now established that high correlation level, low salience, and low finetuning numbers are all relevant factors to whether the bias of a pretrained model will propagate into a finetuned model. With the exception of salience, which is hard to manually manipulate so we merely observe it as a covariate, these other two factors become the levers we intervene on in our experiments going forward. The scenario we consider is as follows: we have a dataset for a downstream task that is small such that we would like to leverage transfer learning from a pretrained model. However, we believe the pretrained model is likely to contain biases in the form of spurious correlations that could impact the downstream task. We want to understand whether there is a way we can benefit from the performance gains that the pretrained model brings, without inheriting all of the biases. We investigate this on two datasets, CelebA and COCO, and use five pretrained models: **Scratch**, **TorchVision**[36], **MoCo**[22], **SimCLR**[8], and **Places**[65]. 
Each is finetuned on the downstream task, which has been artificially manipulated to have a correlation level of either 20% (towards men) or 80% (towards women), and the dataset is balanced to have equal numbers of images for the two genders. Without changing the distribution of this downstream test set, we artificially manipulate the downstream training set, i.e., finetuning dataset, across 11 increments of correlation level ([0%, 10%,..., 100%]). We then assess whether there is a manipulated setting of the finetuning dataset such that a pretrained model is able to retain most of the performance gains it brings, while having less of a FPR difference than if trained directly on the natural finetuning distribution. **CelebA.** We use two of the four attributes most susceptible to gender biases in the previous section (Blond Hair set to correlation level 80% and Eyebags set to correlation level 20%), and show the remaining two in the Supplementary. We visualize the tradeoff between performance (AUC) and bias (FPR difference) on the left of Fig. 4, and find both attributes have a configuration of the finetuning dataset which is different than the test dataset such that the performance is comparable to the peak performance, but the bias is significantly lower. For example, on Eyebags, **MoCo** has the highest performance but also the highest bias. However, by manipulating the finetuning correlation level from 20% to 30% the AUC remains at \(0.88\) while the FPR difference improves from \(-0.46\) to \(-0.27\). If we further manipulate the correlation level to 40%, the AUC drops slightly to \(0.85\), but the FPR difference improves to \(-0.05\). If we had restricted ourselves only to those models which are finetuned on a dataset that has the same correlation level as the test set (i.e., the bolded points), we would have had to compromise on performance in order to achieve lower bias. However, by manipulating the finetuning dataset such that it contains less spurious correlations than the test dataset, we can actually train a model that retains most of the performance benefits afforded to the more biased model, but also has the fairness of a less accurate model. This change is often relatively small as well, with just a 20% shift in the correlation. In the Supplementary we show results on 128 finetuning samples, where stronger manipulations are required to decrease the FPR difference. Of course, these gains are not free: the added cost of this approach is the need to collect additional training samples to create the new correlation level. However, even in a constrained setting where you are unable to collect additional finetuning samples, _removing finetuning data_ to create a less correlated dataset can bring about some of the same benefits. The right side of Fig. 4 represents this scenario where we are unable to collect additional samples, and simply create different correlation levels by removing samples. For Eyebags, finetuning **MoCo** by removing samples such that the correlation level goes from 20% to 30% barely drops the performance from \(0.88\) to \(0.87\), while the FPR difference improves from \(-0.45\) to \(-0.35\). If we make the larger change to 50% the performance drops to \(0.80\), but the FPR difference is improved to \(-0.19\). 
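A minimal sketch of the sample-removal variant of this manipulation, assuming per-image binary attribute and gender annotations (the helper name and the random subsampling strategy are our own illustrative choices, not the authors' implementation):

```python
import numpy as np

def subsample_to_correlation(has_attr, is_woman, target, seed=0):
    """Return indices of a subset whose correlation level
    N(Women, attr) / (N(Women, attr) + N(Men, attr)) equals `target` (0 < target < 1),
    obtained only by removing positive images of the over-represented gender."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(has_attr))
    pos_w = idx[has_attr & is_woman]
    pos_m = idx[has_attr & ~is_woman]
    keep_w, keep_m = len(pos_w), len(pos_m)
    current = keep_w / (keep_w + keep_m)
    if current < target:        # too many positive-men images: drop some of them
        keep_m = int(round(keep_w * (1.0 - target) / target))
    elif current > target:      # too many positive-women images: drop some of them
        keep_w = int(round(keep_m * target / (1.0 - target)))
    keep = np.concatenate([rng.choice(pos_w, keep_w, replace=False),
                           rng.choice(pos_m, keep_m, replace=False),
                           idx[~has_attr]])   # negatives are left untouched in this sketch
    return np.sort(keep)

# toy usage on synthetic annotations
rng = np.random.default_rng(1)
has_attr = rng.random(10_000) < 0.3
is_woman = rng.random(10_000) < 0.5
subset = subsample_to_correlation(has_attr, is_woman, target=0.4)
```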
**COCO.** We investigate the two objects with the highest representation across both genders, because in this dataset the labels are more sparse and we want sufficient positive examples to be able to control the correlation level of an object with each gender. These two objects are dining table, which like the previous section we set to correlation level of 80%, and chair, which we set to 20%. We show results on additional objects in the Supplementary. In Fig. 5 we show results on experiments with 5000 fine-tuning samples for the same setup as on CelebA.10 Because of COCO's far noisier gender cues and additional complexity due to a greater diversity of image appearances, compared to CelebA's more uniform frontal facing individuals, the trends are not as clear where each change in correlation level directly corresponds to a reduction in FPR difference. However, what remains clear is there always exists a version of the finetuning dataset which retains the high AUC of a biased pretrained model, but reduces the bias on the downstream task. For example, on chair the finetuned **MoCo** has an AUC of \(0.81\) and FPR difference of \(-0.015\); meanwhile, the finetuned **Places** has a better AUC of \(0.84\) but a worse FPR difference of \(-0.024\). However, instead of having to choose between lower bias or higher performance, we find that by finetuning the better-performing **Places** model on a dataset of correlation level 40% rather than 20%, the performance stays constant at \(0.87\), while the FPR difference improves to \(-0.002\)! This indicates that careful curation of a finetuning dataset can overcome many FPR difference concerns, without compromising on the high performance that a pretrained model may be selected for. Footnote 10: We show results with 1000 finetuning examples in the Supplementary. In this section we have showed how bias in the form of spurious correlations can propagate from a pretrained model to a finetuned one. However, we have also shown that manipulations to the finetuning dataset, even if they cause the finetuning dataset to deviate in distribution from the downstream test set, can actually correct for much of this bias while retaining high performance. While we have focused on the sensitive attribute of gender due to the availability of annotations, our findings are not necessarily restricted to this domain. In the next section, we show the same results when bias is considered to be underrepresentation. ## 5 Bias as Underrepresentation Now, we consider the implications of biased pretraining when "bias" means that one appearance of an object is underrepresented. This is inspired from both Buolamwini and Gebru [6], who showed that an underrepresentation of darker-skinned individuals led to worse classification performance for this group, and DeVries et al. [10], who showed that appearance differences, i.e., subcategories, within an object class led to objects from countries with lower household incomes to be misclassified more often. For example, bar soap is less recognizable as soap than pump soap is. 
In this section, we want to understand whether a pretrained model that has only learned about one subcategory of an object (e.g., pump soap) will perform worse on the other subcategory of an object (e.g., bar soap), compared to if it had seen that subcategory during pretraining. When we find that it does, we provide insight about the level of intervention on the finetuning dataset we can perform to overcome it. Figure 4: The performance and bias with 95% confidence intervals of pretrained models finetuned on different versions of two downstream tasks on CelebA: Blond Hair (correlated with women) and Eyebags (correlated with men). The bolded point indicates when the finetuning distribution matches the test distribution, and all other points indicate variations on the finetuning dataset. The left column represents when additional data is collected to manipulate the correlation level while maintaining 1024 finetuning samples, and the right column represents when finetuning data is removed such that the correlation level changes. On the left, there are versions of the finetuning dataset that allow us to retain performance gains and improve fairness; on the right, we lose some performance to improve fairness. Figure 5: The performance and bias with 95% confidence intervals of pretrained models finetuned on different versions of two downstream tasks on COCO: dining table (correlated with women) and chair (correlated with men). The bolded point indicates when the finetuning distribution matches the test distribution, and all other points indicate variations on the finetuning dataset; all models are trained on 5000 finetuning samples. In each case, there is a version of the finetuning dataset that is different in distribution from the test set which allows us to train a model which retains the high AUC of a more biased model, but has lower FPR difference. ### Finetuned models do worse on subcategories underrepresented in pretrained models We consider a downstream task Target to be composed of two possible subcategories: T1, e.g., pump soap, and T2, e.g., bar soap. We create **Pretrain-T1** that has only been trained to classify T1 on FairFace and **Pretrain-T2** that has only been trained to classify T2 on FairFace. Taking T2 to be the underrepresented subcategory that we are particularly interested in the performance of, our measure of bias is the AUC difference on T2 between **Pretrain-T2** and **Pretrain-T1**, i.e., the performance lost by having used a pretrained model which has not seen this subcategory before.11 Footnote 11: AUC difference is between models rather than groups because different subcategories of an object may be inherently harder to classify or not, and we want to capture the relevant aspect of performance. Because FPR difference can be manipulated through post-hoc threshold changes [21], that comparison remains between groups. For our downstream task, we use the CelebA dataset, and simulate the different appearances of an object through two different attributes as the subcategories of the classification label. For example, "Light Hair" could be composed of Blond Hair and Brown Hair. On 12 such subcategory pairings, selected to be roughly representative in terms of relative salience, we find that when we finetune on 128 images where half the positive labels are T1 and half are T2, our bias measure of T2 AUC difference is \(.124\pm.023\) between the two different pretrained models. 
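Before interpreting this number, the T2 AUC-difference measure just described can be sketched as follows; exactly which negative images enter the T2 evaluation subset is our assumption, and the function name is illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def t2_auc_difference(scores_t2_base, scores_t1_base, labels, is_t2):
    """Bias measure: AUC on subcategory T2 for the model finetuned from Pretrain-T2
    minus the AUC for the model finetuned from Pretrain-T1, on the same T2 subset."""
    # assumption: evaluate on T2 positives against all negatives
    mask = ((labels == 1) & is_t2) | (labels == 0)
    return (roc_auc_score(labels[mask], scores_t2_base[mask])
            - roc_auc_score(labels[mask], scores_t1_base[mask]))

# toy usage with random predictions, only to show the call signature
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
is_t2 = rng.integers(0, 2, 500).astype(bool)
print(t2_auc_difference(rng.random(500), rng.random(500), labels, is_t2))
```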
The statistically significant positive difference indicates that a finetuned **Pretrain-T1** is not able to reach the performance on T2 that a finetuned **Pretrain-T2** is. Experiment details and results on the analogous three factors of correlation level, salience, and finetuning number are in the Supplementary. ### Bias from underrepresentation can be corrected for in finetuning We have just established that a pretrained model (e.g., **Pretrain-T1**) which has not been trained on a particular label appearance (e.g., T2) will perform worse on it at finetuning time than if it had been trained on it (e.g., **Pretrain-T2**). In this section, we treat the proportion of finetuning dataset that is T1 or T2 as our analog to correlation level, and investigate the effect of manipulating it. It is obvious that increasing the proportion of T2 will likely increase the performance on T2; however, it is not obvious what amount of proportion manipulation is required to compensate for a pretraining model not having seen a particular attribute. Our experimental setup is as follows: the positive labels in our downstream task are 90% T1 and 10% T2. As we know from the previous section, if we finetune **Pretrain-T1** on this, we would not achieve as high of a T2 AUC as if we had finetuned **Pretrain-T2**. However, it is unlikely we have access to **Pretrain-T2** because existing pretrained models are often biased towards having been trained on, for example, geographically homogenous images [10, 59, 50]. Thus, in our setting in which we are actually able to compare to **Pretrain-T2**, we ask what proportion of the finetuning dataset needs to be T2 such that the **Pretrain-T1** we actually have can achieve comparable performance. In other words, the amount of change to the finetuning dataset that correlates with having used an entirely different pretrained model to begin with. **CelebA.** On our 12 attribute pairs we find that for 128 finetuning samples, manipulating the proportion of T2 from just 10% to 30% brings the performance of **Pretrain-T1** to that of **Pretrain-T2** in 6 of our 12 cases, and for 1024 finetuning samples that manipulation does so in 8 of our 12 cases. **Dollar Street and COCO.** Applying this insight from CelebA that relatively minor manipulations to the proportion of the dataset from underrepresented subcategories can significantly impact the performance of those subcategories, we now turn to the more complex and realistic object datasets of Dollar Street and COCO. We consider the task of recognizing 15 objects in Dollar Street that have corresponding objects in COCO. Although the object classes are the same between datasets, their visual distribution is different as COCO images largely come from only higher-income regions [10] whereas Dollar Street was collected to be more geographically diverse. Nevertheless COCO images are more plentiful; to simulate this, we consider a finetuning dataset of 128 images where 90% are from COCO and 10% from Dollar Street. We use two pretrained models, named after the dataset each is trained on: **ImageNet**[42], where the training data is more similar to the COCO distribution, and **GeoDE**[40], which is trained on a newer and more geographically diverse dataset. As expected, finetuning **GeoDE** achieves a higher accuracy (where a prediction is correct if the object is one of the top-5 predicted labels [10]) of 13.1% on Dollar Street compared to finetuning **ImageNet** with 8.4%. 
However, what we ultimately want to investigate is how much investing in a better finetuning dataset can help overcome the problem (Fig. 6). Thus, we manipulate the finetuning dataset (simulating the collection of more Dollar Street-like images, while keeping the overall finetuning number the same), and observe that with just 20% rather than 10% of images coming from Dollar Street, **ImageNet** is able to outperform the performance of the **GeoDE** baseline with an accuracy of 21.7%. ## 6 Discussion Across both operationalizations of bias, as _spurious correlations_ and _underrepresentation_, we find that bias from a pretrained model can propagate into a finetuned one. When bias is in the form of a _spurious correlation_, this is especially likely when the downstream task has a high correlation level with the sensitive attribute, is of low salience, and there is a small number of finetuning samples. For those creating pretrained models, responsibility needs to be taken to clearly document the data the model was trained on, as well as known biases it may have [13, 33]. However, across both of these conceptualizations of bias, we also find that the distribution of the finetuning dataset has the ability to counteract much of the bias a pretrained model may bring. When the bias we are concerned with is a _spurious correlation_, this entails manipulating the correlation level in the finetuning dataset, and when the bias is _underrepresentation_, this entails manipulating the proportion of positive labels which are of the underrepresented group. Necessarily, all of these manipulations will require additional data collection efforts. However, the efforts are significantly lower than would be required to intervene on the massive pretraining dataset. It is notable that these manipulations allow us to retain most of the performance gains that a biased pretrained base might bring. Acting on these findings will require careful care and consideration in the collection of finetuning datasets, at times even over-correcting for a particular correlation that may be present in the downstream test set. Especially given that most settings we investigate are those with smaller finetuning datasets, these being the scenarios where pretrained models are most necessary, we should expect and be willing to put in thoughtful work in curating finetuning datasets [46, 37, 45]. Data collection can often be extractive and violate privacy [26, 38], but there are ways in which it can be done more consensually [40]. Given that the relevant harms are usually context-specific and hard to conceptualize upstream of the downstream task, this lends further support to the notion that the finetuning dataset makes for both a practical and efficient point of intervention in bias mitigation. ### Limitations There are a number of limitations that qualify the generalizability of our findings. All experiments are conducted on a ResNet50, which uses a convolutional neural network architecture. The constructed pretrained bases are trained on their source task to convergence, but that isn't always the optimal way to construct a pretrained model [19]. Additionally, we finetune all of the model's weights rather than freezing some. Additionally, we have focused on studying relatively low-data regimes of finetuning in this work. As we saw in Sec. 4.1, the amount of finetuning data matters, as greater amounts tend to wash away the difference between distinct pretrained bases. 
Most significantly, we only conceive of two possible operationalizations of bias in this work: spurious correlations and underrepresentation. Other types of biases (e.g., stereotypical representations) will likely lead to different transference properties. In our technically defined notions of bias, we have also left out of scope considerations such as NSFW content, privacy issues, and other inherent harms of pretraining datasets [4, 38]. Irrespective of their downstream effects, pretraining on these images are inherently harmful because of impacts such as on the individuals potentially tasked with annotating the data. Ultimately in this work, we explored very directed interventions on the finetuning dataset to target specific forms of bias. However, likely any downstream task will have many different kinds of bias that are relevant, and potentially even at odds [58]. We leave for future work how to balance potentially conflicting tensions in order to curate more suitable finetuning datasets. ## 7 Conclusion Finetuning on top of pretrained models is a powerful way to train models on domains where we have less data. In this work, we conceive of bias as _spurious correlations_ or _underrepresentation_ and show that biases of either form can transfer from pretrained models to finetuned ones. However, we also affirmatively show that targeted manipulations of the finetuning dataset can counteract this bias transferrance, allowing us to retain performance gains that certain pretrained models with bias may bring, without compromising on fairness concerns. These dual findings indicate that while interventions on the pretrained model to ensure less bias in the features are certainly useful, more effective interventions can be performed by manipulating the finetuning dataset itself. The benefit of manipulations at this juncture is there will also likely be a better understanding of the application-specific harms for which dataset intervention can be targeted. This will require a careful, participatory, and deliberative curation of the finetuning dataset. Figure 6: The performance of an **ImageNet**-pretrained model [42] at recognizing 15 objects in the geographically diverse Dollar Street [41] dataset, as the proportion of finetuning images from Dollar Street itself (more geographically diverse) and COCO [31] (less so) is manipulated. Only a minor change in finetuning proportion is required for the **ImageNet**-pretrained model to match the performance of a **GeoDE**-pretrained model, which was trained on a dataset curated to be more geographically diverse [40]. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 1763642, Grant No. 2112562, Grant No. 2145198, and Graduate Research Fellowship to AW. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We thank Allison Chen, Vikram V. Ramaswamy, and Shruthi Santhanam for feedback.
2308.04402
Person Re-Identification without Identification via Event Anonymization
Wide-scale use of visual surveillance in public spaces puts individual privacy at stake while increasing resource consumption (energy, bandwidth, and computation). Neuromorphic vision sensors (event-cameras) have been recently considered a valid solution to the privacy issue because they do not capture detailed RGB visual information of the subjects in the scene. However, recent deep learning architectures have been able to reconstruct images from event cameras with high fidelity, reintroducing a potential threat to privacy for event-based vision applications. In this paper, we aim to anonymize event-streams to protect the identity of human subjects against such image reconstruction attacks. To achieve this, we propose an end-to-end network architecture jointly optimized for the twofold objective of preserving privacy and performing a downstream task such as person ReId. Our network learns to scramble events, enforcing the degradation of images recovered from the privacy attacker. In this work, we also bring to the community the first ever event-based person ReId dataset gathered to evaluate the performance of our approach. We validate our approach with extensive experiments and report results on the synthetic event data simulated from the publicly available SoftBio dataset and our proposed Event-ReId dataset.
Shafiq Ahmad, Pietro Morerio, Alessio Del Bue
2023-08-08T17:04:53Z
http://arxiv.org/abs/2308.04402v4
# Person Re-Identification without Identification via Event Anonymization ###### Abstract Wide-scale use of visual surveillance in public spaces puts individual privacy at stake while increasing resource consumption (energy, bandwidth, and computation). Neuromorphic vision sensors (event-cameras) have been recently considered a valid solution to the privacy issue because they do not capture detailed RGB visual information of the subjects in the scene. However, recent deep learning architectures have been able to reconstruct images from event cameras with high fidelity, reintroducing a potential threat to privacy for event-based vision applications. In this paper, we aim to anonymize event-streams to protect the identity of human subjects against such image reconstruction attacks. To achieve this, we propose an end-to-end network architecture jointly optimized for the twofold objective of preserving privacy and performing a downstream task such as person ReId. Our network learns to scramble events, enforcing the degradation of images recovered from the privacy attacker. In this work, we also bring to the community the first ever event-based person ReId dataset gathered to evaluate the performance of our approach. We validate our approach with extensive experiments and report results on the synthetic event data simulated from the publicly available SoftBio dataset and our proposed Event-ReId dataset. The code is available at [https://github.com/IIT-PAVIS/Refd_without_Id](https://github.com/IIT-PAVIS/Refd_without_Id) ## 1 Introduction For security and monitoring purposes, intelligent surveillance systems are installed in our personal spaces (e.g., home surveillance) and all over urban areas (hospitals, banks, shopping malls, airports and streets, etc.). However, collecting images and videos with always-connected vision sensors raises new issues: _i)_ ethical discussions over the balance between safety/security needs and individual privacy; _ii)_ unauthorized access to sensory data that may threaten users' privacy; _iii)_ extensive resource consumption of large-scale sensor networks, e.g., energy, bandwidth, and computing power. Neuromorphic vision sensors (event cameras) are a disruptive technology as they only capture scene dynamics and do not record visual detail of humans, which enforces privacy-by-design (to some extent); their ultra-low resource consumption makes them ideal for always-on visual sensors. Besides, their high dynamic range enables them to work under challenging illumination conditions while, like RGB cameras, event cameras are able to solve various vision tasks, such as object recognition [26], human pose estimation [33], detection and tracking [27, 15, 21], and person re-identification (ReId) [1]. Event cameras output asynchronous events that are triggered with extremely low latency when an intensity change at pixel level is over a given threshold. Due to their asynchronous nature, event streams do not form images but rather a data stream containing pixel position activations (i.e., \((u,v)\) coordinates) and a polarity. Figure 1: Event-to-image [32] can be regarded as a _privacy attack_, which reconstructs the appearance of a person from an event stream (**a**). We propose a learnable Event Anonymization network architecture (**b**), which deals with such an attack by scrambling the event stream so that reconstruction deteriorates while preserving the performance of an event-based downstream task, e.g., person ReId (**c**). We also consider a possible _Inversion Attack_ (**d**), where the attacker tries to reverse the effect of the proposed anonymization in order to attain image reconstruction (**e**). 
We also consider a possible _Inversion Attack_ (**d**), where the attacker tries to reverse the effect of the proposed anonymization in order to attain image reconstruction (**e**). (i.e., \((u,v)\) coordinates) and a polarity. These event streams were considered privacy-preserving, as they do not contain detailed visual features that can let a human or algorithm recognize individual traits such as faces. However, event streams encode the entire visual signal in an extremely compressed form and could, in principle, be decompressed to recover a high-quality video stream. Currently, deep neural network-based image reconstruction models [32, 30, 37, 43] have demonstrated impressive abilities in recovering grayscale images from event streams, representing a potential threat to the privacy of event-based vision applications as shown in Fig. 1 (a). Recently, Du et al. [11] proposed a hand-crafted encryption framework to prevent privacy attacks on event-streams. Their approach incorporates a spatial chaotic mapping to scramble the positions of events and flip their polarities. The spatial information in the encrypted event-stream is thus deformed due to 2D position scrambling and, as a result, event-to-image methods fail to reconstruct high-quality images. The main drawback of this encryption technique is that downstream computer vision tasks cannot be performed directly with the encrypted event-stream, which is only useful to protect data during transmission or storage and must be decrypted before being utilized. In this paper, we propose a learning-based approach called Event-Stream Anonymization which prevents privacy attacks on event data (see Fig. 1 b), while at the same time allowing the execution of downstream tasks. The proposed method enforces the degradation of images recovered from the privacy attacker (i.e., event-to-image module) while jointly optimizing a downstream task in an end-to-end fashion. In other words, it helps protect subjects' identity while preserving the information needed to achieve other tasks, such as person Reld as shown in Fig. 1 (c). The two tasks, anonymization and Reld, seem to have contrasting objectives. This represents the actual challenge of our work. However, Reld only aims at associating images of a person in a camera network, while anonymity refers to protecting a person's identity or other biometric traits. A practical use case is when an attacker has a person's name and photo and aims at identifying that person by maliciously accessing a camera network. The proposed anonymization pipeline prevents this attack while allowing Reld by the surveillance system. We verify that our approach can successfully anonymize the event-stream with only a small drop in performance in person Reld by performing extensive experiments on simulated event data and on a newly introduced real event-based person Reld dataset called Event-Reld. More specifically, to evaluate the robustness of our method against event-to-image reconstruction techniques, not only do we measure the (poor) quality of the recovered images, but we also verify that classic full-body human identification or face identification tasks are hardly possible using such anonymized data. In addition, we validate the robustness of our anonymization technique against an inversion attack, where an attacker tries to reverse the effect of the anonymization network (see Fig. 1 d). 
The main contributions of this work are summarised as follows: * We propose an event-stream anonymization network to protect the identity information against event-to-image attacks in event-based vision applications. We also propose a joint optimization framework that preserves anonymization with a small drop in performance while testing on downstream tasks (e.g., Reld). * We contribute a first-ever person Reld dataset captured with event camera, namely the Event-Reld dataset. * We performed extensive experiments to verify the robustness of event stream anonymization network against privacy attacks (event-to-image approaches) using synthetic and the proposed real event dataset. ## 2 Related Work ### Privacy-Preserving Computer Vision **Standard (RGB) Vision Sensor:** Currently, few methods are developed to solve privacy-preserving issues for standard RGB cameras. These methods [18, 17, 20, 38, 36, 12, 7] can be divided into software and hardware level protection against privacy attacks. Methods based on software-level protection [20, 38, 12, 7] employ various computer vision algorithms to morph image/video data representations after acquisition. Thus, they learn privacy-preserving encodings through adversarial training to degrade privacy-related visual information in images/videos while trying to preserve essential features to perform inference tasks and prevent adversarial attacks. The hardware-level protection framework acts instead on the vision sensor to include an additional layer of security by removing sensitive data during the image acquisition. Most recent approaches optimize the distortion parameters of a virtual lens via adversarial training to hide the identity information of humans while allowing essential visual information to be gathered for computer vision task [18, 17, 36]. Actual lenses can then be manufactured using the learned coefficients. **Event-based Vision Sensors:** Event cameras are often regarded as privacy-preserving as they naturally discard detailed visual biometric information (such as face details). However, the events-stream encodes the complete visual signal in an extremely compressed form and recent works were able to decompress it and recover a standard (grayscale) visual output, either using patch-based dictionaries [3], variational models [28], or deep learning-based solutions [32, 30, 37, 43]. Such event-to-image conversion approaches seem to suggest that event-cameras can no longer consider privacy-preserving devices since attackers can train their own models to break anonymity. Du et al. [11] investigate the privacy of event cameras and analyze the possible security attacks, including gray-scale image reconstruction and privacy-related classification. In addition, to prevent event-to-image conversion approaches, they proposed a hand-crafted encryption framework that incorporates spatial chaotic mapping to scramble the positions of events and polarity flipping. However, this framework is only useful to protect event-stream during transmission and storage purposes. In fact, the visual information is distorted in the event-stream due to 2D position scrambling and a computer vision module (e.g., person ReId, tracking, detection, etc.) can not directly be applied to the encrypted event-stream. On the contrary, we develop a method that distorts event-stream in such manner that image reconstruction methods produce degraded images while preserving the useful information to perform computer vision tasks (e.g., person ReId) on those distorted events. 
### Person Re-Identification Person Re-Identification has gained significant interest as an enabling technique for smart video surveillance systems (e.g., tracking in non-overlapping views, forensic and security applications [40]). The person ReId problem has been extensively studied in standard (RGB) camera networks and deep-learning-based ReId approaches [39, 40] have rapidly improved performance. Most of the existing ReId frameworks are developed for conventional RGB cameras, although different methods have been proposed for multi-modal person ReId, such as cross-modal RGB-infrared [6, 22] and RGB-D [2, 29] settings. Nowadays, ReId raises severe privacy concerns and preserving people's privacy becomes essential [10], also in view of the General Data Protection Regulation (EU GDPR). Currently, very few methods [9, 41, 10] have addressed privacy concerns in person ReId. Julia et al. [9] apply face blurring to anonymize person identity and perform ReId. On the other hand, Shuguang et al. [10] suggested a privacy-preserving ReId method called person identity shift (PIS) that removes the absolute identity of the image (i.e., who the person in the image is) while preserving the relationship between image pairs. Zhao et al. [41] proposed a cloud-based privacy-preserving solution for ReId that allows the cloud server to perform person ReId operations on encrypted data and output the final ReId results in plain text. A major drawback of all the above methods is that they do not ensure the end-to-end privacy of the ReId system. The possibility of unauthorized access to the surveillance camera still poses severe threats to privacy. To address this challenge, the authors in [1] proposed an event-based person ReId system. Since event cameras capture scene dynamics without providing RGB image content, Ahmad et al. [1] showed that event-frames deliver mostly edge and texture contour details that might be used for ReId. Nevertheless, as already discussed, event-based streams can still disclose personal traits, since neural networks [32, 37, 43] can extract high-quality grayscale images from the event-stream. To achieve an end-to-end privacy-preserving person ReId system, we propose a learning-based approach called event-stream anonymization for privacy-preserving person ReId. Our model learns to anonymize the event-stream to prevent image reconstruction techniques (i.e., privacy attacks) from recovering gray-scale images that may disclose identity information. ## 3 Event-ReId: A New Dataset and Benchmark Our target is to develop privacy-preserving person ReId methods using event-cameras. Yet, the research community lacks a dataset captured with real event cameras that is also suitable for benchmarking person ReId methods. Hence, despite the advantages of event cameras in a surveillance application, research has been held back by the unavailability of event data, and so far, only simulated experiments have been deployed [1]. To address this issue and to boost new research on this topic, we propose the first event-based person ReId dataset, named **Event-ReId**. The Event-ReId dataset comprises 33 subjects walking across the non-overlapping fields of view of four Prophesee® event cameras integrated within a surveillance network. The cameras feature different positioning and tilt angles; each one is coupled with an RGB camera in a fixed stereo configuration that captures approximately the same scene, with both sensors synchronized by the network clock (see Fig. 2).
Each RGB camera records data at 30 FPS at a resolution of \(640\times 480\) pixels, having captured a total of 16K images with an average of 120 frames per person per camera. The event-camera resolution is the same as that of the RGB camera, and each stream is recorded for the same length (\(\approx\)4 sec) for both sensors. Further, out of 33 identities, 9 people wear face masks and each person appears in the view of all four camera pairs. The dataset includes variations, such as changes in illumination, pose, and viewpoint. We manually annotate the person and face bounding boxes on both event and RGB streams; the event ground truth bounding boxes are synchronized with the RGB bounding boxes. The proposed dataset size compares favorably with the size of other event-based datasets: the activity recognition datasets **n-HAR** [31] and **DailyAction-DVS** [23], and the human pose estimation dataset **DHP19** [5] (see comparisons in Table 1).

\begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Dataset & **Event-ReId** & n-HAR & DailyAction-DVS & DHP19 \\ \hline No. of subjects & 33 & 30 & 15 & 17 \\ \hline \end{tabular} \end{table} Table 1: Event-based dataset size comparison.

Figure 2: Samples from RGB and event camera views from our Event-ReId dataset.

Download **Event-ReId** from here [https://doi.org/10.5281/zenodo.8256439](https://doi.org/10.5281/zenodo.8256439) ## 4 Proposed Method The proposed pipeline consists of three main modules: the event anonymization block, the event-to-image reconstruction block, playing the role of the privacy attacker, and the person ReId block, which performs the downstream vision task on the anonymized event stream, as shown in Fig. 3. In the following, after describing the input event representation to the network, we provide a detailed description of each module, including its implementation and functionalities for preserving privacy and person ReId. We conclude this section with a description of the joint optimization method. ### Input Event Representation The output of an event camera is an asynchronous event stream that encodes the time, location, and polarity of the intensity changes (increase or decrease in intensity) [13]. Consequently, each event alone carries limited information about the scene appearance. Typically, asynchronous event data are converted to a grid-like representation such as an event-frame or 2D histogram [25], a time surface 2D map [34], or a voxel grid [42]. This pre-processing facilitates the visualization and the extraction of meaningful information using standard frame-based methods such as deep convolutional neural networks (CNN) [25, 34, 42]. The input of our network is a voxel grid \(X_{e}\) as proposed in [42]. A voxel grid is a space-time (3D) histogram of events generated by discretizing the time domain, where each voxel represents a particular pixel and time interval. Spatiotemporal coordinates, \(x_{k},y_{k},t_{k}\), lie on a voxel grid such that \(x_{k}\in\{1,2,...,W\}\), \(y_{k}\in\{1,2,...,H\}\), and \(t_{k}\in\{t_{0},t_{0}+\Delta t,...,t_{0}+B\Delta t\}\), where \(t_{0}\) is the first time stamp, \(\Delta t\) is the bin size, \(B\) is the number of temporal bins, and \(W\), \(H\) are the sensor width and height.
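Since the rest of the pipeline operates on this representation, a minimal NumPy sketch of the voxel-grid construction may help fix the notation. The function below is illustrative only: its name and the nearest-bin accumulation are ours, while the representation of [42] additionally spreads each event over neighbouring temporal bins.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, B, W, H):
    """Accumulate an event stream into a (B, H, W) space-time histogram.

    x, y: pixel coordinates; t: timestamps; p: polarities in {-1, +1}.
    Events are binned into B temporal slices of width Delta t, following
    the discretisation described in the text (nearest-bin variant).
    """
    voxel = np.zeros((B, H, W), dtype=np.float32)
    t0, t1 = float(t.min()), float(t.max())
    # Normalise timestamps to [0, B) and clip the last event into bin B-1.
    b = ((t - t0) / max(t1 - t0, 1e-9) * B).astype(int).clip(0, B - 1)
    np.add.at(voxel, (b, y.astype(int), x.astype(int)), p.astype(np.float32))
    return voxel
```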
We utilize a voxel grid representation for three reasons: _i)_ to make the model fully differentiable; _ii)_ the event-to-image methods in our proposed model also rely on a voxel grid; _iii)_ a voxel grid preserves the temporal information of event streams. ### Networks and modules **Event-Stream Anonymization Block.** The anonymization network in our framework (Fig. 3a) modifies the event-streams to prevent the following image reconstruction techniques from converting events into intensity images that can reveal privacy-sensitive information (e.g., faces). At the same time, this module should preserve useful spatial information needed for performing person ReId successfully.

Figure 3: The complete pipeline of the proposed method. a) The event anonymization network \(E_{an}\) takes raw event (voxel-grid) data and outputs anonymized events. The \(L_{struct}\) loss enforces that \(E_{an}\) preserves the structural information in the voxel-grid. b) The image reconstruction block \(E_{rec}\) (pre-trained E2VID [32]) plays the role of the privacy attacker and tries to reconstruct a grayscale image; \(E_{an}\) is trained through the \(L_{rec}\) loss to degrade the reconstruction and protect person identity information. c) The person ReId backbone \(E_{reid}\) is trained with anonymized event data in an end-to-end fashion with \(E_{an}\).

The anonymization network consists of a convolutional autoencoder [35] \(E_{an}\) which takes a raw event-voxel \(X_{e}\in\mathbb{R}^{B\times W\times H}\) and outputs an anonymized event-voxel \(\hat{X}_{e}\in\mathbb{R}^{B\times W\times H}\). The use of an autoencoder-like architecture is primarily justified by the fact that this module, in the worst-case scenario, should be able to replicate the event-stream in order to allow performing the downstream task. The autoencoder architecture consists of \(4\) convolutional layers, each with a filter size of \(3\) and a stride of \(1\). **Image Reconstruction Block.** The image reconstruction module consists of a pre-trained E2VID network [32], a recurrent neural network that reconstructs high-quality grayscale images from the stream of events. In this block, any event-to-image method, e.g., [30, 37, 43], can be integrated as a privacy attacker. E2VID translates a continuous stream of events into a sequence of images. To achieve this, the incoming stream of events is partitioned into sequential (non-overlapping) spatiotemporal windows. Similarly, we partition the input event stream into fixed time windows \(\mathbf{T}\) (explained in Sec. 4.1) for the anonymization network \(E_{an}\). The output voxel-grid \(\hat{X}_{e}\) is then processed by the reconstruction module \(E_{rec}\) to reconstruct the target grayscale image. We thus encourage degradation in the recovered image to prevent identity information leakage. Note that the weights of this module are not updated during training. **Event-based Person ReId Block.** Person ReId methods aim to learn a vector representation, usually a feature embedding from a CNN, of images to perform retrieval and recover images belonging to the same person Id. In our case, ReId is performed on event-stream data instead of the standard RGB signals. We employ a ResNet-50 [16] pre-trained on ImageNet as the backbone for feature embedding. Unlike the event-based ReId in [1], which utilizes event-frames, our ReId module \(E_{reid}\) takes anonymized event-voxels \(\hat{X}_{e}\in\mathbb{R}^{B\times W\times H}\) as input.
We modify the original ResNet architecture to accommodate the \(B\) input channels of the voxel-grid representation and compute a 256-D feature embedding for ReId. The ReId model uses classification loss (cross-entropy) and triplet loss for all experiments and is jointly trained with the anonymization network. ### End-to-End Training Our ultimate goal is to learn the parameters of anonymization network \(E_{an}\) such that: _i)_ event-to-image techniques cannot recover intensity image from \(E_{an}\) output that can disclose private visual information; _ii)_ person ReId achieves the best performance or at least does not experience a significant drop if compared to using a non-anonymized event-stream. The three modules are combined as shown in Fig. 3 so that the output of \(E_{an}\) (anonymized stream) is the input of \(E_{rec}\) and \(E_{reid}\) at once. We train all the modules jointly in an end-to-end manner, described in detail below. \(E_{an}\) has the aim of neutralizing the reconstruction attack, thus ultimately must be trained with the objective of degrading the quality of the recovered images \(\hat{I}_{image}=E_{rec}(E_{an}(X_{e}))\). To this end, we use the structural similarity index (SSIM) [19] to assess the quality of \(\hat{I}_{image}\) compared to the ground-truth \(I_{image}\): \[\mathcal{L}_{rec}=SSIM(\hat{I}_{image},I_{image}). \tag{1}\] SSIM is one of the most popular perception-based error metrics [35], aiming to measure better image luminance, contrast, and structure information. Since our objective is to degrade the recovered image, the SSIM function is bounded ranges between [0-1], where a value near 0 indicates less similarity between two compared images. Thereby the \(\mathcal{L}_{rec}\) loss is minimized during training to force the images recovered by the attacker to be as more diverse as possible from the real ones. Moreover, as our anonymization model scrambles the input raw event-voxel, the useful visual information in the event-voxel could be lost, decreasing the performance of the person ReId substantially. To preserve the structural similarity between \(X_{e}\) and \(\hat{X}_{e}\), which is useful for person ReId, we compute the structural loss as \[\mathcal{L}_{struct}=1-SSIM(\hat{X}_{e},X_{e}), \tag{2}\] and define the person ReId objective as \[\mathcal{L}_{reid}=\mathcal{H}(P_{id},E_{reid}(\hat{X_{e}})). \tag{3}\] Here \(\mathcal{H}\) refers to the identity loss (cross entropy and triplet loss) function and \(P_{id}\) is the ids label for person ReId. Thus, our training scheme jointly models the event-stream anonymization with person ReId during training and the overall cost function can be written as: \[\mathcal{L}_{Total}=\alpha\mathcal{L}_{struct}+\beta\mathcal{L}_{rec}+\gamma \mathcal{L}_{reid}. \tag{4}\] ## 5 Experiments **Synthetic and real datasets** We test our method on reconstruction and inversion attacks using synthetic data and the real dataset presented in Section 3. Synthetic event data is generated from the video-based person ReId SoftBio [4] dataset through open-source event simulator [14]. The SoftBio dataset comprises 152 identities and a total of 64,472 frames collected with eight surveillance cameras. The dataset is recorded in an uncontrolled environment, and each identity may only appear in a subset of cameras, which collect data under very different viewpoints, with drastic changes in illumination and background. In addition, we benchmark our approach on the **Event ReId** dataset described in Sec. 3. 
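For concreteness, the joint objective of Eq. (4) amounts to a single backward pass through the three blocks. The following PyTorch-style sketch is illustrative only: the handles `e_an`, `e_rec`, `e_reid`, `ssim` and `reid_loss` are placeholders for the anonymization autoencoder, the frozen E2VID-like reconstructor, the ReId backbone, a differentiable SSIM in \([0,1]\), and the combined cross-entropy/triplet loss; they are not taken from the released code.

```python
import torch

def training_step(e_an, e_rec, e_reid, ssim, reid_loss, optimizer,
                  x_e, gt_image, pid, alpha=1.0, beta=1.0, gamma=1.0):
    x_anon = e_an(x_e)                        # anonymized voxel grid
    recon = e_rec(x_anon)                     # attacker's reconstruction; e_rec
                                              # is frozen but gradients flow
                                              # back through it to e_an
    l_rec = ssim(recon, gt_image)             # Eq. (1): minimised to degrade
                                              # the recovered image
    l_struct = 1.0 - ssim(x_anon, x_e)        # Eq. (2): preserve voxel structure
    l_reid = reid_loss(e_reid(x_anon), pid)   # Eq. (3): identity objective
    loss = alpha * l_struct + beta * l_rec + gamma * l_reid   # Eq. (4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```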
**Setup and Implementation:** We simulate event data from SoftBio, randomly splitting 152 identities, 76 IDs for training and 76 other IDs for testing. For the real data in Event ReId, we randomly select 22 IDs for training and 11 IDs for testing out of 33 identities of the proposed real event-based person Event-ReId. We choose the time span for the spatiotemporal voxel grid \(T{\approx}40ms\) for synthetic event data and \(T{\approx}33.3ms\) for real event data to be synchronized with the corresponding RGB frames. Following [32], we set the size of temporal bin \(B=5\) for the event voxel grid and during training, our model resized the event voxel grid to \(5{\times}392{\times}192\). We use a batch size of 24 and train the model with a base learning rate of 0.001 for 60 epochs. We set momentum \(\mu\) = \(0.9\) and the weight decay to 5\({\times}10^{-4}\). In Eq. 4 we set \(\alpha{=}\beta{=}\gamma{=}1\). The implementation is based on PyTorch. ### Metrics and Evaluation Methods To evaluate the performance of our complete model on reconstruction and inversion attacks, we need to assess the trade-off between person ReId and privacy-preserving tasks. We first processed the raw event-stream for both tasks during inference through our anonymization network to acquire the anonymized event data. Later, we measure the performance of person ReId and privacy-preserving tasks using anonymized data. **Person ReId:** Our main goal is to perform person ReId with anonymized event data without compromising ReId accuracy. We thus train our ReId backbone on both anonymized and raw events separately and then compare their performance. We report the rank accuracy and mean average precision for both real and simulated data. **Privacy-preserving:** Here, we consider the case in which the attacker has access to the anonymized event data and tries to disclose the person's identity by employing image reconstruction, e.g., E2VID [32]. To experimentally test the robustness of our event stream anonymization approach against the reconstruction attack, we measure the image quality using the structural similarity index (SSIM) and peak-signal-to-noise ratio (PSNR). Low values of SSIM and PSNR suggest low image quality, which is what we expect to achieve if anonymization is successful. We compute the average SSIM and PSRN for all images in test sets of the real and simulated datasets. In addition, we also validate that our proposed identity anonymization framework completely removes information that can be used to identify the persons. Therefore, we also formulate the privacy attack as an image retrieval and face verification task. _(i) Image Retrieval:_ We consider that an attacker has access to the event-based privacy-preserving surveillance camera network and also holds a query image of a target to identify. The query image is either captured with a standard RGB camera \(Q_{rgb}\) or a gray-scale image \(Q_{event}\) reconstructed from an event-stream without the protection of the privacy module. Then the attacker determines whether this person exists in the gallery set \(G_{an-event}\) that contains degraded images by using the query image to retrieve the correct target identity. Higher retrieval performance indicates a lower privacy-preserving effect: \(E_{an}\) performance is evaluated based on the rank accuracy or mean average precision metrics. For this experiment, we employ the state-of-the-art person ReId model BOT [24] to evaluate image retrieval and use the test sets of real and simulated datasets. 
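For the retrieval protocol in _(i)_, the reported rank accuracy and mean average precision can be computed generically from query/gallery embeddings as sketched below. This is a simplified illustration: the actual evaluation uses BOT [24], and the common ReId convention of discarding same-camera gallery matches is omitted here.

```python
import numpy as np

def retrieval_metrics(q_feat, q_ids, g_feat, g_ids, ks=(1, 5, 10)):
    """Rank-k accuracy and mAP from L2-normalised query/gallery features."""
    sim = q_feat @ g_feat.T                    # cosine similarity
    order = np.argsort(-sim, axis=1)           # best gallery match first
    matches = g_ids[order] == q_ids[:, None]   # Nq x Ng boolean relevance
    rank_k = {k: float(matches[:, :k].any(axis=1).mean()) for k in ks}
    aps = []
    for row in matches:
        hits = np.flatnonzero(row)             # ranks of correct matches
        if hits.size == 0:
            continue
        precision_at_hits = (np.arange(hits.size) + 1) / (hits + 1)
        aps.append(precision_at_hits.mean())   # average precision per query
    return rank_k, float(np.mean(aps))
```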
_(ii) Face Recognition:_ In this experiment, we assume a similar scenario, where the attacker holds a _face_ image (RGB or reconstructed gray-scale image) and tries to disclose identity information by matching it with a degraded face image. We use the pre-trained face recognition model ArcFace [8] to measure the resilience of our system to this privacy attack. We measure face recognition performance in terms of the area under the curve (AUC) of the ROC curve. \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Dataset & **Real** & & **Synthetic** \\ \hline \hline **Method** & _SSIM\(\downarrow\)_ & _PSNR\(\downarrow\)_ & _SSIM\(\downarrow\)_ & _PSNR\(\downarrow\)_ \\ \hline \hline No-privacy & 0.548 & 11.617 & 0.530 & 11.284 \\ Privacy (Our) & 0.384 & 8.943 & 0.368 & 8.071 \\ \hline \end{tabular} \end{table} Table 2: Recovered image quality: SSIM and PSNR values. Figure 4: Image retrieval score on Event-ReId (left) and SoftBio (right), following query-gallery setting, blue: Q\({}_{RGB}\), G\({}_{event}\), orange: Q\({}_{event}\), G\({}_{an-event}\) and green: Q\({}_{RGB}\), G\({}_{an-event}\) Figure 5: Face recognition accuracy using Arcface[8] model. ### Results on Reconstruction attack to 54.5\(\%\). Finally, including \(L_{struct}\) loss (\(\alpha\)=\(\beta\)=1) (helps to maintain structural information while anonymizing the voxel-grid) recovers the accuracy to 59.2\(\%\), still preserving privacy, as detailed in Table 5 and Fig 7. **Qualitative Results.** We qualitatively compare the reconstructed images acquired using our approach with the original images. We show the results on two examples from each Event-ReId and Softbio data video from the dataset. Fig. 8 displays anonymized images compared to the original RGB and recovered gray-scale images for reference. As observed, the image reconstructed from anonymized events degraded as compared to non-privacy images. We also show the two exemplar face reconstructions from real event data in Figure 9, showing that the subject face can not be reconstructed from our anonymized event stream compared to face reconstruction from the non-privacy event stream. ### Results on Inversion attack We explore a scenario where an attacker has access to our privacy-preserving event camera; they can produce a large training set containing anonymized event data along with their corresponding original event data. In such a case, the attacker can possibly train a network \(E_{inv}\) trying to reverse the effect of \(E_{an}\), leading to the reconstruction of high-quality grayscale images. To validate the robustness of our proposed framework to such privacy attacks, we train an autoencoder network on the real event dataset, similar to the \(E_{an}\) network. The network takes as input the anonymized event stream from the pre-trained \(E_{an}\) network and is trained to minimize the image reconstruction loss (instead of maximizing it). Quantitative results in Table 6 show the performance score of image retrieval on reconstructed images, suggesting that reconstruction is significantly poor and identity information is still preserved. Fig. 10 presents qualitative results on two sample images, which show the image reconstruction failed to recover images correctly. Hence, the inversion attack could not reverse the effect of the event anonymization network. ## 6 Conclusion This paper presented an end-to-end learning-based approach for privacy-preserving person re-identification. 
We identified event-to-image techniques as a potential threat to privacy in event-based vision. The proposed approach jointly optimizes the event-stream anonymization to prevent privacy attacks while effectively performing the ReId task. The proposed model is trained and evaluated on simulated event data and on the real event data of Event-ReId. Human identification and face recognition results verify the efficacy of our framework against possible privacy attacks. We also demonstrate that our model is resistant to an inversion attack, which tries to reverse the effect of the anonymization module. The main limitations of the proposed pipeline are a small drop in performance for the downstream task and a slight computational overhead due to the event anonymization network.

\begin{table} \begin{tabular}{|l||c c c c|} \hline Method & R1 & R5 & R10 & mAP \\ \hline \hline No-Privacy & 67.8 & 79.9 & 88.4 & 40.7 \\ Privacy & 8.9 & 15.6 & 17.7 & 3.2 \\ Inversion Attack & 9.1 & 14.3 & 17.4 & 2.9 \\ \hline \end{tabular} \end{table} Table 6: Image retrieval performance on the Event-ReId dataset for images recovered under the Inversion Attack.

Figure 8: Visualisation of reconstructed images obtained using the learning-based event anonymization method. a) real event dataset Event-ReId; b) synthetic event dataset SoftBio. Figure 9: Visualisation of reconstructed face images. Top row: RGB images. Middle row: recovered from raw events. Bottom row: recovered from anonymized events. Figure 10: Reconstructed images from a) raw events, b) anonymized events, and c) the output of the Inversion Attack.

**Impact:** The proposed approach can be integrated with person ReId systems where privacy preservation is essential. As this work aims to perform ReId tasks without disclosing human identity information, we believe that in the future, the event-stream anonymization mechanism can be extended to other event-based computer vision tasks to protect privacy at large. A potential negative impact is that surveillance data and person ReId datasets may be targeted by privacy attacks, which is why their acquisition, data storage, and protection should be strictly regulated. Also, the misuse of ReId can potentially have a negative impact. ## Acknowledgment This work was partially supported by the project "RAISE-Robotics and AI for Socio-economic Empowerment" and has been supported by the European Union-NextGenerationEU.
2306.12914
The Dilaton Improves Goldstones
The free scalar field is only conformally invariant when non-minimally coupled to gravity. In flat space this amounts to amending, or improving, the energy momentum tensor. A no-go theorem prohibits the improvement for Goldstone bosons, originating from global internal spontaneous symmetry breaking. It is shown that the no-go theorem can be circumvented in the presence of a dilaton. The latter is a (pseudo) Goldstone boson originating from spontaneous conformal symmetry breaking in a theory with an infrared fixed point. Specifically, the tracelessness of the energy momentum tensor is demonstrated for a generic $d$-dimensional curved space. Additionally, the Goldstone gravitational form factors are shown to obey conformality constraints in the soft limit. The crucial point is that the remainder term of the soft theorem is non-zero due to the presence of the dilaton pole. For Goldstone systems with a trivial infrared fixed point the leading order analysis of this paper ought to be sufficient. Loop effects govern the improvement term outside the fixed point and are scheme-dependent as briefly discussed towards the end of the paper.
Roman Zwicky
2023-06-22T14:30:23Z
http://arxiv.org/abs/2306.12914v2
# The Dilaton Improves Goldstones ###### Abstract The free scalar field is only conformally invariant when non-minimally coupled to gravity. In flat space this amounts to amending, or improving, the energy momentum tensor. A no-go theorem prohibits the improvement for Goldstone bosons, originating from global internal spontaneous symmetry breaking. It is shown that the no-go theorem can be circumvented in the presence of a dilaton. The latter is a (pseudo) Goldstone boson originating from spontaneous conformal symmetry breaking in a theory with an infrared fixed point. Specifically, the tracelessness of the energy momentum tensor is demonstrated for a generic \(d\)-dimensional curved space. Additionally, the Goldstone gravitational form factors are shown to obey conformality constraints in the soft limit. The crucial point is that the remainder term of the soft theorem is non-zero due to the presence of the dilaton pole. For Goldstone systems with a trivial infrared fixed point the leading order analysis of this paper ought to be sufficient. Loop effects govern the improvement term outside the fixed point and are scheme-dependent as briefly discussed towards the end of the paper. ## 1 Introduction The free massless scalar field, minimally coupled to gravity, is not conformal unless the spacetime dimension is \(d=2\). This leads to problematic ultraviolet (UV) properties in \(d\neq 2\) quantum field theories (QFT) [1; 2] and is in tension with the weak equivalence principle of general relativity (GR) [3]. However, the issues resolve when the scalar field is non-minimally coupled to the Ricci scalar [1; 2; 3]. It is the common view in the literature that this type of improvement procedure cannot be applied to Goldstones of a broken internal global symmetry [4; 5; 6]. In this work it is shown that Goldstones can be improved in the presence of a dilaton, the (pseudo) Goldstone arising from spontaneous breaking of conformal symmetry. The dilaton itself can be improved and at the same time improves the remaining Goldstones. Before summarising the solution proposed in this paper, the literature standard for the free scalar field and the no-go theorem for the Goldstones are reviewed. ### The improved free scalar field reviewed It is well-known that for a scalar field in curved space there are two terms \[{\cal L}=\frac{1}{2}\left((\partial\varphi)^{2}-\xi R\varphi^{2}\right)\;, \tag{1}\] with dimensionless couplings which are quadratic in the field e.g. [7].1 Besides the standard kinetic term there is another one proportional to the Ricci scalar \(R\) which may be regarded as a non-minimal coupling to gravity. The specific value of the \(\xi\)-parameter is related to the improvement and discussed below. The corresponding classical energy momentum tensor (EMT), defined by the metric variation, reads Footnote 1: At the level of a four derivative kinetic term there exists a unique conformal extension [8]. It has been argued that such theories can be ghost free [9] and that they have interesting properties for models of cosmology [10; 11]. The treatment of this case is outside the scope of this paper. \[T_{\mu\nu}=2\frac{\delta}{\delta g^{\mu\nu}}\int d^{4}x\sqrt{-g}{\cal L}\Big{|} _{g_{\mu\nu}=\eta_{\mu\nu}}=\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{ \eta_{\mu\nu}}{2}(\partial\varphi)^{2}+\xi(\partial^{2}\eta_{\mu\nu}-\partial _{\mu}\partial_{\nu})\varphi^{2}\;, \tag{2}\] where the flat space limit has been assumed for simplicity. 
Taking the trace and using the equation of motion (EOM) \(\partial^{2}\varphi=0\), one gets \[T^{\rho}_{\;\rho}=-d_{\varphi}(\partial\varphi)^{2}+\xi(d-1)\partial^{2} \varphi^{2}=(d-1)(\xi-\xi_{d})\partial^{2}\varphi^{2}\;, \tag{3}\] where \(d_{\varphi}\equiv\frac{d-2}{2}\) is the free scalar dimension and \(\xi_{d}\) is the famous improvement parameter \[\xi_{d}\equiv\frac{(d-2)}{4(d-1)}\stackrel{{ d\to 4}}{{\to}}\;\frac{1}{6}\;, \tag{4}\] or \(\xi_{d}=\frac{d_{\varphi}}{2(d-1)}\) in alternative variables. Setting \(\xi=\xi_{d}\) leads to \(T^{\rho}_{\;\rho}=0\). Further note that \(T^{\rho}_{\;\rho}=0\), on physical states, is equivalent to conformal symmetry, whereas \(T^{\rho}_{\;\rho}=\partial_{\alpha}X^{a}\) signals scale invariance only [12]. Conformality in \(d=2\) is automatic in that \(\xi_{2}=0\) and thus \(\xi=0\), where the free scalar field serves as an example of a simple conformal theory e.g. [12]. Historically, the value \(\xi_{4}=\frac{1}{6}\) was first noted in the context of conformal symmetry in GR [13] and later seen as a necessary choice to obey the weak equivalence principle [3] (or also [14]). In QFT it was shown that \(\xi=\xi_{d}\) leads to a UV-finite EMT for the free field which is a necessity, in a renormalisable theory, since \(T_{\mu\nu}\) is an observable [1]. The authors, Callan, Coleman and Jackiw (CCJ), referred to the corresponding EMT as "the improved EMT" which has become standard terminology. Similarly the value \(\xi=\xi_{d}\) guarantees UV-finiteness of the integrated Casimir energy [2]. Altogether this may be taken as an indication that conformality, rather than scale invariance, plays a more fundamental role in physics. ### The problem with Goldstones Dolgov and Voloshin concluded that \(\xi=0\) for Goldstones by applying a soft theorem to gravitational form factors [4] reviewed in Sec. 2.3.1, and cf. [5; 6] for further discussion. In order to make the presentation more concrete, the standard chiral spontaneous symmetry breaking (SSB) of QCD-like theories, \(SU(N_{F})_{L}\times SU(N_{F})_{R}\to SU(N_{F})_{V}\), is assumed and its Goldstones are referred to as pions, e.g. [15; 16]. A straightforward way to see that there is a problem is to note that the \(\delta{\cal L}=-\frac{1}{2}\xi R\varphi^{2}\) term cannot be built out of the non-linear coset field \(U=\exp(i\pi/F_{\pi})\), as it either breaks \(SU(N_{F})_{L}\times SU(N_{F})_{R}\)-invariance \(\delta{\cal L}\propto R\text{Tr}[U+U^{\dagger}]\) or it is trivial \(\delta{\cal L}\propto R\text{Tr}U^{\dagger}U\propto R\).2 Footnote 2: For a pseudoscalar isosinglet, such as the \(\eta^{\prime}\) or the axion, the symmetry restrictions are not as severe but still severe enough. In addition Goldstones, other than the dilaton, are of Weyl-weight zero, cf. the discussion in Sec. 2.1.1, and as a consequence Weyl invariance is spoiled. To rectify this aspect is precisely the task of the dilaton. A non-vanishing classical trace, besides the problems already mentioned, is problematic in the IR in connection with renormalisation group (RG) flow theorems. The latter state that the difference of specific Weyl anomalies of the UV and IR limits of the theory are positive. The quantity can be written as integrals over TEMT correlation functions. The expressions only converge when the TEMT vanishes fast enough in the UV and the IR. Convergence in the UV can be shown to hold performing resummation with RG methods [17]. 
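Although elementary, the uniqueness of the value (4) is easy to check symbolically. The sketch below (an illustration, not part of the original derivation) imposes the free EOM \(\partial^{2}\varphi=0\), under which \(\partial^{2}\varphi^{2}=2(\partial\varphi)^{2}\), on the trace (3):

```python
import sympy as sp

# grad2 stands for (partial phi)^2; on-shell, d^2(phi^2) = 2*grad2.
d, xi, grad2 = sp.symbols('d xi grad2', positive=True)
d_phi = (d - 2) / 2
trace = -d_phi * grad2 + xi * (d - 1) * 2 * grad2        # Eq. (3) with EOM
xi_d = sp.solve(sp.Eq(trace, 0), xi)[0]
print(sp.simplify(xi_d - (d - 2) / (4 * (d - 1))))       # -> 0, i.e. Eq. (4)
print(xi_d.subs(d, 4))                                   # -> 1/6
```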
If the IR theory is standard QCD with free pions in the IR, then \(T^{\rho}_{\ \rho}=-\frac{1}{2}\partial^{2}\pi^{2}\) leads to logarithmic IR divergences for the \(\Box R\)-flow [18].\({}^{3,4}\) Footnote 3: For the famous \(a\)-theorem [19; 20; 21; 22; 23] it seems, cf. footnote 11 in [18], that the same issue arises since the \(2\to 2\) scattering contains correlation functions of the type \(\langle T^{\rho}_{\ \rho}(p_{1}+p_{2})T^{\rho}_{\ \rho}(p_{3}+p_{4})\rangle\) in Eq. 3.7 [22], and the on-shellness condition \(p_{i}^{2}=0\) does not imply \((p_{1}+p_{2})^{2}=0\), and equally leads to logarithmic IR divergences. However, the argument about the sufficiently inclusive cross section [22] points towards no IR divergences. In addition the \(a\)-theorem, unlike the \(\Box R\)-flow, is protected by topology and calculable at each fixed point separately. In summary there is presumably no problem and the improvement proposed in this work would fix the technical issue pointed out above. Footnote 4: An interesting aspect is of course that if a massless dilaton is present then the Goldstone counting, as first applied in [19], would be modified by the addition of the dilaton. ### The dilaton improves Goldstones Let us turn to the solution proposed in this paper. The dilaton field \(D\) is non-linearly realised by \[\chi\equiv F_{D}e^{-\hat{D}}\;,\quad\hat{D}\equiv\frac{D}{F_{D}}\;, \tag{5}\] where \(F_{D}\) is the dilaton decay constant with units of mass and hatted quantities are, hereafter, understood to be made dimensionless by division with the appropriate power of \(F_{D}\). The reader is referred to App. A for the relevant Weyl transformations which make the non-linearity manifest. The problem found in the pion case does not arise since the dilatation subgroup is abelian and the dilaton transforms via the shift symmetry only. That is, the following improvement term \[{\cal L}_{d}^{R}=\frac{\kappa}{4}\,R\,\chi^{d-2}\;, \tag{6}\] is invariant under global Weyl transformations, and it is \(\chi\) that takes on the role of the compensator, e.g. [24], in that it compensates for the (hidden) scale \((F_{D})^{d-2}\) contained in the \(\chi^{d-2}\)-factor. The dilaton improvement factor \[\kappa=\kappa_{d}\equiv\frac{2}{(d-1)(d-2)}\stackrel{{ d\to 4}}{{ \rightarrow}}\frac{1}{3}\;, \tag{7}\] (\(\kappa_{d}=\frac{1}{d_{\varphi}(d-1)}=\frac{2}{d_{\varphi}^{2}}\xi_{d}\)) gives rise to \(T^{\rho}_{\;\rho}=0\), as we shall see. The normalisation is chosen such that in \(d=4\) the quadratic orders in the fields match: \(\xi_{4}R\varphi^{2}\leftrightarrow\kappa_{4}RD^{2}\). And since \(\xi_{4}=\frac{1}{6}\) and \(\kappa_{4}=\frac{1}{3}\) this may be interpreted as a double improvement w.r.t. non-Goldstone fields. This will become clearer when studying the gravitational form factors. At last a few words about the dilaton. It is not the common view that (chiral) SSB is accompanied by a dilaton. However, it has been pointed out that certain features of low energy QCD can be interpreted in terms of an infrared fixed point (IRFP) [25].5 If the latter were true then a dilaton is to be expected [26; 27; 28; 31]. Whether or not the dilaton has a mass, due to the conformal symmetry only being emergent in the IR, is an open question with diverging opinions in the literature. The improvement is possible, in the sense of \(T^{\rho}_{\;\rho}={\cal O}(\hat{m}_{D}^{2})\), as long as \(m_{D}\ll\Lambda\) such that the dilaton mass can be added as a perturbation, with further comments in the relevant sections.
Footnote 5: An important aspect for its validity is that massive hadrons are not in conflict with IR conformality [26]. Earlier work [27; 28] suggested that such a scenario could for example ease the explanation of \(K\rightarrow\pi\pi\)[27; 28], at the practical cost of a proliferation of low energy constants. The previously mentioned flow theorems provide additional motivation. Examples of dilatons are known in \(d=2\)[29] at finite temperature and \(d=3\)[30]. The paper is organised as follows. In Sec. 2 the pion-dilaton system is investigated at the classical level, establishing \(T^{\rho}_{\;\rho}={\cal O}(\hat{m}_{D}^{2})\), verifying conformality constraints on gravitational form factors and demonstrating the link of the improvement term to the order parameter \(F_{D}\) within the EFT. The RG-behaviour, that is the quantum effects, of the standard and Goldstone improvement parameters are discussed in qualitative manner in Sec. 3. The paper ends with summary and conclusions in Sec. 4. Conventions, comments on earlier work and remarks on the (non)-role of the improvement term in effective theories of gravity are deferred to Apps. A, B and C respectively. ## 2 Pion-dilaton System ### Improvement and equations of motion First the simple case of a flat 4-dimensional space is considered before analysing a generic \(d\)-dimensional curved space. The latter is relevant for GR and applications to cosmology. #### 2.1.1 Flat space in \(4\)-dimensions Neglecting terms in the EFT, suppressed by the cutoff \(\Lambda\approx 4\pi\min F_{D,\pi}\) the Lagrangian, referred to as leading order (LO) hereafter, reads \[{\cal L}_{\rm LO}={\cal L}_{\rm kin,4}+{\cal L}_{4}^{R}-V_{4}(\chi)\;, \tag{8}\] with improvement term \({\cal L}_{4}^{R}\) defined in (6), a potential \(V_{4}\) with comments to follow, and for later convenience the kinetic terms are given in \(d\)-dimensional form \[{\cal L}_{\rm kin,d}={\cal L}_{\rm kin,d}^{\pi}+{\cal L}_{\rm kin,d}^{D}=\frac{F _{\pi}^{2}}{4}\hat{\chi}^{d-2}{\rm Tr}[\partial^{\mu}U\partial_{\mu}U^{\dagger} ]+\frac{1}{2}\chi^{d-4}(\partial\chi)^{2}\;. \tag{2}\] The prefactor \(\chi^{d-2}\) in front of \({\rm Tr}[\partial^{\mu}U\partial_{\mu}U^{\dagger}]\) signals that pions are of zero Weyl-weight which is a direct consequence of the conformal algebra [32]. Note the alternative form \({\cal L}_{\rm kin,d}^{D}=\frac{1}{2}\chi^{d-2}(\partial\hat{D})^{2}\) of the dilaton kinetic term which appears frequently in the literature. We shall refer to this EFT as dilaton chiral perturbation theory (D\(\chi\)PT) following [33] earlier literature although scale chiral perturbation theory [27; 28] would also be a sensible name. The Lagrangian (2) without the improvement term is fairly standard in the literature and can be obtained by the previously mentioned compensator trick [24], requiring global Weyl invariance. The addition of the improvement term renders \({\cal L}_{\rm LO}\) invariant under _local_ Weyl transformation, and the latter imply conformal invariance e.g. [34].6 We proceed in showing \(T^{\rho}_{\;\;\rho}=0\) explicitly, for which the key ingredient is the use of the dilaton EOM Footnote 6: The dilaton kinetic term and the improvement term compensate each others transformation. Moreover, the mass term itself is of course not Weyl invariant and this will show up as an \({\cal O}(\hat{m}_{D}^{2})\)-correction to tracelessness. \[\chi\partial^{2}\chi=2{\cal L}_{\rm kin,4}^{\pi}-\partial_{\ln\chi}V_{4}\;. 
\tag{3}\] The pion EOM is not quoted as not needed for \(T^{\rho}_{\;\;\rho}={\cal O}(V_{4})\) but solely to verify translational invariance \(\partial^{\mu}T_{\mu\nu}=0\). Assuming the flat limit one obtains for the EMT \[T_{\mu\nu}=\frac{F_{\pi}^{2}}{2}\hat{\chi}^{2}{\rm Tr}[\partial_{\mu}U\partial _{\nu}U^{\dagger}]+\partial_{\mu}\chi\partial_{\nu}\chi-\eta_{\mu\nu}({\cal L }_{\rm kin,4}-V_{4})+T_{\mu\nu}^{R}\;, \tag{4}\] where the \(\eta_{\mu\nu}\)-term originates from varying the determinant, and the improvement-part assumes the form \[T_{\mu\nu}^{R}=\frac{\kappa_{4}}{2}(\eta_{\mu\nu}\partial^{2}-\partial_{\mu} \partial_{\nu})\chi^{2}\;, \tag{5}\] which differs from the standard scalar field form (2) by \(\chi\to\varphi\) and \(\kappa_{4}\to 2\xi_{4}\). Taking the trace leads to \[T^{\rho}_{\;\;\rho} =\frac{3}{2}\kappa_{4}\partial^{2}\chi^{2}-2{\cal L}_{\rm kin,4}^ {\pi}+4V_{4}-2{\cal L}_{\rm kin,4}^{D} \tag{6}\] \[\stackrel{{\eqref{eq:2.1}}}{{=}} \frac{3}{2}\kappa_{4}\partial^{2}\chi^{2}-(\partial\chi)^{2}- \chi\partial^{2}\chi+F_{4}(V_{4})\] \[=(3\kappa_{4}-1)\big{\{}\chi\partial^{2}\chi+(\partial\chi)^{2} \big{\}}+F_{4}(V_{4})\;,\] where the functional \(F_{4}\) of the potential \(V_{4}\) is given by \[F_{4}(V_{4})=4V_{4}-\partial_{\ln\chi}V_{4}\;. \tag{7}\] As indicated the EOM were used above which is permitted since \(T^{\rho}_{\;\;\rho}={\cal O}(V_{4})\) is to be verified on physical states only. One deduces that the improvement in \(d=4\) is \[T^{\rho}_{\;\;\rho}={\cal O}(\hat{m}_{D}^{2})\quad\Leftrightarrow\quad\kappa_ {4}=\frac{1}{3}\;, \tag{8}\] which corresponds to the double improvement referred to earlier and the \({\cal O}(\hat{m}_{D}^{2})\), due to the potential, has been anticipated as of below.7 Let us pause a moment and reflect on how the improvement works. The underlying symmetry is conformal symmetry. The problematic term for the single pion system is the kinetic term which is also the only one. It is by adding the \(\chi^{2}\)-prefactor that it becomes conformally invariant and this couples the dilaton to the pions. Hence it is no surprise that conformality emerges upon using the dilaton EOM. Footnote 7: This type of improvement has been obtained in a proceeding by Crewther and Tunstall [28], cf. their Eq. 4. The relation to the gravitational form factors, the double improvement or any other aspects discussed in this work were not considered in [28]. At last let us turn to the potential. If scale symmetry is assumed to be unbroken then the only scale invariant term is \(V_{4}\propto\chi^{4}\), for which \(F_{4}(V_{4})=0\). This is not permitted since the dilaton is not at a local minimum. We are therefore to conclude that \(V_{4}=0\). In the case where the dilaton has a mass the following potential \[V_{4}=\frac{1}{4}m_{D}^{2}F_{D}^{2}(\frac{1}{2}\hat{\chi}^{4}-\hat{\chi}^{2})\;, \tag{9}\] is stable and gives \(F_{4}(V_{4})=-\frac{1}{2}m_{D}^{2}D^{2}+{\cal O}(\hat{D}^{3})\). As stated earlier the improvement for the pion works, in the sense of \(T^{\rho}_{\;\rho}={\cal O}(\hat{m}_{D}^{2})\), as long as \(m_{D}\ll\Lambda\) such that \(m_{D}\) can be treated as a perturbation to the EFT and correction can be expected to be of \({\cal O}(\hat{m}_{D}^{2})\). In the opposite limit \(m_{D}\gg\Lambda\), standard LO chiral perturbation theory would be recovered and one would come to the conclusion, as in [6], that the improvement is not possible. #### 2.1.2 Curved space in \(d\)-dimensions Let us turn to the case of a \(d\)-dimensional curved space. 
The derivation proceeds analogously, with every piece falling into its place. The LO Lagrangian (1) is adapted to \({\cal L}_{\rm LO,d}={\cal L}_{\rm kin,d}+{\cal L}_{d}^{R}-V_{d}\) and partial derivatives are replaced by diffeomorphism covariant ones \(\partial\to\nabla\), with metric compatibility \(\nabla_{\alpha}g_{\mu\nu}=0\). The dilaton EOM read \[\chi^{d-3}\nabla^{2}\chi=(d-2)({\cal L}_{\rm kin,d}^{\pi}+{\cal L}_{d}^{R})- \partial_{\ln\chi}V_{d}-(d-4){\cal L}_{\rm kin,d}^{D}\;. \tag{10}\] The EMT is given by \[T_{\mu\nu}=\frac{F_{\pi}^{d-2}}{2}\hat{\chi}^{d-2}{\rm Tr}[\nabla_{\mu}U\nabla _{\nu}U^{\dagger}]+\chi^{d-4}\nabla_{\mu}\chi\nabla_{\nu}\chi-g_{\mu\nu}({ \cal L}_{\rm kin,d}-V_{d})+T_{\mu\nu}^{R}\;, \tag{11}\] where the improvement part reads \[T_{\mu\nu}^{R}=\frac{\kappa_{d}}{2}\big{(}2G_{\mu\nu}+(g_{\mu\nu}\nabla^{2}- \nabla_{\mu}\nabla_{\nu})\big{)}\chi^{d-2}\;. \tag{12}\] Above \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\) is the Einstein tensor which is conserved, and thus \(\nabla^{\mu}T_{\mu\nu}^{R}=0\) holds separately. Taking the trace one obtains \[T^{\rho}_{\;\;\rho} =\frac{d-1}{2}\kappa_{d}\nabla^{2}\chi^{d-2}-(d-2)({\cal L}_{\rm kin,d}^{\pi}+{\cal L}_{d}^{R}+{\cal L}_{\rm kin,d}^{D})+dV_{d}\] \[\stackrel{{\eqref{eq:EOM}}}{{=}} \frac{d-1}{2}\ \kappa_{d}\nabla^{2}\chi^{d-2}-\frac{d-2}{2}\chi^{d-4}( \nabla\chi)^{2}-\chi^{d-3}\nabla^{2}\chi-\frac{d-4}{2}(\nabla\chi)^{2}+F_{d}( V_{d})\] \[=(\kappa_{d}\frac{(d-1)(d-2)}{2}-1)\big{\{}\chi\nabla^{2}\chi+(d-3)( \nabla\chi)^{2}\big{\}}\chi^{d-4}+F_{d}(V_{d})\;, \tag{13}\] where as before the EOM were used and \(F_{d}(V_{d})=dV_{d}-\partial_{\ln\chi}V_{d}\) is the \(d\)-dimensional analogue of (7). The following improvement is deduced \[T^{\rho}_{\ \rho}={\cal O}(\hat{m}_{D}^{2})\quad\Leftrightarrow\quad\kappa_{d}= \frac{2}{(d-1)(d-2)}\;. \tag{14}\] For \(d\to 4\), it reduces to \(\kappa_{4}=\frac{1}{3}\) connecting to the previous finding (8). As for \({\cal O}(\hat{m}_{D}^{2})\) and a potential mass term the same remarks apply as at the end of the last section. ### Improvement and spontaneous conformal symmetry breaking The order parameter of SSB of scale symmetry is the dilaton decay constant8 Footnote 8: Eq. (15) is equivalent to \(\langle 0|J_{\mu}^{D}(x)|D(q)\rangle=iF_{D}q_{\mu}e^{-iqx}\) where \(J_{\mu}^{D}(x)=x_{\nu}T^{\nu\mu}\) is the dilatation current. \[\langle 0|T_{\mu\nu}|D(q)\rangle=\frac{F_{D}}{d-1}(m_{D}^{2}\eta_{\mu\nu}-q _{\mu}q_{\nu})\;, \tag{15}\] just as the pion decay constant is for chiral symmetry breaking [15, 16, 35]. To remain general a dilaton mass has been kept. The observation of this section is that the improvement term (6) realises this matrix element within the D\(\chi\)PT. Using \(\hat{\chi}^{d-2}=1-(d-2)\hat{D}+{\cal O}(\hat{D}^{2})\) and (5), assuming the flat space limit \[\langle 0|T_{\mu\nu}^{R}|D(q)\rangle=\frac{\kappa_{d}}{2}\langle 0|(\eta_{\mu \nu}\partial^{2}-\partial_{\mu}\partial_{\nu})\chi^{2}|D(q)\rangle=\frac{F_{D }}{d-1}(m_{D}^{2}\eta_{\mu\nu}-q_{\mu}q_{\nu})\;, \tag{16}\] one recovers (15) upon using (14). It is noted that in \(d=2\) the decay constant is equally defined as the \(d-2\)-factor in the \(\hat{\chi}^{d-2}\)-expansion cancels against the \(\kappa_{d}\propto 1/(d-2)\). Note that the potential term does not contribute as it, by construction, does not contain a linear term in the dilaton. 
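Before moving on, the algebra behind (13)-(14) can be cross-checked symbolically. In the sketch below (an illustration only) a one-dimensional profile \(\chi(x)\) stands in for the full field, which is sufficient to track the chain rule in \(\nabla^{2}\chi^{d-2}\); the potential contribution \(F_{d}(V_{d})\) is carried along unchanged and omitted, and the \(\frac{d-4}{2}(\nabla\chi)^{2}\) term in the second line of (13) is taken with the same \(\chi^{d-4}\) prefactor as its neighbours, which is what the final line requires:

```python
import sympy as sp

d, kappa = sp.symbols('d kappa')
x = sp.symbols('x')
chi = sp.Function('chi')(x)                  # 1d stand-in for chi(x)
box_chi = sp.diff(chi, x, 2)
grad2 = sp.diff(chi, x)**2
# Second line of Eq. (13), after using the EOM (10), potential term omitted:
trace = (sp.Rational(1, 2) * (d - 1) * kappa * sp.diff(chi**(d - 2), x, 2)
         - sp.Rational(1, 2) * (d - 2) * chi**(d - 4) * grad2
         - chi**(d - 3) * box_chi
         - sp.Rational(1, 2) * (d - 4) * chi**(d - 4) * grad2)
# Third line of Eq. (13):
target = ((kappa * (d - 1) * (d - 2) / 2 - 1)
          * (chi * box_chi + (d - 3) * grad2) * chi**(d - 4))
print(sp.simplify(sp.expand(trace - target)))                    # -> 0
print(sp.solve(sp.Eq(kappa * (d - 1) * (d - 2) / 2, 1), kappa))  # -> Eq. (14)
```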
Altogether, this is encouraging and should be taken as another sign of internal consistency and underlines the necessity of the improvement procedure when a dilaton is present. ### Improvement and gravitational Goldstone form factors The constraints for _massive_ hadrons on diagonal gravitational form factors, at zero momentum transfer, were worked out in [26]. Here, _massless_ hadrons \(\varphi\), the Goldstones, are considered. The dilaton mass is kept zero, except in some intermediate steps, as its complete inclusion would necessitate further assumptions which are beyond the scope of this section. In fact, it is the dilaton pole that plays the decisive role, as for the form factors of the massive hadrons. The pole cancels against the vanishing numerator, originating from the effective Lagrangian, yielding a finite expression. For the massless hadrons the following parameterisation is adequate \[\tau_{\mu\nu}^{(\varphi)}\equiv\langle\varphi(p^{\prime})|T_{\mu\nu}(0)| \varphi(p)\rangle=2{\cal P}_{\mu}{\cal P}_{\nu}g_{1}(q^{2})+(q_{\mu}q_{\nu}-q^{ 2}\eta_{\mu\nu})g_{2}(q^{2})\;, \tag{17}\] where \(q\equiv p-p^{\prime}\) is the momentum transfer, \({\cal P}\equiv(p+p^{\prime})/2\) and \(q^{\mu}\tau_{\mu\nu}^{(\varphi)}=0\) is automatically transverse manifesting translational invariance. In comparison with [26], \(m_{\varphi}^{2}/q^{2}G_{2}\to g_{2}\) as the mass proves, unsurprisingly, to be inconvenient in the massless case. As we assume an IRFP, the dynamics of the massless or IR degrees of freedom in the soft limit are governed by a conformal field theory. Technically this means [25] that9 Footnote 9: Whereas (2.18) may be regarded as the minimal form in which IR-conformality manifests itself, a physical motivation can be provided by analogy to the charged pion electromagnetic form factor \(F_{+}^{\pi\to\pi}(q^{2})\). The quantities \(F_{+}^{\pi\to\pi}(0)\) and \(F_{+}^{\pi\to\pi^{\prime}}0)\) measure the total charge and the charge radius respectively. Since the former is an IR-property, not resolving the structure of the pion, the same can be expected to hold for \(\tau_{\rho}^{(\varphi)\rho}|_{q^{2}=0}\). \[\tau_{\rho}^{(\varphi)\rho}|_{q^{2}=0}=\langle\varphi(p)|T^{\rho}_{\ \rho}|\varphi(p)\rangle=0\;, \tag{2.18}\] is the way in which the conformal symmetry manifests itself, in its minimal form. This leads to the following conformality constraint \[g_{2}(0)=-\frac{1}{2(d-1)}\;, \tag{2.19}\] since \(g_{1}(0)=1\) follows, in full generality, from the momentum operator connection \(P_{\mu}=\int d^{d-1}x\,T^{0}_{\ \mu}(x)\). In summary, an IR conformal theory must satisfy the form factor constraint (2.19). This is what was concluded by Dolgov and Voloshin not to be possible, but the improvement will achieve as we shall see. #### 2.3.1 The Dolgov-Voloshin no-go theorem It is useful to recapitulate the no-go theorem of Dolgov and Voloshin [4]. In order to ease comparison, the form factor basis is slightly changed to \[\tau_{\mu\nu}^{(\varphi)}=(p_{\{\mu}}p^{\prime}_{\nu\}-\eta_{\mu\nu}p\cdot p ^{\prime})g_{1}(q^{2})+2(q_{\mu}q_{\nu}-q^{2}\eta_{\mu\nu})\bar{g}_{2}(q^{2})\;, \tag{2.20}\] with symmetrisation \(\{a,b\}=ab+ba\); transversity is automatic and in the massive case \(p\cdot p^{\prime}\to p\cdot p^{\prime}-m_{\varphi}^{2}\) is required. The conversion to the parameterisation (2.17) reads \[g_{2}(q^{2})=2\bar{g}_{2}(q^{2})-\frac{1}{2}g_{1}(q^{2})\;. \tag{2.21}\] In the new basis the conformality constraint (2.19) becomes \[\bar{g}_{2}(0)=\xi_{d}\;. 
\tag{2.22}\] Since \(\varphi\) is a Goldstone, the matrix element is amenable to soft-theorem techniques. Again, for illustrative purposes, the global symmetry is assumed to be the chiral one. Its generator is denoted by \(Q_{5}^{\varepsilon}\), \(c\) is the flavour index and \(\varphi\to\pi^{b}\) for the Goldstones. In the limit \(p^{\prime}\to 0\), the soft-pion theorem reads \[\tau_{\mu\nu}^{(\pi)}=-\frac{i}{F_{\pi}}\langle 0|\left[Q_{5}^{\varepsilon},T_ {\mu\nu}\right]|\pi^{b}(p)\rangle+\lim_{p^{\prime}\to 0}ip^{\prime}\! \cdot\!R^{bc} \tag{2.23}\] where \[R_{\alpha}^{bc}=-\frac{i}{F_{\pi}}\int d^{d}x\,e^{ip^{\prime}\cdot x}\langle 0 |TJ_{5\alpha}^{\varepsilon}(x)T_{\mu\nu}(0)|\pi^{b}(p)\rangle\;, \tag{2.24}\] stands for the remainder term, e.g. [15]. The commutator itself vanishes since the EMT respects the global symmetry and assuming regularity in the remainder implies \(\lim_{p^{\prime}\to 0}\tau^{(\pi)}_{\mu\nu}=0\). In [4], \(R_{\alpha}\) is taken to be regular as poles are excluded, that is \(p^{\prime}\cdot R\to 0\) for \(p^{\prime}\to 0\). Since the \(g_{1}\)-structure disappears in the limit taken, (2.20) then implies that \(\bar{g}_{2}(0)=0\) in direct contradiction with the conformality constraint (2.22). This led them to conclude the no-go theorem: Goldstone bosons due to global internal symmetry breaking cannot be improved. However, we already know from the previous section that the combined pion-dilaton system is improvable. The crucial point is that the dilaton invalidates the regularity assumption. Dolgov and Voloshin mention the possibility of a dilaton but do not follow it up in their paper. #### 2.3.2 A useful formula for the EMT matrix element The goal of this section is to show that the conformality constraint (2.22) \[\bar{g}_{2}^{(\varphi)}(0)=\xi_{d}\;,\quad\varphi=\pi^{b},D\;, \tag{2.25}\] holds in the Goldstone sector for the combined pion-dilaton system. In working this out explicitly, the following general formula, closely related to the LSZ formula, will prove useful \[i\int d^{d}xe^{iqx}\langle\varphi(p^{\prime})|T_{\mu\nu}(x)|\varphi(p)\rangle =\frac{\langle 0|T_{\mu\nu}(0)|D(q)\rangle\langle D\varphi|\varphi\rangle }{m_{D}^{2}-q^{2}}+\ldots\;, \tag{2.26}\] and the dilaton mass will be kept non-zero in intermediate steps in accordance with statements at the beginning of Sec. 2.3. The dots stand for higher resonances and multiparticle states which go beyond our LO or classical analysis. However, if the condition (2.18) holds then they vanish in the \(m_{D}^{2},q\to 0\) limit after after taking the trace. We therefore omit the dots in what follows but indicate with an arrow the identification. As we shall see, the massless dilaton saturates (2.18), together with the model-independent constraint \(g_{1}(0)=1\). Using the formal definition \(\langle D(q)\varphi(p^{\prime})|\varphi(p)\rangle=i(2\pi)^{d}\delta(p-p^{ \prime}-q)g_{D\varphi\varphi}\), the identification of the delta function with the volume, \((2\pi)^{d}\delta(0)=V\) and anticipating the \(q^{2}\to 0\) limit one arrives at \[\langle\varphi(p^{\prime})|T_{\mu\nu}(x)|\varphi(p)\rangle\to\frac{\langle 0|T_{ \mu\nu}(0)|D(q)\rangle g_{D\varphi\varphi}}{m_{D}^{2}-q^{2}}\;. \tag{2.27}\] We will use an EFT language cf. Fig. 1, that is an off-shell formalism, for which \(g_{D\varphi\varphi}\) can be a function of \(q^{2}\). This proves to be more efficient but either formalism is possible and leads to the same result. 
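The kinematic content of the constraints (2.19) and (2.22) can be made explicit with a short symbolic check (a sketch: \(s\) stands for \(p\cdot p^{\prime}\), the massless on-shell conditions \(p^{2}=p^{\prime 2}=0\) are used, and \(g_{1}(0)=1\) is imposed by hand):

```python
import sympy as sp

d, s, g1, g2 = sp.symbols('d s g1 g2')        # s = p.p'
P2 = s / 2                                    # P^2 = (p+p')^2/4 for p^2=p'^2=0
q2 = -2 * s                                   # q^2 = (p-p')^2
trace = 2 * P2 * g1 + (q2 - d * q2) * g2      # eta^{mu nu} contraction of (2.17)
g2_0 = sp.solve(sp.Eq(trace, 0), g2)[0].subs(g1, 1)
print(sp.simplify(g2_0 + 1 / (2 * (d - 1))))             # -> 0, i.e. (2.19)
g2bar_0 = (g2_0 + sp.Rational(1, 2)) / 2                 # invert (2.21) at q^2=0
print(sp.simplify(g2bar_0 - (d - 2) / (4 * (d - 1))))    # -> 0, i.e. (2.22)
```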
#### 2.3.3 The pion gravitational form factor

It is instructive to start with the pion form factor as it is the simpler case of the two. The quantity \(g_{D\pi\pi}\) is derived from the LO effective Lagrangian, with \(d_{\varphi}\) defined above (1.3), \[\mathcal{L}_{D\pi\pi}=-d_{\varphi}\;\frac{D}{F_{D}}\left(\partial\pi^{a}\right)^{2}\,, \tag{2.28}\] and since \((\partial\pi^{a})^{2}\to(\frac{1}{2}q^{2}-m_{\pi}^{2})(\pi^{a})^{2}\) in the anticipated momentum configuration,10 in a way that is consistent with the previous definition of \(g_{D\varphi\varphi}\), one gets \[g_{D\pi\pi}(q^{2})=d_{\varphi}\frac{q^{2}}{F_{D}}\;, \tag{2.29}\] where \(m_{\pi}\to 0\) has been assumed, that is zero quark mass in our example. Footnote 10: Derived most straightforwardly with momentum space kinematics, \(p=p^{\prime}+q\), \(p^{2}=p^{\prime 2}=m_{\pi}^{2}\), \[p^{\prime}\cdot p=m_{\pi}^{2}-\frac{1}{2}q^{2}\;,\quad p^{\prime}\cdot q=-\frac{1}{2}q^{2}\;,\quad p\cdot q=\frac{1}{2}q^{2}\;.\] This expression may be inserted into (2.27), and using (2.15) one gets \[\langle\pi(p^{\prime})|T_{\mu\nu}(0)|\pi(p)\rangle\to\frac{\langle 0|T_{\mu\nu}(0)|D(q)\rangle g_{D\pi\pi}(q^{2})}{m_{D}^{2}-q^{2}}=(q^{2}\eta_{\mu\nu}-q_{\mu}q_{\nu})\frac{\frac{F_{D}}{d-1}\frac{q^{2}}{F_{D}}d_{\varphi}}{m_{D}^{2}-q^{2}}\;. \tag{2.30}\] Assuming the \(m_{D}\to 0\) limit and comparing with (2.17) or (2.20) one obtains \[\bar{g}_{2}^{(\pi)}(0)=\xi_{d}\;, \tag{2.31}\] which does indeed satisfy the conformality constraint (2.25), completing our task.

#### 2.3.4 The dilaton gravitational form factor

The dilaton form factor is more involved in terms of combinatorics and contributions. The analogue of (2.28) is \[\mathcal{L}_{DDD}=-\frac{D}{F_{D}}d_{\varphi}(\partial D)^{2}\;, \tag{2.32}\] and the first complication is that any of the three particles can couple to the EMT. Using the kinematics in footnote 10, one finds \[g_{DDD}(q^{2})=-d_{\varphi}\,\frac{q^{2}}{F_{D}}\;, \tag{2.33}\] which is of opposite sign as compared to the pion case (2.29).

Figure 1: Goldstone gravitational form factors. (left) Pion form factor with the \(g_{D\pi\pi}\) interaction, as in Eqs. (2.30) and (2.31). (centre) Analogous contribution to the dilaton form factor with \(g_{DDD}\), cf. (2.34). (right) Contact interaction term due to the improvement term (1.6), as in (2.36). In the sum these last two terms fulfil the constraint in Eq. (2.25) for the dilaton form factor.

This contribution, referred to as the pole term, gives, in complete formal analogy to (2.30), \[\bar{g}_{2}^{(D),\text{pole}}(0)=-\xi_{d}\;, \tag{2.34}\] and on its own contradicts (2.25), as it is of opposite sign. This is where the double improvement comes into play. The local term (2.12), referred to as a contact term in an EFT, has to be taken into account. Departing from (2.12), its contribution to the dilaton gravitational form factor in flat space is obtained straightforwardly, \[\langle D(p^{\prime})|T^{R}_{\mu\nu}(0)|D(p)\rangle=-\kappa_{d}d_{\varphi}^{2}\,(q^{2}\eta_{\mu\nu}-q_{\mu}q_{\nu})\;, \tag{2.35}\] and using \(\kappa_{d}\) in (2.14) and \(d_{\varphi}=(d-2)/2\) one gets \[\bar{g}_{2}^{(D),\text{R}}(0)=2\xi_{d}\;. \tag{2.36}\] Adding the two terms results in \[\bar{g}_{2}^{(D)}(0)=\bar{g}_{2}^{(D),\text{pole}}(0)+\bar{g}_{2}^{(D),\text{R}}(0)=(-\xi_{d})+2\xi_{d}=\xi_{d}\;, \tag{2.37}\] in agreement with the conformality constraint (2.25).
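As a small consistency check of this section, the value of \(\xi_{d}\) can be reproduced independently. The sketch below is an illustration added here (not taken from the paper); it only assumes that \(\xi_{d}\) denotes the number fixed by the conformality constraint (2.19), the conversion (2.21) and \(g_{1}(0)=1\), and then verifies that the pion pole term and the dilaton pole-plus-contact combination both reproduce it.

```python
import sympy as sp

d = sp.symbols('d', positive=True)

# xi_d as fixed by (2.19), (2.21) and g1(0) = 1: g2bar(0) = (g2(0) + g1(0)/2)/2
g2_0 = -1 / (2 * (d - 1))                              # conformality constraint (2.19)
xi_d = sp.simplify((g2_0 + sp.Rational(1, 2)) / 2)
print(xi_d, xi_d.subs(d, 4))                           # (d - 2)/(4*(d - 1)), equal to 1/6 at d = 4

# Pion: in (2.30) the q^2 of g_{D pi pi} cancels against m_D^2 - q^2 -> -q^2, leaving the
# structure (q_mu q_nu - q^2 eta_mu_nu) * d_phi/(d-1); its coefficient in (2.20) is 2*g2bar(0).
d_phi = (d - 2) / 2
assert sp.simplify(d_phi / (2 * (d - 1)) - xi_d) == 0  # reproduces (2.31)

# Dilaton: pole term (2.34) plus contact term (2.36) add up to xi_d, cf. (2.37)
assert sp.simplify((-xi_d) + 2 * xi_d - xi_d) == 0
```

At \(d=4\) this is the familiar improvement value \(1/6\).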
The additional insight into the double improvement provides further satisfaction and confidence in the approach. As stated in the introduction, the dilaton improves itself, and this completes our task. ## 3 Some General Considerations This paper may be considered the counterpart of the CCJ-procedure [1] for Goldstones, requiring an additional dilaton. As such it is a LO analysis based on the Lagrangian (2.1), and does not involve loops as can be seen from Fig. 1. However, since the Goldstones are described by a free theory this seems sufficient. The loophole in this argument is of course whether or not an IRFP with a (massless) dilaton exists. Establishing this matter is important but also beyond the scope of this paper. Nevertheless, it is instructive to discuss the impact of loops on the improvement parameters \(\xi\) and \(\kappa\), which govern their out of fixed point behaviour. Before doing so we return to the question of whether improvement is necessary or not.11 Footnote 11: Some remarks on the role of \(\xi\) for effective theories of gravity is deferred to App. C, as they are somewhat tangential to this paper but nevertheless interesting and related. ### Improvement or not - conformal versus scale invariance? The observation of CCJ [1] was that a free scalar field can be made conformal by adding the improvement term. This corresponds to changing the theory and hence the additional label of "improved EMT". For an UV complete theory such as QCD there should be no choice. The theory is either conformal or not in the IR. If the dilaton is a valid degree of freedom then \(\kappa\) in (1.6) should be seen as a low energy constant which is determined by the UV theory but in practice can be taken from experiment. Can it be guessed whether the theory is conformal or only scale invariant in the IR? It is the literature's view that scale invariance implies conformal invariance since the former requires a virial current with exact scaling dimension \(d-1\). This is unlikely without symmetry protection unless the theory is non-interacting, cf. [38] for a review and Refs.[39; 40; 41; 42; 22] for ever closer understanding of the matter. However, in the deep-IR the pion theory is trivial and thus the point made cannot be taken in favour of conformality. Nonetheless, the remarks about flow theorems and the weak equivalence principle, raised in Sec. 1.2, point in favour of IR-conformality. ### On the running \(\xi(\mu)\) and \(\kappa(\mu)\) Let us discuss RG effects on \(\xi(\mu)\) and then turn to the particular role at FPs. The non-minimal coupling \(\xi\) in (1) cannot be ignored since it will generally appear through RG effects, cf. [43; 44; 45; 46]. The expression in (4) should be seen as a LO-value, corrected by an expansion in a coupling constant \(\lambda\), order by order in perturbation theory, \(\xi(\mu)=\xi_{d}+\Delta\xi(\lambda(\mu))\). At trivial FPs \(\Delta\xi\to 0\) which is automatic since the coupling approaches zero. This leads to all the good properties such as UV-finiteness and compatibility with the weak equivalence principle in the deep-IR. Concretely, the renormalisation and the RG equation were first studied in \(\lambda\phi^{4}\)-models [43; 44; 45] where it was found that non-finite counterterms enter at \(\mathcal{O}(\lambda^{3})\). In fact the absence of such terms would imply the existence of an unknown or hidden symmetry [43], which would have been an exceptional circumstance. The counterterms render the RG equation non-homogeneous. 
This means that \(\Delta\xi(\mu)=0\) cannot be consistently imposed and that \(\xi=\xi_{d}\) is not an RGFP, in the interacting theory. In the \(\lambda\phi^{4}\)-model, \(\Delta\xi\to 0\) at the IRFP as noted in [45]. For the UV, the situation is unclear because \(\lambda\phi^{4}\) has either an unknown non-perturbative FP or the triviality problem [47]. The situation in this paper concerns Goldstones which are specific fields of an effective theory which is IR-free. One can therefore expect that for \(\kappa(\mu)=\kappa_{d}+\Delta\kappa(\mu)\), \(\Delta\kappa(\mu)\to 0\) for \(\mu\to 0\). In summary, \(\xi(\mu)\) and \(\kappa(\mu)\) ought to assume their free field values at trivial FPs. Outside FPs they are RG-scale dependent and as such renormalisation scheme-dependent quantities. ## 4 Conclusions The leading order Lagrangian (1) of the combined dilaton-pion system is locally Weyl invariant with the addition of the improvement term (6). This is the first step to IR-conformality and consists of the counterpart to the CCJ improvement procedure for Goldstone fields. In particular \(T^{\rho}_{\ \rho}=0\) has been shown to hold by the use of the equation of motion, cf. Sec. 2.1. The Dolgov-Voloshin no-go theorem is circumvented since the dilaton pole defies their regularity assumption in the use of the soft-pion theorem. Taking the dilaton into account leads to gravitational form factors satisfying the conformality constraint, as explicitly worked out in Sec. 2.3. The way in which this works out is instructive: for the pion gravitational form factor there is a single non-local term due to a propagating dilaton whereas for the dilaton form factor there is the additional contact term, from the improvement term, conspiring with its non-local term to satisfy the constraint. Another relevant aspect of the improvement term is that it realises the matrix element of the dilaton decay constant in the effective theory, see Sec. 2.2. The investigation of the conformality constraints on the form factors for massless hadrons consists in the counterpart of the massive hadron case worked out in Ref. [26]. In both cases it is a dilaton pole that plays the crucial role, albeit in very different ways. In the case where the dilaton has a mass, lower than the EFT cutoff \(m_{D}\ll\Lambda\), improvement is still possible since \(m_{D}\) can be treated as a perturbation in the dilaton chiral perturbation theory. The improvement of scalar particles seems generally important as it leads to the required renormalisation properties of the energy momentum tensor, finite Casimir energy and better behaviour in the infrared with regards to general relativity and flow theorems, as summarised in Sec. 1.2. In an ultraviolet complete theory like QCD the infrared limit of \(\kappa(\mu)\) must be settled by the dynamics of the theory, cf. Sec. 3. These arguments and the fact that conformal invariance is seen as more generic than scale symmetry lend some support to the idea that IR-conformality could be realised in the Goldstone sector of QCD-like theories [25; 27]. ## Acknowledgments RZ is supported by a CERN associateship and an STFC Consolidated Grant, ST/P0000630/1. I am grateful to Latham Boyle, Jose-Ramon Espinosa, Chris Hill, George Karananas, Heiri Leutwyler, Jeremie Quevillon, Christopher Smith, Lewis Tunstall, Neil Turok, Jorinde Van de Vis, Jens-Uwe Wiese, Sasha Zhiboedov and the participants of the ITP Bern seminar for useful correspondence and or discussions. 
## Appendix A Conventions and Weyl Transformations

The Minkowski metric reads \(\eta_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\), a generic spacetime metric is denoted by \(g_{\mu\nu}\) and \(g=\det(g_{\mu\nu})\) is its determinant. Weyl transformations are defined by \[g_{\mu\nu}\to e^{-2\alpha(x)}g_{\mu\nu}\;,\quad\hat{D}\to\hat{D}-\alpha(x)\;, \tag{10}\] where the normalised dilaton field \(\hat{D}\) is defined in (5). Kinetic terms are abbreviated as follows: \((\partial\varphi)^{2}\equiv\partial_{\mu}\varphi\partial^{\mu}\varphi\). The sign conventions for the Ricci scalar are those of [48], which are rather standard. From the definition (10) the following Weyl transformations are deduced, \[\sqrt{-g}\to e^{-d\alpha(x)}\sqrt{-g}\;,\quad\chi^{n}\to e^{-n\alpha(x)}\chi^{n}\;. \tag{11}\]

## Appendix B Comments on an Earlier Approach

In this appendix we comment on, and compare with, an earlier approach [49] to addressing the Dolgov-Voloshin no-go theorem. The problem was approached by considering a \(\phi^{4}\)-model with global \(U(1)\)-symmetry. SSB is imposed by a negative mass term, and the following gravitational form factor, in the basis (2.20), is found, \[\bar{g}_{2}^{(\phi_{2})}(q^{2})|_{\text{HNR}}=\xi_{d}\frac{q^{2}}{q^{2}-2m^{2}}\;, \tag{B.1}\] where \(\phi=\phi_{1}+i\phi_{2}=\rho e^{i\theta}\), \(m\) is the mass of the \(\rho\) and \(\delta\mathcal{L}=-\xi_{d}R\rho^{2}\) takes on the role of the improvement term. They argued that \(\bar{g}_{2}^{(\phi_{2})}(0)|_{\text{HNR}}=0\) fulfils the Dolgov-Voloshin soft-theorem constraint while at the same time allowing for improvement. Their idea is that the form factor is non-trivial, carrying the improvement term, but at the same time fulfils the low energy constraint.

* The solution presented in this paper is different in that \(\bar{g}_{2}^{(\phi_{2})}(0)|_{\text{this work}}=\xi_{d}\) satisfies the conformality constraint (2.25), while the dilaton pole allows one to bypass the Dolgov-Voloshin soft-theorem conclusion.
* One may consider (B.1) in the limit \(m\to\infty\), where \(\bar{g}_{2}^{(\phi_{2})}(q^{2})|_{\text{HNR}}\to 0\), and come to the same conclusion, which however seems trivial as in this limit the scalar field \(\rho\) decouples, leaving behind a free field theory of the \(\theta\)-field.
* One may interpret the approach in [49] in the \(m\to 0\) limit, \(\bar{g}_{2}^{(\phi_{2})}(q^{2})|_{\text{HNR}}\to\xi_{d}\), establishing a seemingly closer connection to the present work. The model is then simply the D\(\chi\)PT, but not the underlying model that triggers the SSB. The \(\rho\) and \(\theta\) take on the roles of the dilaton and the "pion" due to the internal symmetry breaking. This is, however, rather accidental, as the correspondence breaks down for the \(U(3)\) flavour symmetry, in which case the \(\sigma\)-model contains 18 degrees of freedom, e.g. [50], whereas in D\(\chi\)PT it would amount to \(1+9=10\) degrees of freedom. Note that the \(U(2)\)-case, familiar from the Higgs sector of the Standard Model, is exceptional in this respect since \(SU(2)\) is pseudo-real.

In summary, the solution presented here is of a different nature. It is only in certain limits and for specific global symmetry groups that the connection becomes closer. It is the dilaton that evades the Dolgov-Voloshin soft-theorem constraint while satisfying the conformality constraint (2.25), and not the other way around as in [49].

## Appendix C The \(\xi\)-parameter in Effective Gravity Theories with a Cutoff

The discussion below can be seen as an appendix to Sec.
3, where the physical relevance and the RG running of \(\xi\) and \(\kappa\) were discussed. In effective quantum theories of gravity with a cutoff of the Planck type, such as the one implied by the Einstein-Hilbert term \(\mathcal{L}_{EH}\propto M_{P}^{2}R\), there is another twist. One can use the freedom of a field redefinition to eliminate the \(\xi\)-term, in the so-called Einstein frame, by a \(\phi\)-dependent metric transformation.12 The \(\xi\)-term can be interpreted as a kinetic mixing term which disappears in the Einstein frame. The consistency between the two frames has been discussed in [51], by taking into account contact interactions.13 However, in extended theories of gravity with no \({\cal L}_{EH}\)-term, e.g. conformal gravity [9; 52], \(\xi\) is not expected to be a redundant parameter. Footnote 13: In the Einstein frame, no separate RG equation is found at 1-loop for \(\xi\). One might wonder whether this pattern persists at the 3-loop level once \(\xi\) becomes a genuine independent coupling, as briefly discussed in Sec. 3.2. A well-known model of this type is Higgs inflation, with \(\xi(\mu_{I})={\cal O}(10^{4})\) [53], where \(\mu_{I}\) is the inflation scale, a few orders of magnitude below the Planck scale. The large value of \(\xi(\mu_{I})\) indicates that these models are not UV complete as field theories. E.g. in the minimal model the cutoff is forced below \(\mu_{I}\) [54], which seemingly invalidates the scenario. This has been addressed by introducing an additional scalar field [55; 56] and by making the cutoff field-dependent [57]. Once quantum corrections are considered, further operators such as \(R^{2}\)-terms are induced and, interestingly, the model becomes amenable to a non-linear \(\sigma\)-model interpretation [58].
2301.08733
Kudla-Millson forms and one-variable degenerations of Hodge structure
We consider arbitrary polarized variations of Hodge structure of weight two and $h^{2,0}=1$ over a non-singular complex algebraic curve $S$ and analyze the boundary behaviour of the associated Kudla-Millson theta series using Schmid's theorems on degenerations of Hodge structure. This allows us to prove that this theta series is always integrable over $S$ and to describe explicitly the non-holomorphic part of the Kudla-Millson generating series in terms of the mixed Hodge structures at infinity.
Luis E. García
2023-01-20T18:55:42Z
http://arxiv.org/abs/2301.08733v1
# Kudla-Millson forms and one-variable degenerations of Hodge structure ###### Abstract. We consider arbitrary polarized variations of Hodge structure of weight two and \(h^{2,0}=1\) over a non-singular complex algebraic curve \(S\) and analyze the boundary behaviour of the associated Kudla-Millson theta series using Schmid's theorems on degenerations of Hodge structure. This allows us to prove that this theta series is always integrable over \(S\) and to describe explicitly the non-holomorphic part of the Kudla-Millson generating series in terms of the mixed Hodge structures at infinity. ###### Contents * 1 Introduction * 2 Weight two PVHS over a complex algebraic curve * 3 Kudla-Millson theta series and degenerations of Hodge structure * 4 Integrability of Kudla-Millson theta series * 5 Generating series of Noether-Lefschetz numbers ## 1. Introduction The goal of this paper is to study the behaviour under degeneration of Hodge structure of certain theta series introduced by Kudla and Millson to study special cycles on Shimura varieties. Unlike previous work that analyzes the case where \(S\) is a special subvariety in a toroidal compactification of a Shimura variety, here we consider polarized variations of Hodge structure with \(h^{2,0}=1\) over an arbitrary non-singular complex curve \(S\). ### Main results Let \(\overline{S}\) be a connected compact Riemann surface and denote by \(S\) the Riemann surface obtained by removing a finite number of points from \(\overline{S}\). Consider an integral polarized variation of Hodge structure (\(\mathbb{Z}\)-PVHS) (1.1) of weight two with \(h^{2,0}=1\). Here \(\mathcal{V}_{\mathbb{Z}}\) denotes the local system underlying \(\mathbb{V}\) and we write \(Q\) for the polarization and \(\mathcal{F}^{\bullet}\) for the Hodge filtration. Let us write \(\mathcal{L}\) for the line bundle \(\mathcal{F}^{2}\) over \(S\) and \(\mathcal{V}_{\mathbb{Z}}^{\vee}\supseteq\mathcal{V}_{\mathbb{Z}}\) for the dual lattice of \(\mathcal{V}_{\mathbb{Z}}\), that is, \[\mathcal{V}_{\mathbb{Z}s}^{\vee}=\left\{v\in\mathcal{V}_{\mathbb{Q}s}\ |\ Q(v,v^{\prime})\in\mathbb{Z}\text{ for all }v\in \mathcal{V}_{\mathbb{Z}s}\right\}.\] In order to state our main results succintly we will assume the following mild condition on \(\mathbb{V}\) (cf. Remark 1.3 below). **Hypothesis 1.1**.: _For any \(s\in S\), the lattice \((\mathcal{V}_{\mathbb{Z}s},Q)\) is even and \(\mathcal{V}_{\mathbb{Q}s}\) is a simple \(\pi_{1}(S,s)\)-module. Moreover, the fundamental group of \(S\) acts trivially on \(\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}\) and the monodromy of \(\mathcal{V}_{\mathbb{Z}}\) around each \(P\in\overline{S}\!\setminus\!S\) is unipotent and non-trivial._ We will be interested in the Noether-Lefschetz loci of \(\mathbb{V}\): for a positive rational number \(m\) and \(\mu\in\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}\), define \[\operatorname{NL}_{\mathbb{V}}(m)_{\mu}=\{s\in S\ |\ \exists v\in(\mu+ \mathcal{V}_{\mathbb{Z}s})\cap\mathcal{F}_{s}^{1}\text{ with }Q(v,v)=2m\}. \tag{1.2}\] The locus \(\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\subset S\) has a natural complex analytic space structure [24, SS5.3.1]; in fact, by the celebrated theorem of Cattani-Deligne-Kaplan it is a proper algebraic subset of \(S\) with a natural scheme structure. 
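Purely as an illustration of the definition (1.2) (this example is not part of the original text): for weight two with \(h^{2,0}=1\), the subspace \(\mathcal{F}^{1}_{s}\) is the \(Q\)-orthogonal complement of \(\mathcal{F}^{2}_{s}=\langle\omega_{s}\rangle\), so for a lattice vector the condition \(v\in\mathcal{F}^{1}_{s}\) amounts to \(Q(v,\omega_{s})=0\). The sketch below enumerates, in a toy lattice and at a deliberately special period point, the vectors counted by \(\operatorname{NL}(m)_{0}\); the lattice, the period point and the box cutoff are arbitrary choices made only for illustration.

```python
import numpy as np
from itertools import product

def nl_vectors(gram, omega, m, box=3, tol=1e-9):
    """Lattice vectors v with Q(v, v) = 2m and Q(v, omega) = 0, i.e. v in F^1_s
    (mu = 0 coset for simplicity), searched in the box |coordinates| <= box."""
    hits = []
    for v in product(range(-box, box + 1), repeat=gram.shape[0]):
        v = np.array(v, dtype=float)
        if abs(v @ gram @ v - 2 * m) < tol and abs(v @ gram @ omega) < tol:
            hits.append(v.astype(int))
    return hits

# Toy even lattice of signature (2, 2) and a (special) period omega
# with Q(omega, omega) = 0 and Q(omega, conj(omega)) < 0.
gram = np.diag([2.0, 2.0, -2.0, -2.0])
omega = np.array([0.0, 0.0, 1.0, 1.0j])
print(nl_vectors(gram, omega, m=1))   # the four vectors (+-1, 0, 0, 0), (0, +-1, 0, 0)
```

At a generic period point no integral vector is \(Q\)-orthogonal to \(\omega_{s}\), in line with \(\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\) being a proper subset of \(S\).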
Let us write \(\deg\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\) for the degree of the divisor naturally associated with \(\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\) and \(\overline{\mathcal{L}}\) for Deligne's canonical extension of \(\mathcal{L}\) to a line bundle over \(\overline{S}\) and form the generating series \[Z_{\mathbb{V}}^{+}(\tau)_{\mu}=-\deg(\overline{\mathcal{L}})\delta_{\mu,0}+ \sum_{m>0}\deg\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\cdot q^{m},\quad q=e^{2 \pi i\tau}. \tag{1.3}\] When \(S\) is compact, the series \(Z_{\mathbb{V}}^{+}(\tau)\) are known to be modular forms of possibly half-integral weight. More precisely, let \(\operatorname{Mp}_{2}(\mathbb{Z})\) denote the metaplectic double cover of \(\operatorname{SL}_{2}(\mathbb{Z})\) and let \(\rho_{\mathcal{V}_{\mathbb{Z}}}\) be the Weil representation of \(\operatorname{Mp}_{2}(\mathbb{Z})\) on the group algebra \(\mathbb{C}[\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}]\), which has a standard basis \(e^{\mu}\) indexed by \(\mu\in\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}\). The work of Kudla and Millson [18] implies that the generating series \[Z_{\mathbb{V}}^{+}(\tau)=\sum_{\mu}Z_{\mathbb{V}}^{+}(\tau)_{\mu}\cdot e^{\mu} \tag{1.4}\] is a \(\rho_{\mathcal{V}_{\mathbb{Z}}}\)-valued modular form of weight \(\operatorname{rk}(\mathcal{V}_{\mathbb{Z}})/2\). Their proof proceeds by constructing first certain theta series \[\Theta_{\mathbb{V}}(\tau)_{\mu}\in\Omega^{1,1}(S),\quad\mu\in\mathcal{V}_{ \mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}, \tag{1.5}\] depending on \(\tau\in\mathbb{H}\), that transform like non-holomorphic modular forms. When \(S\) is compact one can consider the integral \(\int\limits_{S}\Theta_{\mathbb{V}}(\tau)_{\mu}\), which inherits the transformation properties of \(\Theta_{\mathbb{V}}(\tau)_{\mu}\), and so the modularity of \(Z_{\mathbb{V}}^{+}(\tau)\) follows from the identity (cf. [18, Theorem 2]) \[Z_{\mathbb{V}}^{+}(\tau)_{\mu}=\int_{S}\Theta_{\mathbb{V}}(\tau)_{\mu}. \tag{1.6}\] The goal of this paper is to generalize these results to the setting of (1.1) with \(S\) non-compact. In this case the differential forms \(\Theta_{\mathbb{V}}(\tau)_{\mu}\) often have singularities around the points in \(\overline{S}\backslash S\). Our first result is the theorem below showing that these are always mild enough that \(\Theta_{\mathbb{V}}(\tau)_{\mu}\) is integrable on \(S\); note that we do not impose Hypothesis 1.1. **Theorem 1.1**.: _Let \(S\) be a smooth complex algebraic curve and \(\mathbb{V}\to S\) be a \(\mathbb{Z}\)-PVHS over \(S\) of weight two with \(h^{2,0}=1\) such that the action of \(\pi_{1}(S,s)\) on \(\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}/\mathcal{V}_{\mathbb{Z}}\) is trivial. 
Then the integral_ \[Z_{\mathbb{V}}(\tau)_{\mu}=\int_{S}\Theta_{\mathbb{V}}(\tau)_{\mu}\] _converges for every \(\mu\in\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}/\mathcal{V}_{\mathbb{Z}}\) and the expression_ \[Z_{\mathbb{V}}(\tau)=\sum_{\mu\in\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}/ \mathcal{V}_{\mathbb{Z}}}Z_{\mathbb{V}}(\tau)_{\mu}\cdot e^{\mu}\] _defines a (possibly non-holomorphic) modular form of weight \(\operatorname{rk}(\mathcal{V}_{\mathbb{Z}})/2\) valued in \(\rho_{\mathcal{V}_{\mathbb{Z}}}\)._ Our second main result gives the precise relation between the non-holomorphic modular form \(Z_{\mathbb{V}}(\tau)\) and the generating series \(Z_{\mathbb{V}}^{+}(\tau)\): one can write \[Z_{\mathbb{V}}(\tau)-Z_{\mathbb{V}}^{+}(\tau)=\sum_{P\in\overline{S}\backslash S }Z_{\mathbb{V},P}^{-}(\tau),\] where the term \(Z_{\mathbb{V},P}^{-}(\tau)\) indexed by a given point \(P\in\overline{S}\backslash S\) is determined explicitly from the polarized mixed Hodge structure defined by the degeneration of \(\mathbb{V}\) at \(P\). More precisely, let us write \(V_{\mathbb{Z}}\) for the space of global multivalued sections of \(\mathcal{V}_{\mathbb{Z}}\), that is, the space of global sections of the pull-back of \(\mathcal{V}_{\mathbb{Z}}\) to a universal cover of \(S\). The pair \((V_{\mathbb{Z}},Q)\) is an even lattice of signature \((h^{1,1},2)\). A point \(P\in\overline{S}\backslash S\) determines then an endomorphism \(N(P)\) of \(V_{\mathbb{Z}}\otimes\mathbb{Q}\) (the local monodromy logarithm) and an ascending filtration \(W(P)_{\bullet}\) of \(V_{\mathbb{Z}}\otimes\mathbb{Q}\) (the shifted weight filtration) such that the quotients \[\operatorname{Gr}_{k}^{W(P)}V_{\mathbb{Z}}:=(W(P)_{k}\cap V_{\mathbb{Z}})/(W(P )_{k-1}\cap V_{\mathbb{Z}})\] are free abelian groups of finite rank. The pair \((Q,N(P))\) determine bilinear forms \(Q_{k}\) on \(\operatorname{Gr}_{k}^{W(P)}V_{\mathbb{Z}}\) that define a structure of positive definite even lattice on \(\operatorname{Gr}_{4}^{W(P)}V_{\mathbb{Z}}\) and on a certain sublattice \(\operatorname{Gr}_{2,\operatorname{prim}}^{W(P)}V_{\mathbb{Z}}\subseteq \operatorname{Gr}_{2}^{W(P)}V_{\mathbb{Z}}\); the elements of \(\operatorname{Gr}_{2,\operatorname{prim}}^{W(P)}V_{\mathbb{Z}}\) can be thought of as classes that become Hodge "at infinity". Associated with these data are positive integers \(r_{k}(V_{\mathbb{Z}},N(P))\)\((k=1,2)\), \(\deg(Q_{3})\) and \(\operatorname{Vol}(\operatorname{Gr}_{4}^{W(P)}V_{\mathbb{Z}})\) as well as holomorphic theta series \[\Theta_{\operatorname{Gr}_{2,\operatorname{prim}}^{W(P)}V_{\mathbb{Z}}}(\tau),\quad\Theta_{\operatorname{Gr}_{4}^{W(P)}V_{\mathbb{Z}}}(\tau),\] valued in finite-dimensional representations \(\rho_{\operatorname{Gr}_{2,\operatorname{prim}}^{W(P)}V_{\mathbb{Z}}}\) and \(\rho_{\operatorname{Gr}_{4}^{W(P)}V_{\mathbb{Z}}}\). The representations \(\rho_{\operatorname{Gr}_{2}^{W(P)}V_{\mathbb{Z}}}\) and \(\rho_{\operatorname{Gr}_{2,\operatorname{prim}}^{W(P)}V_{\mathbb{Z}}}\otimes \rho_{\operatorname{Gr}_{4}^{W(P)}V_{\mathbb{Z}}}\) admit intertwining maps to \(\rho_{\mathcal{V}_{\mathbb{Z}}}\) that we denote by \(\iota\). **Theorem 1.2**.: _Assume that \(\mathbb{V}\) satisfies 1.1. 
For \(P\in\overline{S}\setminus S\), denote by \(N(P)\) the local monodromy logarithm and by \(W(P)_{\bullet}\) the corresponding (shifted) weight filtration and define_ \[Z_{\mathbb{V},P}^{-}(\tau)=\frac{r_{1}(V_{\mathbb{Z}},N(P))}{\deg(Q_{3})} \frac{1}{4\pi\mathrm{Im}(\tau)}\iota(\Theta_{\operatorname{Gr}_{2}^{W(P)}V_{ \mathbb{Z}}})\] _if \(N(P)^{2}=0\) and_ \[Z_{\mathbb{V},P}^{-}(\tau)= \frac{r_{2}(V_{\mathbb{Z}},N(P))}{\mathrm{Vol}(\operatorname{Gr} _{4}^{W(P)}V_{\mathbb{Z}})^{1/2}}\frac{1}{4\pi i}\int_{-\overline{\tau}}^{i \infty}\frac{\iota(\Theta_{\operatorname{Gr}_{4}^{W(P)}V_{\mathbb{Z}}}(z) \otimes\Theta_{\operatorname{Gr}_{2,\operatorname{prim}}^{W(P)}V_{\mathbb{Z}}} (\tau))}{((z+\tau)/i)^{3/2}}dz\] _if \(N(P)^{2}\neq 0\). Then_ \[Z_{\mathbb{V}}(\tau)=Z_{\mathbb{V}}^{+}(\tau)+\sum_{P\in\overline{S}\setminus S }Z_{\mathbb{V},P}^{-}(\tau).\] _In particular, the right hand side is a \(\rho_{\mathcal{V}_{\mathbb{Z}}}\)-valued modular form of weight \(rk(\mathcal{V}_{\mathbb{Z}})/2\)._ **Remark 1.3**.: Hypothesis 1.1 is very mild: let \(\mathbb{V}\) be an arbitrary \(\mathbb{Z}\)-PVHS of weight two with \(h^{2,0}=1\) such that \(\mathcal{V}_{\mathbb{Q}s}\) is a simple \(\pi_{1}(S,s)\)-module (recall that the category of polarizable \(\mathbb{Q}\)-VHS over \(S\) is semisimple [20, Cor. 13]). Since the monodromy of a \(\mathbb{Z}\)-PVHS on the punctured disk is quasi-unipotent [22, Lemma (4.5)], one can guarantee that 1.1 holds by picking a finite index even sublattice \(\mathcal{V}_{\mathbb{Z}}\)-module and passing to an appropriate finite cover of \(S\) so that the local monodromies around \(\overline{S}\backslash S\) are unipotent (note that \(\mathbb{V}\) extends across any point in \(\overline{S}\backslash S\) with trivial monodromy by [22, Cor. (4.11)]). **Remark 1.4**.: If \(\mathbb{V}\) is the PVHS associated with a polarized family \(\mathcal{X}\) of K3 surfaces parametrized by \(S\), we can (after replacing \(S\) by a finite cover if necessary) interpret the non-holomorphic terms \(Z_{\mathbb{V},P}^{-}\) in terms of Hodge classes in the irreducible components of the singular fibers of a semistable model of \(\mathcal{X}\). This follows from the Clemens-Schmid exact sequence (see e.g. [19]). A similar remark applies if \(\mathbb{V}_{\mathbb{Q}}=(\mathcal{V}_{\mathbb{Q}},Q,\mathcal{F}^{\bullet})\) appears as a direct summand of the PVHS naturally attached to a polarized family of non-singular projective surfaces parametrized by \(S\). **Remark 1.5**.: Let \[\mathbb{G}_{2}(q)=-\frac{1}{24}+\sum_{n\geq 1}\sigma_{1}(n)q^{n},\quad\sigma_{ 1}(n):=\sum_{d|n}d.\] Then \(\mathbb{G}_{2}^{*}(\tau):=\mathbb{G}_{2}(q)+(8\pi y)^{-1}\) is a (non-holomorphic) modular form of weight \(2\) for the full modular group \(\operatorname{SL}_{2}(\mathbb{Z})\) (see [26, eqs. (17) and (21)]); moreover, the operator \[f\mapsto q\frac{d}{dq}f+2k\mathbb{G}_{2}(q)\cdot f\] sends modular forms of weight \(k\) to modular forms of weight \(k+2\) [26, SS5.1]. 
Thus in Theorem 1.2 we still obtain a \(\rho_{\mathbb{V}_{\mathbb{Z}}}\)-valued modular form if we replace any term \(Z^{-}_{\mathbb{V},P}(\tau)\) associated with a point \(P\) of type II with any of the holomorphic expressions \[-2\frac{r_{1}(V_{\mathbb{Z}},N)}{\deg(Q_{3})}\mathbb{G}_{2}(q)\Theta_{\mathrm{ Gr}_{2}^{W}V_{\mathbb{Z}}}(\tau)\] or \[\frac{r_{1}(V_{\mathbb{Z}},N)}{\deg(Q_{3})}\frac{2}{\operatorname{rk}( \mathcal{V}_{\mathbb{Z}})-4}\cdot q\frac{d}{dq}\Theta_{\mathrm{Gr}_{2}^{W}V_{ \mathbb{Z}}}(\tau).\] Similarly, the Eichler integral of \(\Theta_{\mathrm{Gr}_{4}^{W(P)}V_{\mathbb{Z}}}\) appearing in the contribution \(Z^{-}_{\mathbb{V},P}\) of a type III degeneration is the non-holomorphic part of a weight \(3/2\)-Eisenstein series defined in [25] and so we may replace any term \(Z^{-}_{\mathbb{V},P}(\tau)\) associated with a degeneration of type III with a holomorphic expression involving the holomorphic part of Zagier's Eisenstein series. ### Relation with other works In the setting of the PVHS parametrized by Shimura varieties of orthogonal or unitary type, several recent works address the explicit computation of correction terms to the generating series of special divisors coming from an appropriate toroidal compactification: the case of modular curves was treated by Funke in [10] and related computations for toroidal compactifications of unitary Shimura varieties where only type II degenerations appear are in [3]. Recently Bruinier and Zemel [4] have proved a result for special divisors on orthogonal Shimura varieties that is similar to the modularity statement in Theorem 1.2. Their proof involves studying the asymptotic behaviour of Borcherds lifts along components of a toroidal compactification. A different proof (and refinement) using more geometric methods has been very recently obtained by Engel, Greer and Tayou in [8]. Our paper contributes the explicit description of boundary terms in terms of limiting mixed Hodge structures and, like [8], it also clarifies the rationality properties of coefficients along type III contributions. ### Strategy of proof In contrast to the above works, this paper does not rely on the theory of toroidal compactifications of Shimura varieties. Instead, our proofs are analytic in nature and use Schmid's results on degenerations of Hodge structure, particularly his characterization of the weight filtration by growth of the Hodge norm and his nilpotent and \(\mathrm{SL}_{2}\)-orbit theorems. For a fixed point \(P\in\overline{S}\backslash S\) these results imply that in a neighbourhood of \(P\) the variation \(\mathbb{V}\) is well-approximated by a special type of nilpotent orbit \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\). We prove Theorem 1.1 by showing that \(\Theta_{\mathbb{V}}(\tau)_{\mu}-\Theta_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}( \tau)_{\mu}\) and \(\Theta_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(\tau)_{\mu}\) are both locally integrable around \(P\). The proof of Theorem 1.2 reduces to the computation of the residue of certain canonical Green functions for \(\mathrm{NL}_{\mathbb{V}}(m)_{\mu}\) along \(P\in\overline{S}\backslash S\). We show that the residue agrees with that of the corresponding Green function for \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\), which can be computed exactly thanks to the explicit nature of nilpotent orbits. 
One advantage of our methods over the use of toroidal compactifications that originally motivated the author's interest is that the theta series \(\Theta_{\mathbb{V}}(\tau)\) and Schmid's theorems are available and in definitive form for arbitrary PVHS of weight two over the complement of a normal crossing divisor in a higher-dimensional base. Schmid's several variables \(\mathrm{SL}_{2}\)-orbit theorem [6] approximates a degeneration of Hodge structure in \(n\) variables by an \((n-1)\)-dimensional pencil of one-dimensional nilpotent orbits; in particular, to use his theorem to study the boundary behaviour of \(\Theta_{\mathbb{V}}(\tau)\) one must first understand \(\Theta_{\mathbb{V}}(\tau)\) along one-variable degenerations. In future work, the author intends to develop the methods in this paper to address the conjectural (mock) modularity of generating series of Noether-Lefschetz loci for certain VHS with \(h^{2,0}>1\) that are not naturally parametrized by Shimura varieties; for a particularly interesting example see [9]. ### Acknowledgements The author would like to thank Nicolas Bergeron and Keerthi Madapusi for their interest and feedback, Salim Tayou for conversations regarding boundary behaviour of special divisors and Richard Thomas for his questions regarding modularity of Noether-Lefschetz loci for variations with \(h^{2,0}>1\). ## 2. Weight two PVHS over a complex algebraic curve In this section we briefly review some relevant facts on variations of Hodge structure over a one-dimensional base. We will only consider integral polarized variations of weight two with Hodge numbers \((1,n,1)\) for some positive integer \(n\). Throughout the paper we fix a connected compact Riemann surface \(\overline{S}\) and a finite collection of points \(P_{1},\dots,P_{r}\in\overline{S}\), and write \[S=\overline{S}-\{P_{1},\dots,P_{r}\}.\] Sections 2.1 and 2.2 collect definitions and known facts on degenerations of Hodge structure and approximation by nilpotent orbits. We refer the reader to [22, 6] for proofs; our exposition follows closely Hain's account [12]. Sections 2.3 and 2.4 compute some nilpotent orbits explicitly. The formulas in these Sections will be used later to understand the behaviour of Kudla-Millson forms around the points \(P\in\overline{S}\backslash S\). ### Definitions Consider an integral polarized variation of Hodge structure (\(\mathbb{Z}\)-PVHS) \(\mathbb{V}\to S\) of weight two over \(S\). Here \(\mathbb{V}\) is a triple \((\mathcal{V}_{\mathbb{Z}},Q,\mathcal{F}^{\bullet})\) consisting of: * a local system \(\mathcal{V}_{\mathbb{Z}}\) of free and finite rank abelian groups over \(S\), * a (locally constant) non-degenerate symmetric bilinear form \[Q:\mathcal{V}_{\mathbb{Z}}\times\mathcal{V}_{\mathbb{Z}}\to\mathbb{Z},\] * a descending filtration \[\mathcal{V}=\mathcal{F}^{0}\supset\mathcal{F}^{1}\supset\mathcal{F}^{2}\] of the flat complex vector bundle \(\mathcal{V}:=\mathcal{V}_{\mathbb{Z}}\otimes\mathcal{O}_{S}\) by holomorphic vector bundles \(\mathcal{F}^{k}\) that are locally direct summands, such that the fiber \(\mathbb{V}_{s}=((\mathcal{V}_{\mathbb{Z}})_{s},Q_{s},\mathcal{F}_{s}^{\bullet})\) over any \(s\in S\) defines a polarized Hodge structure of weight two. We assume that \[h^{2,0}=\mathrm{rk}\mathcal{F}^{2}=1,\] so that \(\mathcal{F}^{2}\) is a holomorphic line bundle over \(S\) that we denote by \(\mathcal{L}\). For each \(s\in S\) we have the Hodge decomposition \[\mathcal{V}_{s}=\oplus_{p+q=2}\mathcal{V}_{s}^{p,q}. 
\tag{2.1}\] Let \(C_{s}\in\mathrm{End}(\mathcal{V}_{s})\) denote the Weil operator: it acts as the identity on \(\mathcal{V}_{s}^{1,1}\) and as \(-1\) on \(\mathcal{V}_{s}^{2,0}\oplus\mathcal{V}_{s}^{0,2}\). The polarization induces a hermitian metric \(\|\cdot\|_{\mathcal{V},s}\) on \(\mathcal{V}_{s}\). This is the Hodge metric, defined by \[\|v\|_{\mathcal{V},s}^{2}=Q(C_{s}v,\overline{v}),\quad v\in\mathcal{V}_{s}. \tag{2.2}\] When only one PVHS is being considered, we will suppress \(\mathcal{V}\) from the notation and denote the Hodge norm of \(v\in\mathcal{V}_{s}\) simply by \(\|v\|_{s}^{2}\). The polarization \(Q\) induces an isomorphism \(\mathcal{V}\simeq\mathcal{V}^{\vee}\) sending a vector \(v\in\mathcal{V}_{s}\) to the linear functional \(v^{\prime}\mapsto Q(v^{\prime},v)\). Composing this isomorphism with the canonical surjection \(\mathcal{V}^{\vee}\to\mathcal{L}^{\vee}\) dual to the inclusion \(\mathcal{L}\subset\mathcal{V}\) gives an isomorphism \(\mathcal{V}/\mathcal{F}^{1}\simeq\mathcal{L}^{\vee}\). In particular, to a section \(v\in\mathrm{H}^{0}(U,\mathcal{V})\) defined over \(U\subset S\) corresponds a section \(s_{v}\in\mathrm{H}^{0}(U,\mathcal{L}^{\vee})\). It will be convenient to define \[h(s_{v})=2\|s_{v}\|_{\mathcal{L}^{\vee}}^{2}. \tag{2.3}\] Writing \(v_{z}=\Sigma v_{z}^{p,q}\) for the Hodge decomposition of \(v_{z}\in\mathcal{V}_{z}\), the value of \(h(s_{v})\) at \(z\in U\) is given by \[h(s_{v})_{z}=2\|v_{z}^{2,0}\|_{\mathcal{V}}^{2}=-2Q(v_{z}^{2,0},\overline{v_{z }^{2,0}}). \tag{2.4}\] ### Local monodromy and limit mixed Hodge structure The asymptotic behaviour of \(\mathbb{V}\to S\) around each of the points \(P\in\overline{S}\backslash S\) can be described precisely in terms of limit mixed Hodge structures using the results of Schmid in [22]. We briefly recall the results that we will use. Let \(\Delta=\{t\in\mathbb{C}\mid|t|<1\}\) denote the open unit disk in \(\mathbb{C}\) and let \(\Delta^{*}=\Delta-\{0\}\) be the punctured open unit disk. Consider a polarized variation of Hodge structure \(\mathbb{V}=(\mathcal{V}_{\mathbb{Z}},Q,\mathcal{F}^{\bullet})\) of weight two over \(\Delta^{*}\) with \(h^{2,0}=1\). #### 2.2.1. Local monodromy and weight filtration For \(s\in\Delta^{*}\), let \(\mathcal{V}_{\mathbb{Z}s}\) be the fiber of \(\mathcal{V}_{\mathbb{Z}}\) over \(s\). This fiber carries an action of the fundamental group \(\pi_{1}(\Delta^{*},s)\). We denote by \[T\in\operatorname{O}(\mathcal{V}_{\mathbb{Z}s},Q)\subset\operatorname{GL}( \mathcal{V}_{\mathbb{Z}s})\] the monodromy operator, that is, the image in \(\operatorname{GL}(\mathcal{V}_{\mathbb{Z}s})\) of the generator of \(\pi_{1}(\Delta^{*},s)\) defined by the loop \(t\mapsto se^{2\pi it}\) for \(t\in[0,1]\). Then ([22, Thm. 6.1]) \(T\) is quasi-unipotent, i.e. there exist positive integers \(e\) and \(M\) such that \[(T^{e}-1)^{M}=0.\] Passing to a cover of \(\Delta^{*}\) of degree \(e\), we may assume that \(e=1\). Moreover, we can take \(M\leq 3\) ([22, Thm. 6.1]) and, if \(T=1\), then the polarized variation of Hodge structure \((\mathcal{V}_{\mathbb{Z}},Q,\mathcal{F}^{\bullet})\) can be extended to the open unit disk \(\Delta\) ([22, Cor. 4.11]). Let \[N=\log T=\sum_{k=1}^{M-1}(-1)^{k-1}\frac{(T-1)^{k}}{k}.\] Then \(N^{3}=0\). If \(T\neq 1\), this leaves two possibilities: * \(N^{2}=0\) (Type II degeneration), and * \(N^{2}\neq 0\) (Type III degeneration). 
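To make the dichotomy concrete, here is a small numerical sketch (added for illustration, not taken from the paper) that recovers \(N=\log T\) from a unipotent isometry via the truncated series above, checks \(N^{3}=0\), and reads off the degeneration type from \(N^{2}\); the chosen Gram matrix and monodromy are toy inputs (the Gram matrix is the block that reappears in (2.25) below).

```python
import numpy as np

# Gram matrix of Q in a basis {e, Ne, N^2 e}; this is the block appearing in (2.25)
Q = np.array([[0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0]])
N_true = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])            # shift operator: e -> Ne -> N^2 e -> 0
T = np.eye(3) + N_true + N_true @ N_true / 2    # T = exp(N), a unipotent monodromy operator
assert np.allclose(T.T @ Q @ T, Q)              # T is an isometry of (V, Q)

U = T - np.eye(3)                               # (T - 1)^3 = 0, so the log series truncates
N = U - U @ U / 2                               # N = log T as in the series above
assert np.allclose(N, N_true)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)
assert np.allclose(N.T @ Q + Q @ N, 0)          # Q(Nv, w) = -Q(v, Nw), cf. (2.5) below

print("type III" if not np.allclose(N @ N, 0) else "type II")   # "type III" for this example
```

A type II example is obtained in the same way from an \(N\) with \(N^{2}=0\), e.g. a single \(2\times 2\) Jordan block.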
To the nilpotent endomorphism \(N\) of \(\mathcal{V}_{\mathbb{Q}s}=\mathcal{V}_{\mathbb{Z}s}\otimes\mathbb{Q}\) corresponds an increasing filtration \(W_{\bullet}(N)\) of \(\mathcal{V}_{\mathbb{Q}s}\) by \(\mathbb{Q}\)-vector spaces called the weight filtration. It is the unique filtration \[\cdots\subseteq W_{k}(N)\subseteq W_{k+1}(N)\subseteq\cdots\] of \(\mathcal{V}_{\mathbb{Z}s}\otimes\mathbb{Q}\) satisfying \(N\cdot W_{k}(N)\subseteq W_{k-2}(N)\) and such that \[N^{k}:W_{k}(N)/W_{k-1}(N)\to W_{-k}(N)/W_{-k-1}(N)\] is an isomorphism. We write \[W_{k}=W_{k-2}(N)\] for the (shifted) weight filtration of \(N\). Since \(N^{3}=0\), this filtration satisfies \(W_{-1}=0\) and \(W_{4}=\mathcal{V}_{\mathbb{Z}s}\otimes\mathbb{Q}\). Abusing notation, we denote by \(W_{\bullet}\) the corresponding filtration of the local system \(\mathcal{V}_{\mathbb{Q}}\): \[0=W_{-1}\subseteq W_{0}\subseteq W_{1}\subseteq W_{2}\subseteq W_{3} \subseteq W_{4}=\mathcal{V}_{\mathbb{Q}}.\] Since \(N=\log T\in\mathfrak{so}(\mathcal{V}_{\mathbb{Q}s},Q)\), we have \[Q(Nv,w)=-Q(v,Nw). \tag{2.5}\] It follows that the weight filtration \(W_{\bullet}\) is self-dual: writing \(W_{k}^{\perp}\) for the orthogonal complement of \(W_{k}\) under \(Q\), we have \[W_{k}^{\perp}=W_{3-k}.\] Moreover, the quotients \[\operatorname{Gr}_{k}^{W}\mathcal{V}_{\mathbb{Q}}:=W_{k}/W_{k-1}\] carry canonical bilinear forms \(Q_{k}\) defined as follows: if \(k\geq 2\) and \(\tilde{v},\tilde{w}\in\operatorname{Gr}_{k}^{W}\mathcal{V}_{\mathbb{Q}}\) are represented by \(v,w\in W_{k}\), we define \[Q_{k}(\tilde{v},\tilde{w})=Q(v,N^{k-2}w). \tag{2.6}\] If \(k<2\), then we define \(Q_{k}\) so that the isomorphism \(N^{2-k}:\operatorname{Gr}_{4-k}^{W}\mathcal{V}_{\mathbb{Q}}\to\operatorname{ Gr}_{k}^{W}\mathcal{V}_{\mathbb{Q}}\) is an isometry ([22, Lemma 6.4]). #### 2.2.2. Canonical extension and limit MHS The vector bundle \(\mathcal{V}=\mathcal{V}_{\mathbb{Z}}\otimes\mathcal{O}_{\Delta^{*}}\) carries a canonical flat connection \(\nabla\). Let us fix a flat multi-valued basis \(v_{1},\dots,v_{n+2}\) of \(\mathcal{V}_{\mathbb{Q}}\). We assume that this basis is chosen so as to provide a splitting of the weight filtration: that is, \[W_{k}=\langle v_{1},\dots,v_{\dim W_{k}}\rangle\] for \(0\leq k\leq 4\). Define a new basis \((\tilde{v}_{i})\) of \(\mathcal{V}\) by \[\tilde{v}_{i}(q)=\exp\left(\frac{i}{2\pi}\log t\cdot N\right)v_{i}(q)\] Note that parallel translation of along a positively oriented circle changes \(v_{i}\) to \(Tv_{i}\) and \(\exp\left(\frac{i}{2\pi}\log t\cdot N\right)\) to \[\exp\left(\frac{i}{2\pi}(\log t+2\pi i)\cdot N\right)=\exp\left(\frac{i}{2\pi} \log t\cdot N\right)\cdot T^{-1}.\] It follows that the basis \((\tilde{v}_{i})\) is single-valued and so it defines a trivialization \(\mathcal{O}_{\Delta^{*}}^{n+2}\simeq\mathcal{V}\) over \(\Delta^{*}\). The canonical extension \(\tilde{\mathcal{V}}\) of \(\mathcal{V}\) is defined to be the extension of \(\mathcal{V}\) as a constant bundle over \(\Delta\), that is, the extension corresponding to \(\mathcal{O}_{\Delta}^{n+2}\) under the above isomorphism. We denote by \(\tilde{\mathcal{V}}_{0}\) its fiber over \(0\in\Delta\). By (2.5), we have \[Q(\tilde{v}_{i}(q),\tilde{v}_{j}(q))=Q(v_{i},v_{j})\] and so the polarization \(Q\) extends to a symmetric bilinear form on the fiber \(\tilde{\mathcal{V}}_{0}\) that we still denote by \(Q\). Schmid's nilpotent orbit theorem [22, Thm. 
4.9] states that the Hodge filtration \(\mathcal{F}^{\bullet}\) extends to a filtration \(\tilde{\mathcal{F}}^{\bullet}\) of the canonical extension \(\tilde{\mathcal{V}}\) by locally direct factors. We write \(F^{\bullet}_{\lim}=\tilde{\mathcal{F}}_{0}\) for the limit Hodge filtration, i.e. the corresponding filtration of \(\tilde{\mathcal{V}}_{0}\). Then we have \[Q(F^{1}_{\lim},F^{2}_{\lim}) =0\] \[N\cdot F^{2}_{\lim} \subseteq F^{1}_{\lim}. \tag{2.7}\] Moreover, the basis \(\tilde{v}_{1}(0),\dots,\tilde{v}_{n+2}(0)\) defines a \(\mathbb{Z}\)-structure on the fiber \(\tilde{\mathcal{V}}_{0}\) that we denote by \(V_{\mathbb{Z}}\), and the weight and limit Hodge filtrations \[(W_{\bullet},F^{\bullet}_{\lim})\] define a mixed \(\mathbb{Q}\)-Hodge structure on \(V:=V_{\mathbb{Z}}\otimes\mathbb{Q}\). Together with the action of \(N\) and the extension of \(Q\) to \(V\), these filtrations define a polarized mixed \(\mathbb{Q}\)-Hodge structure. More precisely, we have (cf. [6, Def. (2.26)]) 1. \((V,W_{\bullet},F^{\bullet}_{\rm lim})\) is a mixed \(\mathbb{Q}\)-Hodge structure satisfying (2.7). 2. \(W_{\bullet}=W_{\bullet}(N)[-2]\). 3. Define \[P_{2}=\ker(N:{\rm Gr}_{2}^{W}V\to{\rm Gr}_{0}^{W}V)\subseteq{\rm Gr}_{2}^{W}V\] and, for \(k\neq 2\), set \(P_{k}={\rm Gr}_{k}^{W}V\). Then (2.8) \[{\rm Gr}_{2}^{W}V=P_{2}\oplus NP_{4}\] and the restriction of the bilinear form \(Q_{k}\) in (2.6) to \(P_{k}\) defines a polarized \(\mathbb{Q}\)-Hodge structure of weight \(k\). We write \[h^{p,q}_{\rm lim}={\rm Gr}_{F_{\rm lim}}^{p}\,{\rm Gr}_{p+q}^{W}\,V\] for the Hodge numbers of the limiting mixed Hodge structure. #### 2.2.3. Nilpotent orbit Using the isomorphism \(\mathcal{O}_{\Delta}^{n+2}\simeq\tilde{\mathcal{V}}\), we extend the filtration \(F^{\bullet}_{\rm lim}\) of \(\tilde{\mathcal{V}}_{0}\) to a filtration, still denoted by \(F^{\bullet}_{\rm lim}\), of \(\tilde{\mathcal{V}}\). The corresponding nilpotent orbit is then given by the filtration \[\mathcal{F}^{\bullet}_{\rm nilp}:=\exp\left(\frac{1}{2\pi i}\log t\cdot N \right)F^{\bullet}_{\rm lim}\subset\mathcal{V}. \tag{2.9}\] Using the uniformisation of \(\Delta^{*}\) via the exponential map \[\pi:\mathbb{H}\to\Delta^{*},\quad z\mapsto t:=e^{2\pi iz}.\] we may write \[\mathcal{F}^{k}_{\rm nilp}=e^{zN}F^{k}_{\rm lim}. \tag{2.10}\] In a small enough neighbourhood of \(0\), the triple \[\mathbb{V}^{\rm nilp}:=(\underline{V_{\underline{Z}}},Q,\mathcal{F}^{\bullet} _{\rm nilp}) \tag{2.11}\] defines a variation of Hodge structures of weight two polarized by \(Q\) with the same Hodge numbers as \(\mathbb{V}\). The variation \(\mathbb{V}^{\rm nilp}\) approximates \(\mathbb{V}\) in the following sense. Let us denote by \(\mathbb{D}\) the period domain parametrising Hodge structures on \(V_{\mathbb{R}}\) polarised by \(Q\) with \(h^{2,0}=1\): it is the hermitian symmetric domain attached to the Lie group \[G_{\mathbb{R}}:={\rm Aut}(V_{\mathbb{R}},Q).\] We write \(\mathbb{D}^{\vee}\) for the compact dual of \(\mathbb{D}\); it contains \(\mathbb{D}\) and is a homogeneous complex manifold for \(G_{\mathbb{C}}\). The pullback \(\pi^{*}\mathbb{V}\) to \(\mathbb{H}\) of the PVHS \(\mathbb{V}\) defines a holomorphic map \[\Phi_{\mathbb{V}}:\mathbb{H}\to\mathbb{D}\] satisfying \(\Phi_{\mathbb{V}}(z+1)=e^{N}\Phi_{\mathbb{V}}(z)\). Since \(e^{zN}\) belongs to \(G_{\mathbb{C}}\), we have \(e^{-zN}\Phi_{\mathbb{V}}(z)\in\mathbb{D}^{\vee}\) for every \(z\in\mathbb{H}\). 
This gives a holomorphic map \[\tilde{\Psi}_{\mathbb{V}}:\mathbb{H}\to\mathbb{D}^{\vee},\qquad\tilde{\Psi}_ {\mathbb{V}}(z)=e^{-zN}\Phi_{\mathbb{V}}(z)\] that is invariant under \(z\mapsto z+1\), and so \(\tilde{\Psi}_{\mathbb{V}}(z)\) induces a holomorphic map \[\Psi_{\mathbb{V}}:\Delta^{*}\to\mathbb{D}^{\vee},\quad\Psi_{\mathbb{V}}(e^{2 \pi iz}):=e^{-zN}\Phi_{\mathbb{V}}(z).\] Schmid's nilpotent orbit theorem states that \(\Psi_{\mathbb{V}}\) extends to a holomorphic map defined on \(\Delta\)[7, SS2.3]. For the nilpotent orbit \(\mathbb{V}^{\mathrm{nilp}}\) this map is constant with \[\Psi_{\mathbb{V}^{\mathrm{nilp}}}(t)=\Psi_{\mathbb{V}}(0)=F_{\mathrm{lim}}^{ \bullet}.\] The map \(\Psi_{\mathbb{V}}\) can be written as \[\Psi_{\mathbb{V}}(t)=\psi_{\mathbb{V}}(t)\cdot F_{\mathrm{lim}}^{\bullet}\] with \[\psi_{\mathbb{V}}:\Delta\to G_{\mathbb{C}}\] a holomorphic map satisfying \(\psi_{\mathbb{V}}(0)=1\) (for a canonical choice of \(\psi_{\mathbb{V}}\), see [5, (2.5)]). Thus we may write \[\Phi_{\mathbb{V}}(z)=e^{zN}\Psi_{\mathbb{V}}(e^{2\pi iz})=e^{zN}\psi_{\mathbb{ V}}(t)F_{\mathrm{lim}}^{\bullet}=e^{zN}\psi_{\mathbb{V}}(t)e^{-zN}\Phi_{ \mathbb{V}^{\mathrm{nilp}}}(z).\] Equivalently, the Hodge filtrations of \(\mathbb{V}\) and \(\mathbb{V}^{\mathrm{nilp}}\) satisfy \[\mathcal{F}_{t}^{\bullet}=e^{zN}\psi_{\mathbb{V}}(t)e^{-zN}\mathcal{F}_{ \mathrm{nilp},t}^{\bullet},\qquad t\in\Delta^{*}. \tag{2.12}\] Given a norm \(|\cdot|\) on \(\mathrm{End}(V_{\mathbb{C}})\) and assuming that \(|\mathrm{Re}(z)|\leq 1/2\), we have the trivial estimate \[|e^{zN}\psi_{\mathbb{V}}(t)e^{-zN}-1|=O(|t|(\log|t|)^{2k}) \tag{2.13}\] for some positive integer \(k\). #### 2.2.4. Approximation and Hodge norm estimates We also need Schmid's estimates for the Hodge norm: fix an angular sector \[U=U(t_{0},\epsilon)=\{t\in\Delta^{*}|0<\arg(t-t_{0})<2\pi-\epsilon\}\] of \(\Delta^{*}\) and let \(v\in\mathcal{V}_{\mathbb{C}}|_{U}\). If \(v\in W_{k}-W_{k-1}\), then \[\|v\|_{t}^{2}\sim(-\log|t|)^{k-2},\] uniformly on \(U\) ([22, Thm. 6.6']). In 4.2 we will state a more precise version that also gives bounds for the derivatives of \(\|v\|_{t}^{2}\). ### Type II degenerations We say that a degeneration is of type II if \(N^{2}=0\) but \(N\) is non-trivial. For this type of degeneration, the Hodge numbers of the limit mixed Hodge structure are \[h_{\mathrm{lim}}^{1,0}=h_{\mathrm{lim}}^{0,1}=h_{\mathrm{lim}}^{2,1}=h_{ \mathrm{lim}}^{1,2}=1,\quad h_{\mathrm{lim}}^{1,1}=\mathrm{dim}V_{\mathbb{C}}-4\] (all other Hodge numbers are zero) and the weight filtration is \[W_{0} =0\] \[W_{1} =\mathrm{Im}\,N\] \[W_{2} =\ker N\] \[W_{3} =W_{4}=V.\] Below we compute explicitly the Hodge norms \(\|v\|^{2}\) and Chern form \(\Omega\) associated with the corresponding nilpotent orbit; these computations will allow us to derive explicit expressions for Kudla-Millson forms for such degenerations. Let us assume that the limit mixed Hodge structure is \(\mathbb{R}\)-split, i.e. that \(V_{\mathbb{R}}\) is the direct sum of pure Hodge structures. This will suffice for our intended application. #### 2.3.1. Let \(V_{\mathbb{C}}=\oplus_{a,b}I^{a,b}\) (\(0\leq a,b\leq 2\)) denote the canonical bigrading defined by Deligne [6, (2.12)]. Since we assume that \(V\) is \(\mathbb{R}\)-split, this bigrading is simply given by \[I^{a,b}=F^{a}_{\lim}\cap\overline{F^{b}_{\lim}}\cap W_{a+b,\mathbb{C}}. \tag{2.14}\] Then we have \[W_{k,\mathbb{C}}=\oplus_{a+b\leq k}I^{a,b},\quad F^{p}_{\lim}=\oplus_{a\geq p} I^{a,b}\] and \(I^{b,a}=\overline{I^{a,b}}\). 
Let us define \[V_{k}=(\oplus_{a+b=k}I^{a,b})\cap W_{k,\mathbb{R}},\] so that \(V_{\mathbb{R}}=\oplus_{1\leq k\leq 3}V_{k}\). Via the isomorphism \[V_{k}\simeq W_{k,\mathbb{R}}/W_{k-1,\mathbb{R}}=\operatorname{Gr}^{W}_{k}V_{ \mathbb{R}},\] the Hodge filtration on \(\operatorname{Gr}^{W}_{k}V_{\mathbb{R}}\) induces on \(V_{k}\) a pure \(\mathbb{R}\)-Hodge structure of weight \(k\) (with Hodge filtration \(F^{\bullet}_{\lim}\cap V_{k,\mathbb{C}}\)). Since the form \(Q(\cdot,N\cdot)\) polarizes the Hodge structure on \(\operatorname{Gr}^{W}_{3}V_{\mathbb{R}}\simeq V_{3}\), we can find a vector \(e^{2,1}\in I^{2,1}\) such that \[iQ(e^{2,1},N\overline{e^{2,1}})=1.\] We fix such a vector and write \(e^{1,2}=\overline{e^{2,1}}\in I^{1,2}\), \(e^{1,0}=Ne^{2,1}\in I^{1,0}\) and \(e^{0,1}=\overline{e^{1,0}}\in I^{0,1}\). Then \(\{e^{2,1},e^{1,2}\}\) is a basis for \(V_{3}\) and \(\{e^{1,0},e^{0,1}\}\) is a basis for \(V_{1}\). Using \(Q(F^{2}_{\lim},F^{1}_{\lim})=0\) and \(Q(W_{1},W_{2})=0\), one sees that \(V_{2}\) is orthogonal to \(V_{1}\oplus V_{3}\) and that in the basis \(\{e^{2,1},e^{1,2},e^{1,0},e^{0,1}\}\) the restriction of \(Q\) to \(V_{1}\oplus V_{3}\) is given by the matrix \[\begin{pmatrix}0&0&0&-i\\ 0&0&i&0\\ 0&i&0&0\\ -i&0&0&0\end{pmatrix}. \tag{2.15}\] #### 2.3.2. We now consider the nilpotent orbit: for \(z\in\mathbb{H}\), we write \[F^{k}_{z}=e^{zN}F^{k}_{\lim}.\] Since \(N^{2}=0\) and \(I^{1,1}\subset\ker N\), we have \(e^{zN}=1+zN\) and hence \[F^{2}_{z} =\langle e^{2,0}_{z}\rangle\] \[F^{1}_{z} =e^{zN}(F^{2}_{\lim}\oplus I^{1,1})=F^{2}_{z}\oplus I^{1,1},\] where \[e^{2,0}_{z}:=e^{2,1}+ze^{1,0}.\] The Hodge norm of \(e_{z}^{2,0}\) can be computed explicitly using (2.15): \[\begin{split}\|e_{z}^{2,0}\|_{\mathcal{V}}^{2}&=-Q(e_ {z}^{2,0},\overline{e_{z}^{2,0}})\\ &=-Q(e^{2,1}+ze^{1,0},e^{1,2}+\overline{z}e^{0,1})\\ &=i(\overline{z}-z)\\ &=2\text{Im}(z).\end{split} \tag{2.16}\] We may think of \(z\mapsto e_{z}^{2,0}\) as a holomorphic section of the hermitian line bundle \(F_{z}^{2}\). Writing \(\Omega\) for its Chern form, we find \[\Omega=\frac{1}{2\pi i}\partial\overline{\partial}\log\|e_{z}^{2,0}\|_{ \mathcal{V}}^{2}=\frac{i}{8\pi}\frac{dz\wedge d\overline{z}}{\text{Im}(z)^{2}} \tag{2.17}\] Using (2.16) we can compute the Hodge norm \(\|v\|_{\mathcal{V}}^{2}\) of vectors \(v\in V_{\mathbb{R}}\). Fix such \(v\) and let \(v_{z}^{p,q}\) be the components of \(v\in\mathcal{V}_{z}=\oplus_{p,q}\mathcal{V}_{z}^{p,q}\). Then \[v_{z}^{2,0}=f(z)e_{z}^{2,0}\] for some holomorphic function \(f:\mathbb{H}\to\mathbb{C}\). We have \(v_{z}^{0,2}=\overline{v_{z}^{2,0}}\) and hence \[Q(v,e_{z}^{2,0})=Q(v_{z}^{0,2},e_{z}^{2,0})=-2\overline{f(z)}\text{Im}(z).\] This gives \[\begin{split} h(s_{v})&=-2Q(v_{z}^{2,0},v_{z}^{0,2 })\\ &=4|f(z)|^{2}\text{Im}(z)\\ &=\frac{|Q(v,e_{z}^{2,0})|^{2}}{\text{Im}(z)}\end{split} \tag{2.18}\] and, for the Hodge norm, \[\|v\|_{\mathcal{V},z}^{2}=Q(v,v)+2h(s_{v})=Q(v,v)+\frac{2|Q(v,e_{z}^{2,0})|^{ 2}}{\text{Im}(z)}. \tag{2.19}\] Let us consider the special case \(v\in W_{2,\mathbb{R}}\). Such a vector can be written uniquely as \[v=v_{2}+ae^{1,0}+\overline{a}e^{0,1} \tag{2.20}\] with \(v_{2}\in V_{2}\) and a complex number \(a\). 
Using \(Q(W_{1},W_{2})=Q(F_{\text{lim}}^{1},F_{\text{lim}}^{2})=0\) we find that \[Q(v_{2},e_{z}^{2,0})=Q(v_{2},e^{2,1})+zQ(v_{2},e^{1,0})=0\] and hence \[Q(v,e_{z}^{2,0})=Q(ae^{1,0}+\overline{a}e^{0,1},e^{2,1}+ze^{1,0})=-i\overline {a}.\] We conclude that for \(v\in W_{2,\mathbb{R}}\) we have \[h(s_{v})=\frac{|a|^{2}}{\text{Im}(z)} \tag{2.21}\] and \[\|v\|_{\mathcal{V},z}^{2}=Q(v_{2},v_{2})+\frac{2|a|^{2}}{\mathrm{Im}(z)}. \tag{2.22}\] ### Type III degenerations We say that a degeneration is of type III if \(N^{2}\neq 0\). In this case we have \[h_{\mathrm{lim}}^{0,0}=h_{\mathrm{lim}}^{2,2}=1,\quad h_{\mathrm{lim}}^{1,1}= \mathrm{rank}V_{\mathbb{Z}}-2,\] and all other Hodge numbers are zero, i.e. the real mixed Hodge structure \((V_{\mathbb{R}},W,F)\) is Hodge-Tate. The weight filtration is \[W_{0} =W_{1}=\mathrm{Im}N^{2}\] \[W_{2} =W_{3}=\mathrm{Im}N+\ker N.\] We will now do some computations analogous to the ones above in the type II case. We assume again that the limit mixed Hodge structure is \(\mathbb{R}\)-split. #### 2.4.1. Let \(V_{\mathbb{C}}=\oplus_{a,b}I^{a,b}\) denote Deligne's canonical bigrading [6, (2.12)]. For \(\mathbb{R}\)-split type III degenerations we have \(I^{p,q}=0\) if \(p\neq q\) and \[I^{p,p}=F_{\mathrm{lim}}^{p}\cap\overline{F_{\mathrm{lim}}^{p}}\cap W_{2p, \mathbb{C}}.\] The bigrading satisfies \[W_{2k,\mathbb{C}}=\oplus_{a\leq k}I^{a,a},\quad F_{\mathrm{lim}}^{p}=\oplus_{ a\geq p}I^{a.a}.\] We set \[V_{2k}=I^{k,k}\cap W_{k,\mathbb{R}},\quad k=0,1,2.\] Then \(V_{2k}\) is a real Hodge structure of type \((k,k)\) and \(V_{\mathbb{R}}=\oplus V_{2k}\). Since the form \(Q(\cdot,N^{2}\cdot)\) polarizes the Hodge structure on \(\mathrm{Gr}_{4}^{W}V_{\mathbb{R}}\simeq V_{4}\), we can find a vector (unique up to multiplication by \(\pm 1\)) \(e^{2,2}\in V_{4}\) such that \[Q(e^{2,2},N^{2}e^{2,2})=1. \tag{2.23}\] Then \(Ne^{2,2}\in V_{2}\) and \(N^{2}e^{2,2}\in V_{0}\) and both vectors are non-zero. Since \(V_{0}\) and \(V_{4}\) are one-dimensional we have \[V_{4}=\langle e^{2,2}\rangle,\quad V_{0}=\langle N^{2}e^{2,2}\rangle.\] Let us define \[U=\ker(N:V_{2}\to V_{0})\] Under the isomorphism \(V_{2}\simeq\mathrm{Gr}_{2}^{W}V_{\mathbb{R}}\) induced by the quotient map \(W_{2,\mathbb{R}}\to\mathrm{Gr}_{2}^{W}V_{\mathbb{R}}\), the subspace \(U\subset V_{2}\) corresponds to the primitive part \(P_{2,\mathbb{R}}\subset\mathrm{Gr}_{2}^{W}V_{\mathbb{R}}\); in particular, the restriction of \(Q\) to \(U\) is positive definite. We have \[V_{2}=U\oplus\langle Ne^{2,2}\rangle.\] This decomposition is orthogonal for \(Q\) since \(N\in\mathfrak{so}(V,Q)\). It follows that \(V_{\mathbb{R}}\) can be written as \[V_{\mathbb{R}}=U\oplus\langle e^{2,2},Ne^{2,2},N^{2}e^{2,2}\rangle \tag{2.24}\] with \(\langle e^{2,2},Ne^{2,2},N^{2}e^{2,2}\rangle=U^{\perp}\) (in fact this is a decomposition as real mixed Hodge structures with the natural Hodge filtrations defined by intersecting \(F^{\bullet}_{\rm lim}\) with each summand). By (2.23), the matrix of the restriction of \(Q\) to \(U^{\perp}\) in the basis \(\{e^{2,2},Ne^{2,2},N^{2}e^{2,2}\}\) is \[\begin{pmatrix}0&0&1\\ 0&-1&0\\ 1&0&0\end{pmatrix}. \tag{2.25}\] #### 2.4.2. 
Consider now the nilpotent orbit corresponding to an \(\mathbb{R}\)-split type III degeneration: for \(z\in\mathbb{H}\), let \[F^{k}_{z}=e^{zN}F^{k}_{\rm lim}.\] Since the restriction of \(N\) to \(U\) vanishes, we have \[F^{2}_{z} =\langle e^{2,0}_{z}\rangle\] \[F^{1}_{z} =e^{zN}(V_{4,\mathbb{C}}\oplus V_{2,\mathbb{C}})=U_{\mathbb{C}} \oplus\langle e^{2,0}_{z},Ne^{2,0}_{z}\rangle,\] with \[e^{2,0}_{z}:=e^{zN}e^{2,2}=e^{2,2}+zNe^{2,2}+\frac{z^{2}}{2}N^{2}e^{2,2}.\] Using (2.25) we can compute the Hodge norm of \(e^{2,0}_{z}\): \[\begin{split}\|e^{2,0}_{z}\|^{2}_{\mathcal{V}}&=-Q(e ^{2,0}_{z},\overline{e^{2,0}_{z}})\\ &=-Q(e^{2,2}+zNe^{2,2}+\frac{z^{2}}{2}N^{2}e^{2,2},e^{2,2}+ \overline{z}Ne^{2,2}+\frac{\overline{z}^{2}}{2}N^{2}e^{2,2})\\ &=-(\frac{z^{2}}{2}+\frac{\overline{z}^{2}}{2}-|z|^{2})\\ &=2{\rm Im}(z)^{2}.\end{split} \tag{2.26}\] Writing \(\Omega\) for the first Chern form of the hermitian line bundle \(F^{2}_{z}\), this gives \[\Omega=\frac{1}{2\pi i}\partial\overline{\partial}\log\|e^{2,0}_{z}\|^{2}_{ \mathcal{V}}=\frac{i}{4\pi}\frac{dz\wedge d\overline{z}}{{\rm Im}(z)^{2}}. \tag{2.27}\] The argument we used in the case of type II degenerations shows that \[h(s_{v})=-2\frac{|Q(v,e^{2,0}_{z})|^{2}}{Q(e^{2,0}_{z},\overline{e^{2,0}_{z}}) }=\frac{|Q(v,e^{2,0}_{z})|^{2}}{{\rm Im}(z)^{2}} \tag{2.28}\] and, for the Hodge norm, \[\|v\|^{2}_{\mathcal{V},z}=Q(v,v)+2h(s_{v})=Q(v,v)+\frac{2|Q(v,e^{2,0}_{z})|^{2 }}{{\rm Im}(z)^{2}}. \tag{2.29}\] Again we consider the special case \(v\in W_{2,\mathbb{R}}=V_{0}\oplus V_{2}\). Such a vector can be written as \[v=v_{U}+aNe^{2,2}+bN^{2}e^{2,2} \tag{2.30}\] for unique \(v_{U}\in U\) and real numbers \(a\) and \(b\). Using \(Q(F^{1}_{\rm lim},F^{2}_{\rm lim})=0\) and \(Q(v,Nv^{\prime})=-Q(Nv,v^{\prime})\), we find that \[Q(v_{U},e^{2,0}_{z})=Q(v_{U},e^{2,2})+zQ(v_{U},Ne^{2,2})+\tfrac{z^{2}}{2}Q(v_{ U},N^{2}e^{2,2})=0\] and hence \[Q(v,e_{z}^{2,0}) =Q(aNe^{2.2}+bN^{2}e^{2,2},e^{2,2}+zNe^{2,2}+\tfrac{z^{2}}{2}N^{2}e^ {2,2})\] \[=b-az. \tag{2.31}\] We conclude that for \(v\in W_{2,\mathbb{R}}\) we have \[h(s_{v})=\frac{|b-az|^{2}}{\operatorname{Im}(z)^{2}}=a^{2}+\left(\frac{b-a \operatorname{Re}(z)}{\operatorname{Im}(z)}\right)^{2} \tag{2.32}\] and \[\|v\|_{\mathcal{V},z}^{2} =Q(v_{U},v_{U})-a^{2}+\frac{2|b-az|^{2}}{\operatorname{Im}(z)^{2}}\] \[=Q(v_{U},v_{U})+a^{2}+2\left(\frac{b-a\operatorname{Re}(z)}{ \operatorname{Im}(z)}\right)^{2}. \tag{2.33}\] **Remark 2.1**.: The formulas for Hodge norms will be used in Section 4.5 to derive explicit expressions for \(\varphi_{\mathbb{V}}(v)\) for type III degenerations that agree with those computed by Funke in [10]. ## 3. Kudla-Millson theta series and degenerations of Hodge structure In this section we briefly review some of the needed background regarding the Weil representation and the construction of theta series attached to a \(\mathbb{Z}\)-PVHS \(\mathbb{V}\) on \(S\) by pulling back Kudla-Millson theta series via the period map associated with \(\mathbb{V}\). We will also define certain theta series attached to limiting mixed Hodge structures. ### Weil representation #### 3.1.1. Let \(L\) be an even lattice, that is, a free abelian group of finite rank endowed with a non-degenerate symmetric bilinear form \(Q:L\times L\to\mathbb{Z}\) such that \(Q(v,v)\) is even for every \(v\in L\). We denote its signature by \((b^{+},b^{-})\) and write \[L^{\vee}=\{v\in L\otimes\mathbb{Q}\ |\ Q(v,w)\in\mathbb{Z}\text{ for all }w\in L\}\] for the dual of \(L\). 
Thus \(L\subseteq L^{\vee}\), and the finite group \(L^{\vee}/L\) is known as the discriminant group of \(L\). We denote by \[\mathbb{C}[L^{\vee}/L]\] its group algebra and by \(e^{\mu}\) (\(\mu\in L^{\vee}/L\)) its standard basis. We write \(\operatorname{Mp}_{2}(\mathbb{Z})\) for the metaplectic double cover of \(\operatorname{SL}_{2}(\mathbb{Z})\). Its elements are pairs of the form \[\left(\begin{pmatrix}a&b\\ c&d\end{pmatrix},\phi(\tau)\right),\] where \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\operatorname{SL}_{2}(\mathbb{Z})\) and \(\phi(\tau)\) satisfies \(\phi(\tau)^{2}=c\tau+d\). It is generated by the elements \[\begin{split}& T=\left(\begin{pmatrix}1&1\\ 0&1\end{pmatrix},1\right),\\ & S=\left(\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\sqrt{\tau}\right).\end{split} \tag{3.1}\] There is a representation \(\rho_{L}\) of \(\operatorname{Mp}_{2}(\mathbb{Z})\) on \(\mathbb{C}[L^{\vee}/L]\) determined by the formulas \[\begin{split}&\rho_{L}(T)(e^{\mu})=e^{\pi iQ(\mu,\mu)}e^{\mu}\\ &\rho_{L}(S)(e^{\mu})=\frac{e^{\pi i(b^{-}-b^{+})/4}}{\sqrt{|L^{ \vee}/L|}}\sum_{\lambda\in L^{\vee}/L}e^{-2\pi iQ(\mu,\lambda)}e^{\lambda}. \end{split} \tag{3.2}\] The representation \(\rho_{L}\) factors through a double cover of \(\operatorname{SL}_{2}(\mathbb{Z}/N\mathbb{Z})\), where \(N\) (sometimes called the level of \(L\)) is the smallest integer such that \(NQ(\lambda,\lambda)/2\) is an integer for all \(\lambda\in L^{\vee}\). #### 3.1.2. Let \(V=L\otimes\mathbb{Q}\) and suppose given a filtration \[0\neq W_{1}\subseteq W_{2}\subseteq V\] of \(V\) by \(\mathbb{Q}\)-vector spaces such that \(W_{1}\) is isotropic and \(W_{2}=W_{1}^{\perp}\). Define \(L_{k}=W_{k}\cap L\) and set \(\operatorname{Gr}_{2}^{W}L=L_{2}/L_{1}\). Then \(\operatorname{Gr}_{2}^{W}L\) is an even lattice with respect to the bilinear form induced by \(Q\). The group \(\operatorname{Mp}_{2}(\mathbb{Z})\) acts on \[\mathbb{C}[(\operatorname{Gr}_{2}^{W}L)^{\vee}/\operatorname{Gr}_{2}^{W}L]= \mathbb{C}[(L^{\vee}\cap W_{2})/(L^{\vee}\cap W_{1}+L_{2})]\] via the corresponding Weil representation \(\rho_{\operatorname{Gr}_{2}^{W}L}\). The map \[\iota:\rho_{\operatorname{Gr}_{2}^{W}L}\to\rho_{L},\quad e^{\mu}\mapsto\sum_{ \lambda\in(L^{\vee}\cap W_{1}+L)/L}e^{\lambda+\mu} \tag{3.3}\] intertwines the \(\operatorname{Mp}_{2}(\mathbb{Z})\)-actions [21, Prop. 6.1]. #### 3.1.3. We briefly recall some notions of modular forms for \(\operatorname{Mp}_{2}(\mathbb{Z})\) valued in \(\rho_{L}\); see [2, Chap. 1] for more details. Let \[f:\mathbb{H}\to\rho_{L}\] be a smooth function and \(k^{+},k^{-}\in\frac{1}{2}\mathbb{Z}\). We say that \(f\) is a non-holomorphic modular form of weight \((k^{+},k^{-})\) valued in \(\rho_{L}\) if \[f\left(\frac{a\tau+b}{c\tau+d}\right)=\phi(\tau)^{2k^{+}}\overline{\phi(\tau)}^ {2k^{-}}\rho_{L}\left(\begin{pmatrix}a&b\\ c&d\end{pmatrix},\phi(\tau)\right)f(\tau) \tag{3.4}\] for every \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right),\phi(\tau)\right)\in\operatorname{Mp}_{2}(\mathbb{Z})\). If \(f:\mathbb{H}\to\rho_{L}\) is holomorphic and satisfies (3.4) with \((k^{+},k^{-})=(k,0)\), then we may write \[f(\tau)=\sum_{\mu}f_{\mu}\cdot e^{\mu},\] where the components \(f_{\mu}\) of \(f\) are weakly modular forms of weight \(k\). We say that \(f\) is a modular (resp. cusp) form of weight \(k\) valued in \(\rho_{L}\) if it each component is a modular (resp. cusp) form of weight \(k\). The most important examples of modular forms valued in \(\rho_{L}\) arise from theta series. 
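The formulas (3.1) and (3.2) are straightforward to implement for small lattices. The following is a minimal numerical sketch (Python, purely illustrative and not part of the argument), assuming the toy example \(L=\mathbb{Z}\) with \(Q(x,y)=2xy\) (the \(A_{1}\) root lattice), for which \(L^{\vee}/L\) has representatives \(\{0,1/2\}\) and \((b^{+},b^{-})=(1,0)\); it builds \(\rho_{L}(T)\) and \(\rho_{L}(S)\) as matrices and checks the standard relation \((ST)^{3}=S^{2}\) of \(\operatorname{Mp}_{2}(\mathbb{Z})\).

```python
import numpy as np

# Toy example: L = Z with Q(x, y) = 2xy, so L^vee = (1/2)Z and the
# discriminant group L^vee/L is represented by {0, 1/2}; (b+, b-) = (1, 0).
reps = [0.0, 0.5]
Q = lambda x, y: 2 * x * y
b_plus, b_minus = 1, 0

# rho_L(T) and rho_L(S) as in (3.2), acting on the basis (e^mu), mu in reps.
rho_T = np.diag([np.exp(1j * np.pi * Q(mu, mu)) for mu in reps])
rho_S = (np.exp(1j * np.pi * (b_minus - b_plus) / 4) / np.sqrt(len(reps))
         * np.array([[np.exp(-2j * np.pi * Q(mu, lam)) for mu in reps]
                     for lam in reps]))

# In Mp_2(Z) one has (ST)^3 = S^2, so the same relation must hold for the
# matrices above; a quick numerical sanity check:
lhs = np.linalg.matrix_power(rho_S @ rho_T, 3)
rhs = rho_S @ rho_S
print("(rho(S)rho(T))^3 == rho(S)^2 :", np.allclose(lhs, rhs))
```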
For a positive definite even lattice \(L\) and \(\mu\in L^{\vee}\), define \[\Theta_{L}(\tau)_{\mu}=\sum_{\lambda\in\mu+L}e^{\pi iQ(\lambda,\lambda)\tau}\] and set \[\Theta_{L}(\tau)=\sum_{\mu}\Theta_{L}(\tau)_{\mu}\cdot e^{\mu}.\] Then \(\Theta_{L}(\tau)\) is a modular form valued in \(\rho_{L}\) of weight \(\operatorname{rk}(L)/2\) ([1, Thm 4.1]). ### Kudla-Millson theta series #### 3.2.1. Let \(\mathbb{V}\to S\) be a \(\mathbb{Z}\)-PVHS satisfying 1.1. Associated with \(\mathbb{V}\) there is a period map \[\Phi_{\mathbb{V}}:S\to\Gamma\backslash\mathbb{D}\] into a quotient of the hermitian symmetric space attached to \(\operatorname{SO}(h^{1,1},2)\) (see e.g. [22, p. 227-228]). More precisely, fix a point \(s_{0}\in S\) and let \(\pi:\tilde{S}\to S\) be the universal cover of \(S\). The pullback \(\pi^{*}\mathcal{V}_{\mathbb{Z}}\) to \(\tilde{S}\) is then a constant local system endowed with a constant bilinear form induced by \(Q\), i.e. of the form \(\underline{V}_{\mathbb{Z}}\) for some indefinite lattice \((V_{\mathbb{Z}},Q)\). It carries a canonical action \[\pi_{1}(S,s_{0})\to\operatorname{Aut}(V_{\mathbb{Z}},Q). \tag{3.5}\] Let now \(V_{\mathbb{R}}=V_{\mathbb{Z}}\otimes\mathbb{R}\) and denote by \(\mathbb{D}\) the space of all Hodge structures on \(V_{\mathbb{R}}\) polarized by \(Q\) with \(h^{2,0}=1\); thus \(\mathbb{D}\) is the hermitian symmetric domain attached to the orthogonal group \(\operatorname{Aut}(V_{\mathbb{R}},Q)\). The pullback \(\pi^{*}\mathbb{V}\) induces a holomorphic map \(\Phi_{\pi^{*}\mathbb{V}}:\tilde{S}\to\mathbb{D}\). If \(\Gamma\subseteq\operatorname{Aut}(V_{\mathbb{Z}},Q)\) is any subgroup containing the image of (3.5), then the composite of \(\Phi_{\pi^{*}\mathbb{V}}\) with the projection \(\mathbb{D}\to\Gamma\backslash\mathbb{D}\) induces a holomorphic map \(S\to\Gamma\backslash\mathbb{D}\). Under assumption 1.1, and identifying \(\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}\simeq V_{\mathbb{Z}}^ {\vee}/V_{\mathbb{Z}}\), we can take for \(\Gamma\) the group \[\Gamma:=\Gamma_{V_{\mathbb{Z}}}=\left\{\gamma\in\operatorname{Aut}(V_{\mathbb{ Z}},Q)\ |\ \gamma\equiv\operatorname{id}\ \text{on}\ V_{\mathbb{Z}}^{\vee}/V_{\mathbb{Z}}\right\},\] and denote the corresponding period map by \[\Phi_{\mathbb{V}}:S\to\Gamma\backslash\mathbb{D}. \tag{3.6}\] #### 3.2.2. Let \(\mathcal{S}(V_{\mathbb{R}})\) be the Schwartz space of \(V_{\mathbb{R}}\). In their seminal works [16, 17, 18], Kudla and Millson have introduced certain differential forms \[\varphi_{\operatorname{KM}}\in(\Omega^{1,1}(\mathbb{D})\otimes\mathcal{S}(V_ {\mathbb{R}}))^{\operatorname{SO}(V_{\mathbb{R}},Q)}\] and associated theta series \[\Theta_{\operatorname{KM}}(\tau)_{\mu}=\sum_{v\in\mu+V_{\mathbb{Z}}}\varphi_{ \operatorname{KM}}(y^{1/2}v)e^{\pi ixQ(v,v)}\in\Omega^{1,1}(\mathbb{D})^{ \Gamma}\simeq\Omega^{1,1}(\Gamma\backslash\mathbb{D}).\] Using the period map \(\Phi_{\mathbb{V}}\) we can define differential forms on \(S\) canonically associated with \(\mathbb{V}\) by pulling back the Kudla-Millson theta series. 
**Definition 3.1**.: For \(\mu\in\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}\), define \[\Theta_{\mathbb{V}}(\tau)_{\mu}=\Phi_{\mathbb{V}}^{*}\Theta_{\mathrm{KM}}(\tau)_{\mu}.\] The results of Kudla and Millson imply that the forms \(\Theta_{\mathbb{V}}(\tau)_{\mu}\) have modularity properties that can be described most easily by saying that the differential form \[\Theta_{\mathbb{V}}(\tau):=\sum_{\mu\in\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}}\Theta_{\mathbb{V}}(\tau)_{\mu}\cdot e^{\mu}\in\Omega^{1,1}(S)\otimes\rho_{\mathcal{V}_{\mathbb{Z}}} \tag{3.7}\] transforms under \(\mathrm{Mp}_{2}(\mathbb{Z})\) like a non-holomorphic modular form of weight \(\mathrm{rk}(\mathcal{V}_{\mathbb{Z}})/2\) valued in \(\rho_{\mathcal{V}_{\mathbb{Z}}}\). Using the formulas given by Kudla and Millson we can describe \(\Theta_{\mathbb{V}}(\tau)_{\mu}\) as \[\Theta_{\mathbb{V}}(\tau)_{\mu}=\sum_{v\in\mu+\mathcal{V}_{\mathbb{Z}}}\varphi_{\mathbb{V}}(y^{1/2}v)e^{\pi ixQ(v,v)}, \tag{3.8}\] with \(\varphi_{\mathbb{V}}(v)\) (which is only locally defined, e.g. on a small disk around a given point in \(S\)) given by \[\varphi_{\mathbb{V}}(v)=e^{-\pi\|v\|_{\mathcal{V}}^{2}}(-\Omega+ih(s_{v})\theta\wedge\overline{\theta}),\quad\theta=\frac{\partial h(s_{v})}{h(s_{v})}. \tag{3.9}\] We briefly explain the terms in this formula; for more details, see also [11]. The terms \(\|v\|_{\mathcal{V}}^{2}\) and \(h(s_{v})\) have been defined in 2.1: \(\|v\|_{\mathcal{V}}^{2}\) denotes the Hodge norm of \(v\) and the value of \(h(s_{v})\) at \(z\in S\) is \[h(s_{v})_{z}=-2Q(v_{z}^{2,0},v_{z}^{0,2}).\] The term \(\Omega\) denotes the first Chern form of \(\mathcal{L}\), i.e. \[\Omega=(2\pi i)^{-1}\partial\overline{\partial}\log\|s\|_{\mathcal{V}}^{2} \tag{3.10}\] for any meromorphic section \(s\) of \(\mathcal{L}\). We will sometimes write \[\varphi_{\mathbb{V}}(v)=e^{-\pi Q(v,v)}\varphi_{\mathbb{V}}^{\circ}(v), \tag{3.11}\] with \[\varphi_{\mathbb{V}}^{\circ}(v)=e^{-2\pi h(s_{v})}(-\Omega+ih(s_{v})\theta\wedge\overline{\theta}),\quad\theta=\frac{\partial h(s_{v})}{h(s_{v})}. \tag{3.12}\] ### Theta series and limit MHS We now associate a vector-valued theta series to a limiting mixed Hodge structure of type II or III. #### 3.3.1. Type II For a type II degeneration, the polarization \(Q\) induces a quadratic form on \(\mathrm{Gr}_{2}^{W}V\) that we still denote by \(Q\); note that in this case \(\mathrm{Gr}_{2}^{W}V=P_{2}\) and hence this quadratic form is positive definite. The image of \(V_{\mathbb{Z}}\cap W_{2}\) in \(\mathrm{Gr}_{2}^{W}V\) defines a lattice that we denote by \(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}\). We write \[\rho_{\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}}=\mathbb{C}[(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})^{\vee}/\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}]=\mathbb{C}\left[W_{2}\cap V_{\mathbb{Z}}^{\vee}/(W_{1}\cap V_{\mathbb{Z}}^{\vee}+W_{2}\cap V_{\mathbb{Z}})\right] \tag{3.13}\] for the corresponding Weil representation of \(\mathrm{Mp}_{2}(\mathbb{Z})\). Associated to the positive-definite even lattice \((\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}},Q)\) is the theta series \[\Theta_{\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}}(\tau)=\sum_{\mu}\Theta_{\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}}(\tau)_{\mu}\cdot e^{\mu}.\] It is a modular form valued in \(\rho_{\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}}\) of weight \((\mathrm{rk}(\mathcal{V}_{\mathbb{Z}})-4)/2\). 
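For a positive definite even lattice such as \((\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}},Q)\), the components \(\Theta_{L}(\tau)_{\mu}\) can be expanded by brute-force enumeration of lattice vectors. The following minimal sketch (Python, purely illustrative; the Gram matrix is a made-up rank two example and not data from the paper) collects the exponents \(Q(\lambda,\lambda)/2\) and their multiplicities for a coset \(\mu+L\).

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Toy positive definite even lattice: L = Z^2 with Gram matrix diag(2, 2),
# so L^vee/L is represented by mu in {0, 1/2}^2.
gram = [[2, 0], [0, 2]]

def Qval(v):
    """Value Q(v, v) of the bilinear form with the Gram matrix above."""
    return sum(v[i] * gram[i][j] * v[j] for i in range(2) for j in range(2))

def theta_coefficients(mu, max_exp=5, box=6):
    """First coefficients of Theta_L(tau)_mu = sum_{lambda in mu+L} q^{Q(lambda,lambda)/2},
    returned as a dictionary {exponent: number of lattice vectors}."""
    counts = Counter()
    for n in product(range(-box, box + 1), repeat=2):
        lam = tuple(m + k for m, k in zip(mu, n))
        exponent = Qval(lam) / 2
        if exponent <= max_exp:
            counts[exponent] += 1
    return dict(sorted(counts.items()))

zero = (Fraction(0), Fraction(0))
half = (Fraction(1, 2), Fraction(1, 2))
print(theta_coefficients(zero))  # starts 1 + 4q + 4q^2 + ...
print(theta_coefficients(half))  # exponents lie in Q(mu,mu)/2 + Z
```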
More generally, the image of \(V_{\mathbb{Z}}\cap W_{k}\) in \(\mathrm{Gr}_{k}^{W}V\) is a lattice that we denote by \(\mathrm{Gr}_{k}^{W}V_{\mathbb{Z}}\). Then \[N:\mathrm{Gr}_{3}^{W}V_{\mathbb{Z}}\to\mathrm{Gr}_{1}^{W}V_{\mathbb{Z}}\] is an injective map between lattices of the same rank; following [23] we write \[r_{1}(V_{\mathbb{Z}},N)\] for the size of its cokernel. The form \(Q\) also induces a non-degenerate bilinear pairing \[\mathrm{Gr}_{3,1}^{W}Q:\mathrm{Gr}_{3}^{W}V_{\mathbb{Z}}\times\mathrm{Gr}_{1} ^{W}V_{\mathbb{Z}}\to\mathbb{Z}. \tag{3.14}\] Let \(\mathrm{disc}(\mathrm{Gr}_{3,1}^{W}Q)\) be its discriminant, that is \[\mathrm{disc}(\mathrm{Gr}_{3,1}^{W}Q)=|\det(\mathrm{Gr}_{3,1}^{W}Q(\tilde{v}_{ i},\tilde{w}_{j}))|\] for any bases \((\tilde{v}_{i})\) of \(\mathrm{Gr}_{3}^{W}V_{\mathbb{Z}}\) and \((\tilde{w}_{j})\) of \(\mathrm{Gr}_{1}^{W}V_{\mathbb{Z}}\). Note that the form \(Q_{3}(v,w)=Q(v,Nw)\) is symplectic and takes integral values on \(\mathrm{Gr}_{3}^{W}V_{\mathbb{Z}}\), and hence \[r_{1}(V_{\mathbb{Z}},N)\mathrm{disc}(\mathrm{Gr}_{3,1}^{W}Q)=|\det(Q_{3}(\tilde {v}_{i},\tilde{v}_{j}))|=\deg(Q_{3})^{2}\] for a positive integer \(\deg(Q_{3})\). Let us write \[\iota:\rho_{\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}}\to\rho_{\mathcal{V}_{\mathbb{Z}}} \tag{3.15}\] for the \(\mathrm{Mp}_{2}(\mathbb{Z})\)-intertwining map defined in (3.3); we recall that \[\iota(e^{\lambda})=\sum_{\gamma\in(W_{1}\cap V_{\mathbb{Z}}^{\vee}+V_{ \mathbb{Z}})/V_{\mathbb{Z}}}e^{\gamma+\lambda}.\] For \(\mu\in(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})^{\vee}/\mathrm{Gr}_{2}^{W}V_{ \mathbb{Z}}\), define \[Z_{\mathcal{V},P}^{-}(\tau)_{\mu}=\left(\frac{r_{1}(V_{\mathbb{Z}},N)}{\mathrm{ disc}(\mathrm{Gr}_{3,1}^{W}Q)}\right)^{1/2}\frac{1}{4\pi y}\Theta_{\mathrm{Gr}_{2}^{W}V_{ \mathbb{Z}}}(\tau)_{\mu} \tag{3.16}\] and set \[Z^{-}_{\mathbb{V},P}(\tau)=\sum_{\mu\in(\operatorname{Gr}^{W}_{2}V_{\mathbb{Z}})^ {\vee}/\operatorname{Gr}^{W}_{2}V_{\mathbb{Z}}}Z^{-}_{\mathbb{V},P}(\tau)_{\mu} \cdot\iota(e^{\mu}). \tag{3.17}\] #### 3.3.2. Type III Let us now consider degenerations of type III. Let \[\operatorname{Gr}^{W}_{2,\operatorname{prim}}V=\ker(N:W_{2}/W_{1}\to W_{0}) \subset\operatorname{Gr}^{W}_{2}V. \tag{3.18}\] Then \(\operatorname{Gr}_{2,\operatorname{prim}}V\) is a vector space over \(\mathbb{Q}\) of dimension \(n-1\). The subgroup \[\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}}:=\operatorname{Gr }^{W}_{2,\operatorname{prim}}V\cap\operatorname{Gr}^{W}_{2}V_{\mathbb{Z}}\] is a lattice in \(\operatorname{Gr}^{W}_{2,\operatorname{prim}}V\). Since \(W_{1}\) is \(Q\)-isotropic, the polarization \(Q\) induces a quadratic form on \(\operatorname{Gr}^{W}_{2,\operatorname{prim}}V\) that is positive definite and that we still denote by \(Q\). Let us write \[\rho_{\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}}}=\mathbb{C} [(\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}})^{\vee}/ \operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}}] \tag{3.19}\] for be the corresponding Weil representation of \(\operatorname{Mp}_{2}(\mathbb{Z})\). 
Associated to the positive definite even lattice \((\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}},Q)\) is the theta series \[\Theta_{\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}}}(\tau)= \sum_{\mu}\Theta_{\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}} }(\tau)_{\mu}\cdot e^{\mu}\] that transforms under \(\operatorname{Mp}_{2}(\mathbb{Z})\) like a holomorphic modular form valued in \(\rho_{\operatorname{Gr}^{W}_{2,\operatorname{prim}}V_{\mathbb{Z}}}\) of weight \((n-1)/2\). The bilinear form \(Q\) induces a pairing \[\operatorname{Gr}^{W}_{4,0}Q:\operatorname{Gr}^{W}_{4}V_{\mathbb{Z}}\times \operatorname{Gr}^{W}_{0}V_{\mathbb{Z}}\to\mathbb{Z}.\] Let \(\operatorname{disc}(\operatorname{Gr}^{W}_{4,0}Q)\) be its discriminant, that is \[\operatorname{disc}(\operatorname{Gr}^{W}_{4,0}Q)=|\det(\operatorname{Gr}^{W }_{4,0}Q(\tilde{v}_{i},\tilde{w}_{j}))|\] for any bases \((\tilde{v}_{i})\) of \(\operatorname{Gr}^{W}_{4}V_{\mathbb{Z}}\) and \((\tilde{w}_{j})\) of \(\operatorname{Gr}^{W}_{0}V_{\mathbb{Z}}\) respectively. Let us now consider the rank one lattice \[\operatorname{Gr}^{W}_{4}V_{\mathbb{Z}}=\text{image of $V_{\mathbb{Z}}$ in $ \operatorname{Gr}^{W}_{4}V$},\] endowed with the positive-definite quadratic \(Q_{4}(v,v)=Q(v,N^{2}v)\) defined in (2.6). **Lemma 3.2**.: _Let \(L\) be the rank one lattice \(\operatorname{Gr}^{W}_{4}V_{\mathbb{Z}}\) endowed with the positive-definite quadratic form \(v\mapsto Q_{4}(v,v)\)._ 1. _The lattice_ \(L\) _is even._ 2. _The image of_ \(L\) _under_ \(N:\operatorname{Gr}^{W}_{4}V\to\operatorname{Gr}^{W}_{2}V\) _lies in_ \(\operatorname{Gr}^{W}_{2}V_{\mathbb{Z}}\)_._ 3. _The image of_ \(L^{\vee}\) _under_ \(N:\operatorname{Gr}^{W}_{4}V\to\operatorname{Gr}^{W}_{2}V\) _contains_ \((\operatorname{Gr}^{W}_{2}V_{\mathbb{Z}})^{\vee}\cap N(\operatorname{Gr}^{W}_{4 }V)\)_._ Proof.: For a degeneration of type III we have \[N=(T-1)-(T-1)^{2}/2\] and so \(N^{2}=(T-1)^{2}\). Hence we can write \(N=(T-1)-N^{2}/2\) and for \(v\in V_{\mathbb{Z}}\) we have \[Q(v,N^{2}v)=-Q(Nv,Nv)=-Q((T-1)v,(T-1)v)\] which is an even integer since \((T-1)v\in V_{\mathbb{Z}}\). This proves (i). Part (ii) follows from the identity \(N=(T-1)-N^{2}/2\), which implies that \(N\equiv(T-1)\mod W_{0}\). For part (iii), suppose that \(w\in(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})^{\vee}\cap N(\mathrm{Gr}_{4}^{W}V)\) and write \(w=Nv\) with \(v\in\mathrm{Gr}_{4}^{W}V\). For any \(w^{\prime}\in\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}\), we have \[Q(v,Nw^{\prime})=-Q(w,w^{\prime})\in\mathbb{Z},\] i.e. \(Q(v,v^{\prime})\in\mathbb{Z}\) for every \(v^{\prime}\in N(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})\). Part (ii) implies that \(N^{2}(L)\subseteq N(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})\) and hence that \(v\in L^{\vee}\). The proof the lemma shows that \(N^{2}=(T-1)^{2}\) is integral. 
Thus \[\mathrm{Gr}^{W}N^{2}:\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}\to\mathrm{Gr}_{0}^{W} V_{\mathbb{Z}}\] is an injective map of lattices of the same rank; we denote its cokernel by \[r_{2}(V_{\mathbb{Z}},N).\] Writing \(\mathrm{Vol}(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}):=Q_{4}(v_{0},v_{0})\in 2\mathbb{N}\) where \(v_{0}\) is a generator of \(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}\), we have \[\mathrm{Vol}(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}})=r_{2}(V_{\mathbb{Z}},N) \mathrm{disc}(\mathrm{Gr}_{4,0}^{W}Q).\] Associated to the positive-definite even lattice \((\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}},Q_{4})\) is the Weil representation \[\rho_{\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}}=\mathbb{C}[(\mathrm{Gr}_{4}^{W}V_{ \mathbb{Z}})^{\vee}/\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}]\] of \(\mathrm{Mp}_{2}(\mathbb{Z})\) and the non-holomorphic unary theta series \[R_{\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}}(\tau)_{\nu}=\frac{1}{4\pi\sqrt{y}}\sum_ {v\in\nu+\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}}\beta_{3/2}(2\pi yQ_{4}(v,v))q^{-Q _{4}(v,v)/2}, \tag{3.20}\] where \[\beta_{3/2}(t)=\int_{1}^{\infty}u^{-3/2}e^{-tu}du.\] Let us write \((\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}})^{-}\) for the negative-definite even lattice defined by \(-Q_{4}\). By the above lemma, the map \[\mathrm{Gr}_{2,\mathrm{prim}}^{W}V_{\mathbb{Z}}\hat{\oplus}(\mathrm{Gr}_{4}^ {W}V_{\mathbb{Z}})^{-}\to\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}},\quad(v,w)\mapsto v +Nw\] identifies \(\mathrm{Gr}_{2,\mathrm{prim}}V_{\mathbb{Z}}\hat{\oplus}(\mathrm{Gr}_{4}^{W}V_ {\mathbb{Z}})^{-}\) with a sublattice of \(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}\) of finite index. This induces a natural \(\mathrm{Mp}_{2}(\mathbb{Z})\)-intertwining map \[\rho_{\mathrm{Gr}_{2,\mathrm{prim}}V_{\mathbb{Z}}}\otimes\rho_{( \mathrm{Gr}_{4}^{W}V_{\mathbb{Z}})^{-}}\simeq\rho_{\mathrm{Gr}_{2,\mathrm{prim} }V_{\mathbb{Z}}\hat{\oplus}(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}})^{-}}\to\rho_{ \mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}}\] \[e^{\lambda}\otimes e^{\nu}\mapsto\left\{\begin{array}{cc}0,& \mbox{if }\lambda+N\nu\notin(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})^{\vee},\\ \lambda+N\nu+\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}},&\mbox{otherwise}.\end{array}\right. \tag{3.21}\] Let \[\iota:\rho_{\operatorname{Gr}_{2,\operatorname{prim}}V_{\mathbb{Z}}}\otimes\rho_{ (\operatorname{Gr}_{4}^{W}V_{\mathbb{Z}})^{-}}\to\rho_{\mathbb{Z}} \tag{3.22}\] be the map obtained by composing (3.21) with the map (3.15). For \(\mu\in(\operatorname{Gr}_{2}^{W}V_{\mathbb{Z}})^{\vee}/\operatorname{Gr}_{2}^ {W}V_{\mathbb{Z}}\), define \[\begin{split} Z^{-}_{\mathbb{V},P}(\tau)_{\mu}=& \left(\frac{r_{2}(V_{\mathbb{Z}},N)}{2\mathrm{disc}(\operatorname{Gr}_{4,Q}^ {W}Q)}\right)^{1/2}\\ &\times\sum_{\begin{subarray}{c}\lambda+N\nu\equiv\mu\\ \mathrm{mod}\ \operatorname{Gr}_{2}^{W}V_{\mathbb{Z}}\end{subarray}}R_{ \operatorname{Gr}_{4}^{W}V_{\mathbb{Z}}}(\tau)_{\nu}\cdot\Theta_{ \operatorname{Gr}_{2,\operatorname{prim}}^{W}V_{\mathbb{Z}}}(\tau)_{\lambda} \end{split} \tag{3.23}\] and set \[Z^{-}_{\mathbb{V},P}(\tau)=\sum_{\mu\in(\operatorname{Gr}_{2}^{W}V_{\mathbb{Z }})^{\vee}/\operatorname{Gr}_{2}^{W}V_{\mathbb{Z}}}Z^{-}_{\mathbb{V},P}(\tau)_ {\mu}\cdot\iota(e^{\mu}). \tag{3.24}\] ## 4. Integrability of Kudla-Millson theta series ### A convergence result Let \(\overline{S}\) be a compact Riemann surface and let \(S\) be obtained by removing a finite set of points from \(\overline{S}\). Consider a polarized variation of Hodge structure \(\mathbb{V}\to S\) of weight two with \(h^{2,0}=1\) and \(h^{1,1}=n\) satisfying 1.1. 
In the previous section we have attached to \(\mathbb{V}\) a collection of closed differential forms \[\Theta_{\mathbb{V}}(\tau)_{\mu}\in\Omega^{1,1}(S),\quad\mu\in\mathcal{V}_{ \mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}},\] that vary smoothly in \(\tau\in\mathbb{H}\) and transform like (non-holomorphic) modular forms of weight \(\operatorname{rk}(\mathcal{V}_{\mathbb{Z}})/2\). If \(S=\overline{S}\), i.e. when \(S\) is compact, the integral \[Z_{\mathbb{V}}(\tau)_{\mu}:=\int_{S}\Theta_{\mathbb{V}}(\tau)_{\mu} \tag{4.1}\] is obviously convergent, and the results of Kudla and Millson [18] show that \(Z_{\mathbb{V}}(\tau)_{\mu}\) is a holomorphic modular form of weight \(1+n/2\) with \(q\)-expansion \[-\mathrm{deg}(\mathcal{L})\delta_{\mu,0}+\sum_{m>0}\mathrm{deg}\ \mathrm{NL}_{\mathbb{V}}(m)_{\mu}\cdot q^{m}.\] A little more precisely: the \(Z_{\mathbb{V}}(\tau)_{\mu}\) are the components of a modular form of weight \(1+n/2\) valued in \(\rho_{\mathbb{V}_{\mathbb{Z}}}\). Now suppose that \(S\) is not compact; in that case, the differential form \(\Theta_{\mathbb{V}}(\tau)\) might not extend to a smooth form on \(\overline{S}\). Fortunately, as we will show below, the singularities of \(\Theta_{\mathbb{V}}(\tau)\) around the points in \(\overline{S}\setminus S\) are very mild. In particular, \(\Theta_{\mathbb{V}}(\tau)\) is always integrable over \(S\) and so one can define \(Z_{\mathbb{V}}(\tau)\) as in (4.1) for arbitrary \(S\). This is the content of the following proposition, whose proof will comprise most of this section. Let us write \[\Theta_{\mathbb{V}}(\tau)_{\mu}=\sum_{m\in\frac{1}{2}Q(\mu,\mu)+\mathbb{Z}} \Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}\cdot q^{m},\] with \[\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}=\sum_{\begin{subarray}{c}v\in\mu+\mathcal{V} _{\mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\varphi^{\circ}_{\mathbb{V}}(y^{1/2}v).\] **Theorem 4.1**.: _Let \(\mathbb{V}\) be a \(\mathbb{Z}\)-PVHS over \(S\) of weight two with \(h^{2,0}=1\). Then the integral_ \[Z_{\mathbb{V}}(\tau)_{\mu}:=\int_{S}\Theta_{\mathbb{V}}(\tau)_{\mu}\] _converges for all \(\mu\in\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V}_{\mathbb{Z}}\). The forms \(\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}\) are also integrable over \(S\) for any \(m\) and \(\mu\) and we have_ \[Z_{\mathbb{V}}(\tau)_{\mu}=\sum_{m\in\frac{Q(\mu,\mu)}{2}+\mathbb{Z}}\left( \int_{S}\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}\right)q^{m}.\] _The expression_ \[Z_{\mathbb{V}}(\tau):=\sum_{\mu\in\mathcal{V}_{\mathbb{Z}}^{\vee}/\mathcal{V }_{\mathbb{Z}}}Z_{\mathbb{V}}(\tau)_{\mu}\cdot e^{\mu}\] _defines a (possibly non-holomorphic) modular form of weight \(1+n/2\) valued in \(\rho_{\mathcal{V}_{\mathbb{Z}}}\)._ The following remarks reduce the proof of this theorem to the analogous local question around each cusp. Moreover, they show that when addressing the local question we may assume the local monodromy to be unipotent and non-trivial. 1. Fix a point \(s_{0}\in S\) and a simply connected neighbourhood \(U\) of \(s_{0}\) and choose a trivialization of \(\mathcal{V}_{\mathbb{Z}}|_{U}\). The Hodge metric on \(\mathcal{V}|_{U}\) can then be identified with a smooth map from \(U\) to the space of hermitian metrics on \(\mathbb{C}^{n+2}\). Hence, after possibly shrinking \(U\), the Hodge metric on \(\mathcal{V}|_{U}\) is uniformly bounded below by some constant metric \(v\mapsto|v|_{0}\) on \(\mathbb{C}^{n+2}\). 
It follows that on such a neighbourhood we can find \(\epsilon>0\) so that \[|\varphi_{\mathbb{V}}(v)|_{s}<e^{-\epsilon|v|_{0}^{2}}\] for every \(s\in U\) and every flat section \(v\in\mathcal{V}_{\mathbb{Z}}\) over \(U\). By dominated convergence, this implies that the above proposition holds locally, that is, replacing \(S\) by a small enough neighbourhood of any given point \(s_{0}\in S\). Taking a finite covering of \(\overline{S}\) shows that it suffices to prove the proposition for a coordinate neighbourhood of each cusp, i.e. for \(S\simeq\Delta^{*}\). In the rest of Section 4 we will assume that \(S=\Delta^{*}\) and denote by \(T\) the local monodromy as in Section 2.2. 2. Recall that \(T\) is quasi-unipotent: there exist positive integers \(e\) and \(m\) such that \((T^{e}-\mathrm{id})^{m}=0\). Let \(\pi:\Delta^{*}\to\Delta^{*}\) be the covering map of degree \(e\). Then \(\pi^{*}\mathbb{V}\) is a PVHS over \(\Delta^{*}\) with unipotent monodromy \(T^{e}\). Since \[\Theta_{\pi^{*}\mathbb{V}}(\tau)=\pi^{*}\Theta_{\mathbb{V}}(\tau),\] the integrability of \(\Theta_{\pi^{*}\mathbb{V}}(\tau)\) over \(\Delta^{*}\) implies that of \(\Theta_{\mathbb{V}}(\tau)\). It follows that we may assume that \(T\) is unipotent. 3. Finally, recall from Section 2.2 that if \(T=\operatorname{id}\), then the period map associated to the PVHS \(\mathbb{V}\) extends to \(\Delta\); in this case the argument in (1) proves the proposition. So in the rest of Section 4 we will assume that \(T\) is unipotent and \(T\neq 1\). So we will prove the proposition for \(S=\Delta^{*}\) and a PVHS \(\mathbb{V}\to\Delta^{*}\) with unipotent non-trivial monodromy. In order to ensure that the only degeneration of \(\mathbb{V}\) happens as \(t\to 0\), we may and do assume that \(\mathbb{V}\) extends to a punctured disk centered at \(0\) of radius strictly larger than one. The proof is based on Schmid's Hodge norm estimates and his nilpotent orbit and \(\operatorname{SL}_{2}\)-orbit theorems. As a first step, let us write \[\Theta_{\mathbb{V}}(\tau)_{\mu}=\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}+ \Theta_{\mathbb{V}}(\tau)^{\prime\prime}_{\mu},\] with \[\begin{split}\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}& =\sum_{\begin{subarray}{c}v\in\mu+\mathcal{V}_{\mathbb{Z}}\\ v\in W_{2}\end{subarray}}\varphi_{\mathbb{V}}(y^{1/2}v)\ e^{\pi iQ(v,v)x}\\ \Theta_{\mathbb{V}}(\tau)^{\prime\prime}_{\mu}&= \sum_{\begin{subarray}{c}v\in\mu+\mathcal{V}_{\mathbb{Z}}\\ v\notin W_{2}\end{subarray}}\varphi_{\mathbb{V}}(y^{1/2}v)\ e^{\pi iQ(v,v)x}, \end{split} \tag{4.2}\] and similarly we write \(\Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}=\Theta_{\mathbb{V}}^{\circ}(y)^{\prime} _{m,\mu}+\Theta_{\mathbb{V}}^{\circ}(y)^{\prime\prime}_{m,\mu}\). It turns out that the integrability of \(\Theta_{\mathbb{V}}(\tau)^{\prime\prime}_{\mu}\) and \(\Theta_{\mathbb{V}}^{\circ}(y)^{\prime\prime}_{m,\mu}\) is easier to prove: it is a straightforward consequence of the Hodge norm estimates. The integrability of \(\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}\) and \(\Theta_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}\) is more delicate: our proof proceeds by bounding the difference \(\varphi_{\mathbb{V}}(v)-\varphi_{\mathbb{V}^{\mathrm{nilp}}}(v)\) and computing \(\varphi_{\mathbb{V}^{\mathrm{nilp}}}(v)\) explicitly. ### Integrability of \(\Theta_{\mathbb{V}}^{\prime\prime}\) Let us fix a \(\mathbb{Z}\)-PVHS \(\mathbb{V}\) with \(h^{2,0}=1\) over the punctured unit disk \[\Delta^{*}=\{t\in\mathbb{C}\ |\ 0<|t|<1\}\] with unipotent non-trivial monodromy. 
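Before turning to the estimates, it may help to see the Hodge norm growth rates in the explicit nilpotent-orbit formulas of Section 2: writing \(t=e^{2\pi iz}\), so that \(\mathrm{Im}(z)=-\log|t|/(2\pi)\), a flat section \(v\in W_{k}-W_{k-1}\) has \(\|v\|_{t}^{2}\) of order \((-\log|t|)^{k-2}\). A minimal numerical sketch (Python, purely illustrative) of the type III formula (2.33), with made-up values for \(Q(v_{U},v_{U})\), \(a\) and \(b\):

```python
import numpy as np

def norm_sq_type_III(Q_vU, a, b, z):
    """Hodge norm ||v||^2 of v = v_U + a*N e^{2,2} + b*N^2 e^{2,2} along an
    R-split type III nilpotent orbit, as in (2.33)."""
    return Q_vU + a**2 + 2 * ((b - a * z.real) / z.imag) ** 2

# Let t -> 0 along the positive real axis: z = i*y with y = -log|t| / (2*pi).
for t_abs in [1e-2, 1e-4, 1e-8, 1e-16]:
    y = -np.log(t_abs) / (2 * np.pi)
    z = 1j * y
    w0 = norm_sq_type_III(0.0, 0.0, 1.0, z)  # v in W_0: decays like (-log|t|)^{-2}
    w2 = norm_sq_type_III(1.0, 1.0, 0.0, z)  # v in W_2 - W_1: stays bounded
    print(f"|t| = {t_abs:.0e}   ||v_0||^2 = {w0:.3e}   ||v_2||^2 = {w2:.3f}")
```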
We will first establish the integrability of \(\Theta_{\mathbb{V}}(\tau)^{\prime\prime}\) and \(\Theta_{\mathbb{V}}^{\circ}(\tau)^{\prime\prime}_{m}\). For this it suffices to work on a fixed angular sector \[U:=\{t\in\Delta^{*}|\epsilon<\arg(t)<2\pi-\epsilon\}\subset\Delta^{*}. \tag{4.3}\] In order to estimate the size of differential forms on \(\Delta^{*}\), we work with the Poincare metric, defined by declaring that the coframe \(\frac{dt}{t\log|t|}\) and \(\frac{d\overline{t}}{t\log|t|}\) is unitary. In particular, a form \(\alpha\in\Omega^{1,1}(\Delta^{*})\) can be written as \[\alpha=\alpha_{11}\frac{dtd\overline{t}}{|t|^{2}(\log|t|)^{2}}\] for a unique smooth function \(\alpha_{11}\) on \(\Delta^{*}\), and for \(t\in\Delta^{*}\) we set \[|\alpha|_{t}:=|\alpha_{11}(t)|.\] We say that \(\alpha\) is rapidly decreasing if \(|\alpha|_{t}=O(t^{\epsilon})\) for some \(\epsilon>0\) and \(t\) in a given \(U\). Fix a basis \(v_{1},\ldots,v_{n+2}\) adapted to the weight filtration as in 2.2.2 giving a trivialization \(\mathcal{V}|_{U}\simeq\underline{\mathbb{C}}^{n+2}\) and denote by \(h(t)=(h_{ij}(t))\) the matrix of the Hodge metric in this basis: for a flat section \(v=a_{1}v_{1}+\ldots a_{n+2}v_{n+2}\) and \(t\in U\) we have \[\|v\|_{t}^{2}=a^{*}h(t)a=\sum_{i,j}\overline{a_{i}}a_{j}h_{ij}(t). \tag{4.4}\] The basis \(v_{1},\ldots,v_{n+2}\) gives a splitting of the complexified weight filtration: writing \[Y_{k}=\langle v_{1+\dim W_{k-1}},\ldots v_{\dim W_{k}}\rangle\subset V_{ \mathbb{C}}, \tag{4.5}\] we have \(W_{k,\mathbb{C}}=Y_{k}\oplus W_{k-1,\mathbb{C}}\) for all \(k\geq 0\). Following Kollar [13, Definition 5.3.(v)], denote by \[e:\mathcal{V}|_{U}\to\mathcal{V}|_{U}\] the endomorphism of the vector bundle \(\mathcal{V}|_{U}\) that acts on the fiber \(\mathcal{V}_{t}\) by \[v\mapsto(-\log|t|)^{(k-2)/2}v\quad\text{ if }v\in Y_{k}\] and set \[\tilde{h}={}^{t}e^{-1}he^{-1}.\] It follows from the Hodge norm estimates that the entries \(\tilde{h}_{ij}\) of \(\tilde{h}\) and \((\det\tilde{h})^{-1}\) are bounded. More precisely, write \(\mathcal{C}^{\omega}(\Delta)\) for the set of real analytic functions on \(\Delta\) and \(L\) for the set of Laurent polynomials in \((-\log|t|)^{1/2}\) with complex coefficients and define \[B\Delta=\{f\in\mathcal{C}^{\omega}(\Delta)\otimes L\ |\ f\ \text{bounded}\}.\] Then \[\tilde{h}_{ij},\ (\det\tilde{h})^{-1}\in B\Delta \tag{4.6}\] (cf. [13, Prop. 5.4]). Moreover, \(B\Delta\) is closed under the operators \(t\log|t|\frac{d}{dt}\) and \(\overline{t}\log|t|\frac{d}{dt}\). Hence, in the coframe given by \(\frac{dt}{t\log|t|}\) and \(\frac{d\overline{t}}{t\log|t|}\), the forms \[\partial\tilde{h}_{ij},\overline{\partial}\tilde{h}_{ij},\partial\overline{ \partial}\tilde{h}_{ij} \tag{4.7}\] have components that belong to \(B\Delta\); we will refer to forms with this property as nearly bounded (loc. cit., Def. 5.3). Note that the product of two nearly bounded forms is nearly bounded. We can use the bounds (4.6) and (4.7) to give an estimate for the form \(\varphi_{\mathbb{V}}(v)\). **Lemma 4.2**.: _There exists a positive constant \(C\) such that_ \[|\varphi_{\mathbb{V}}(v)|_{t}<Ce^{-\pi\|v\|_{t}^{2}}\left(1+\|v\|_{t}^{2}\right)\] _for any \(t\in U\) and any \(v\in V_{\mathbb{R}}\)._ Proof.: Let \[p_{\mathbb{V}}(v)=\left(-\Omega+ih(s_{v})\theta\wedge\overline{\theta}\right), \quad\theta:=\frac{\partial h(s_{v})}{h(s_{v})}. 
\tag{4.8}\] Since \(\varphi_{\mathbb{V}}(v)=e^{-\pi\|v\|_{t}^{2}}p_{\mathbb{V}}(v)\), it suffices to show that \[|p_{\mathbb{V}}(v)|_{t}<C(1+\|v\|_{t}^{2})\] for \(t\in U\) and \(v\in V_{\mathbb{R}}\). Now the form \(\Omega\) is the first Chern form of the hermitian line bundle \(F^{2}\). It is known to be bounded when the monodromy is unipotent [27, Prop. 1.11]; that is, \(|\Omega|_{t}\) is bounded on \(\Delta^{*}\). To estimate the term \(h(s_{v})\theta\wedge\overline{\theta}\), recall that \[-2\pi i\Omega=\partial\overline{\partial}\log h(s_{v})=\frac{\partial \overline{\partial}h(s_{v})}{h(s_{v})}-\theta\wedge\overline{\theta}.\] Multiplying by \(h(s_{v})\) gives \[h(s_{v})\theta\wedge\overline{\theta}=2\pi ih(s_{v})\Omega+\partial\overline {\partial}h(s_{v}). \tag{4.9}\] Since \(h(s_{v})=2\|v^{2,0}\|_{t}^{2}\leq 2\|v\|_{t}^{2}\), we have \(|h(s_{v})\Omega|_{t}\leq 2\|v\|_{t}^{2}|\Omega|_{t}\), and so it remains to estimate the term \(\partial\overline{\partial}h(s_{v})\). Writing \(v_{t}=\Sigma v_{t}^{p,q}\) for the Hodge decomposition of \(v_{t}\in\mathcal{V}_{t}\), we have \(h(s_{v})=-2Q(v_{t}^{2,0},v_{t}^{0,2})\) and hence \[\|v\|_{t}^{2}=Q(v_{t}^{1,1},v_{t}^{1,1})-2Q(v_{t}^{2,0},v_{t}^{0,2})=Q(v,v)+2h (s_{v}) \tag{4.10}\] and \[\partial\overline{\partial}h(s_{v})=2^{-1}\partial\overline{\partial}\|v\|_{ t}^{2}=2^{-1}\sum\overline{a_{i}}a_{j}\partial\overline{\partial}h_{ij}(t).\] Let us next give a bound for the forms \(\partial\overline{\partial}h_{ij}(t)\). We have \(\tilde{h}_{ij}=e_{ij}^{-1}h_{ij}\), where \(e_{ij}(t)=(-\log|t|)^{a_{ij}/2}\) and \(a_{ij}\) is the integer defined by \(a_{ij}=(m-2)+(l-2)\) if \(v_{i}\in W_{m}-W_{m-1}\) and \(v_{j}\in W_{l}-W_{l-1}\). A direct computation shows that the forms \(e_{ij}^{-1}\partial e_{ij}\), \(e_{ij}^{-1}\overline{\partial}e_{ij}\) and \(e_{ij}^{-1}\partial\overline{\partial}e_{ij}\) are nearly bounded; writing \[\partial\overline{\partial}h_{ij}=\tilde{h}_{ij}\partial\overline{\partial}e _{ij}+\partial\tilde{h}_{ij}\overline{\partial}e_{ij}-\overline{\partial} \tilde{h}_{ij}\partial e_{ij}+e_{ij}\partial\overline{\partial}\tilde{h}_{ij}\] and applying (4.7) shows that \(e_{ij}^{-1}\partial\overline{\partial}h_{ij}\) is nearly bounded too. Thus \[|\partial\overline{\partial}h(s_{v})|_{t}=O(\sum|a_{i}a_{j}|e_{ij}(t)),\] and by [13, Lemma 5.6], we have \[\sum|a_{i}a_{j}|e_{ij}(t)=O(\|v\|_{t}^{2}). \tag{4.11}\] This finishes the proof. The lemma implies the (very coarse) bound \[|\Theta_{\mathbb{V}}(\tau)^{\prime\prime}_{\mu}|_{t} \leq C\sum_{\begin{subarray}{c}v\in\mu+\mathcal{V}_{\mathbb{Z}}\\ v\notin W_{2}\end{subarray}}e^{-\pi y\|v\|_{t}^{2}}(1+y\|v\|_{t}^{2})\] \[\leq C^{\prime}\sum_{\begin{subarray}{c}v\in\mathcal{V}_{ \mathbb{Z}}^{\prime}\\ v\notin W_{2}\end{subarray}}e^{-\pi y\|v\|_{t}^{2}/2}, \tag{4.12}\] for some constant \(C^{\prime}>0\) depending only on \(\mathbb{V}\) and the basis \((v_{i})\). Using this bound we can prove the integrability of \(\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime\prime}\). **Proposition 4.3**.: _For any \(m\) and \(\mu\), the forms \(\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime\prime}\) and \(\Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime\prime}\) are rapidly decreasing as \(t\to 0\), uniformly on any angular sector \(U\). 
We have_ \[\int_{\Delta^{*}}\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime\prime}=\sum_{m}\left( \int_{\Delta^{*}}\Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime\prime}\right) \cdot q^{m}.\] Proof.: Fix a basis \(v_{1},\ldots,v_{n+2}\) of \(\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}|_{U}\) adapted to the weight filtration and denote by \(|\cdot|\) the metric on \(\mathcal{V}\) obtained from the standard metric on \(\mathbb{C}^{n+2}\) via the corresponding trivialization \(\mathcal{V}|_{U}\simeq\underline{\mathbb{C}}^{n+2}\). Define \(Y_{k}\) as in (4.5). For \(v\in\mathcal{V}_{\mathbb{Q}}\), let us write \(v=\sum v_{k}\) with \(v_{k}\in Y_{k}\). It follows from (4.6) that there is a positive constant \(c\), depending only on \(\mathbb{V}\) and the basis \((v_{i})\), such that for all \(v\in\mathcal{V}_{\mathbb{Q}}\) we have \[\|v\|_{t}^{2}>2c\sum_{k}|v_{k}|^{2}(-\log|t|)^{k-2}. \tag{4.13}\] Combined with (4.12), this immediately implies the following bound: let us write \(Y_{k}^{\mathbb{Z}}=Y_{k}\cap\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}\) and let \(l=1\) if \(\mathbb{V}\) is of type II and \(l=0\) if \(\mathbb{V}\) is of type III; then \[W_{2}\cap\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}=Y_{2}^{\mathbb{Z}}\oplus(W_{1} \cap\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}})=Y_{2}^{\mathbb{Z}}\oplus Y_{l}^{ \mathbb{Z}}\] and \[\mathcal{V}_{\mathbb{Z}}^{\mathbb{V}}=Y_{4-l}^{\mathbb{Z}}\oplus(W_{2}\cap \mathcal{V}_{\mathbb{Z}}^{\mathbb{V}})=Y_{4-l}^{\mathbb{Z}}\oplus Y_{2}^{ \mathbb{Z}}\oplus Y_{l}^{\mathbb{Z}}\] and we have \[\begin{split}|\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime\prime}|_{t }&\leq C^{\prime}\sum_{\begin{subarray}{c}v\in\mathcal{V}_{ \mathbb{Z}}^{\mathbb{V}}\\ v\notin W_{2}\end{subarray}}e^{-\pi y\|v\|_{t}^{2}/2}\\ &=C^{\prime}\sum_{\begin{subarray}{c}0\neq u\in Y_{4-l}^{\mathbb{Z }}\\ v\in Y_{2}^{\mathbb{Z}}\\ w\in Y_{l}^{\mathbb{Z}}\end{subarray}}e^{-\pi y\|u+v+w\|_{t}^{2}/2}\\ &<C^{\prime}\sum_{0\neq u\in Y_{4-l}^{\mathbb{Z}}}e^{-\pi cy|u|^{2} (-\log|t|)^{2-l}}\\ &\quad\times\sum_{v\in Y_{2}^{\mathbb{Z}}}e^{-\pi cy|v|^{2}}\\ &\quad\times\sum_{w\in Y_{l}^{\mathbb{Z}}}e^{-\pi cy|w|^{2}(-\log|t| )^{l-2}}.\end{split} \tag{4.14}\] In the last expression, the sum over \(u\) is clearly rapidly decreasing as \(t\to 0\), and the sum over \(v\) is independent of \(t\). It remains to estimate the sum over \(w\). This can be done by Poisson summation: since \(Y_{l}^{\mathbb{Z}}=W_{1}\cap\mathcal{V}_{\mathbb{Z}}^{\vee}\) is a lattice of rank \(l+1\), we have \[\sum_{w\in Y_{l}^{\mathbb{Z}}}e^{-\pi cy|w|^{2}(-\log|t|)^{l-2}}=O\left(-\log|t| \right). \tag{4.15}\] This shows that \(\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime\prime}\) and \(\Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime\prime}\) are rapidly decreasing as \(t\to 0\). The identity in the statement follows by dominated convergence. Note that (4.14) and (4.15) give the bound \[\begin{split}|\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}|_{t}& =O\left(\sum_{v\in\mathcal{V}_{\mathbb{Z}}^{\vee}\cap W_{2}}e^{- \pi y\|v\|_{1}^{2}/2}\right)\\ &=O(-\log|t|).\end{split} \tag{4.16}\] This estimate will be useful later but it is not enough to guarantee the integrability of \(\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}\). ### Reduction to nilpotent orbits Following the strategy outlined at the end of 4.1, we must now consider the integrability of \(\Theta_{\mathbb{V}}(\tau)^{\prime}\) and \(\Theta_{\mathbb{V}}^{\circ}(y)_{m}^{\prime}\). Our next goal is to prove the following Proposition, which shows that it is enough to consider the case where \(\mathbb{V}\) is a nilpotent orbit. 
**Proposition 4.4**.: _Let \(\mathbb{V}\to\Delta^{*}\) be a weight two polarized variation of Hodge structure with \(h^{2,0}=1\). Assume that the monodromy is unipotent and non-trivial and let \(\mathbb{V}^{\mathrm{nilp}}\) be the corresponding nilpotent orbit. For any \(\tau\in\mathbb{H}\) and any \(m\) and \(\mu\), the forms_ \[\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}-\Theta_{\mathbb{V}^{\mathrm{nilp}}} (\tau)_{\mu}^{\prime}\ \ \text{and}\ \ \Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime}-\Theta_{\mathbb{V}^{\mathrm{nilp }}}^{\circ}(y)_{m,\mu}^{\prime}\in\Omega^{1,1}(\Delta^{*})\] _are rapidly decreasing as \(t\to 0\). We have_ \[\int_{\Delta^{*}}\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}-\Theta_{\mathbb{V}^ {\mathrm{nilp}}}(\tau)_{\mu}^{\prime}=\sum_{m}\left(\int_{\Delta^{*}}\Theta_{ \mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime}-\Theta_{\mathbb{V}^{\mathrm{nilp}}}^{ \circ}(y)_{m,\mu}^{\prime}\right)\cdot q^{m}.\] As a first step towards the proof let us establish an estimate for the difference between the Hodge norms \(\|v\|_{\mathcal{V},t}\) and \(\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}\) of a flat section \(v\in V_{\mathbb{R}}\) as \(t\to 0\). **Lemma 4.5**.: _There are positive constants \(A\), \(B\) such that_ \[\left|\|v\|_{\mathcal{V},t}^{2}-\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2} \right|\leq A|t|^{B}\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}\] _for every \(t\in U\) and \(v\in V_{\mathbb{R}}\)._ Proof.: Let \(\pi:\mathbb{H}\to\Delta^{*}\) be the uniformizing map \(z\mapsto t=e^{2\pi iz}\) and let \(\Phi_{1}:\mathbb{H}\to\mathbb{D}\) and \(\Phi_{2}:\mathbb{H}\to\mathbb{D}\) be the period maps induced by \(\pi^{*}\mathbb{V}\) and \(\pi^{*}\mathbb{V}^{\mathrm{nilp}}\) respectively. Let us write \(G_{\mathbb{R}}=\mathrm{SO}(V_{\mathbb{R}},Q)\) and fix \(z_{0}\in\mathbb{D}\). Pick differentiable lifts \[\phi_{1},\phi_{2}:\mathbb{H}\to G_{\mathbb{R}}\] of \(\Phi_{1}\) and \(\Phi_{2}\), i.e. maps \(\phi_{i}\) that satisfy \[\Phi_{i}(z)=\phi_{i}(z)x_{0},\quad z\in\mathbb{H},\quad i=1,2.\] (For example, use the Iwasawa decomposition of \(G_{\mathbb{R}}\).) Writing \(\|\cdot\|_{x}\) for the euclidean metric on \(V_{\mathbb{R}}\) corresponding to \(x\in\mathbb{D}\), we have the equivariance property \(\|gv\|_{gx}=\|v\|_{x}\) for all \(g\in G_{\mathbb{R}}\). Hence \[\|v\|_{\mathcal{V},t}^{2}=\|v\|_{\Phi_{1}(t)}^{2}=\|\phi_{1}(t)^{-1}v\|_{x_{0}} ^{2}.\] and similarly \(\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}=\|\phi_{2}(t)^{-1}v\|_{x_{0}}^{2}\). Thus it suffices to prove that \[|\|\phi_{1}(t)^{-1}\phi_{2}(t)v\|_{x_{0}}^{2}-\|v\|_{x_{0}}^{2}|\leq A|t|^{B}\| v\|_{x_{0}}^{2}\] for all \(v\in V_{\mathbb{R}}\), or equivalently to show that the norm \(\|\phi_{1}(t)^{-1}\phi_{2}(t)\|\) of the operator \(\phi_{1}(t)^{-1}\phi_{2}(t)\in\mathrm{End}(V_{\mathbb{R}})\) satisfies \[|\|\phi_{1}(t)^{-1}\phi_{2}(t)\|^{2}-1|\leq A|t|^{B}.\] This follows directly from Schmid's nilpotent orbit theorem [22, Thm 4.12] (see also p. 244 of loc. cit. for a comparison between the operator norm and Riemannian distance on \(\mathbb{D}\)). Lemma 4.5 leads to the following upper bound for the difference between \(\varphi_{\mathbb{V}}\) and \(\varphi_{\mathbb{V}^{\mathrm{nilp}}}\). **Lemma 4.6**.: _There exist positive constants \(A\), \(B\) and \(C\) such that_ \[|\varphi_{\mathbb{V}}(v)-\varphi_{\mathbb{V}^{\mathrm{nilp}}}(v)|_{t}<C|t|^{B} e^{-\pi(1-A|t|^{B})\|v\|_{\mathcal{V},t}^{2}}(1+\|v\|_{\mathcal{V},t}^{4})\] _for any \(t\in U\) and any \(v\in V_{\mathbb{R}}\)._ Proof.: Let \(p_{\mathbb{V}}\) be as in (4.8). 
The proof of Lemma 4.2 shows that \[|p_{\mathbb{V}}(v)|_{t}=O(1+\|v\|_{\mathcal{V},t}^{2}). \tag{4.17}\] By Lemma 4.5 the same upper bound holds for \(|p_{\mathbb{V}^{\mathrm{nilp}}}(v)|_{t}\). Hence it suffices to establish the bounds \[|e^{-\pi\|v\|_{\mathcal{V},t}^{2}}-e^{-\pi\|v\|_{\mathcal{V}^{ \mathrm{nilp}},t}^{2}}| <C|t|^{B}e^{-\pi(1-A|t|^{B})\|v\|_{\mathcal{V},t}^{2}}\|v\|_{ \mathcal{V},t}^{2} \tag{4.19}\] \[|p_{\mathbb{V}}(v)-p_{\mathbb{V}^{\mathrm{nilp}}}(v)|_{t} <C|t|^{B}(1+\|v\|_{\mathcal{V},t}^{2}). \tag{4.18}\] The first bound is equivalent to \[|e^{-\pi(\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}-\|v\|_{\mathcal{V},t}^{2} )}-1| <C|t|^{B}e^{\pi A|t|^{B}\|v\|_{\mathcal{V},t}^{2}}\|v\|_{\mathcal{V},t}^{2},\] which follows readily from Lemma 4.5 and the inequality \(|e^{x}-1|\leq|x|e^{|x|}\) valid for all real \(x\). As to the second bound, let us use (4.9) and (4.10) to write \(p_{\mathbb{V}}(v)\) as \[p_{\mathbb{V}}(v) =-(1+2\pi h(s_{v}))\Omega_{\mathcal{L}}+i\partial\overline{ \partial}h(s_{v})\] \[=-(1+\pi\|v\|_{\mathcal{V}}^{2}-\pi Q(v,v))\Omega_{\mathcal{L}}+i \partial\overline{\partial}\|v\|_{\mathcal{V}}^{2}/2 \tag{4.20}\] and similarly \[p_{\mathbb{V}^{\mathrm{nilp}}}(v)=-(1+\pi\|v\|_{\mathcal{V}^{ \mathrm{nilp}}}^{2}-\pi Q(v,v))\Omega_{\mathcal{L}^{\mathrm{nilp}}}+i \partial\overline{\partial}\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}/2.\] By Lemma 4.5 and the fact that \(\Omega_{\mathcal{L}}\) and \(\Omega_{\mathcal{L}^{\mathrm{nilp}}}\) are nearly bounded, it suffices to show that \[|\Omega_{\mathcal{L}}-\Omega_{\mathcal{L}^{\mathrm{nilp}}}|_{t}=O(|t|^{B}) \tag{4.21}\] and \[|\partial\overline{\partial}(\|v\|_{\mathcal{V},t}^{2}-\|v\|_{\mathcal{V}^{ \mathrm{nilp}},t}^{2})|_{t}=O(|t|^{B}\|v\|_{\mathcal{V},t}^{2}). \tag{4.22}\] Both bounds follow from (4.6). Namely, let us write \(\|v\|_{\mathcal{V},t}^{2}=a^{*}h_{\mathcal{V}}(t)a\) and \(\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}=a^{*}h_{\mathcal{V}^{\mathrm{nilp}} }(t)a\) as in (4.4) and let \(f_{ij}(t)=e_{ij}^{-1}(h_{\mathcal{V}}(t)_{ij}-h_{\mathcal{V}^{\mathrm{nilp}}}( t)_{ij})\); then \[\|v\|_{\mathcal{V},t}^{2}-\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}=\sum_{i,j }a_{i}a_{j}e_{ij}(t)\cdot f_{ij}(t).\] Let us write \(a^{*}ea=\Sigma_{i,j}a_{i}a_{j}e_{ij}\); then \(|a^{*}e(t)a|=O(\|v\|_{\mathcal{V},t}^{2})\) by (4.11). Since the forms \(e_{ij}^{-1}\partial e_{ij}\), \(e_{ij}^{-1}\overline{\partial}e_{ij}\) and \(e_{ij}^{-1}\partial\overline{\partial}e_{ij}\) are nearly bounded, the expressions \(|\partial(a^{*}ea)|_{t}\), \(|\overline{\partial}(a^{*}ea)|_{t}\) and \(|\partial\overline{\partial}(a^{*}ea)|_{t}\) are all \(O(\|v\|_{\mathcal{V},t}^{2})\). Hence to establish the bound (4.22) it suffices to prove that the expressions \(|f_{ij}(t)|\), \(|\partial f_{ij}|_{t}\), \(|\overline{\partial}f_{ij}|_{t}\) and \(|\partial\overline{\partial}f_{ij}|_{t}\) are all \(O(|t|^{B})\) for some positive constant \(B\). Now Lemma 4.5 gives the bound \[t^{-B}(h_{\mathcal{V}}(t)_{ij}-h_{\mathcal{V}^{\mathrm{nilp}}}(t)_{ij}))\in B\Delta \tag{4.23}\] for some \(B>0\), and so the required bounds on \(f_{ij}\) and its derivatives follow from the fact that \(B\Delta\) is closed under the operators \(t\log|t|\frac{d}{dt}\) and \(\overline{t}\log|t|\frac{d}{dt}\). It remains to prove the bound (4.21). To see this, pick a non-zero element \(v\in W_{1}\); then \(Q(v,v)=0\) and hence \(\|v\|_{\mathcal{V}}^{2}=2h_{\mathcal{V}}(s_{v})\) and \(\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}=2h_{\mathcal{V}^{\mathrm{nilp}}}(s_{v})\) by (4.10). 
We can then write \(-2\pi i\Omega_{\mathcal{L}}=\partial\overline{\partial}\log\|v\|_{\mathcal{V}}^{2}\) and similarly \(-2\pi i\Omega_{\mathcal{L}^{\mathrm{nilp}}}=\partial\overline{\partial}\log\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}\), and so to prove (4.21) it suffices to establish that \[\left|\frac{\partial\overline{\partial}\|v\|_{\mathcal{V}}^{2}}{\|v\|_{\mathcal{V}}^{2}}-\frac{\partial\overline{\partial}\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}}{\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}}\right|_{t}\text{ and }\left|\frac{\partial\|v\|_{\mathcal{V}}^{2}}{\|v\|_{\mathcal{V}}^{2}}-\frac{\partial\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}}{\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}}\right|_{t}\] are of the form \(O(|t|^{B})\) for some \(B>0\). This follows from the Hodge norm estimates \(\|v\|_{\mathcal{V}}^{2}\), \(\|v\|_{\mathcal{V}^{\mathrm{nilp}}}^{2}\sim(-\log|t|)^{k-2}\) (for \(v\in W_{k}-W_{k-1}\)) together with the bounds provided by Lemma 4.5 and (4.22). Proof of Proposition 4.4.: Lemma 4.6 implies that for \(|t|\) small enough we have \[|\varphi_{\mathbb{V}}(v)-\varphi_{\mathbb{V}^{\mathrm{nilp}}}(v)|_{t}<C|t|^{B}e^{-\pi\|v\|_{\mathcal{V},t}^{2}/2}\] for all \(v\in V_{\mathbb{R}}\). Then \[|\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}-\Theta_{\mathbb{V}^{\mathrm{nilp}}}(\tau)^{\prime}_{\mu}|_{t}<C|t|^{B}\sum_{v\in\mathcal{V}^{\vee}_{\mathbb{Z}}\cap W_{2}}e^{-\pi y\|v\|_{\mathcal{V},t}^{2}/2}.\] With the notation of the proof of Proposition 4.3 we have \[\sum_{v\in\mathcal{V}_{\mathbb{Z}}^{\vee}\cap W_{2}}e^{-\pi y\|v\|_{\mathcal{V},t}^{2}/2}<\sum_{v\in Y_{2}^{\mathbb{Z}}}e^{-\pi cy|v|^{2}}\times\sum_{w\in Y_{l}^{\mathbb{Z}}}e^{-\pi cy|w|^{2}(-\log|t|)^{l-2}} \tag{4.24}\] and as argued in that proof the last expression is \(O(-\log|t|)\). This gives \[|\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}-\Theta_{\mathbb{V}^{\mathrm{nilp}}}(\tau)^{\prime}_{\mu}|_{t}=O(-|t|^{B}\log|t|)\] and a similar upper bound for \(|\Theta_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}-\Theta_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)^{\prime}_{m,\mu}|_{t}\). The identity in the statement follows from dominated convergence. It will be convenient to pass from an arbitrary nilpotent orbit \(\mathbb{V}^{\mathrm{nilp}}\) to a special type of nilpotent orbit that we denote by \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\). The special feature of \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\) is that the corresponding limiting mixed Hodge structure splits over \(\mathbb{R}\); one might refer to \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\) as an "\(\mathbb{R}\)-split nilpotent orbit". To a nilpotent orbit \(\mathbb{V}^{\mathrm{nilp}}\) one can canonically attach an \(\mathbb{R}\)-split nilpotent orbit \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\). The Hodge filtration \(\tilde{\mathcal{F}}^{\bullet}\) of \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\) lives in the same complex vector space as the Hodge filtration \(\mathcal{F}^{\bullet}\) of \(\mathbb{V}^{\mathrm{nilp}}\), and both are related by \[\tilde{\mathcal{F}}^{\bullet}=e^{i\delta}\mathcal{F}^{\bullet}\] for a certain element \(\delta\) defined by Deligne (see [6, Prop. 2.20, p. 480]). The two orbits \(\mathbb{V}^{\mathrm{nilp}}\) and \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\) are close in a sense made precise by Schmid's \(\mathrm{SL}_{2}\)-orbit theorem [6, Thm 3.25]. As a result one obtains the following bound for the difference between the forms \(\varphi_{\mathbb{V}^{\mathrm{nilp}}}(v)\) and \(\varphi_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(v)\). 
**Lemma 4.7**.: _There exists a positive constant \(C\) such that_ \[|\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}-\|v\|_{\tilde{\mathbb{V}}^{ \mathrm{nilp}},t}^{2}|\leq C(-\log|t|)^{-1}\|v\|_{\mathcal{V}^{\mathrm{nilp} },t}^{2} \tag{4.25}\] _for all \(t\in U\) and all \(v\in V_{\mathbb{R}}\)._ Proof.: As in the proof of Lemma 4.5, this bound is equivalent to a bound for operator norm of an element \(g_{z}\in G_{\mathbb{R}}\) relating \(\Phi_{\mathbb{V}^{\mathrm{nilp}}}(t)\) and \(\Phi_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(t)\). The relevant bound is proved in [6, pp. 480-481]: in the notation of that paper, the element \[g_{z}=e^{xN}\tilde{g}(y)e^{-xN}\in G_{\mathbb{R}}\] (cf. loc. cit., eq. (3.19)) relates both filtrations, i.e. it satisfies \[\Phi_{\mathbb{V}^{\mathrm{nilp}}}(t)=g_{z}\Phi_{\tilde{\mathbb{V}}^{\mathrm{nilp }}}(t).\] The bound (4.25) is then equivalent to \[\sup_{v\in V_{\mathbb{R}}-0}\left|\frac{\|g_{z}v\|_{\mathcal{V}^{\mathrm{nilp }},t}}{\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}}-1\right|\leq C(-\log|t|)^{-1}. \tag{4.26}\] Schmid's \(\mathrm{SL}_{2}\)-orbit theorem [6, Thm 3.25] shows that \(\tilde{g}(y)\) admits a convergent expansion \[\tilde{g}(y)=\tilde{g}(\infty)(1+\tilde{g}_{1}y^{-1}+\tilde{g}_{2}y^{-2}+ \cdots).\] To prove (4.26) it suffices to establish that \[\sup_{v\in V_{\mathbb{R}}-0}\frac{\|(g_{z}\tilde{g}(\infty)^{-1}-1)v\|_{\mathcal{V }^{\mathrm{nilp}},t}}{\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}}\;\;\text{and}\;\; \sup_{v\in V_{\mathbb{R}}-0}\frac{\|(\tilde{g}(\infty)-1)v\|_{\mathcal{V}^{ \mathrm{nilp}},t}}{\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}}\] are \(O((-\log|t|)^{-1})\). For the first expression this follows directly from the above expansion for \(\tilde{g}(y)\) together with the fact that \(\tilde{g}_{k}\) maps \(W_{n,\mathbb{R}}\) to \(W_{n+k-1,\mathbb{R}}\). For the second expression one uses that \(\tilde{g}(\infty)-1\) maps \(W_{n,\mathbb{R}}\) to \(W_{n-2,\mathbb{R}}\) (cf. [6, Thm. 3.25.(ii) and (iv)]). **Lemma 4.8**.: _There exists a positive constant \(C\) such that_ \[|\varphi_{\mathbb{V}^{\mathrm{nilp}}}(v)-\varphi_{\mathbb{V}^{\mathrm{nilp}}} (v)|_{t}<C(-\log|t|)^{-1}e^{-\frac{\pi}{2}\|v\|_{\mathcal{V}^{\mathrm{nilp}},t} ^{2}}\] _for any \(v\in V_{\mathbb{R}}\) and any \(t\in U\) with \(|t|\) sufficiently small._ Proof.: The proof follows closely that of Lemma 4.6, replacing the use of Lemma 4.5 by (4.25). The needed bounds \[|\Omega_{\mathcal{L}^{\mathrm{nilp}}}-\Omega_{\tilde{\mathcal{L}}^{\mathrm{ nilp}}}|_{t}=O((-\log|t|)^{-1})\] and \[|\partial\overline{\partial}(\|v\|_{\mathcal{V}^{\mathrm{nilp}},t}^{2}-\|v\|_ {\tilde{\mathcal{V}}^{\mathrm{nilp}},t}^{2})|_{t}=O((-\log|t|)^{-1}\|v\|_{ \mathcal{V}^{\mathrm{nilp}},t}^{2})\] follow in the same way as (4.21) and (4.22) using the fact that the subspace \[(-\log|t|)^{-1}B\Delta\subset B\Delta\] is stable under the operators \(t\log|t|\frac{d}{dt}\) and \(\overline{t}\log|t|\frac{d}{dt}\). 
Combined with the estimate (4.16), the lemma implies the bound \[\begin{split}|\Theta_{\mathbb{V}^{\mathrm{nilp}}}(\tau)^{\prime} _{\mu}-\Theta_{\bar{\mathbb{V}}^{\mathrm{nilp}}}(\tau)^{\prime}_{\mu}|_{t}& \leq\sum_{v\in\mathcal{V}^{\vee}_{\mathbb{Z}}\cap W_{2}}|\varphi _{\mathbb{V}^{\mathrm{nilp}}}(v)-\varphi_{\bar{\mathbb{V}}^{\mathrm{nilp}}}( v)|_{t}\\ &\leq C(-\log|t|)^{-1}\left(\sum_{v\in\mathcal{V}^{\vee}_{ \mathbb{Z}}\cap W_{2}}e^{-\frac{\pi}{2}\|v\|_{\mathcal{V}^{\mathrm{nilp}},t} ^{2}}\right)\\ &=O(1),\end{split} \tag{4.27}\] and similarly that \(|\Theta_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)^{\prime}_{m,\mu}-\Theta_{ \bar{\mathbb{V}}^{\mathrm{nilp}}}^{\circ}(y)^{\prime}_{m,\mu}|_{t}\) is also bounded for all \(m\) and \(\mu\). It follows that to prove Theorem 4.1 it suffices to show that \(\Theta_{\mathbb{V}^{\mathrm{nilp}}}(\tau)^{\prime}_{\mu}\) and \(\Theta_{\bar{\mathbb{V}}^{\mathrm{nilp}}}^{\circ}(y)^{\prime}_{m,\mu}\) are integrable over \(\Delta^{*}\) for all \(m\) and \(\mu\) and satisfy \[\int_{\Delta^{*}}\Theta_{\bar{\mathbb{V}}^{\mathrm{nilp}}}(\tau)^{\prime}_{\mu} =\sum_{m}\left(\int_{\Delta^{*}}\Theta_{\bar{\mathbb{V}}^{\mathrm{nilp}}}^{ \circ}(y)^{\prime}_{m,\mu}\right)\cdot q^{m}. \tag{4.28}\] We will prove (4.28) in the next two sections by distinguishing the nilpotent orbits of types II and III, using the explicit nature of \(\varphi_{\bar{\mathbb{V}}^{\mathrm{nilp}}}(v)\) in each case. ### Integrability for type II nilpotent orbits #### 4.4.1. Let us first determine explicitly the form \(\varphi_{\mathbb{V}}(v)\) corresponding to a type II nilpotent orbit \(\mathbb{V}=\tilde{\mathbb{V}}^{\mathrm{nilp}}\). The setting is that of Section 2.3; in particular, we assume that the associated limiting mixed Hodge structure is \(\mathbb{R}\)-split. For a vector \(v\in W_{2,\mathbb{R}}\), we can write \(v=v_{2}+ae^{1,0}+\overline{a}e^{0,1}\) with \(v_{2}\in V_{2}\), and by (2.22) we have \[\varphi_{\mathbb{V}}(v)=e^{-\pi Q(v_{2},v_{2})}\varphi_{\mathbb{V}}(ae^{1,0}+ \overline{a}e^{0,1}) \tag{4.29}\] and \[\|ae^{1,0}+\overline{a}e^{0,1}\|_{z}^{2}=2|a|^{2}/\mathrm{Im}(z). \tag{4.30}\] For \(v\in W_{1}\), we have \[Q(v,v)=0\Rightarrow\|v_{z}^{1,1}\|_{z}^{2}=2\|v_{z}^{2,0}\|_{z}^{2}\] and hence \[h(s_{v})=2\|v^{2,0}\|_{z}^{2}=(\|v^{1,1}\|_{z}^{2}+2\|v^{2,0}\|_{z}^{2})/2=\|v \|_{z}^{2}/2.\] For \(v\in W_{1}\), we conclude using (2.17) that \[\theta =\frac{\partial h(s_{v})}{h(s_{v})}=\frac{\partial\|v\|_{z}^{2}}{ \|v\|_{z}^{2}}=-\frac{\partial\mathrm{Im}(z)}{\mathrm{Im}(z)}=\frac{idz}{2 \mathrm{Im}(z)}\] \[\theta\wedge\overline{\theta} =\frac{dz\wedge d\overline{z}}{4\mathrm{Im}(z)^{2}}=-2\pi i\Omega\] and hence \[\varphi_{\mathbb{V}}(v) =e^{-\pi\|v\|_{z}^{2}}(-\Omega+ih(s_{v})\theta\wedge\overline{ \theta})\] \[=e^{-\pi\|v\|_{z}^{2}}(\pi\|v\|_{z}^{2}-1)\Omega \tag{4.31}\] For \(v=ae^{1,0}+\overline{a}e^{0,1}\), by (4.30) we obtain \[\varphi_{\mathbb{V}}(ae^{1,0}+\overline{a}e^{0,1})=\phi\left(\frac{a}{( \mathrm{Im}(z)/2)^{1/2}}\right)\Omega, \tag{4.32}\] where \(\phi:\mathbb{C}\to\mathbb{R}\) is the Schwartz form defined by \[\phi(a)=e^{-\pi|a|^{2}}(\pi|a|^{2}-1). \tag{4.33}\] Let us write \(\mathcal{F}\phi\) for the Fourier transform of \(\phi\). In order to estimate \(\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}\) we will need to compute \(\mathcal{F}\phi(0)\). 
Using polar coordinates we find \[\mathcal{F}\phi(0) =\int_{\mathbb{C}}\phi(a)da\] \[=\int_{0}^{\infty}\int_{0}^{2\pi}\phi(r\cos\theta,r\sin\theta)rdrd\theta\] \[=2\pi\int_{0}^{\infty}e^{-\pi r^{2}}(\pi r^{2}-1)rdr\] \[=2\pi\left(-e^{-\pi r^{2}}\frac{r^{2}}{2}\right|_{0}^{\infty}=0. \tag{4.34}\] #### 4.4.2. Using the above description of \(\varphi_{\mathbb{V}}\) we can compute \(\Theta_{\mathbb{V}}(\tau)^{\prime}\) explicitly. We write \(W_{j}^{\mathbb{Z}}=V_{\mathbb{Z}}\cap W_{j}\) and obtain a filtration \[0=W_{0}^{\mathbb{Z}}\subset W_{1}^{\mathbb{Z}}\subset W_{2}^{\mathbb{Z}} \subset W_{3}^{\mathbb{Z}}=V_{\mathbb{Z}}\] of the local system \(V_{\mathbb{Z}}\). The associated quotients \[\operatorname{Gr}_{j}^{W}V_{\mathbb{Z}}:=W_{j}^{\mathbb{Z}}/W_{j-1}^{\mathbb{ Z}},\quad j=1,2,3,\] are local systems of free abelian groups of ranks \(2\), \(n-2\) and \(2\) respectively. Note that if \(\mu\notin W_{2}+V_{\mathbb{Z}}\), then \((\mu+V_{\mathbb{Z}})\cap W_{2}=\emptyset\) and \(\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}=0\). Recall the Deligne splitting introduced in (2.3): we have \[W_{1}\otimes\mathbb{C}=I^{1,0}\oplus I^{0,1},\quad W_{2}\otimes\mathbb{C}=I^{ 1,0}\oplus I^{0,1}\oplus I^{1,1}.\] Since \(I^{1,1}\) is stable under complex conjugation, this induces a splitting of the filtration \(W_{1}\otimes\mathbb{R}\subset W_{2}\otimes\mathbb{R}\): \[W_{2}\otimes\mathbb{R}=W_{1}\otimes\mathbb{R}\oplus(I^{1,1}\cap V_{\mathbb{R}}).\] We denote by \(\pi_{1}:W_{2}\otimes\mathbb{R}\to W_{1}\otimes\mathbb{R}\) and \(\pi_{2}:W_{2}\otimes\mathbb{R}\to(I^{1,1}\cap V_{\mathbb{R}})\) the resulting projections. By (4.29), for \(v\in W_{2}^{\mathbb{Z}}\) we have \[\varphi_{\mathbb{V}}(v)=e^{-\pi Q(\pi_{2}(v),\pi_{2}(v))}\varphi_{\mathbb{V}} (\pi_{1}(v)). \tag{4.35}\] Since \(Q(W_{1},W_{2})=0\), we can rewrite this as \[\varphi_{\mathbb{V}}(v)=e^{-\pi Q(v,v)}\varphi_{\mathbb{V}}(\pi_{1}(v)). \tag{4.36}\] For the theta series \(\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}\) with \(\mu\in W_{2}+V_{\mathbb{Z}}\) we have \((\mu+V_{\mathbb{Z}})\cap W_{2}=\mu+W_{2}^{\mathbb{Z}}\) and hence \[\begin{split}\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}& =\sum_{v\in\mu+W_{2}^{\mathbb{Z}}}\varphi_{\mathbb{V}}(y^{1/2}v)e ^{\pi iQ(v,v)x}\\ &=\sum_{v\in(\mu+W_{2}^{\mathbb{Z}})/W_{1}^{\mathbb{Z}}}q^{Q(v,v )/2}\sum_{v_{1}\in W_{1}^{\mathbb{Z}}}\varphi_{\mathbb{V}}(y^{1/2}(v_{1}+\pi_ {1}(v))).\end{split} \tag{4.37}\] We will now use Poisson summation to give an upper bound for the inner sum. Let us fix flat sections \(\lambda_{1}\), \(\lambda_{2}\) of \(W_{1}^{\mathbb{Z}}\) giving a trivialization \[\Phi:\underline{\mathbb{Z}}^{2}\stackrel{{\sim}}{{\to}}W_{1}^{ \mathbb{Z}},\quad\Phi((a_{1},a_{2}))=a_{1}\lambda_{1}+a_{2}\lambda_{2}. \tag{4.38}\] Write \[\lambda_{i}=\alpha_{i}e^{1,0}+\overline{\alpha_{i}}e^{0,1},\quad i=1,2, \tag{4.39}\] for some complex numbers \(\alpha_{1}\), \(\alpha_{2}\). By (2.22), for a vector \(a=(a_{1},a_{2})\in\mathbb{R}^{2}\), the Hodge metric of \(\Phi(a)\) is given by \[\|\Phi(a)\|_{z}=2|a_{1}\alpha_{1}+a_{2}\alpha_{2}|^{2}/\mathrm{Im}(z),\] Let us define a Schwartz function \(\tilde{\phi}:\mathbb{R}^{2}\to\mathbb{C}\) by \[\tilde{\phi}(x_{1},x_{2})=\phi\left(x_{1}\alpha_{1}+x_{2}\alpha_{2}\right).\] Writing \(\pi_{1}(v)=\Phi(a)\) for some \(a\in\mathbb{R}^{2}\) and using (4.32) and (4.33) we have \[\sum_{v_{1}\in W_{1}^{\mathbb{Z}}}\varphi_{\mathbb{V}}(y^{1/2}(v_{1}+\pi_{1}(v)) )=\left(\sum_{n\in\mathbb{Z}^{2}}\tilde{\phi}\left(\frac{a+n}{\sqrt{\operatorname {Im}(z)/2y}}\right)\right)\Omega. 
\tag{4.40}\] Writing \(\mathcal{F}\tilde{\phi}\) for the Fourier transform of \(\tilde{\phi}\), an application of Poisson summation gives \[\sum_{n\in\mathbb{Z}^{2}}\tilde{\phi}\left(\frac{a+n}{\sqrt{ \operatorname{Im}(z)/2y}}\right)=\frac{\operatorname{Im}(z)}{2y}\sum_{m\in \mathbb{Z}^{2}}e^{2\pi im\cdot a}\mathcal{F}\tilde{\phi}(\sqrt{\operatorname {Im}(z)/2y}\cdot m). \tag{4.41}\] By (4.34), the term corresponding to \(m=0\) vanishes, and we obtain the upper bound, uniform in \(a\), \[\left|\sum_{v_{1}\in W_{1}^{\mathbb{Z}}}\varphi_{\mathbb{V}}(\sqrt{y}(v_{1}+ \pi_{1}(v)))\right|_{t}\leq\frac{\operatorname{Im}(z)}{2y}\sum_{m\in\mathbb{Z }^{2}-0}|\mathcal{F}\tilde{\phi}(\sqrt{\operatorname{Im}(z)/2y}\cdot m)|\cdot| \Omega|_{t} \tag{4.42}\] This expression decreases rapidly as \(\operatorname{Im}(z)\to\infty\), which implies the integrability of \(\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}\). Similarly, using \(Q(W_{1},W_{2})=0\), we may write \[\begin{split}\Theta_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}& =\sum_{\begin{subarray}{c}v\in\mu+W_{2}^{\mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\varphi_{\mathbb{V}}^{\circ}(y^{1/2}v)\\ &=\sum_{\begin{subarray}{c}v\in(\mu+W_{2}^{\mathbb{Z}})/W_{1}^{ \mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\sum_{v_{1}\in W_{1}^{\mathbb{Z}}}\varphi_{\mathbb{V}} ^{\circ}(y^{1/2}(v_{1}+\pi_{1}(v))).\end{split} \tag{4.43}\] Note that the outer sum is finite since \(Q\) polarizes the pure Hodge structure \(\operatorname{Gr}_{2}^{W}\mathbb{V}\), which is purely of type \((1,1)\). The above bound establishes that \(\Theta_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}\) is integrable for all \(m\) and implies the identity \[\int_{\Delta^{*}}\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}=\sum_{m}\left(\int_ {\Delta^{*}}\Theta_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}\right)\cdot q^{m}\] by dominated convergence. ### Integrability for type III nilpotent orbits We now determine the form \(\varphi_{\mathbb{V}}(v)\) and theta series \(\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}\) for a type III nilpotent orbit \(\mathbb{V}=\tilde{\mathbb{V}}^{\operatorname{nilp}}\). We will work in the setting of Section 2.4; in particular, we assume that the associated limiting mixed Hodge structure is \(\mathbb{R}\)-split. #### 4.5.1. Let \(v\in W_{2,\mathbb{R}}\). As in (2.30), we may write \(v=v_{U}+aNe^{2,2}+bN^{2}e^{2,2}\) with \(v_{U}\in U\) and real numbers \(a\) and \(b\). By (2.32) and (2.33), we have \[\varphi_{\mathbb{V}}(v)=e^{-\pi Q(v_{U},v_{U})}\varphi_{\mathbb{V}}(aNe^{2,2} +bN^{2}e^{2,2}). \tag{4.44}\] Differentiating \(h(s_{v})=|b-az|^{2}/\mathrm{Im}(z)^{2}\) gives \[\theta(s_{v})=\frac{\partial h(s_{v})}{h(s_{v})}=\left(-\frac{a}{b-az}+\frac{i}{ \mathrm{Im}(z)}\right)dz\] and hence \[\theta\wedge\overline{\theta}=\left|\frac{b-a\mathrm{Re}(z)}{(b-az)\mathrm{Im }(z)}\right|^{2}dz\wedge d\overline{z}.\] Using (2.27) we obtain \[\begin{split} ih(s_{v})\theta\wedge\overline{\theta}& =\frac{|b-a\mathrm{Re}(z)|^{2}}{\mathrm{Im}(z)^{2}}\cdot\frac{idz \wedge d\overline{z}}{\mathrm{Im}(z)^{2}}\\ &=\left(\frac{b-a\mathrm{Re}(z)}{\mathrm{Im}(z)}\right)^{2}4\pi \Omega.\end{split} \tag{4.45}\] For \(v=aNe^{2,2}+bN^{2}e^{2,2}\) we obtain the explicit formula \[\varphi_{\mathbb{V}}(aNe^{2,2}+bN^{2}e^{2,2})=e^{-\pi a^{2}}\phi\left(\frac{b -a\mathrm{Re}(z)}{\mathrm{Im}(z)}\right)\Omega, \tag{4.46}\] where \(\phi:\mathbb{R}\to\mathbb{R}\) is the Schwartz function defined by \[\phi(b)=e^{-2\pi b^{2}}(4\pi b^{2}-1). 
\tag{4.47}\] The most important consequence of this explicit description of \(\phi\) is that its Fourier transform \(\mathcal{F}\phi\) satisfies \[\mathcal{F}\phi(0)=0, \tag{4.48}\] as follows from the identity \(\frac{d}{db}(-be^{-2\pi b^{2}})=e^{-2\pi b^{2}}(4\pi b^{2}-1)\). #### 4.5.2. Let us now compute \(\Theta_{\mathbb{V}}(\tau)^{\prime}\) explicitly using the above description of \(\varphi_{\mathbb{V}}\). As in 4.4, we define \(W_{j}^{\mathbb{Z}}=W_{j}\cap V_{\mathbb{Z}}\) and obtain a filtration \[0\subset W_{0}^{\mathbb{Z}}=W_{1}^{\mathbb{Z}}\subset W_{2}^{\mathbb{Z}}=W_{3 }^{\mathbb{Z}}\subset W_{4}^{\mathbb{Z}}=V_{\mathbb{Z}}\] whose associated quotients \(\mathrm{Gr}_{k}^{W}V_{\mathbb{Z}}:=W_{k}^{\mathbb{Z}}/W_{k-1}^{\mathbb{Z}}\) are free abelian groups (of rank one in the case of \(\mathrm{Gr}_{0}^{W}V_{\mathbb{Z}}\) and \(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}\)). Let us define \[W_{2,\mathrm{prim}}^{\mathbb{Z}}=W_{2}^{\mathbb{Z}}\cap\ker N.\] Fix a generator \(v_{0}\) of \(W_{0}^{\mathbb{Z}}\) and vectors \(v_{1},\dots,v_{n-1}\) of \(W_{2,\mathrm{prim}}^{\mathbb{Z}}\) such that \[W_{2,\mathrm{prim}}^{\mathbb{Z}}=\langle v_{0},v_{1},\dots,v_{n-1}\rangle,\] i.e. so that \(Y_{2,\mathrm{prim}}^{\mathbb{Z}}:=\langle v_{1},\dots,v_{n-1}\rangle\) is a complement of \(W_{0}^{\mathbb{Z}}\) in \(W_{2,\mathrm{prim}}^{\mathbb{Z}}\). We also fix a vector \(v_{n}^{\prime}\in V_{\mathbb{Z}}\) mapping to a generator of \(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}\) and a vector \(v_{n}\in W_{2}^{\mathbb{Z}}\) such that \(v_{n}\equiv Nv_{n}^{\prime}\mod W_{0}\); then the image of \(v_{n}\) in \(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}\) generates the rank one lattice \[N(\mathrm{Gr}_{4}^{W}V_{\mathbb{Z}})=\mathrm{im}(N:\mathrm{Gr}_{4}^{W}V_{ \mathbb{Z}}\to\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}})\] (cf. Lemma 3.2.(ii)). Define \[Y_{2}^{\mathbb{Z}}:=\langle v_{1},\dots,v_{n}\rangle=Y_{2,\mathrm{prim}}^{ \mathbb{Z}}\oplus\langle v_{n}\rangle\] (orthogonal sum). Then \(W_{0}^{\mathbb{Z}}+Y_{2}^{\mathbb{Z}}\) is a (finite index) sublattice of \(W_{2}^{\mathbb{Z}}\) and the quotient map \(W_{2}\to\mathrm{Gr}_{2}^{W}V\) induces an isometry \[Y_{2}^{\mathbb{Z}}\simeq\mathrm{Gr}_{2,\mathrm{prim}}^{W}V_{\mathbb{Z}}\oplus N( \mathrm{Gr}_{4}^{W}V_{\mathbb{Z}}) \tag{4.49}\] onto a (finite-index) sublattice of \(\mathrm{Gr}_{2}^{W}V_{\mathbb{Z}}\). With the notation of 3.3.2 we may then write \(\mu=\mu_{2}+\mu_{0}\) with \(\mu_{2}\in Y_{2}^{\mathbb{Z}}\otimes\mathbb{Q}\) and \(\mu_{0}\in W_{0}\) and \[\Theta_{\mathbb{V}}(\tau)^{\prime}_{\mu}=\sum_{\begin{subarray}{c}N\lambda+ \nu\equiv\mu_{2}\\ \mod W_{2}^{\mathbb{Z}}\end{subarray}}\Theta_{\mathbb{V}}(\tau)^{\prime}_{ \lambda\otimes\nu} \tag{4.50}\] with \[\Theta_{\mathbb{V}}(\tau)^{\prime}_{\lambda\otimes\nu}:=\sum_{ \begin{subarray}{c}v^{\prime}\in N\lambda+\langle v_{n}\rangle\\ v\in\nu+Y_{2,\mathrm{prim}}^{\mathbb{Z}}\\ w\in\mu_{0}+W_{0}^{\mathbb{Z}}\end{subarray}}\varphi_{\mathbb{V}}(y^{1/2}(v^ {\prime}+v+w))e^{\pi ix(Q(v^{\prime},v^{\prime})+Q(v,v))}. \tag{4.51}\] To estimate this sum, we use (2.30) to write \[v^{\prime} =a(v^{\prime})Ne^{2,2}+b(v^{\prime})N^{2}e^{2,2}\] \[v =\pi_{U}(v)+b(v)N^{2}e^{2,2}\] \[w =b(w)N^{2}e^{2,2}, \tag{4.52}\] where \(\pi_{U}:W_{2,\mathbb{R}}\to U\) is the projection to \(U\) and \(a,b\) are linear functionals on \(W_{2,\mathbb{R}}\). 
Since \(W_{0}\) is anisotropic and \(Q(W_{0},W_{2})=0\), we have \(Q(\pi_{U}(v),\pi_{U}(v))=Q(v,v)\) and \(Q(v^{\prime},v^{\prime})=-a(v^{\prime})^{2}\) and hence, by (4.44) and (4.46), \[\varphi_{\mathbb{V}}(v^{\prime}+v+w) =e^{-\pi Q(v,v)}\] \[\quad\times\varphi_{\mathbb{V}}(a(v^{\prime})Ne^{2,2}+b(v+v^{ \prime})N^{2}e^{2,2}+w).\] \[=e^{-\pi(Q(v,v)-Q(v^{\prime},v^{\prime}))}\] \[\quad\times\phi\left(\frac{b(v+v^{\prime}+w)-a(v^{\prime})\mathrm{ Re}(z)}{\mathrm{Im}(z)}\right)\Omega, \tag{4.53}\] with \(\phi\) given by (4.47). For the theta series \(\Theta_{\mathbb{V}}(\tau)^{\prime}_{\lambda\otimes\nu}\) this gives \[\Theta_{\mathbb{V}}(\tau)^{\prime}_{\lambda\otimes\nu} =\sum_{\begin{subarray}{c}v^{\prime}\in N\lambda+\langle v_{n} \rangle\\ v\in\nu+Y_{2,\mathrm{prim}}^{\mathbb{Z}}\\ \end{subarray}}q^{Q(v,v)/2}\overline{q}^{(-Q(v^{\prime},v^{\prime})/2)}\] \[\quad\times\sum_{w\in\mu_{0}+W_{0}^{\mathbb{Z}}}\phi\left(\sqrt{y }\frac{b(v+v^{\prime}+w)-a(v^{\prime})\mathrm{Re}(z)}{\mathrm{Im}(z)}\right)\Omega. \tag{4.54}\] To estimate the sum over \(\mu_{0}+W_{0}^{\mathbb{Z}}=\mathbb{Z}v_{0}\) we apply Poisson summation: writing \(\mathcal{F}\phi\) for the Fourier transform of \(\phi\) and \[A=\frac{b(v+v^{\prime}+\mu_{0})-a(v^{\prime})\mathrm{Re}(z)}{\mathrm{Im}(z)/ \sqrt{y}},\] we have \[\sum_{n\in\mathbb{Z}}\phi\left(A+\frac{nb(v_{0})}{\operatorname{Im}(z)/\sqrt{y}} \right)=\frac{\operatorname{Im}(z)}{|b(v_{0})|\sqrt{y}}\sum_{m\in\mathbb{Z}}e^{2 \pi imA}\mathcal{F}\phi\left(\frac{\operatorname{Im}(z)}{b(v_{0})\sqrt{y}}m \right).\] The term corresponding to \(m=0\) vanishes by (4.48); this gives the upper bound, uniform in \(A\), \[\left|\sum_{w\in W_{0}^{\mathbb{Z}}}\phi\left(\sqrt{y}\frac{b(v+v ^{\prime}+w)-a(v^{\prime})\text{Re}(z)}{\operatorname{Im}(z)}\right)\right|_{t}\] \[\leq\frac{\operatorname{Im}(z)}{|b(v_{0})|\sqrt{y}}\sum_{m\in \mathbb{Z}-0}\left|\mathcal{F}\phi\left(\frac{\operatorname{Im}(z)}{b(v_{0}) \sqrt{y}}m\right)\right||\Omega|_{t} \tag{4.55}\] The right hand side is rapidly decreasing as \(\operatorname{Im}(z)\to\infty\). This implies the integrability of \(\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}\). Similarly, using \(Q(W_{0},W_{2})=0\), we have \(\Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime}=\sum_{\begin{subarray}{c}N \lambda+\nu\equiv\mu_{2}\\ \text{mod }W_{2}^{\mathbb{Z}}\end{subarray}}\Theta_{\mathbb{V}}^{\circ}(y)_{m,\lambda\otimes\nu}^{\prime}\) with \[\Theta_{\mathbb{V}}^{\circ}(y)_{m,\lambda\otimes\nu}^{\prime}\cdot q ^{m}=\sum_{\begin{subarray}{c}v^{\prime}\in N\lambda+\langle v_{n}\rangle\\ v\in\nu+V_{2,\text{prim}}^{\prime}\\ Q(v+v^{\prime},v+v^{\prime})=2m\end{subarray}}q^{Q(v,v)/2}\overline{q}^{(-Q(v ^{\prime},v^{\prime})/2)}\\ \times\sum_{w\in\mu_{0}+W_{0}^{\mathbb{Z}}}\phi\left(\sqrt{y} \frac{b(v+v^{\prime}+w)-a(v^{\prime})\text{Re}(z)}{\operatorname{Im}(z)} \right)\Omega. \tag{4.56}\] The bound (4.55) and the fact that \(Q\) is positive definite on \(Y_{2,\text{prim}}^{\mathbb{Z}}\) and negative definite on \(N(\operatorname{Gr}_{4}^{W}V_{\mathbb{Z}})\) show that \(\Theta_{\mathbb{V}}^{\circ}(y)_{m,\lambda\otimes\nu}^{\prime}\) is integrable for all \(m\). The identity \[\int_{\Delta^{*}}\Theta_{\mathbb{V}}(\tau)_{\mu}^{\prime}=\sum_{m}\left(\int_{ \Delta^{*}}\Theta_{\mathbb{V}}^{\circ}(y)_{m,\mu}^{\prime}\right)\cdot q^{m}\] follows by dominated convergence. ## 5. 
Generating series of Noether-Lefschetz numbers The goal of this section is to determine the Fourier expansion of the non-holomorphic modular forms \(Z_{\mathbb{V}}(\tau)_{\mu}\) in Theorem 4.1. We will see that their Fourier coefficients can be expressed in terms of the degrees of the Noether-Lefschetz loci \(\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\) defined below and some discrete invariants of the limiting mixed Hodge structures arising from the degeneration of \(\mathbb{V}\) around each point \(P\) in \(\overline{S}-S\). More precisely, to \(\mathbb{V}\) one can attach the \(q\)-series \[Z^{+}_{\mathbb{V}}(\tau)_{\mu}:=-\deg(\overline{\mathcal{L}})\delta_{\mu,0}+\sum _{m>0}\deg\,\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\cdot q^{m},\quad q=e^{2\pi i\tau}\] as well as theta series \(Z^{-}_{\mathbb{V},P}(\tau)\) for each \(P\in\overline{S}\!\setminus\!S\) (see (3.17) and (3.24)). We will prove the following theorem which implies Theorem 1.2. **Theorem 5.1**.: _Assume that \(\mathbb{V}\) satisfies 1.1. For all \(\tau\in\mathbb{H}\) and \(\mu\in\mathcal{V}^{\vee}_{\mathbb{Z}}/\mathcal{V}_{\mathbb{Z}}\),_ \[Z_{\mathbb{V}}(\tau)_{\mu}=Z^{+}_{\mathbb{V}}(\tau)_{\mu}+\sum_{P\in\overline{ S}\!\setminus\!S}Z^{-}_{\mathbb{V},P}(\tau)_{\mu}.\] The proof proceeds by checking that both sides have the same Fourier coefficients. That is, let \[Z^{-}_{\mathbb{V},P}(\tau)_{\mu}=\sum_{m}Z^{-}_{\mathbb{V},P}(y)_{m,\mu}\cdot q ^{m}\] be the Fourier expansion of \(Z^{-}_{\mathbb{V},P}(\tau)_{\mu}\) and write similarly \[Z^{+}_{\mathbb{V}}(\tau)_{m,\mu}=\left\{\begin{array}{cc}\deg\, \operatorname{NL}_{\mathbb{V}}(m)_{\mu}&\text{if $m>0$,}\\ -\deg(\overline{\mathcal{L}}),&\text{if $(m,\mu)=(0,0)$,}\\ 0,&\text{otherwise,}\end{array}\right.\] for the Fourier coefficients of \(Z^{+}_{\mathbb{V}}(\tau)_{\mu}\). Theorem 5.1 is then equivalent to the identity \[\int_{S}\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}=Z^{+}_{\mathbb{V}}(\tau)_{m, \mu}+\sum_{P\in\overline{S}\!\setminus\!S}Z^{-}_{\mathbb{V},P}(y)_{m,\mu} \tag{5.1}\] for all \(m\) and \(\mu\). ### Kudla-Millson forms and Noether-Lefschetz loci The main input needed to prove Theorem 5.1 is the computation of the residues at the boundary of certain Green functions \(\mathfrak{g}^{\circ}(y)_{m,\mu}\) for the Noether-Lefschetz loci obtained by pulling back Green functions for special divisors on orthogonal Shimura varieties. The latter Green functions were introduced by Kudla in [15]. Let us briefly recall their definition. Consider the Kudla-Millson theta series \[\Theta_{\operatorname{KM}}(\tau)_{\mu}=\sum_{v\in\mu+V_{\mathbb{Z}}}\varphi_{ \operatorname{KM}}(y^{1/2}v)e^{\pi ixQ(v,v)}\] and let us write \[\Theta_{\operatorname{KM}}(\tau)_{\mu}=\sum_{m\in\frac{1}{2}Q(\mu,\mu)+ \mathbb{Z}}\Theta^{\circ}_{\operatorname{KM}}(y)_{m,\mu}\cdot q^{m}\] for its Fourier expansion. One of the main properties of \(\Theta_{\operatorname{KM}}(\tau)_{\mu}\) is that it defines a closed differential form and its Fourier coefficients \(\Theta^{\circ}_{\operatorname{KM}}(y)_{m,\mu}\) are Poincare dual to a certain special divisor \(Z(m,\mu)\) (see [14]) whose intersection with \(S\) gives the Noether-Lefschetz locus \(\operatorname{NL}_{\mathbb{V}}(m)_{\mu}\). In [15], Kudla introduced a Green function \(\mathfrak{g}^{\circ}(y)_{m,\mu}\) for the special divisor \(Z(m,\mu)\), i.e. 
a smooth function on \(\Gamma\backslash\mathbb{D}-|Z(m,\mu)|\) satisfying Green's equation \[\operatorname{dd}^{\mathrm{c}}[\mathfrak{g}^{\circ}(y)_{m,\mu}]+\delta_{Z(m, \mu)}=[\Theta^{\circ}_{\mathrm{KM}}(y)_{m,\mu}-\varphi_{\mathrm{KM}}(0)\delta _{(m,\mu)=(0,0)}]. \tag{5.2}\] Here \(\mathrm{d}^{\mathrm{c}}=(4\pi i)^{-1}(\partial-\overline{\partial})\), so that \(\operatorname{dd}^{\mathrm{c}}=-(2\pi i)^{-1}\partial\overline{\partial}\), and the term \(\delta_{(m,\mu)=(0,0)}\) equals one if \((m,\mu)=(0,0)\) and vanishes otherwise. Pulling back \(\mathfrak{g}^{\circ}(y)_{m,\mu}\) by the period map \(\Phi_{\mathbb{V}}\) associated with \(\mathbb{V}\), we obtain a function \(\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}\) whose main properties are summarized in the following Proposition. **Proposition 5.2**.: _For \(v\) a local section of \(\mathcal{V}_{\mathbb{R}}\), define_ \[\nu^{\circ}_{\mathbb{V}}(v)=e^{-2\pi h(s_{v})} \tag{5.3}\] _and \(\nu_{\mathbb{V}}(v)=e^{-\pi Q(v,v)}\nu^{\circ}_{\mathbb{V}}(v)\). Then_ \[\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}:=\int_{1}^{\infty}\left(\sum_{ \begin{subarray}{c}0\neq v\in\mu+\mathcal{V}_{\mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\nu^{\circ}_{\mathbb{V}}((yu)^{1/2}v)\right)\frac{du}{u} \tag{5.4}\] _defines a smooth function on \(S-|\mathrm{NL}_{\mathbb{V}}(m)_{\mu}|\) that satisfies the differential equation_ \[\operatorname{dd}^{\mathrm{c}}[\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}]+ \delta_{\mathrm{NL}_{\mathbb{V}}(m)_{\mu}}=[\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}+\Omega\delta_{(m,\mu)=(0,0)}] \tag{5.5}\] _as currents on \(S\). (Here \(\delta_{\mathrm{NL}_{\mathbb{V}}(m)_{\mu}}\) denotes the current of integration against the divisor associated with \(\mathrm{NL}_{\mathbb{V}}(m)_{\mu}\), understood to vanish if \(m\leq 0\).)_ Let us fix \(m\) and \(\mu\) and choose small disks \(D_{P,\epsilon}\) around each point \(P\) in the support of \(\mathrm{NL}_{\mathbb{V}}(m)_{\mu}\) as well as in \(\overline{S}\setminus S\) whose radii tend to zero as \(\epsilon\to 0\). By the above proposition and the integrability of \(\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}\) we have \[\begin{split}\int_{S}\Theta^{\circ}_{\mathbb{V}}(y)_{m,\mu}& +\deg(\overline{\mathcal{L}})\delta_{(m,\mu)=(0,0)}\\ &=\lim_{\epsilon\to 0}\int_{S-\cup D_{P,\epsilon}}(\Theta^{ \circ}_{\mathbb{V}}(y)_{m,\mu}+\Omega\delta_{(m,\mu)=(0,0)})\\ &=\lim_{\epsilon\to 0}\int_{\partial(S-\cup D_{P,\epsilon})} \mathrm{d}^{\mathrm{c}}\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}\\ &=\deg\,\mathrm{NL}_{\mathbb{V}}(m)_{\mu}-\sum_{P\in\overline{S }\setminus S}\lim_{\epsilon\to 0}\int_{\partial D_{P,\epsilon}}\mathrm{d}^{ \mathrm{c}}\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}\\ &=\deg\,\mathrm{NL}_{\mathbb{V}}(m)_{\mu}-\sum_{P\in\overline{S }\setminus S}\mathrm{res}_{P}\ \partial\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu},\end{split} \tag{5.6}\] where in the last line \(\mathrm{res}_{P}\) denotes the residue at \(P\) of the \((1,0)\)-form \(\partial\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}\). Thus to establish (5.1) it suffices to prove the identity \[-\mathrm{res}_{P}\ \partial\mathfrak{g}^{\circ}_{\mathbb{V}}(y)_{m,\mu}=Z^{-}_{ \mathbb{V},P}(y)_{m,\mu} \tag{5.7}\] for all \(m\) and \(\mu\) and all \(P\in\overline{S}\setminus S\). Note that the residue \(\operatorname{res}_{P}\) depends only on the restriction of \(\mathbb{V}\) to a small disk centered at \(P\). ### Local residue computations It follows from (5.7) that to prove Theorem 5.1 it suffices to prove the following three lemmas. 
In their statements we assume that \(\mathbb{V}\) is an arbitrary \(\mathbb{Z}\)-PVHS of weight two with \(h^{2,0}=1\) satisfying 1.1 on the punctured unit disk \(S=\Delta^{*}\). With the notation of Section 2.2 we define \[\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}=\int_{1}^{\infty}\left( \sum_{\begin{subarray}{c}0\neq\nu\in(\mu+\nu_{\mathbb{Z}})\cap W_{2}\\ Q(v,v)=2m\end{subarray}}\nu_{\mathbb{V}}^{\circ}((yu)^{1/2}v)\right)\frac{du}{ u}.\] The strategy is now the same as for the proof of Theorem 4.1: one first shows that the residue of \(\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)_{m,\mu}\) agrees with that of \(\partial\mathfrak{g}_{\mathbb{V}\mathrm{nilp}}^{\circ}(y)^{\prime}_{m,\mu}\) and then one computes the latter residue using the explicit formulas for the "\(\mathbb{R}\)-split nilpotent orbit" \(\bar{\mathbb{V}}^{\mathrm{nilp}}\) in Sections 2.3 and 2.4. **Lemma 5.3**.: _For any \(m\) and \(\mu\), we have_ \[\operatorname{res}_{t=0}\ (\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)_{m, \mu}-\partial\mathfrak{g}_{\mathbb{V}\mathrm{nilp}}^{\circ}(y)^{\prime}_{m, \mu})=0.\] **Lemma 5.4**.: _For any \(m\) and \(\mu\), we have_ \[\operatorname{res}_{t=0}\ (\partial\mathfrak{g}_{\mathbb{V}\mathrm{nilp}}^{ \circ}(y)^{\prime}_{m,\mu}-\partial\mathfrak{g}_{\mathbb{V}\mathrm{nilp}}^{ \circ}(y)^{\prime}_{m,\mu})=0.\] **Lemma 5.5**.: _For any \(m\) and \(\mu\), we have_ \[-\operatorname{res}_{t=0}\ \partial\mathfrak{g}_{\mathbb{V}\mathrm{nilp}}^{ \circ}(y)^{\prime}_{m,\mu}=Z_{\mathbb{V},P}^{-}(\tau)_{m,\mu}.\] The proof of these lemmas is analogous to the proofs of similar lemmas in Sections 4.2, 4.3, 4.4 and 4.5. It will be convenient to define \[\tilde{\Theta}_{\mathbb{V}}(y)_{m,\mu}=\sum_{\begin{subarray}{c}v\in\mu+\nu _{\mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\nu_{\mathbb{V}}^{\circ}(y^{1/2}v) \tag{5.8}\] and write \[\tilde{\Theta}_{\mathbb{V}}(y)_{m,\mu}=\tilde{\Theta}_{\mathbb{V}}(y)^{ \prime}_{m,\mu}+\tilde{\Theta}_{\mathbb{V}}(y)^{\prime\prime}_{m,\mu},\] where in \(\tilde{\Theta}_{\mathbb{V}}(y)^{\prime}_{m,\mu}\) the sum runs over vectors in \(W_{2}\) while in \(\tilde{\Theta}_{\mathbb{V}}(y)^{\prime\prime}_{m,\mu}\) it runs over vectors not in \(W_{2}\). Since \(\nu_{\mathbb{V}}^{\circ}(0)=1\) and hence \(\partial\nu_{\mathbb{V}}^{\circ}(0)=0\), we can drop the condition \(v\neq 0\) in (5.4) when computing \(\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)_{m,\mu}\) ; that is, we have \[\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)_{m,\mu}=\int_{1}^{\infty}\partial \tilde{\Theta}_{\mathbb{V}}(uy)_{m,\mu}\frac{du}{u} \tag{5.9}\] and \[\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)_{m,\mu}=\partial\mathfrak{g}_{ \mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu}+\partial\mathfrak{g}_{\mathbb{V}}^{ \circ}(y)^{\prime\prime}_{m,\mu}\] with \[\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{\prime}_{m,\mu} =\int_{1}^{\infty}\partial\tilde{\Theta}_{\mathbb{V}}(uy)^{\prime }_{m,\mu}\frac{du}{u}\] \[\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{\prime\prime}_{m, \mu} =\int_{1}^{\infty}\partial\tilde{\Theta}_{\mathbb{V}}(uy)^{\prime \prime}_{m,\mu}\frac{du}{u}. \tag{5.10}\] Proof of Lemma 5.3.: This reduces to \[\operatorname{res}_{t=0}\ \partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{ \prime\prime}_{m,\mu} =0 \tag{5.12}\] \[\operatorname{res}_{t=0}\ (\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{ \prime}_{m,\mu}-\partial\mathfrak{g}_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)^ {\prime}_{m,\mu}) =0. 
\tag{5.11}\] To prove (5.11) we can use (5.10) and the explicit expression \[\partial\nu_{\mathbb{V}}^{\circ}(y^{1/2}v)=\partial(e^{-2\pi yh(s_{v})})=e^{ -2\pi yh(s_{v})}\cdot(-\pi y\partial\|v\|_{\mathcal{V}}^{2}) \tag{5.13}\] (recall that \(\|v\|_{\mathcal{V}}^{2}=Q(v,v)+2h(s_{v})\) and hence \(2\partial h(s_{v})=\partial\|v\|_{\mathcal{V}}^{2}\)). With the notation of (4.4), we have \[\partial\|v\|_{\mathcal{V}}^{2}=\sum_{i,j}\overline{a_{i}}a_{j}\partial h_{ij }(t).\] As in the proof of Lemma 4.2 one shows that the forms \(e_{ij}^{-1}\partial h_{ij}\) are nearly bounded and hence that \[|\partial\|v\|_{\mathcal{V}}^{2}|_{t}\leq C\|v\|_{\mathcal{V},t}^{2} \tag{5.14}\] for some positive constant \(C\), giving the bound \[|\partial\tilde{\Theta}_{\mathbb{V}}(uy)^{\prime\prime}_{m,\mu}|_{t}\leq C \cdot\sum_{\begin{subarray}{c}v\in\mathcal{V}_{\mathbb{Z}}^{\vee}\\ v\notin W_{2}\\ Q(v,v)=2m\end{subarray}}e^{-2\pi uyh(s_{v})}\pi uy\|v\|_{t}^{2}\] and hence \[|\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{\prime\prime}_{m,\mu}|_{t} \leq C\cdot\sum_{\begin{subarray}{c}v\in\mathcal{V}_{\mathbb{Z}}^{ \vee}\\ v\notin W_{2}\\ Q(v,v)=2m\end{subarray}}\int_{1}^{\infty}e^{-2\pi uyh(s_{v})}\pi y\|v\|_{t}^{ 2}du\] \[=C\cdot\sum_{\begin{subarray}{c}v\in\mathcal{V}_{\mathbb{Z}}^{ \vee}\\ v\notin W_{2}\\ Q(v,v)=2m\end{subarray}}e^{-2\pi yh(s_{v})}\frac{\|v\|_{t}^{2}}{2h(s_{v})}\] \[=e^{2\pi ym}C\cdot\sum_{\begin{subarray}{c}v\in\mathcal{V}_{ \mathbb{Z}}^{\vee}\\ v\notin W_{2}\\ Q(v,v)=2m\end{subarray}}e^{-\pi y\|v\|_{t}^{2}}\left(1-\frac{2m}{\|v\|_{t}^{2}} \right)^{-1}. \tag{5.15}\] By (4.13), the factor \((1-2m\|v\|_{t}^{-2})^{-1}\) in the last expression is bounded above by an expression of the form \((1-A(-\log|t|)^{-1})^{-1}\) for some \(A>0\). The argument in the proof of Proposition 4.3 now shows that \(|\partial\mathfrak{g}_{\mathbb{V}}^{\circ}(y)^{\prime\prime}_{m,\mu}|_{t}\) is rapidly decreasing as \(t\to 0\), proving (5.11). To show that (5.12) holds one can follow closely the arguments proving Lemma 4.6 and Proposition 4.4. One first shows by differentiating (4.23) that \[|\partial\|v\|_{\mathcal{V},t}^{2}-\partial\|v\|_{\mathcal{V}^{\mathrm{nilp}},t} ^{2}|_{t}=O(|t|^{B}\|v\|_{\mathcal{V},t}^{2}) \tag{5.16}\] for some positive constant \(B\). Multiplying (4.18) by \(e^{\pi Q(v,v)}\) yields \[|e^{-2\pi h_{\mathcal{V}}(s_{v})}-e^{-2\pi h_{\mathcal{V}\mathrm{nilp}}(s_{v}) }|_{t}<C|t|^{B}e^{-\pi(2h_{\mathcal{V}}(s_{v})-A|t|^{B}\|v\|_{\mathcal{V},t}^{ 2})}\|v\|_{\mathcal{V},t}^{2}. \tag{5.17}\] Combined with (5.14) and (5.16), this gives the bound \[|\partial\nu_{\mathbb{V}}^{\circ}(v)-\partial\nu_{\mathbb{V}^{\mathrm{nilp}}} ^{\circ}(v)|_{t}<C|t|^{B}e^{-\pi(2h_{\mathcal{V}}(s_{v})-A|t|^{B}\|v\|_{ \mathcal{V}}^{2})}\cdot\|v\|_{\mathcal{V}}^{2}(1+\|v\|_{\mathcal{V}}^{2}) \tag{5.18}\] for some positive constants \(A\), \(B\) and \(C\). 
Writing \(f(v,t)=2h_{\mathcal{V}}(s_{v})-A|t|^{B}\|v\|_{\mathcal{V}}^{2}\), we have \[\int_{1}^{\infty}e^{-\pi uf(v,t)}u\|v\|_{\mathcal{V}}^{2}\frac{du}{u}=\|v\|_{ \mathcal{V}}^{2}\frac{e^{-\pi f(v,t)}}{\pi f(v,t)}\] and \[\int_{1}^{\infty}e^{-\pi uf(v,t)}u^{2}\|v\|_{\mathcal{V}}^{4}\frac{du}{u}=\|v \|_{\mathcal{V}}^{4}\left(\frac{e^{-\pi f(v,t)}}{\pi f(v,t)}+\frac{e^{-\pi f( v,t)}}{(-\pi f(v,t))^{2}}\right).\] By (4.13), there exist \(A>0\) and \(k\in\mathbb{N}\) such that \(f(y^{1/2}v,t)^{-1}<Ay^{-1}((-\log|t|)^{k})\) for all non-zero \(v\in\mathcal{V}_{\mathbb{Z}}^{\vee}\), and so for \(v\in\mathcal{V}_{\mathbb{Z}}^{\vee}\) with \(Q(v,v)=2m\) we obtain \[\int_{1}^{\infty}|\partial\nu_{\mathbb{V}}^{\circ}((yu)^{1/2}v) -\partial\nu_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}((yu)^{1/2}v)|_{ t}\frac{du}{u}\] \[<C(y^{-1}+y^{-2})|t|^{B}(-\log|t|)^{K}e^{2\pi ym}e^{-\pi y\|v\|_{ \mathcal{V}}^{2}/2} \tag{5.19}\] for positive constants \(B\), \(C\) and \(K\). Property (5.12) now follows as in the proof of Proposition 4.4. Proof of Lemma 5.4.: It suffices to show that \[|\partial\mathfrak{g}_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)_{m,\mu}^{\prime }-\partial\mathfrak{g}_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)_{m,\mu}^{\prime }|_{t}\] is bounded for \(t\) in a fixed angular sector where \(|\cdot|_{t}\) denotes the Poincare metric, i.e. that the form \(\mathfrak{g}_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)_{m,\mu}^{\prime}- \partial\mathfrak{g}_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(y)_{m,\mu}^{\prime}\) is nearly bounded. To see this, let us write \(\mathcal{V}=\mathcal{V}^{\mathrm{nilp}}\) and \(\tilde{\mathcal{V}}=\tilde{\mathcal{V}}^{\mathrm{nilp}}\). Using (5.13), (4.25) and the elementary inequality \(|e^{x}-1|\leq|x|e^{|x|}\) we estimate \[\pi^{-1}|\partial\nu_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(v) -\partial\nu_{\mathbb{V}^{\mathrm{nilp}}}^{\circ}(v)|_{t}\] \[=|e^{-2\pi h_{\mathcal{V}}(s_{v})}\partial\|v\|_{\mathcal{V}}^{2}- e^{-2\pi h_{\hat{\mathcal{V}}}(s_{v})}\partial\|v\|_{\tilde{\mathcal{V}}}^{2}|_{t}\] \[\leq|e^{-2\pi h_{\mathcal{V}}(s_{v})}-e^{-2\pi h_{\hat{\mathcal{V }}}(s_{v})}|\cdot|\partial\|v\|_{\tilde{\mathcal{V}}}^{2}|_{t}\] \[\quad+e^{-2\pi h_{\hat{\mathcal{V}}}(s_{v})}\cdot|\partial\|v\|_{ \mathcal{V}}^{2}-\partial\|v\|_{\tilde{\mathcal{V}}}^{2}|_{t}\] \[\leq Ce^{-\pi h_{\hat{\mathcal{V}}}(s_{v})}\cdot(-\log|t|)^{-1}, \tag{5.20}\] where \(C\) depends only on \(m\). This gives \[|\partial\mathfrak{g}^{\circ}_{\forall^{\mathrm{nilp}},m}(y)^{ \prime}-\partial\mathfrak{g}^{\circ}_{\tilde{\mathbb{V}}^{\mathrm{nilp}},m}(y) ^{\prime}|_{t} \leq\int_{1}^{\infty}\sum_{\begin{subarray}{c}v\in\mu+\mathcal{V}_{ \mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}|\nu^{\circ}_{\forall^{\mathrm{nilp}}}((yu)^{1/2}v)-\nu^ {\circ}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}((yu)^{1/2}v)|_{t}\frac{du}{u}\] \[\leq C(-\log|t|)^{-1}\int_{1}^{\infty}\sum_{\begin{subarray}{c}v \in\mu+\mathcal{V}_{\mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}e^{-\pi yuh_{\tilde{\mathbb{V}}}(s_{v})}\frac{du}{u}. \tag{5.21}\] The proof of Lemma 5.5 will show that the integrand in the last expression is \(O((-\log|t|)/\sqrt{u})\). It follows that \(\partial\mathfrak{g}^{\circ}_{\forall^{\mathrm{nilp}},m}(y)^{\prime}- \partial\mathfrak{g}^{\circ}_{\forall^{\mathrm{nilp}},m}(y)^{\prime}\) is nearly bounded. Proof of Lemma 5.5.: We consider the type II and Type III cases separately. Assume first that \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\) has a degeneration of type II at \(P\); this is the setting of Section 4.4. 
Arguing as in that section, and with the same notation, we note first that for \(v\in W_{2}^{\mathbb{Z}}\) we have \[\nu^{\circ}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(v)=e^{-2\pi h(s_{v})}=e^{-\pi \|\pi_{1}(v)\|_{\mathcal{V}}^{2}} \tag{5.22}\] (cf. (4.36)). For \(\tilde{\Theta}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(uy)^{\prime}_{m,\mu}\), this gives \[\tilde{\Theta}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(uy)^{\prime}_ {m,\mu} =\sum_{\begin{subarray}{c}v\in\mu+W_{2}^{\mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\nu^{\circ}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}((uy) ^{1/2}v)\] \[=\sum_{\begin{subarray}{c}v\in(\mu+W_{2}^{\mathbb{Z}})/W_{1}^{ \mathbb{Z}}\\ Q(v,v)=2m\end{subarray}}\sum_{v_{1}\in W_{1}^{\mathbb{Z}}}e^{-\pi yu\|v_{1}+ \pi_{1}(v)\|_{\mathcal{V}}^{2}}. \tag{5.23}\] The singularity of the inner sum as \(t\to 0\) can be determined using Poisson summation: with \(\Phi\) as in (4.38) we can write \[\sum_{v_{1}\in W_{1}^{\mathbb{Z}}}e^{-\pi yu\|v_{1}+\pi_{1}(v)\|_ {\mathcal{V}}^{2}} =\sum_{n\in\mathbb{Z}^{2}}e^{-2\pi\frac{yu}{\mathrm{Im}(z)}|(a_{1 }+n_{1})\alpha_{1}+(a_{1}+n_{2})\alpha_{2}|^{2}}\] \[=\left|\det\begin{pmatrix}\alpha_{1}&\overline{\alpha_{1}}\\ \alpha_{2}&\overline{\alpha_{2}}\end{pmatrix}\right|^{-1}\frac{\mathrm{Im}(z) }{yu}+\mathrm{o}(\mathrm{Im}(z)/yu)\] \[=\frac{1}{2\pi yu}\left|\det\begin{pmatrix}\alpha_{1}&\overline{ \alpha_{1}}\\ \alpha_{2}&\overline{\alpha_{2}}\end{pmatrix}\right|^{-1}\cdot(-\log|t|)+o((- \log|t|)/(yu)). \tag{5.24}\] To compute the determinant, recall that \(\alpha_{1}\), \(\alpha_{2}\) are defined by \[\lambda_{j}=\alpha_{j}e^{1,0}+\overline{\alpha_{j}}e^{0,1},\qquad j=1,2,\] where \(\lambda_{1},\lambda_{2}\) are a fixed basis of \(W_{1}^{\mathbb{Z}}\). Pick \(\tilde{\lambda_{1}},\tilde{\lambda_{2}}\in V_{\mathbb{Q}}\) such that \(N\tilde{\lambda_{j}}=\lambda_{j}\); then \[\tilde{\lambda_{j}}\equiv\alpha_{j}e^{2,1}+\overline{\alpha_{j}}e^{1,2}\mod W _{2,\mathbb{R}},\qquad j=1,2,\] and by (2.15) we have \[Q(\tilde{\lambda_{j}},\lambda_{k})=\begin{pmatrix}0&i\overline{\alpha_{1}} \alpha_{2}-i\alpha_{1}\overline{\alpha_{2}}\\ i\alpha_{1}\overline{\alpha_{2}}-i\overline{\alpha_{1}}\alpha_{2}&0\end{pmatrix}.\] It follows that \[\begin{split}\left|\det\begin{pmatrix}\alpha_{1}&\overline{ \alpha_{1}}\\ \alpha_{2}&\overline{\alpha_{2}}\end{pmatrix}\right|&=2|\mathrm{Im}(\alpha_{1} \overline{\alpha_{2}})|\\ &=|\det(Q(\tilde{\lambda_{j}},\lambda_{k}))|^{1/2}\\ &=\left(\frac{\mathrm{disc}(\mathrm{Gr}^{W}_{3,1}Q)}{r_{1}(V_{ \mathbb{Z}},N)}\right)^{1/2}\end{split} \tag{5.25}\] and hence \[\int_{1}^{\infty}\tilde{\Theta}_{\mathbb{V}^{\mathrm{nilp}}}(uy)^{\prime}_{m, \mu}\frac{du}{u}=Z^{-}_{\mathbb{V},P}(y)_{m,\mu}\cdot(-\log|t|^{2})+\mathrm{o} ((-\log|t|)/\sqrt{y}),\] which implies the statement for type II degenerations. Let us now consider the case when \(\tilde{\mathbb{V}}^{\mathrm{nilp}}\) has a degeneration of type III at \(P\); this case was considered in Section 4.5. Arguing as in that section, and with same notation, note first that (2.32) and (2.33) imply that for \[v=v_{U}+aNe^{2,2}+bN^{2}e^{2,2}\in W_{2,\mathbb{Z}}\] we have \[\nu^{\circ}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(v)=e^{-2\pi h(s_{v})}=e^{-2 \pi a^{2}}e^{-2\pi(b-a\mathrm{Re}(z))^{2}/\mathrm{Im}(z)^{2}}\] (cf. (4.44) and (4.46)). 
The same argument that led to (4.54) shows that \[\tilde{\Theta}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(yu)^{\prime}_{m,\mu}= \sum_{\begin{subarray}{c}\lambda+N\nu\equiv\mu\\ \mathrm{mod}\ \mathrm{Gr}^{W}_{2}V_{\mathbb{Z}}\end{subarray}}\tilde{\Theta}_{ \tilde{\mathbb{V}}^{\mathrm{nilp}}}(yu)^{\prime}_{m,\lambda\otimes\nu}\] with \[\begin{split}\tilde{\Theta}_{\mathbb{V}^{\mathrm{nilp}}}(yu)^{ \prime}_{m,\lambda\otimes\nu}&=\sum_{\begin{subarray}{c}v^{\prime} \in N\lambda+\langle v_{n}\rangle\\ v\in\nu+Y^{Z}_{2,\mathrm{prim}}\\ Q(v+v^{\prime},v+v^{\prime})=2m\end{subarray}}e^{2\pi yuQ(v^{\prime},v^{ \prime})}\\ &\qquad\cdot\sum_{w\in\mu_{0}+W^{Z}_{0}}e^{-2\pi\frac{yu}{ \mathrm{Im}(z)^{2}}(b(v+v^{\prime}+w)-a(v^{\prime})\mathrm{Re}(z))^{2}}.\end{split} \tag{5.26}\] Again the leading term of the inner sum as \(t\to 0\) can be determined using Poisson summation: writing \(v_{0}\) for a generator of \(W^{\mathbb{Z}}_{0}\), we have \[\sum_{w\in\mu_{0}+W^{\mathbb{Z}}_{0}}e^{-2\pi\frac{yu}{\mathrm{Im }(z)^{2}}(b(v+v^{\prime}+w)-a(v^{\prime})\mathrm{Re}(z))^{2}} =\frac{\mathrm{Im}(z)}{|b(v_{0})|\sqrt{2yu}}+\mathrm{o}(\mathrm{ Im}(z)/\sqrt{yu})\] \[=\frac{-\log|t|}{|b(v_{0})|2\pi\sqrt{2yu}}+\mathrm{o}((-\log|t|) /\sqrt{yu}).\] To compute \(b(v_{0})\), pick \(\tilde{v}_{0}\in V_{\mathbb{Q}}\) such that \(N^{2}\tilde{v}_{0}=v_{0}\); then \[Q(\tilde{v}_{0},v_{0})=\frac{\operatorname{disc}(\operatorname{Gr}_{4,0}^{W}Q)}{ r_{2}(V_{\mathbb{Z}},N)}.\] On the other hand, since \(v_{0}=b(v_{0})N^{2}e^{2,2}\), we have \(\tilde{v}_{0}\equiv b(v_{0})e^{2,2}\mod W_{2,\mathbb{R}}\) and hence \(Q(\tilde{v}_{0},v_{0})=b(v_{0})^{2}\). Using the notation \[r_{L}(m)_{\mu}=\{v\in\mu+L\ |\ Q(v,v)=2m\}\] for the representation numbers of a definite lattice \(L\), this shows that \[\tilde{\Theta}_{\tilde{\mathbb{V}}^{\mathrm{nilp}}}(uy)^{\prime}_ {m,\mu}\sim \left(\frac{r_{2}(V_{\mathbb{Z}},N)}{2\operatorname{disc}( \operatorname{Gr}_{4,0}^{W}Q)}\right)^{1/2}\] \[\times\left(\sum_{a+b=m}r_{Y^{\mathbb{Z}}_{2,\mathrm{prim}}}(a)_ {\nu}\cdot r_{\langle v_{n}\rangle}(b)_{N\lambda}\frac{e^{4\pi yub}}{4\pi \sqrt{yu}}\right)\cdot(-\log|t|^{2}),\] as \(t\to 0\). As remarked in (4.49), the quotient map \(W_{2}\to\operatorname{Gr}_{2}^{W}V\) induces isometries \(Y^{\mathbb{Z}}_{2,\mathrm{prim}}\simeq(\operatorname{Gr}_{2,\mathrm{prim}}^{W }V_{\mathbb{Z}},Q)\) and \(\langle v_{n}\rangle\simeq(\operatorname{Gr}_{4}^{W}V_{\mathbb{Z}},-Q_{4})\). The statement in case III follows.
2310.11523
Group Preference Optimization: Few-Shot Alignment of Large Language Models
Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups. Existing alignment algorithms can be expensive to align for each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner. In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations. For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods.
Siyan Zhao, John Dang, Aditya Grover
2023-10-17T18:41:57Z
http://arxiv.org/abs/2310.11523v1
# Group Preference Optimization: Few-Shot Alignment of Large Language Models ###### Abstract Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups. Existing alignment algorithms can be expensive to align for each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner. In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations. For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods. 1 Footnote 1: Our code is available at the project website: [https://siyan-zhao.github.io/llm-gpo/](https://siyan-zhao.github.io/llm-gpo/) _Warning: This paper contains qualitative examples that may be viewed as offensive or harmful._ ## 1 Introduction Large Language Models (LLMs) are increasingly being employed for a wide variety of domains, with use-cases including creative writing, chatbots, and semantic search among others (Touvron et al., 2023; Taori et al., 2023; Ouyang et al., 2022; Bai et al., 2022; Brown et al., 2020). Many of these applications are inherently subjective and require generations that cater to different demographics, cultural and societal norms, or simply individual preferences (Hartvigsen et al., 2022; Zhang et al., 2023; Solaiman and Dennison, 2021; Blodgett et al., 2020; Dunbar et al., 1997). By virtue of their large-scale training, current language models are exposed to diverse data that allows them to _represent_ a multitude of such opinions (Glaese et al., 2022; Durmus et al., 2023; Santurkar et al., 2023). However, expressing these diverse opinions requires steering the LLM generations to user requirements. This brings forth the key question studied in this work: _How do we efficiently adapt LLMs to align closely with the opinions of specific interest groups?_ Broadly, prior work has explored two modes of steering language models, which trade-off training complexity with test-time engineering. On one end, prompt engineering approaches avoid explicit modifications to the parameters of the language model and elicit desired behavior by crafting a suitable prompt. Often, the prompt is augmented with a few in-context examples (Brown et al., 2020; Taori et al., 2023; Chowdhery et al., 2022). While prompting approaches are attractive as they have no additional training complexity over the base model, prompt engineering can be quite tedious and empirically poor when the desired behaviors are more complex (Zhou et al., 2022; Reynolds and McDonnell, 2021; Qin and Eisner, 2021; Lester et al., 2021). For example, Santurkar et al. 
(2023) show that LLMs over-emphasize opinions from privileged demographics and are challenging to rectify via in-context prompting approaches. On the other end, various kinds of alignment approaches have been proposed that seek to augment or finetune the language model with an additional reward or scoring model. These approaches can steer the model to achieve complex behaviors such as honesty, helpfulness, and harmlessness (Ouyang et al., 2022; Bai et al., 2022; Glaese et al., 2022; Bansal et al., 2023; Askell et al., 2021; Song et al., 2023; Bai et al., 2022; Thoppilan et al., 2022; Wang et al., 2022), but come at the cost of additional complexity in gathering sufficient supervision to train reward models and subsequent finetuning. As a result, existing alignment approaches, such as PPO (Schulman et al., 2017), DPO (Rafailov et al., 2023), and Best-Of-N, are not designed to efficiently align LLMs when the number of target groups is large and supervision for each group is limited. We introduce _Group Preference Optimization_ (GPO), a few-shot framework for aligning Large Language Models to opinions and preferences of desired interest group(s). The key idea in GPO is to view the alignment of an LLM policy as a few-shot adaptation problem within the embedded space of an LLM. Specifically, GPO augments an arbitrary base LLM with an independent few-shot preference module. This module is parameterized via an independent transformer and trained to explicitly perform in-context supervised learning to predict preferences (targets) given joint embeddings (inputs) of prompts and corresponding LLM responses. The use of embeddings guarantees that the preference module can effectively process in-context examples where each example is itself a potentially long sequence of prompt and generated response. In-context learning further provides the ability to efficiently adapt to new, unseen groups at test-time with only a handful of examples. See Figure 1 for an illustration. Finally, we incorporate various architectural design choices to guarantee permutation-specific inductive biases, building on recent work in in-context learning over datasets (Nguyen and Grover, 2022). Once learned, this module can serve as a drop-in replacement for a reward or preference function for policy optimization and re-ranking algorithms.

Figure 1: Overview of GPO. **Left:** Group alignment aims to steer pretrained LLMs to preferences that cater to a wide range of groups. For each group \(g\), we represent its preference dataset as \(\mathcal{D}_{g}=\{(x_{1}^{g},y_{1}^{g}),\ldots,(x_{n}^{g},y_{n}^{g})\}\). Here, \(y_{i}^{g}\) signifies the preference of group \(g\) for a pair of given prompt \(q_{i}^{g}\) and response \(r_{i}^{g}\), while \(x_{i}^{g}\) is its LLM representation obtained with \(\pi_{\text{emb}}(q_{i}^{g},r_{i}^{g})\). **Right:** Once trained, GPO provides a few-shot framework for aligning any base LLM to a test group given a small amount of in-context preference data.

In our experiments, we validate the effectiveness of GPO for aligning language models to the opinions of 22 diverse US demographic groups in the OpinionQA dataset (Santurkar et al., 2023) and 14 global countries in the GlobalOpinionQA dataset (Durmus et al., 2023).
We consider 2 base language models of different sizes: Alpaca 7B (Taori et al., 2023), an instruction-tuned version of the LLaMA (Touvron et al., 2023a) 7B model, and the recent Llama2 13B chat (Touvron et al., 2023b), which has been fine-tuned on a large dataset of human preferences for helpfulness and safety. Empirically, we test GPO against a variety of prompting and finetuning baselines. On average, GPO surpasses the top-performing baselines by 7.1% when adapting to 22 US demographic groups in OpinionQA, and by 8.4% when aligning with 14 global countries in GlobalOpinionQA. Furthermore, GPO performs most effectively in adapting to individual preferences compared to other baselines. ## 2 Group Preference Optimization ### Problem Setup A large language model (LLM) expresses a probability distribution over natural language, denoted as \(\pi\). To accomplish any task, such as question answering or summarization, a user crafts a suitable query \(q\) and prompts the LLM to generate a response \(r\) obtained via sampling from the conditional distribution \(\pi(\cdot\mid q)\). Rather than decoding responses from a single distribution \(\pi(\cdot\mid q)\), our goal in this work is to align the language model to the preferences of a desired target group \(g^{*}\in G\). Here, we adopt a fairly general definition of a _group_ to refer to any collection of agents (e.g., demographic groups, individual personas), and we use \(G\) to denote the space of all possible groups. For training, we assume that we are given access to preference datasets for a finite set of training groups \(G_{\text{train}}\). In practical applications, the number of groups can be large (e.g., different demographics and cultures) while the amount of preference data for each group is generally small. ### Related Work Existing approaches for steering LLMs are challenging to apply for group alignment, especially when the underlying groups are complex and per-group supervision is scarce. Before discussing our proposed approach, we overview some of the popular approaches below and their corresponding trade-offs. Whenever applicable, these approaches will also serve as baselines in our experiments. We provide additional discussion of related work in Appendix H. **Prompt Engineering:** These approaches modify the input prompt \(q\to q^{\prime}\) to guide the LLM towards a group-aligned distribution (Jiang et al., 2023; Hwang et al., 2023; Deshpande et al., 2023). Techniques include meta-data utilization, where group-specific meta-data, are appended to the input prompt to provide richer contextual understanding. For example, this meta-data could include demographic labels such as "_male_", "_female_" or text descriptions such as "_individuals below the age of 50_". Further, the engineered prompts can be improved via in-context few-shot prompting, in which the prompt is concatenated with examples of desired behavior. Given the flexibility of language, even a preference dataset \(D_{g}\) could be converted into in-context examples for improving the prompt. Prompt engineering approaches are computationally efficient as they involve no changes to the parameters of the language model, but designing the prompt itself can be a tedious task that relies on heuristics (Zhou et al., 2022; Lester et al., 2021; Qin and Eisner, 2021). Moreover, these heuristics are not guaranteed to transfer well across different LLMs. 
Finally, it has been shown that prompt engineering has limited gains in aligning LLMs to complex groups on challenging survey datasets (Santurkar et al., 2023; Durmus et al., 2023). **Gradient-based Alignment:** Algorithms that explicitly finetune the base LLM or augment it with additional models have shown significant success in aligning LLMs to complex behaviors such as honesty, harmlessness, and helpfulness, enabling their real-world deployments. There are 2 broad classes of methods. The first class of methods relies on gathering a dataset of responses from a target group and finetuning on this dataset via regular supervised learning (Ouyang et al., 2022; Ziegler et al., 2019). This supervised finetuning approach is easy to execute, but requires sufficient group-specific supervision and shows limited generalization in practice. Alternatively, another class of finetuning approaches gathers explicit preference data from humans to train a reward or scoring model. Such a reward model can then be used for filtering or re-ranking responses (e.g., via Best-of-N, importance weighting (Grover et al., 2019)), or explicitly optimized via a reinforcement learning approach (e.g., via PPO (Ouyang et al., 2022; Schulman et al., 2017)). The latter can be especially challenging, as it introduces difficulties in hyperparameter tuning and stable optimization (Sun et al., 2023; Santacroce et al., 2023). More recently, there are also approaches that directly optimize for preferences and improve the stability of RL approaches (Rafailov et al., 2023; Song et al., 2023). Preference-based finetuning approaches generally assume access to large amounts of preference data. Our proposed approach GPO also falls into the category of explicit alignment approaches, but is specifically designed to generalize to arbitrary interest groups with constraints on the amount of supervision available for each group.

### Proposed Method

We desire an alignment approach that generalizes to a wide variety of groups, even when constrained by the amount of per-group supervision. Accordingly, we view group alignment as a few-shot learning problem and cast it in the framework of in-context meta-learning. For each training group \(g\in G_{\text{train}}\), we represent its preference dataset as \(\mathcal{D}_{g}=\{(x_{1}^{g},y_{1}^{g}),\dots,(x_{n}^{g},y_{n}^{g})\}\), where \(y_{i}^{g}\) denotes the preference of group \(g\) for a pair consisting of an input prompt query \(q_{i}^{g}\) and an LLM response \(r_{i}^{g}\), and \(x_{i}^{g}\) denotes the LLM representation of the concatenation of the prompt query and LLM response, \(x_{i}^{g}=\pi_{\text{emb}}(q_{i}^{g},r_{i}^{g})\). Here, \(\pi_{\text{emb}}\) can be the language model embedding function or an identity function that maintains the input's raw textual format. Note that while the inputs \(x^{g}\) can be shared across different groups (e.g., universal surveys), the preferences are different for each group. At test-time, our goal will be to steer the default LLM distribution to a new distribution, say \(\pi_{g^{*}}\), given a preference dataset \(\mathcal{D}_{g^{*}}\) for the target query group \(g^{*}\). For brevity of presentation, we consider the preferences to be real-valued scalars. Our framework extends to other kinds of responses and preferences, such as short-answer questions (e.g., MCQs) and relative pairwise responses, as discussed in Appendix G.
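To make the role of \(\pi_{\text{emb}}\) concrete, the snippet below sketches one way such an embedding function could be implemented with an off-the-shelf base LLM, by concatenating the prompt and response and mean-pooling the final hidden states over all tokens (the pooling choice the paper reports working best later in this section). This is only an illustrative sketch rather than the authors' implementation, and the model checkpoint and prompt formatting are placeholders.

```python
# Illustrative sketch of pi_emb(q, r): embed the concatenated prompt/response
# with a base LLM and average the final hidden states over all tokens.
# The checkpoint name below is a placeholder, not necessarily the paper's.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "huggyllama/llama-7b"  # placeholder base LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, torch_dtype=torch.float16).eval()

@torch.no_grad()
def pi_emb(query: str, response: str) -> torch.Tensor:
    """Return a single embedding vector x = pi_emb(q, r)."""
    text = query + "\n" + response              # joint input (q, r)
    inputs = tokenizer(text, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, d_model)
    return hidden.mean(dim=1).squeeze(0)        # average over all tokens
```

Caching these embeddings once per prompt-response pair means each preference example enters the few-shot module as a single vector, which keeps the module's input sequences short even when the underlying prompts and responses are long.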
Given the above setup, we design GPO to perform group alignment by learning a few-shot preference model that augments the base LLM, as shown in Algorithm 1. Once learned, we can use it to update the LLM via any standard preference optimization or reweighting algorithm (e.g., PPO, Best-of-N). Specifically, we parameterize GPO via a transformer and train it to perform in-context learning on the training preference datasets. Given a training group \(g\in G_{\text{train}}\), we randomly split its preference dataset \(\mathcal{D}_{g}\) into a set of \(m\) context points and \(n-m\) target points, where \(n=|\mathcal{D}_{g}|\) is the size of the preference dataset for group \(g\). Thereafter, GPO is trained to predict the target preferences \(y_{m+1:n}^{g}\) given the context points \((x_{1:m}^{g},y_{1:m}^{g})\) and target inputs \(x_{m+1:n}^{g}\). Mathematically, we can express the objective as: \[L(\theta)=\mathbb{E}_{g,m}\left[\log p_{\theta}(y_{m+1:n}^{g}\mid x_{1:n}^{g}, y_{1:m}^{g})\right] \tag{1}\] where the training group \(g\sim G_{\text{train}}\) and context size \(m\) are sampled uniformly. \(\theta\) represents the parameters of our model. Figure 2 shows an illustration. For decoding, we make the conditional independence assumption, where we assume that the target preferences are independent of each other given the context samples and the target inputs: \[L(\theta)=\mathbb{E}_{g,m}\left[\sum_{i=m+1}^{n}\log p_{\theta}(y_{i}^{g}\mid x _{1:n}^{g},y_{1:m}^{g})\right] \tag{2}\] In our preliminary experiments, we also investigated alternatives which model the dependencies. We did not find any noticeable improvements and hence use Eq. 2 for the rest of the paper. Figure 2: Illustration of the GPO architecture for a sequence of \(n\) points, with \(m\) context points and \(n-m\) target points. The context \((x_{1:m},y_{1:m})\) serves as few-shot conditioning for GPO. GPO processes the full sequence using a transformer and predicts the preference scores \(\hat{y}_{m+1:n}\). Following Nguyen and Grover (2022), we can modify the transformer architecture in GPO to explicitly account for permutation invariance conditioning over in-context examples. In particular, we discard the positional encodings commonly found in standard transformer architectures. However, this loses the pairwise relations between \((x_{i},y_{i})\). To solve this, we concatenate each pair \((x_{i},y_{i})\) into a single token to inform the transformer of their pairwise relation. For the target inputs, we pad the \(x_{i}\)'s with a dummy token (e.g., 0). Finally, we employ a masking strategy where the context pairs can self-attend to each other, whereas the padded targets can only attend to the context points and not to other target points to follow the conditional independence assumption in Eq. 2. Note that even though GPO uses in-context learning, it is distinct from in-context prompting a base LLM. The latter does not update the parameters of the base LLM and requires examples of desired text generations. On the other hand, GPO learns a few-shot model which augments the base LLM and only requires preferences of users for the LLM generations. That said, both these schemes are complementary to each other as we can use any engineered prompt (e.g., with in-context examples) as a drop-in replacement for the default prompt used in the inputs \(x\). 
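To make the token construction and masking described above concrete, the following is a minimal PyTorch sketch of such a preference module. It is a simplified illustration rather than the authors' released code; the hidden size, number of heads and layers, and the linear projections are placeholder choices.

```python
# Minimal sketch of the in-context preference module: each (x_i, y_i) pair is
# packed into a single token, no positional encodings are added, target points
# carry a dummy y = 0, and a boolean attention mask lets every position attend
# to the context tokens only (so targets never attend to other targets).
import torch
import torch.nn as nn

class GPOPreferenceModule(nn.Module):
    def __init__(self, d_x: int, d_model: int = 256, n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        self.token_proj = nn.Linear(d_x + 1, d_model)  # concat(x_i, y_i) -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # no positional encodings
        self.head = nn.Linear(d_model, 1)

    def forward(self, x_ctx, y_ctx, x_tgt):
        # x_ctx: (B, m, d_x), y_ctx: (B, m), x_tgt: (B, k, d_x) with k = n - m
        B, m, _ = x_ctx.shape
        k = x_tgt.shape[1]
        ctx_tok = torch.cat([x_ctx, y_ctx.unsqueeze(-1)], dim=-1)
        tgt_tok = torch.cat([x_tgt, x_tgt.new_zeros(B, k, 1)], dim=-1)  # dummy y = 0
        tokens = self.token_proj(torch.cat([ctx_tok, tgt_tok], dim=1))  # (B, m+k, d_model)

        # True = attention blocked. Only the first m columns (context tokens)
        # stay visible to every row, matching the conditional independence
        # assumption of Eq. 2.
        mask = torch.ones(m + k, m + k, dtype=torch.bool, device=tokens.device)
        mask[:, :m] = False
        h = self.encoder(tokens, mask=mask)
        return self.head(h[:, m:]).squeeze(-1)  # predicted preferences for the targets
```

Training then follows Algorithm 1 below: sample a group, split its cached embedded preference pairs into context and target subsets, and fit the predicted target preferences to the held-out scores under the in-context objective of Eq. 2.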
Scaling to long dataset contexts.One challenge with GPO is that the effective sequence length for the transformer can grow significantly if we use raw representations of prompts and responses within each input \(x\). This can degrade performance and efficiency significantly. To overcome this challenge, we propose to use embedded representations of text within \(x\), as LLM representations can contain sufficient information for solving tasks (Bhatia et al., 2023). In particular, we first concatenate the prompt and response and compute their joint embedding \(\pi_{\text{emb}}(q_{i}^{g},r_{i}^{g})\) using the base LLM. We explored different techniques for extracting the joint embeddings from the base LLM, as detailed in the ablation study in Appendix C, and found it best to use the average embedding of all the tokens in the input. ``` 1:Input: LLM embeddding function \(\pi_{\text{emb}}\); Preference datasets \(\mathcal{D}_{g}\)\(\forall g\in G_{\text{train}}\). 2:Initialize GPO transformer with parameters \(\theta\). 3:For all \(g\in G_{\text{train}}\), cache embedded pairs \((x_{i}^{g},y_{i}^{g})\) in \(\mathcal{D}_{g}^{\text{emb}}\) where \(x_{i}^{g}=\pi_{\text{emb}}(q_{i}^{g},r_{i}^{g})\). 4:repeat 5: Sample training group \(g\in G_{\text{train}}\). 6: Sample context size \(m\sim\text{Uniform}[1,n-1]\) where \(n=|D_{g}|\). 7: Split \(\mathcal{D}_{g}^{\text{emb}}\) randomly into \(m\) context \((x_{1:m}^{g},y_{1:m}^{g})\) and \((n-m)\) target \((x_{m+1:n}^{g},y_{m+1:n}^{g})\) pairs. 8: Predict target preferences \(y_{m+1:n}^{g}\) using context \((x_{1:m}^{g},y_{1:m}^{g})\) and padded targets \((x_{m+1:n}^{g},0)\). 9: Update \(\theta\) to minimize in-context loss function \(L(\theta)\) in Eq. 2. 10:until convergence 11:Output: GPO transformer with learned parameters \(\theta\) ``` **Algorithm 1**_Group Preference Optimization_ (GPO) ## 3 Experiments Datasets.While GPO is general-purpose and can be applied broadly to many language model use cases, our work is focused on benchmarks which reflect a diverse landscape of human preferences. Quantitatively evaluating the diverse opinions through open-ended questions (e.g., creative writing) is inherently complex, and often demands expensive human labels. In contrast, closed-ended responses (e.g., multiple-choice questions) offer a standardized means of capturing diverse opinions, thus reducing ambiguity and noise in evaluation. Survey datasets have been used in prior work (Santurkar et al., 2023; Durmus et al., 2023) to demonstrate the weaknesses of current LLMs in catering to diverse populations, and hence can be effectively used to benchmark progress in group alignment. We benchmark group alignment on 2 recent survey datasets: (1) _OpinionQA_(Santurkar et al., 2023), which spans 22 US demographic groups (e.g. income, political ideology, race, and sex) across 500 multiple-choice questions and (2) _GlobalOpinionQA_(Durmus et al., 2023), which contains multiple-choice questions answered by participants from 14 countries, amounting to 2,554 questions which cover various topics including politics, media, technology, religion, race, and ethnicity. Survey questions are shared across different groups, so we use \(x_{i}\) (and not \(x_{i}^{g}\)) for brevity henceforth. Detailed dataset descriptions can be found in Appendix A. Next, we construct group \(g\) preference dataset \(\mathcal{D}_{g}\) from the survey data. Let \(Q\) be the set of all survey questions and \(G\) be the groups participating in the survey. 
Consider a survey question \(q\in Q\), with \(T\) unique answer options. Each option can be interpreted as a response \(r\), yielding a set of \(T\) viewpoints \(\{x_{i}\}_{i=1}^{T}=\{\pi_{\text{emb}}(q,r_{i})\}_{i=1}^{T}\). The preference score \(y_{i}^{g}\) for the viewpoint \(x_{i}\) is obtained by aggregating the survey responses given to \((q,r_{i})\) from group \(g\). These scores are normalized within each question to form the group preference distribution vector \(P_{g}(q)=[y_{1}^{g},...,y_{T}^{g}]\) for question \(q\), such that \(\sum_{i=1}^{T}y_{i}^{g}=1\). Repeating this process for all \(n\) questions in \(Q\) yields \(\mathcal{D}_{g}\). During training and testing, all viewpoints from the same question belong to either the context or target set. Finally, we apply a softmax layer to predictions for each question, yielding normalized preference scores for each survey question in the target set.

Evaluation Metric. To rigorously assess the degree of alignment between two opinion distributions, \(P_{1}\) and \(P_{2}\), we calculate the _Alignment Score_, denoted as \(\mathcal{A}(P_{1},P_{2};Q)\), over a set of questions \(Q\). This metric employs a similarity function _Sim_: \[\mathcal{A}(P_{1},P_{2};Q)=\frac{1}{|Q|}\sum_{q\in Q}\textit{Sim}(P_{1}(q),P_{2}(q)) \tag{3}\] For the OpinionQA dataset (Santurkar et al., 2023) with its ordinal answers, we employ the one-dimensional Wasserstein Distance as our similarity metric. Conversely, for the GlobalOpinionQA dataset, which often presents non-ordinal answer structures, we use the Jensen-Shannon Distance as suggested by the original paper. Further details are available in Appendix A.

Base Large Language Models. We use two different-sized LMs as our base models for baselines and GPO. The first, Alpaca-7B (Taori et al., 2023), is an instruction-tuned variant of Llama-7B (Touvron et al., 2023a), crafted using 52K instruction-response pairs. The second, the Llama2-13B chat version, is finetuned over 1M human preferences for helpfulness and safety. For baseline methods requiring updates of model weights, we use low-rank adaptation (LoRA) (Hu et al., 2021).

Baselines. We compare our method against extensive baseline approaches as introduced below. For a detailed description of the baselines, refer to Appendix E.

* **Uniform Distribution:** Assumes equal preference scores for all options.
* **_LM Base:_** Following Santurkar et al. (2023); Durmus et al. (2023), we get the LM's default opinion distribution, denoted by \(P_{\pi}(q)\), by extracting the prediction scores for each of the available answer choices (e.g., A, B, C, D) from the top-\(K\) next-token predictions and then normalizing them with softmax, resulting in a preference distribution across \(T\) options.
* **_LM Steered:_** We use diverse prompting strategies to convey group information to the LM, with examples in Appendix J. The opinion distribution obtained for group \(g\) under this steering is expressed as \(P_{\pi}(q;c_{g})\), where \(c_{g}\) denotes the context for group \(g\).
* **_Few-shot Prompt:_** We append a few examples showing a group's preferences for \(m\) context questions to the prompt. Here \(m\) is constrained by the LM's context window size and \(c_{g}\) includes the context samples \(\{x_{i},y_{i}\}_{i=1}^{m}\). See Figure 8 in the Appendix for examples.
* **_SFT per group:_** The LM is fine-tuned separately for each group \(g\) with a maximum likelihood loss.
For every training question \(q\), augmented training examples are created by sampling responses \(r\) according to the preference distribution \(P_{g}(q)\).
* **Reward Model:** Here, we train a per-group reward model by adding a linear MLP head on a base LLM and train it on \(m\) context samples \(\{x_{i},y_{i}^{g}\}_{i=1}^{m}\) with an MSE loss. It is evaluated by predicting the preference scores for the query viewpoints \(\{x_{i}\}_{i=m+1}^{n}\), with softmax applied to the predictions for each \(q\) to ensure normalization.
* **In-Context Finetune:** We investigate whether the LM's few-shot in-context alignment ability can be fine-tuned. Similar to GPO, we partition the group set \(G\) into a meta-train set \(G_{\text{train}}\) and a meta-test set \(G_{\text{test}}\). The training questions for each group are split into context samples and query questions. For a given query question \(q\), we supplement it with a few-shot context \(c_{g}\), consisting of \(m\) questions paired with the respective ground truth preference scores, the same context as in the _Few-shot Prompt_ strategy. Then the LM is finetuned using the same maximum likelihood loss as in _SFT per group_, where responses are sampled according to \(P_{g}(q)\).

### Results and Discussion

Adapting to US demographics in OpinionQA. We conducted experiments with three distinct meta-train and meta-test splits, allocating 40%, 60%, and 80% of the 22 US demographic groups respectively as the meta-train groups \(G_{\text{train}}\). The same group split was used for the _In-context Finetune_ baseline and GPO. For the other baselines that operate on a per-group basis, we calculated the alignment score for the meta-test groups and present results averaged over three random seeds. Our results are presented in Figure 3.

Figure 3: Alignment score comparisons on the OpinionQA and GlobalOpinionQA datasets with Alpaca-7b and Llama2-13b-chat as base models. Results have been averaged across three group split setups and three random seeds, with standard deviations provided.

The Alpaca-7b base model exhibits alignment scores that are similar to the alignment score of a uniform distribution. This does not necessarily imply an absence of biases, as averaging across groups could obscure biases towards certain demographics. Prior work has found that LMs may disproportionately over-represent some groups and under-represent others (Santurkar et al., 2023). However, the Llama2-13b-chat base model exhibits a lower alignment score than the uniform distribution. This might be attributed to its fine-tuning for safety, causing the model to lean towards the least harmful option, which can be seen from the qualitative examples in Appendix I. When we incorporate group information into the LMs, we deploy various prompting strategies (QA, BIO, and PORTRAY) to convey this information (see Appendix J for examples). We report results for the strategy that yields the best alignment as _LM-steered_. Given explicit group information, _Alpaca-7b-steered_ displays slightly lower relative gains compared to _Llama2-13b-steered_. Next, when provided with few-shot group preference context samples, which serve as an implicit method of conveying group information, the LM's alignment performance significantly declines compared to the base language model's performance. We hypothesize this decline might be due to the prompting format being outside the distribution of the language model's training corpus.
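A minimal sketch of the meta-training loop in Algorithm 1 is shown below, operating on pre-computed prompt-response embeddings. The module sizes, the mean-squared-error surrogate for the in-context loss of Eq. 2, and the stand-in data are illustrative assumptions rather than the exact implementation.

```python
# Minimal GPO sketch: a transformer over embedded (question, answer) viewpoints,
# trained to predict masked-out preference scores from in-context examples.
import torch
import torch.nn as nn

class GPO(nn.Module):
    def __init__(self, emb_dim: int, hidden: int = 256, layers: int = 4, heads: int = 4):
        super().__init__()
        # Each token = an embedded (prompt, response) pair plus its (possibly zeroed) score.
        self.proj = nn.Linear(emb_dim + 1, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, y, is_context):
        # x: (B, N, emb_dim) embeddings, y: (B, N) preference scores,
        # is_context: (B, N) bool mask; target scores are padded with zeros.
        y_in = torch.where(is_context, y, torch.zeros_like(y)).unsqueeze(-1)
        h = self.encoder(self.proj(torch.cat([x, y_in], dim=-1)))
        return self.head(h).squeeze(-1)       # predicted score for every viewpoint

def meta_train_step(model, optimizer, x_g, y_g):
    """One step of Algorithm 1 for a sampled training group g."""
    n = x_g.shape[1]
    m = torch.randint(1, n, (1,)).item()      # context size m ~ Uniform[1, n-1]
    perm = torch.randperm(n)
    x_g, y_g = x_g[:, perm], y_g[:, perm]     # random context/target split
    is_context = torch.zeros(x_g.shape[:2], dtype=torch.bool)
    is_context[:, :m] = True
    pred = model(x_g, y_g, is_context)
    loss = ((pred - y_g) ** 2)[~is_context].mean()   # MSE surrogate for Eq. 2 on targets
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Usage with random stand-in data (real x would be the base LLM's average token
# embedding of each concatenated question-answer pair).
model = GPO(emb_dim=4096)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x_demo = torch.randn(1, 32, 4096)    # 32 embedded viewpoints for one group
y_demo = torch.rand(1, 32)           # their preference scores
print(meta_train_step(model, opt, x_demo, y_demo))
```

A full implementation would additionally feed the context/target indicator to the transformer and apply the per-question softmax normalization described above.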
For methods involving gradient updates, we maintain a consistent number of context samples across all baselines, which is also the same number of context examples used in _Few-shot prompt_. Specifically, we use 15 samples for Alpaca-7b and 20 for Llama2-13b experiments. With gradient updates, _SFT per-group_ brings improvements compared to the gradient-free steering methods. However, training a _Reward Model_ to predict alignment scores from context samples, and subsequently using it to predict preference scores for query examples, underperforms the SFT methods. This outcome may suggest a risk of overfitting when working with a limited sample size. GPO achieves notably higher alignment scores on this dataset compared to the baselines for both the Alpaca and Llama2 base models. GPO uses the same number of context samples for adaptation, and the test groups are unseen during training. We observed performance increases when a larger number of meta-training groups was used. GPO's closest baseline, the _In-context Finetune_ method, in which the LMs are trained to infer from few-shot context samples, ranks second. On average over the two base models and the three group split settings, GPO achieves a 7.1% increase over _In-context Finetune_.

Figure 4 qualitatively illustrates the predicted alignment scores from different methods in response to an OpinionQA example concerning climate change concerns across six demographic groups. The first row depicts the ground truth group opinion distribution. Given just 15 context samples, GPO successfully adapts to match the opinion distributions of different groups. For instance, it increases the preference for option A when adapted to the group _Hindus_, while the steered LMs do not exhibit correct distribution changes. For example, _Llama2-13b-steered_ appears to be biased towards a specific option, overrepresenting it rather than accurately reflecting the distribution of the targeted group. In contrast, in demographics with a more balanced distribution like _College graduate/some postgrad_, GPO maintains this balance more consistently. This demonstrates that GPO does not merely adapt to the overall dataset group preferences, but can align to specific groups using limited context.

Figure 4: Qualitative comparison of GPO alignment with steered LMs, where each pie chart denotes the preference distribution of the group. Here, GPO uses Alpaca-7b's embedding.

Adapting to cross-nation groups in GlobalOpinionQA. The diverse and highly contrasting opinions across nations in GlobalOpinionQA present a more complex landscape than the OpinionQA dataset. Upon analyzing performance, trends in the GlobalOpinionQA dataset closely followed those observed in OpinionQA, as depicted in Figure 3. Notably, the alignment score of the Alpaca-7b base model surpasses that of the uniform distribution, while the Llama2-13b base model shows lower alignment. For Alpaca-7b _LM-base_, this could suggest that the base models might exhibit stronger alignment to certain specific countries, and this hypothesis is supported by the increased standard deviation of the Alpaca-7b _LM-base_ alignment scores, hinting at varied alignment across different countries, a phenomenon also reported in the dataset (Durmus et al., 2023). Alternatively, this could imply that the base models tend to align more with the dataset's general respondents, which naturally would exceed a uniform distribution.
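For concreteness, the Alignment Score of Eq. (3) underlying all of these comparisons can be computed as sketched below. Converting each distance into a similarity via one minus a normalized distance is an assumed convention here; the exact normalization follows Santurkar et al. (2023) and Durmus et al. (2023).

```python
# Sketch of the Alignment Score in Eq. (3): average per-question similarity
# between two opinion distributions over the same answer options.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import jensenshannon

def sim_ordinal(p1, p2):
    # OpinionQA-style ordinal options: 1D Wasserstein distance on option
    # indices, normalized by the largest possible distance (the option range).
    options = np.arange(len(p1))
    wd = wasserstein_distance(options, options, u_weights=p1, v_weights=p2)
    return 1.0 - wd / (len(p1) - 1)

def sim_nonordinal(p1, p2):
    # GlobalOpinionQA-style non-ordinal options: Jensen-Shannon distance in
    # [0, 1] (base-2 logarithm), turned into a similarity.
    return 1.0 - jensenshannon(p1, p2, base=2)

def alignment_score(P1, P2, ordinal=True):
    """P1, P2: lists of per-question preference distributions (each sums to 1)."""
    sim = sim_ordinal if ordinal else sim_nonordinal
    return float(np.mean([sim(np.asarray(p), np.asarray(q)) for p, q in zip(P1, P2)]))

# Toy example with two 4-option questions.
pred  = [[0.1, 0.2, 0.3, 0.4], [0.25, 0.25, 0.25, 0.25]]
truth = [[0.0, 0.1, 0.4, 0.5], [0.40, 0.30, 0.20, 0.10]]
print(alignment_score(pred, truth, ordinal=True))
```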
With gradient updates, the _SFT per-group_ method here surpasses the alignment performance of the steering methods, while the _Reward Model_ underperforms the SFT methods. The _In-context Finetune_ method emerges as the third-best and second-best in terms of alignment for Alpaca-7b and Llama2-13b respectively, which showcases enhanced in-context few-shot adaptation after meta-training. However, its training demands are substantially higher; it requires approximately 4.7 times more training time compared with GPO on an NVIDIA RTX A6000 to achieve the depicted performance. Averaged across both base models and the three group split scenarios, GPO posts an 8.4% improvement over the second-best baseline.

Scalability with Increasing Context Samples. We evaluate the scalability of different methods with respect to the size of the in-context examples. Figure 5 demonstrates that for Nigeria in the GlobalOpinionQA dataset, GPO enhances alignment scores with fewer than 10 preference context samples. The performance of _Few-shot Prompt_ improves with more examples but plateaus with greater variance. In comparison, _In-context Finetune_ exhibits better adaptability after meta-training than _Few-shot Prompt_, yet its alignment is still suboptimal, and the number of group context samples is limited by the context window size of the LM. Both _SFT per-group_ and _Reward Model_ show incremental improvements with added context samples; however, their sample efficiency is modest. In contrast, GPO adeptly adapts to groups in a sample-efficient manner.

Figure 5: Alignment score of various methods based on Llama2-13B with varying group context sample size. Evaluation conducted on survey questions for Nigeria from the GlobalOpinionQA dataset. The shaded region represents the standard deviation across three different seed results.

Adapting to Individual Preferences. Variations in individual opinions can manifest even within the same demographic groups (Hwang et al., 2023). Motivated by this, we assess methods to align with individual-level preferences. From the OpinionQA dataset, encompassing 15 surveys across 15 unique topics, we randomly select 100 participants from each survey, along with their responses to 30 topic-related questions. For each individual, 40% of the questions serve as context samples and 60% as queries. We use Alpaca-7b here as the base model. To steer the LM with individual information, we create an individual context from combined demographic variables, such as income, religion, and age, as demonstrated in Appendix Figure 9. Since each individual only selects one option, we calculate alignment accuracy instead, treating the option with the highest predicted preference score as the predicted option. Due to computational constraints, we confined our evaluations of the SFT per-individual and reward model methods to one survey. Since both of them operate on a per-individual basis, the training needed for about a thousand individuals made broader comparisons of the two baselines impractical. In contrast, other baselines, including in-context finetune and GPO, were assessed across all 15 survey topics. Across the full breadth of the 15 topics, GPO consistently exhibited superior performance in adapting to individual preferences relative to other baselines, as depicted in Figure 6.

Figure 6: Individual alignment accuracy comparisons from the OpinionQA dataset. **Left:** Individual alignment on the gun topic survey. **Right:** Comprehensive comparison across all 15 topics, showcasing the performance of various methods on diverse subjects. Experiments use Alpaca-7b as the base LM. Both GPO and _In-context finetune_ are meta-trained on 40% of individuals and evaluated on the remaining 60%. The horizontal red line represents the average accuracy of a random model.

## 4 Conclusion and Limitations

We introduced GPO, a novel method for few-shot alignment of LLM outputs to both individual and group preferences given little preference data.
GPO is trained on a meta-train dataset containing group-wise preference data. During inference, GPO adapts to a new test group, predicting aligned preferences given a few context examples from that group. GPO significantly outperforms prior methods as measured by the alignment score for group preference alignment, while requiring no gradient updates to the base LLM. We find that GPO is also more sample efficient, improving the alignment score significantly more than baseline methods while using fewer samples, and is effective across multiple popular open-source LLMs of various parameter and pre-training dataset scales. We also highlight a few limitations and directions for future work below:

**Opinion Datasets:** We use datasets containing opinions of various demographic groups to validate GPO. Survey data is imperfect and may not be fully representative of an entire group's population. Additionally, all the datasets that we use in this work are in English. When aligning to groups, the language that is used to collect preference data and during alignment may have a significant effect on alignment metrics, especially if the inputs and outputs are in a different language than the native language of members of a group. Future work should also investigate more challenging few-shot alignment settings, such as adapting to individual creative preferences where there may be much higher variance between group preferences.

**Multiple-choice Format:** Like many previous works, we focus on a multiple-choice format due to the availability of existing datasets and ease of quantitative evaluations. LLMs are capable of producing much more complicated long-form responses, and it is important that alignment methods can be extended to the general long-form response setting. While the GPO framework extends more broadly to different formats of LLM generations, future work should validate the effectiveness of GPO for longer-form responses, as well as the additional considerations, such as group preference feedback representation and evaluation metrics, needed to extend to the long-form setting.

**Alignment Objectives:** When aligning LLMs, multiple factors beyond group preference alignment are also very important. Aligning to group preferences may result in worse alignment for other factors, including harmlessness and helpfulness, especially if the group preference data includes examples that contradict these values. Moreover, aligning to group preferences may amplify undesirable behaviors from LLMs, including biased or harmful outputs. Future work should study the impact of group alignment on other important alignment factors and methods to reduce regressions for these factors when aligning to group preferences.

**Model Initialization:** Initializing GPO with a pretrained LM transformer backbone might offer advantages in performance. Specifically, leveraging a pretrained backbone could potentially enhance GPO's capacity to encode world knowledge, thereby improving its ability to generalize to OOD examples.
Investigating the performance and generalization benefits of this initialization approach could be a promising direction for future work.

## Ethics Statement

GPO can be used to align models to the preferences of diverse interest groups, which can provide a more positive, useful, and inclusive experience for end users of LLM applications. We acknowledge that aligning LLMs to the preferences of demographic groups can have malicious applications. For example, making LLMs more capable of producing responses that are more tailored to specific users may be misused to convince or show members of a group how to perform unethical actions. Additionally, GPO's methodology can be used to align a model to a group's preferences even if those preferences are harmful. Biased, offensive, and harmful preferences present in the meta-train or meta-test datasets may be reflected in the outputs of GPO. Future work should investigate methods for aligning LLM outputs to group preferences without amplifying harmful outputs.

## Acknowledgments

This research is supported by a Google Award for Inclusion Research and an Adobe Data Science Award. We want to thank Hrutik Bansal for insightful discussions.
2303.09254
Magnetic fields in inhomogeneous axion stars
We study the time evolution of magnetic fields in various configurations of spatially inhomogeneous pseudoscalar fields, which are the coherent superposition of axions. The new induction equation for the magnetic field, which accounts for this inhomogeneity, is derived for such systems. Based on this equation, we study, first, the evolution of two Chern-Simons (CS) waves interacting with a linearly decreasing pseudoscalar field. The nonzero gradient of the pseudoscalar field results in the mixing between these CS waves. Then, we consider the problem in a compact domain, when an initial CS wave is mirror symmetric. In this situation, the inhomogeneity of a pseudoscalar field acts as the effective modification of the $\alpha$-dynamo parameter. Thus, we conclude that the influence of a spatially inhomogeneous pseudoscalar field on the magnetic field evolution strongly depends on the geometry of the system.
Petr Akhmetiev, Maxim Dvornikov
2023-03-16T12:13:33Z
http://arxiv.org/abs/2303.09254v2
# Magnetic field evolution in spatially inhomogeneous axion structures

###### Abstract

We study the time evolution of magnetic fields in various configurations of axions with spatially inhomogeneous wavefunctions. The generalization of the induction equation for the magnetic field is derived for such systems. Based on this equation, we study, first, the evolution of two Chern-Simons (CS) waves interacting with a linearly decreasing axion wavefunction. The nonzero gradient of axions results in the mixing between these CS waves. Then, we consider the problem in a compact domain, when an initial CS wave is mirror symmetric. In this situation, the inhomogeneity of axions acts as the effective modification of the \(\alpha\)-dynamo parameter. Thus, we conclude that the influence of spatially inhomogeneous axions on the magnetic field evolution strongly depends on the geometry of the system.

## 1 Introduction

The most prominent solution of the CP problem in quantum chromodynamics (QCD) requires the existence of a pseudoscalar particle called the axion [1]. Nowadays, axions and axion-like particles are among the most reliable candidates for dark matter particles [2]. Despite numerous attempts to directly detect an axion in an experiment, these particles still remain elusive. The main experimental techniques used for the axion detection are reviewed in Ref. [3]. The role of axions in astrophysics is highlighted in Ref. [4]. As a rule, axions in the early universe are spatially homogeneous. However, the coordinate dependence of the axion field is not ruled out [5]. This can be the case when the Peccei-Quinn (PQ) phase transition happens after reheating during inflation. In the present work, we consider such a situation, in which axions form spatially confined objects. One example of such structures are axion stars [6], which are solutions of the wave equation for an axion field in curved spacetime accounting for the self-interaction of axions. Another possibility are axion miniclusters [7], which are made of virialized axions. The characteristics of the axion miniclusters were recently studied in Refs. [8, 9]. The impact of the axion inhomogeneity on the properties of cold dark matter was studied in Ref. [10]. Axions turn out to interact not only with quarks and among themselves but also with photons. The most general couplings between axions and photons are provided, e.g., in Ref. [11]. The interaction of axions and primordial magnetic fields was studied in Refs. [12, 13]. The instabilities in the axion MHD are discussed in Ref. [14]. We studied the interaction between inhomogeneous axions and helical primordial magnetic fields in Ref. [15]. We assumed in Ref. [15] that the spatial inhomogeneity of axions was isotropic, i.e. the mean value of the wavefunction gradient vanishes whereas the Laplace operator survives. In the present work, based on the results of Ref. [15], we consider the mutual evolution of a magnetic field and axions having fixed spatial distributions. It can correspond, e.g., to an axion star. For this purpose, we consider two situations: a simplified one-dimensional model and a more sophisticated geometry based on the Hopf fibration. Magnetic fields on the 3D sphere are related to the Hopf fibration and are considered in numerous works (see, e.g., Ref. [16, Ch. III] and references therein). In Ref.
[17], the simplest hyperbolic analogue of the Hopf fibration is applied to construct the magnetic equilibrium on the 3D sphere with variations of the magnetic permeability. Such hyperbolic Hopf fibrations are based on the Ghys-Dehornoy examples of geodesic flows. In Ref. [19], the Hopf fibration is used to translate solutions in a compact domain to solutions in the standard 3D space. We develop this task in Appendix A. Our approach is different from that in Ref. [19], where the magnetic vector potential \({\bf A}\) is transformed. We use the Kelvin transformation to construct the magnetic field \(\mathbb{B}\) in the Euclidean space with various magnetic permeabilities and regular boundary conditions at infinity. This approach keeps the magnetic helicity and admits magnetic energy calculations analogously to Ref. [17]. Because of the various magnetic permeabilities, the curl operator in the Euclidean space is non-standard and the calculations differ from those in Sec. 3. In our present construction, the configuration is elliptic, i.e. a scalar curvature parameter in a compact domain is positive, and higher invariants of magnetic lines are unnecessary. Following the idea of Ref. [18], the higher invariants of magnetic lines are required for fields with the \(SU(2)\) symmetry. Because the initial Chern-Simons (CS) configuration in this domain is mirror symmetric, i.e. the linking numbers for an arbitrary pair of magnetic lines are equal to zero, the helicity flow is completely characterized by a flow of the dispersion of the asymptotic ergodic Hopf invariant. In Ref. [20], the analytic formula for the dispersion of the ergodic asymptotic Hopf invariant, or the magnetic helicity density, is proposed. We recall that the Hopf invariant [16] is a density of linking numbers of pairs of magnetic lines. The magnetic helicity density has the dimension \(\mathrm{G}^{2}\mathrm{cm}^{-2}\) and is distributed over 4-dimensional configurations. Initial points of magnetic lines in a pair are translated by a prescribed magnetic flow independently. Thus, the Hopf invariant is distributed at transverse sections of pairs of magnetic lines. However, the dispersion of the Hopf invariant is insufficiently studied since its analytic expression involves an infinite-dimensional space of jets. It is not represented by a finite-dimensional integral [21]. The conception of the asymptotic ergodic Hopf invariants assumes that magnetic lines could be non-closed, and integrals over the magnetic flow are considered for quasi-periodic functions. For an infinite-dimensional space of jets, additional mathematics is required; an approach for non-periodic observables is developed in Ref. [22]. The dispersion of the magnetic helicity density, as shown in Ref. [20], has the dimension \(\mathrm{G}^{4}\mathrm{cm}^{-4}\). It is distributed over the 5-dimensional configuration space, where initial points of magnetic lines in a pair are simultaneously translated by the magnetic flow. The spatial factor \(4/5\) shows that the quadratic magnetic helicity spectrum is more energetic. A flow of the magnetic helicity density given by the axion interactions will be considered elsewhere.

The present work is organized in the following way. In Sec. 2, we briefly recall how magnetic fields evolve under the influence of inhomogeneous axions. We consider the simple one-dimensional model in Sec. 3. Then, in Secs. 4 and 5, we qualitatively study a more complicated 3D case. Finally, we conclude in Sec. 6. The Kelvin transformation is considered in Appendix A.
## 2 Electrodynamics in presence of inhomogeneous axions The equations of the axion electrodynamics in a curved spacetime have the form, \[\frac{1}{\sqrt{-g}}\partial_{\nu}(\sqrt{-g}F^{\mu\nu})+g_{a\gamma }\partial_{\nu}\varphi\tilde{F}^{\mu\nu}+J^{\mu} =0,\] \[\frac{1}{\sqrt{-g}}\partial_{\nu}(\sqrt{-g}\tilde{F}^{\mu\nu}) =0,\] \[\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}\partial^{\mu} \varphi\right)+m^{2}\varphi+\frac{g_{a\gamma}}{4}F_{\mu\nu}\tilde{F}^{\mu\nu} =0, \tag{2.1}\] where \(F_{\mu\nu}\) is the electromagnetic field tensor, \(\tilde{F}^{\mu\nu}=\frac{1}{2}E^{\mu\nu\alpha\beta}F_{\alpha\beta}\), \(E^{\mu\nu\alpha\beta}=\frac{1}{\sqrt{-g}}\varepsilon^{\mu\nu\alpha\beta}\) is the covariant antisymmetric tensor, \(\varepsilon^{0123}=+1\), \(g=\det(g_{\mu\nu})\), \(g_{\mu\nu}=\mathrm{diag}(1,-a^{2},-a^{2},-a^{2})\) corresponds to the Friedmann-Robertson-Walker (FRW) metric with the scale factor \(a(t)\), \(g_{a\gamma}\) is the coupling constant, \(J^{\mu}=(\rho,\mathbf{J}/a)\) is the external current, \(\varphi\) is the axion wavefunction, and \(m\) is the axion mass. Using the conformal variables [23], \(\mathbf{E}_{c}=a^{2}\mathbf{E}\), \(\mathbf{B}_{c}=a^{2}\mathbf{B}\), \(\rho_{c}=a^{3}\rho\), and \(\mathbf{J}_{c}=a^{3}\mathbf{J}\), and applying the results of Ref. [15], we derive the equation for \(\mathbf{B}_{c}\), \[\mathbf{B}_{c}^{\prime}=\nabla\times\left[\mathbf{b}\times(\nabla\times \mathbf{B}_{c})+\alpha\mathbf{B}_{c}-\eta_{m}(\nabla\times\mathbf{B}_{c}) \right]. \tag{2.2}\] where \(\mathbf{b}=g_{a\gamma}\nabla\varphi/\sigma_{c}^{2}\), \(\alpha=g_{a\gamma}\varphi^{\prime}/\sigma_{c}\) is the analogue of the \(\alpha\)-dynamo parameter, \(\eta_{m}=\sigma_{c}^{-1}\) is the magnetic diffusion coefficient, \(\sigma_{c}\approx 10^{2}T_{\mathrm{CMB}}\) is the conformal conductivity of ultrarelativistic plasma, \(T_{\mathrm{CMB}}=2.7\,\mathrm{K}\) is the current temperature of the cosmic microwave background radiation, and the prime means the derivative with respect to the conformal time \(\eta\) defined by \(\mathrm{d}t=a\mathrm{d}\eta\). For example, we can choose \(a=T_{\mathrm{CMB}}/T\) and \(\eta=\tilde{M}_{\mathrm{Pl}}T_{\mathrm{CMB}}^{-1}(T^{-1}-T_{\mathrm{QCD}}^{-1})\), where \(\tilde{M}_{\mathrm{Pl}}=M_{\mathrm{Pl}}/1.66\sqrt{g_{*}}\), \(M_{\mathrm{Pl}}=1.2\times 10^{19}\,\mathrm{GeV}\) is the Planck mass, and \(g_{*}=17.25\) is the number of the relativistic degrees of freedom at the QCD phase transition [24], which happens at \(T_{\mathrm{QCD}}\approx 100\,\mathrm{MeV}\). In this case \(\eta(T_{\mathrm{QCD}})=0\) and \(a_{\mathrm{now}}\equiv a(T_{\mathrm{CMB}})=1\). In Eq. (2.2), we keep only the terms linear in \(\varphi\). Note that \(\mathbf{B}_{c}\) in Eq. (2.2) always has the zero divergence, \((\nabla\cdot\mathbf{B}_{c})=0\). We add to Eq. (2.2) the equation for the evolution of \(\varphi\) which has the form, \[\varphi^{\prime\prime}+2H\varphi^{\prime}-\nabla^{2}\varphi+a^{2}m^{2}\varphi =\frac{g_{a\gamma}}{a^{2}}(\mathbf{E}_{c}\mathbf{B}_{c}), \tag{2.3}\] where \(H=a^{\prime}/a\) is the Hubble parameter. We study axions after the QCD phase transition when their mass are independent of the plasma temperature. In Ref. [15], we considered the electrodynamics of inhomogeneous axions by assuming that distribution of their wavefunction is isotropic, i.e. we supposed that only even number of derivatives, like \(\partial_{i}\partial_{j}\varphi\) etc., is nonzero. Now, our task is to study the impact of the term \(\mathbf{b}\propto\nabla\varphi\) in Eq. 
(2.2) on the evolution of the magnetic field. For this purpose, we study a spatially confined axion structure like an axion star.

## 3 One dimensional model

We consider the situation when the magnetic field is the superposition of two CS waves along the \(z\)-axis, which coincides with the radial direction, \[\mathbf{B}_{c}=\mathbf{B}_{+}+\mathbf{B}_{-},\quad\mathbf{B}_{+}=B_{+}^{(0)}(\sin kz,\cos kz,0),\quad\mathbf{B}_{-}=B_{-}^{(0)}(\cos kz,-\sin kz,0), \tag{3.1}\] where the amplitudes are functions of the conformal time, \(B_{\pm}^{(0)}=B_{\pm}^{(0)}(\eta)\), and \(k\) is the wave vector characterizing the scale of the system \(\propto k^{-1}\). Note that \((\mathbf{B}_{+}\cdot\mathbf{B}_{-})=0\), i.e. these waves correspond to different polarizations. We assume that \(\nabla\varphi\) is also along the \(z\)-axis, i.e. \(\mathbf{b}=(0,0,b)\). Thus, we get the following system of differential equations: \[{B_{+}^{(0)}}^{\prime}=k\left[kbB_{-}^{(0)}+B_{+}^{(0)}(\alpha-\eta_{m}k)\right],\quad{B_{-}^{(0)}}^{\prime}=k\left[-kbB_{+}^{(0)}+B_{-}^{(0)}(\alpha-\eta_{m}k)\right], \tag{3.2}\] for the amplitudes of the CS waves. We can see in Eq. (3.2) that the nonzero gradient of the axion, \(b\propto\partial_{z}\varphi\), mixes the independent CS waves. The distribution of the axion inside an axion star can be quite sophisticated (see, e.g., Ref. [25]). We adopt the simple model, in which the axion wavefunction has the form, \[\varphi(z,\eta)=\begin{cases}\varphi_{0}(\eta),&0<z<R,\\ \varphi_{0}(\eta)\left(1-\frac{z-R}{\Delta}\right),&R<z<R+\Delta,\\ 0,&z>R+\Delta,\end{cases} \tag{3.3}\] where \(R\) is the radius of the axion star core, \(\Delta\) is the depth of the stellar crust, which is supposed to be thin, \(\Delta\ll R\), and \(\varphi_{0}(\eta)\) is the oscillating amplitude of the wavefunction. It means that we consider the analogue of the \(\alpha\)-dynamo in a thin layer (see, e.g., Ref. [26]). Using the fact that \(\mathbf{E}_{c}=\mathbf{J}_{c}/\sigma_{c}=(\nabla\times\mathbf{B}_{c})/\sigma_{c}\) and Eq. (3.1), we get that \((\mathbf{E}_{c}\cdot\mathbf{B}_{c})=k\left(B_{+}^{(0)2}+B_{-}^{(0)2}\right)/\sigma_{c}\) in Eq. (2.3). Based on Eq. (3.3), we obtain that \(b=-g_{a\gamma}\varphi_{0}/\Delta\sigma_{c}^{2}\) and \(\nabla^{2}\varphi\equiv\partial_{z}^{2}\varphi=0\). Finally, Eqs. (2.3) and (3.2) are rewritten in the form, \[B_{+}^{(0)}{}^{\prime}= \frac{k}{\sigma_{c}}\left[-\frac{g_{a\gamma}k\varphi_{0}}{\Delta\sigma_{c}}B_{-}^{(0)}+B_{+}^{(0)}\left(g_{a\gamma}\varphi_{0}^{\prime}-k\right)\right],\] \[B_{-}^{(0)}{}^{\prime}= \frac{k}{\sigma_{c}}\left[\frac{g_{a\gamma}k\varphi_{0}}{\Delta\sigma_{c}}B_{+}^{(0)}+B_{-}^{(0)}\left(g_{a\gamma}\varphi_{0}^{\prime}-k\right)\right],\] \[\varphi_{0}^{\prime\prime}= -2H\varphi_{0}^{\prime}-a^{2}m^{2}\varphi_{0}+\frac{g_{a\gamma}k}{a^{2}\sigma_{c}}\left(B_{+}^{(0)2}+B_{-}^{(0)2}\right). \tag{3.4}\] To derive Eq. (3.4) we take into account that we are at the bottom of the stellar crust, i.e. \(z\gtrsim R\). Using the dimensionless variables \[B_{\pm}^{(0)}=\frac{k}{g_{a\gamma}}\mathcal{B}_{\pm},\quad\varphi_{0}=\frac{\sigma_{c}}{kg_{a\gamma}}\Phi,\quad\eta=\frac{\sigma_{c}}{k^{2}}\tau, \tag{3.5}\] we rewrite Eq.
(3) in the form, \[\frac{\mathrm{d}\mathcal{B}_{+}}{\mathrm{d}\tau}= -K\Phi\mathcal{B}_{-}+\mathcal{B}_{+}\left(\xi\Psi-1\right), \tag{3.6}\] \[\frac{\mathrm{d}\mathcal{B}_{-}}{\mathrm{d}\tau}= K\Phi\mathcal{B}_{+}+\mathcal{B}_{-}\left(\xi\Psi-1\right),\] (3.7) \[\frac{\mathrm{d}\Psi}{\mathrm{d}\tau}= -\beta\Psi-\mu^{2}a^{2}\Phi+\frac{1}{a^{2}}\left(\mathcal{B}_{+}^{ 2}+\mathcal{B}_{-}^{2}\right), \tag{3.8}\] where \(\Psi=\partial_{\tau}\Phi\), \(\mu=m\sigma_{c}/k^{2}\) is the dimensionless axion mass, \(K=(\Delta k)^{-1}\), and \(\beta=2H\sigma_{c}/k^{2}\). The terms \({\cal B}_{\pm}\Psi\) in the right hand side of Eqs. (3.6) and (3.7) are responsible for the dynamo amplification of the magnetic field, i.e. the magnetic field becomes unstable. That is why, following Ref. [27], we introduce the quenching factor \(\xi=\left[1+({\cal B}_{+}^{2}+{\cal B}_{-}^{2})/{\cal B}_{\rm eq}^{2}\right]^{ -1}\) in these terms. Here \({\cal B}_{\rm eq}\) is the equipartition magnetic field. The numerical solution of Eqs. (3.6)-(3.8) requires the initial conditions. First, we establish the initial condition for axions. The energy-momentum tensor of axions is \[T_{\mu\nu}=\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{g_{\mu\nu}}{2}(g^ {\lambda\rho}\partial_{\lambda}\varphi\partial_{\rho}\varphi-m^{2}\varphi^{2 }). \tag{3.9}\] Using Eq. (3.9), we get that the total axions energy density is \[\rho_{a}=T_{00}=\frac{1}{2}\left[\dot{\varphi}^{2}+\frac{1}{a^{2}}(\nabla \varphi)^{2}+m^{2}\varphi^{2}\right]. \tag{3.10}\] We suppose that \(\dot{\varphi}=0\) initially. Thus, the initial energy density is \[\rho_{a}^{(0)}\approx\frac{\varphi_{0}^{2}}{2a^{2}\Delta^{2}}(1+m^{2}a^{2} \Delta^{2}). \tag{3.11}\] We can compare this quantity with \((\varepsilon mf_{a})^{2}\), where the factor \(\varepsilon\sim 10^{-10}\) for a dilute star and \(\varepsilon\sim 1\) for a dense one (see, e.g., Ref. [28]). Thus, we get the initial condition for the axion field in terms of the dimensionless variables \[\Phi(T=T_{\rm QCD})=\frac{\alpha_{\rm em}\varepsilon ma\Delta k}{\pi\sigma_{ c}\sqrt{2(1+m^{2}a^{2}\Delta^{2})}},\quad\Psi(T=T_{\rm QCD})=0. \tag{3.12}\] Here, we use the relation, \(g_{a\gamma}\approx\frac{\alpha_{\rm em}}{2\pi f_{a}}\), where \(\alpha_{\rm em}=7.3\times 10^{-3}\) is the fine structure constant and \(f_{a}\) is the PQ constant. We suppose that the stellar crust has the width \(\Delta=0.1R\). The parameter \(k\) in Eq. (3.1) is related to radius of the core as \(k=R^{-1}.\) It means that \(K=10\) in Eqs. (3.6)-(3.8). We take that \(k=10^{-8}T_{\rm CMB}\), which is much less that the reciprocal Debye scale \(k_{\rm D}=10^{-1}T_{\rm CMB}\). The physical size of such a star, if it would evolve to the present time, is \(\sim 85\,\)km. The axion mass is taken to be \(m=10^{-3}\,\)eV, which is below the upper bound established in Ref [29]. Moreover, we use the approximate relation (see, e.g., Ref. [30]) \[\left(\frac{m}{10^{-6}\,{\rm eV}}\right)\approx 5.7\left(\frac{f_{a}}{10^{12}\,{ \rm GeV}}\right)^{-1}, \tag{3.13}\] between the axion mass and the PQ constant. We suppose that the conformal seed magnetic field is \(B_{+}^{(0)}(T=T_{\rm QCD})=4.4\times 10^{13}\,\)G and \(B_{-}^{(0)}(T=T_{\rm QCD})=0\), i.e. only one CS wave in Eq. (3.1) is present initially. We use the seed strength equal to the Schwinger value \(B_{\rm crit}=m_{e}^{2}/e\). The value of the equipartition field is taken in the range \({\cal B}_{\rm eq}\lesssim 10^{-5}\). 
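As an illustration of how the system (3.6)-(3.8) can be treated in practice, the sketch below integrates it numerically with a standard ODE solver. The frozen scale factor \(a=1\), the constant \(\beta\), and the parameter values and initial data are illustrative placeholders rather than the exact quantities quoted above.

```python
# Minimal sketch of integrating the dimensionless system (3.6)-(3.8).
# The scale factor is frozen (a = 1), beta is constant, and all numbers
# below are placeholders, not the values used to produce the figures.
import numpy as np
from scipy.integrate import solve_ivp

K, beta, mu, B_eq = 10.0, 1e-3, 1.0, 1e-5   # illustrative parameters
a = 1.0                                      # frozen scale factor

def rhs(tau, state):
    Bp, Bm, Phi, Psi = state                 # Psi = dPhi/dtau
    xi = 1.0 / (1.0 + (Bp**2 + Bm**2) / B_eq**2)            # quenching factor
    dBp = -K * Phi * Bm + Bp * (xi * Psi - 1.0)              # Eq. (3.6)
    dBm = +K * Phi * Bp + Bm * (xi * Psi - 1.0)              # Eq. (3.7)
    dPsi = -beta * Psi - mu**2 * a**2 * Phi + (Bp**2 + Bm**2) / a**2  # Eq. (3.8)
    return [dBp, dBm, Psi, dPsi]

state0 = [1e-6, 0.0, 1e-8, 0.0]              # only one CS wave is seeded initially
sol = solve_ivp(rhs, (0.0, 20.0), state0, method="RK45", rtol=1e-8, atol=1e-12)

energy = sol.y[0]**2 + sol.y[1]**2           # ~ magnetic energy density
print("final / initial magnetic energy:", energy[-1] / energy[0])
```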
We remind that our main goal is to study the behavior of the magnetic field under the influence of axions with a nonzero \(\nabla\varphi\). That is why, first, we plot the amplitudes of the CS waves \(B_{\pm}^{(0)}\) versus the temperature of the primordial plasma \(T\) for a dense axion star in Fig. 1(a) and for a dilute one in Fig. 1(b). We can see that these cases coincide qualitatively. The insets in Fig. 1 show the behavior of the magnetic field at short evolution times. They confirm our guess that a nonzero \(\nabla\varphi\) mixes two different CS waves. The magnetic fields in Fig. 1 decay quite rapidly. It happens because of both the magnetic diffusion and the magnetic field quenching in Eqs. (3.6) and (3.7). To demonstrate this fact avoiding the rapid oscillations visible in Fig. 1, we show the evolution of the total magnetic energy \(\Xi_{\rm B}\propto B_{+}^{(0)2}+B_{-}^{(0)2}\) in Fig. 2. Again, both dense and dilute axion stars are considered. Finally, in Fig. 3, we depict the axion energy, defined in Eq. (3.10), for dense and dilute axion stars. First, we can see in Fig. 3 that axions start oscillating with a frequency much lower than the magnetic field in Fig. 1. We also mention two facts: (i) axions continue oscillating even when the magnetic field decays; and (ii) the function \(\Xi_{a}\) is normalized to its maximal value, which turns out to be large, \(\Xi_{a}^{(\rm max)}\gg 1\). Such behavior of the axion energy results from the term \(\propto\left(\mathcal{B}_{+}^{2}+\mathcal{B}_{-}^{2}\right)/a^{2}\) in the right hand side of Eq. (3.8). Before the decay of the magnetic field, shown in Fig. 2, it succeeds in transmitting its energy to axions. Unfortunately, it is technically difficult to trace the evolution of the system at longer times numerically since we have two different typical frequencies: one of them is related to magnetic field oscillations, cf. Fig. 1, and another one to axion oscillations. The ratio of these frequencies is huge.

Figure 2: The evolution of the total magnetic energy density \(\Xi_{\rm B}\propto B_{+}^{(0)2}+B_{-}^{(0)2}\) normalized by its initial value. The parameters of the system are the same as in Fig. 1. Panel (a) corresponds to a dense star; panel (b) to a dilute one.

Figure 3: The axion energy density \(\Xi_{a}\), normalized by its maximal value, versus the plasma temperature. The parameters of the system are the same as in Fig. 2. Panel (a) corresponds to a dense star; panel (b) to a dilute one.

## 4 Three dimensional model based on a spherical CS wave

In this section, we qualitatively consider the evolution of 3D magnetic fields under the influence of inhomogeneous axions. We have demonstrated in Sec. 3 that the seed magnetic field decays quite rapidly; cf. Figs. 1 and 2. That is why we neglect the conformal quantities and consider the physical magnetic fields. By a spherical CS wave we mean an analogue of the CS waves in Eq. (3.1), which is defined in a compact domain rather than in the flat Euclidean space. The simplest example of a closed compact 3D domain is the standard 3D unit sphere, which is defined in the 4D Euclidean space with the coordinates \((x_{0},x_{1},x_{2},x_{3})\) by the equation: \[x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1. \tag{4.1}\] On the unit 3D sphere, we define right, \({\bf B}_{+}\), and left, \({\bf B}_{-}\), magnetic fields, as well as the following non-helical magnetic fields: \({\bf B}_{\rm A}={\bf B}_{+}-{\bf B}_{-}\) and \({\bf B}_{\rm B}={\bf B}_{+}+{\bf B}_{-}\).
The magnetic field \({\bf B}_{\rm A}\) is not a complete analogue of the vector \({\bf B}_{c}\), given by Eq. (3.1), because the vector \({\bf B}_{c}\) is a right-polarized in the case \(k>0\), whereas the vector \({\bf B}_{\rm A}\) is a mirror-symmetric. An interesting fact that \({\bf B}_{\rm A}\) is orthogonal to \({\bf B}_{\rm B}\) and the two magnetic modes are linked by the helicity integral. Let us remark that the magnetic vector \({\bf B}_{+}\) is well-known and was constructed in Ref. [16, Ex. 1.9, Ch. III], as well as in Ref. [19], using the Kelvin transformation described in Appendix A. The splitting \[{\bf B}_{+}=\frac{1}{2}[{\bf B}_{\rm A}+{\bf B}_{\rm B}], \tag{4.2}\] into the sum of two mirror-symmetric vectors is new. Unfortunately, in a compact domain, we are not able the define a regular analogue of the gradient of the axion wavefunction \({\bf b}\) as in Eq. (2.2). The simplest analogue of the vector shift is the \(k\)-spectrum of the magnetic field. Instead of the evolution Eq. (3.2) we get a nonlinear oscillator, determined by an infinite numbers of harmonics. We describe the analogue of Eq. (3.4). Then, we calculate the first-order derivative of a solution in the right-hand side of the equation using the vector \({\bf b}\) in the left hand side of the equation. The axion density energy in the case is given analogously with Eq. (2.3). This solution can be used to describe the evolution of the magnetic field in a short time, when an initial magnetic is slowly varying, whereas the axion density energy changes rapidly. Analogous situation was studied in Ref. [15]. Thus, here, we consider the situation opposite to that shown in Figs. 1 and 3. To define a spherical analogue of a CS wave we use standard MHD calculations on the Riemannian manifold given in Ref. [16, Def. 5.9, Ch. 1]. The standard 3D sphere of the unit radius is equipped by the following coordinate system \((\phi,\psi,\theta=2\chi)\), which is related with the Cartesian coordinate system in \(\mathbb{R}^{4}\) by Eq. (4.7). Let us define the curves \(\Theta_{\rm A}\) and \(\Theta_{\rm B}\) on \(\mathbb{R}^{4}\) rather than on \(\mathbb{C}^{2}\), \[\Theta_{\rm A}(\phi;x_{0},x_{1},x_{2},x_{3})=\Theta_{\rm A}(\phi; x_{0},x_{1})= \left(R_{\rm A}\cos(\phi),R_{\rm A}\sin(\phi),0,0\right),\] \[\Theta_{\rm B}(\psi;x_{0},x_{1},x_{2},x_{3})=\Theta_{\rm B}(\psi; x_{2},x_{3})= \left(0,0,R_{\rm B}\cos(\psi),R_{\rm B}\sin(\psi)\right). \tag{4.3}\] Since \(R_{\rm A}^{2}+R_{\rm B}^{2}=1\), let us put \(R_{\rm A}=\sin(2\chi)\) and \(R_{\rm B}=\cos(2\chi)\). Thus, we can compute \[{\bf B}_{\rm A}={\bf d}\Theta_{\rm A}/{\bf d}\phi,\quad{\bf B}_{ \rm B}={\bf d}\Theta_{\rm B}/{\bf d}\psi. \tag{4.4}\] Using Eq. (4.4), we define the associated differential one-forms on \(\mathbb{R}^{4}\), \[\beta_{\rm A}^{\rm R4}= B_{\rm A}^{0}{\bf d}x^{0}+B_{\rm A}^{1}{\bf d}x^{1}=-\sin( \phi)R_{\rm A}{\bf d}x^{0}+\cos(\phi)R_{\rm A}{\bf d}x^{1}. \tag{4.5}\] \[\beta_{\rm B}^{\rm R4}= B_{\rm B}^{2}{\bf d}x^{2}+B_{\rm B}^{3}{\bf d}x^{3}=-\sin( \psi)R_{\rm B}{\bf d}x^{2}+\cos(\psi)R_{\rm B}{\bf d}x^{3}. \tag{4.6}\] We now define the mapping between points on the three-sphere \(S^{3}\) and \(\mathbb{R}^{4}\), \[\Upsilon = (x_{0},x_{1},x_{2},x_{3}),\] \[x_{0} = \cos(\phi)\cos(2\chi),\] \[x_{1} = \sin(\phi)\cos(2\chi),\] \[x_{2} = \cos(\psi)\sin(2\chi),\] \[x_{3} = \sin(\psi)\sin(2\chi), \tag{4.7}\] with the coordinates of \(S^{3}\): \(\phi\in[0,2\pi)\), \(\psi\in[0,2\pi]\), and \(2\chi\in[0,\frac{\pi}{2}]\). 
We can now compute the differential one-forms \(\beta^{\rm R4}_{\rm A}\) and \(\beta^{\rm R4}_{\rm B}\) on \(S^{3}\) as the pull-back under the mapping \(\Upsilon\), \[\beta^{\rm S3}_{\rm A}= \Upsilon^{*}\beta^{\rm R4}_{\rm A}\] \[=\sin(\phi)\cos(2\chi)\sin(\phi)\cos(2\chi){\bf d}\phi+\sin(\phi) \cos(2\chi)\cos(\phi)\sin(2\chi){\bf d}2\chi\] \[+\cos(2\chi)\cos(\phi)\cos(\phi)\cos(2\chi){\bf d}\phi-\cos(2\chi )\cos(\phi)\sin(\phi)\sin(2\chi){\bf d}2\chi\] \[=\cos^{2}(2\chi){\bf d}\phi, \tag{4.8}\] \[\beta^{\rm S3}_{\rm B}= \Upsilon^{*}\beta^{\rm R4}_{\rm B}\] \[=\sin(\psi)\sin(2\chi)\sin(\psi)\sin(2\chi){\bf d}\phi-\sin(\psi) \sin(2\chi)\cos(\psi)\cos(2\chi){\bf d}2\chi\] \[+\cos(\psi)\sin(2\chi)\cos(\psi)\sin(2\chi){\bf d}\phi+\cos(\psi) \sin(2\chi)\sin(\psi)\cos(2\chi){\bf d}2\chi\] \[=\sin^{2}(2\chi){\bf d}\psi. \tag{4.9}\] The forms in Eqs. (4.8) and (4.9) obey the properties, \[{\bf d}\beta^{\rm S3}_{\rm A}= -2\cos(2\chi)\sin(2\chi){\bf d}2\chi\wedge{\bf d}\phi, \tag{4.10}\] \[{\bf d}\beta^{\rm S3}_{\rm B}= 2\cos(2\chi)\sin(2\chi){\bf d}2\chi\wedge{\bf d}\psi. \tag{4.11}\] We take their Hodge-dual \(\star{\bf d}\beta^{\rm S3}_{\rm A}\), with the volume element \({\bf d}V=\cos 2\chi\sin(2\chi){\bf d}\phi\wedge{\bf d}2\chi\wedge{\bf d}\psi\), and find \[\star{\bf d}\beta^{\rm S3}_{\rm A}= \frac{1}{2}\sin^{2}(2\chi)\cos^{2}(2\chi){\bf d}\psi=2\sin^{2}(4 \chi){\bf d}\psi, \tag{4.12}\] \[\star{\bf d}\beta^{\rm S3}_{\rm B}= -\frac{1}{2}\sin^{2}(2\chi)\cos^{2}(2\chi){\bf d}\phi=2\sin^{2}(4 \chi){\bf d}\phi. \tag{4.13}\] Finally, we get the result, \[{\bf d}\star{\bf d}\beta^{\rm S3}_{\rm A}= 2{\bf d}\beta^{\rm S3}_{\rm B}, \tag{4.14}\] \[{\bf d}\star{\bf d}\beta^{\rm S3}_{\rm B}= 2{\bf d}\beta^{\rm S3}_{\rm A}. \tag{4.15}\] Let us define smooth magnetic fields \[{\bf B}_{+}= {\bf B}_{\rm A}+{\bf B}_{\rm B}=*{\bf d}\beta^{S3}_{\rm A}+*{ \bf d}\beta^{\rm S3}_{\rm B},\] \[{\bf B}_{-}= {\bf B}_{\rm A}-{\bf B}_{\rm B}=*{\bf d}\beta^{S3}_{\rm A}-*{\bf d }\beta^{S3}_{\rm B}, \tag{4.16}\] on \(S^{3}\), where \(*\) means a vector field, which is associated with the corresponding 2-form by means of the volume form. The quantities \({\bf B}_{\rm A}\) and \({\bf B}_{\rm B}\) are defined using the corresponding 2-form in Eqs. (4.10) and (4.11). Let us define a rescaling, \({\bf B}_{\rm A}\mapsto\frac{1}{2}{\bf B}_{\rm A}\) and \({\bf B}_{\rm B}\mapsto\frac{1}{2}{\bf B}_{\rm B}\), for simplicity. Using the coordinates \((\phi,\psi,2\chi=\theta)\) on \(S^{3}\), we get that \[{\bf B}_{\rm A}=(\sin(\theta),0,0);\quad{\bf B}_{\rm B}=(0,\cos(\theta),0). \tag{4.17}\] Obviously, \({\rm curl}({\bf B}_{+})=2{\bf B}_{+}\) and \({\rm curl}({\bf B}_{-})=-2{\bf B}_{-}\). ## 5 Harmonics Let us rescale the latitude \(\theta\) into \(\chi\) by the formula, \[\theta=2\chi,\quad\theta\in\left[0,\frac{\pi}{2}\right]. \tag{5.1}\] Let us define the magnetic harmonics \({\bf B}_{1,A}\), which are called poloidal harmonics. A poloidal harmonic equals to zero at \(\theta=0\). It is convenient to remark for the calculations that the absolute value of the vectors \({\bf B}_{\rm A}\cos(\theta)\) and \({\bf B}_{\rm B}\sin(\theta)\) coincides with functions of \(\theta\)-coordinate and is defined as \(\cos(\theta)\sin(\theta)\). Directions of the vectors are perpendicular at an arbitrary point on \(S^{3}\). Let us define the axion wavefunction \(\varphi(\theta,t)\), which is a pseudoscalar analogue of Eq. (2.3), on the sphere. 
It also depends on time by the formula, \[\varphi(\theta,t)=a_{0}\cos(n\theta)\sin(\omega t), \tag{5.2}\] where the parameter \(a_{0}\) is assumed to be small. Since \(({\bf B}_{\rm A}\cdot{\bf B}_{\rm B})=0\), the function \(\varphi(\theta,t)\) obeys the equation, \[\ddot{\varphi}-\nabla^{2}\varphi+m^{2}\varphi=0, \tag{5.3}\] and the following expression is valid: \(-\omega^{2}+n^{2}+m^{2}=0\). Note that Eq. (5.3) is analogous to Eq. (2.3) where we neglect the universe expansion. Let us consider the following sequence of transformations: \[\begin{array}{c}{\bf B}_{B}\stackrel{{\rm curl}}{{ \longrightarrow}}2{\bf B}_{A}\stackrel{{\times\nabla\cos(n \theta)}}{{\longrightarrow}}-2n\sin(n\theta)\sin(\theta)\cos^{-1}(\theta){\bf B }_{\rm B}\\ \stackrel{{\rm curl}}{{\longrightarrow}}2n\frac{{\rm d}}{{ \rm d}\theta}[\tan(\theta)\sin(n\theta)]\cot(\theta){\bf B}_{\rm A}-4n\tan( \theta)\sin(n\theta){\bf B}_{\rm A}.\end{array} \tag{5.4}\] Basing on Eq. (5.4), we consider the case \(n=2\). In this situation, one has \[\begin{array}{c}{\bf B}_{\rm B}\stackrel{{\rm curl}}{{ \longrightarrow}}2{\bf B}_{\rm A}\stackrel{{\times\nabla\cos(2 \theta)}}{{\longrightarrow}}\\ -8\sin^{2}(\theta){\bf B}_{\rm B}\stackrel{{\rm curl}}{{ \longrightarrow}}16\cos^{2}(\theta){\bf B}_{\rm A}-16\sin^{2}(\theta){\bf B}_ {\rm A}=16\cos(2\theta){\bf B}_{\rm A}.\end{array} \tag{5.5}\] Let us apply the calculations in Eq. (5.5) to the formula: \[\begin{array}{c}\dot{\bf B}_{\rm B}=a_{0}\sin(\omega t)\nabla\times[\nabla \cos(n\theta)\times(\nabla\times{\bf B}_{\rm B})]+\\ \nabla\times[\alpha{\bf B}_{\rm B}]-\eta_{m}\sin(\omega t)\nabla\times[\nabla \times{\bf B}_{\rm B}]].\end{array} \tag{5.6}\] where \(\alpha\) is defined in Eq. (2.2). Since \(\alpha=\omega\alpha_{0}\cos(\omega t)\cos(n\theta)\), for a small \(\alpha_{0}\) only first-order terms are calculated. The first term is known from Eq. (5.5). The second term is given by \(\nabla\times[\alpha{\bf B}_{\rm B}]=\alpha_{0}\omega\cos(\omega t)\nabla\times [\cos(n\theta){\bf B}_{\rm B}]=\omega\alpha_{0}\cos(\omega t)[n\sin(n\theta) \sin^{-1}(\theta)\cos(\theta){\bf B}_{\rm A}+2\cos(n\theta){\bf B}_{\rm A}]\). For \(n=2\) the second term is the following: \(-2\omega\alpha_{0}\cos(\omega t)[2\cos^{2}(\theta)-\sin^{2}(\theta)]{\bf B}_{ \rm A}\). The third term has the form, \(-4\eta_{m}\sin(\omega t){\bf B}_{\rm B}\). To simplify the calculations we take \(\eta_{m}=0\). In the case \(n=2\), we have \[\dot{\bf B}_{\rm B}=2a_{0}[8\sin(\omega t)\cos(2\theta)+\omega\cos(\omega t)(2 \cos^{2}(\theta)-\sin^{2}(\theta))]{\bf B}_{A}. \tag{5.7}\] Let us transform Eq. (5.7) as \[2a_{0}[8\sin(\omega t)-\omega\cos(\omega t)]\cos(2\theta){\bf B}_{A}+2a_{0} \omega\cos(\omega t)\cos^{2}(\theta){\bf B}_{\rm A}. \tag{5.8}\] This calculation means that, in a stationary nonhelical magnetic field \({\bf B}_{\rm B}\) with a fast oscillations of the scalar axion field, a first-order variation is given by a standing wave represented by the second term in Eq. (5.8). The amplitude of the wave is determined by an amplitude of the scalar axion field. Additionally, we get a running wave in the first term in Eq. (5.8). When a frequency \(\omega\) of the axion field in Eq. (5.7) is great, the \(\alpha\)-effect, which is related with the second term in Eq. (5.7) is dominated. Oppositely, when a frequency \(\omega\) is small, the first term in this equation is dominated. In each of the two limited cases, a standing wave is presented. A running wave is presented when the first and the second term in Eq. 
(5.7) are of the same order.

## 6 Conclusion

In the present work, we have studied the simultaneous evolution of magnetic fields and a spatially inhomogeneous axionic field. The coordinate dependence of these fields has been chosen in a specific way imitating realistic situations taking place in astrophysical objects, e.g., in axion stars. We did not study the formation of the field configurations. Our main goal was to analyze the time evolution of both the magnetic field and the axion wavefunction. We have started in Sec. 2 with writing down the main equations of the axion electrodynamics. We have obtained the main Eq. (2.2) for the magnetic field evolution in the presence of a spatially inhomogeneous axion wavefunction. Equation (2.2) is a generalization of the induction equation known in MHD. Our subsequent studies have been based on this equation. Two main cases have been considered. First, in Sec. 3, we have studied the simplified 1D model where two CS waves with independent polarizations have been present. The axion wavefunction was linearly decreasing within the crust of an axion star. The main impact of the nonzero gradient of the axion wavefunction in this geometry is the mixing of the independent CS waves. It does not contribute to the magnetic field instability. We have derived the system of ordinary differential equations for the amplitudes of these fields. This system has been solved numerically. We have obtained that the magnetic field decays quite rapidly, transferring its energy to axions. This system was supposed to exist in the early universe after the QCD phase transition. However, based on the results obtained, it is unlikely that any signatures of such an object survive to the present day. Moreover, we have obtained that the evolution of dense and dilute axion stars is practically identical. Then, in Secs. 4 and 5, we have qualitatively studied a more complicated situation based on the Hopf fibration. In this approach, Eq. (2.3) in a first-order approximation of the solution is satisfied. However, the analogue of the evolution equation (3.4) is different and higher terms of a solution depend on lower ones. Our main result is that, in the chosen geometry, the inhomogeneity of axions leads to the change of the \(\alpha\)-dynamo parameter. In summary, we have found that the impact of the nonzero gradient of the axion wavefunction on the magnetic field evolution strongly depends on the geometry of the system. Of course, a comprehensive analysis should involve the simultaneous solution of both Eqs. (2.2) and (2.3). We plan to tackle this problem in one of our forthcoming works.

## Acknowledgments

We are thankful to E. Maslov for useful discussions. The work of P. Akhmetiev is supported by the Russian Science Foundation (Grant No. 21-11-00010).

## Appendix A The Kelvin transformation

The Kelvin transformation is assumed to be a stereographic projection of the sphere \(S^{3}\), outside a marked point \(pt\), into the Euclidean space \(\mathbb{R}^{3}\). This transformation is conformal, i.e. an angle between two vectors is unchanged. In Ref. [19], the Kelvin transformation was used to construct the MHD soliton in the Euclidean space with regular conditions at infinity. It is remarkable that the Kelvin transformation \(T:S^{3}\setminus\{pt\}\to\mathbb{R}^{3}\) does not modify equations.
In the case the marked point \(pt\) is the north pole on the sphere: \(\psi=\theta=0\), the magnetic mode \(\mathbf{B}_{\mathrm{A}}\) is translated into a (generalized) toroidal mode \(\mathbb{B}_{\mathrm{A}}\), and the magnetic mode \(\mathbf{B}_{\mathrm{B}}\) is translated into a (generalized) poloidal \(\mathbb{B}_{\mathrm{B}}\). We will calculate only the transformation for the magnetic modes. The Kelvin transformation \(T\) is defined by the expression, \[(\phi,\theta,\psi)\mapsto(r=\cot(\theta),\Psi,\phi),\] (A.1) where, in the target, a spherical coordinate system with the latitude \(\Psi\) and the longitude \(\phi\) is considered. The image of the \(\psi\)-coordinate is calculated explicitly. It is independent of the \(\phi\)-coordinate in the target. The absolute value of the gradient of the function \(\cot(\theta)\) determines the module of the conformal transformation \(T\). This scale factor, which is a function in the target, is denoted by \(K\). The Kelvin transformation transforms a magnetic field \(\mathbf{B}^{S3}\) on the source sphere into the corresponding magnetic field \(\mathbb{B}=\frac{T_{*}(\mathbf{B}^{S3})}{K^{3}}\) on the target Euclidean space, where \(K\) is the scale factor defined above. This function has an asymptotic \(\propto r^{2}\), where \(T_{*}\) is the translation of the vector by means of the differential of \(T\). One can see that the mode \(\mathbb{B}_{\mathrm{B}}=T_{*}(\mathbf{B}_{\mathrm{B}})K^{-3}\) is pointed along the longitude \(\phi\) and looks like a toroidal magnetic mode. The mode \(\mathbb{B}_{\mathrm{A}}=T_{*}(\mathbf{B}_{\mathrm{A}})K^{-3}\) is pointed in a perpendicular direction along \((r,\Psi)\)-coordinates and looks like a poloidal one. The scale factor \(K\) is related to the magnetic volume form \(K^{3}\mathbf{d}x\) in \(\mathbb{R}^{3}\), which has an asymptotic \(\propto r^{6}\) along the radius. Both modes \(\mathbb{B}_{\mathrm{A}}\) and \(\mathbb{B}_{\mathrm{B}}\) have the asymptotic \(\propto r^{-4}\). The domain at the infinity, where the magnetic volume form is great, is the dielectric with a very large magnetic permeability. The operator \(\mathrm{curl}_{K}\) with a domain with various magnetic permeability corresponds to the curl operator on \(S^{3}\), which is translated using the Kelvin transformation by the formula, \[\mathrm{curl}_{K}:\mathbb{B}\mapsto\mathrm{curl}(K\mathbb{B}),\] (A.2) where curl is the standard vorticity operator in the flat homogeneous Euclidean space. Therefore, the operator \(\mathrm{curl}_{K}\) in the domain with various magnetic permeability keeps the asymptotic of a magnetic mode over the radius and is related with the standard vorticity operator by the formula: \[\mathrm{curl}:\mathbb{B}\mapsto\mathrm{curl}_{K}(K^{-1}\mathbb{B}).\] (A.3) In particular, at the infinity, where \(K=K(r)\to+\infty\), the magnetic modes are almost potential and determine no currents. Magnetic helicity is kept by \(T\). The helicity is calculated as a improper integral in the form, \[\chi_{\mathbb{B}}=\int_{\mathbb{R}^{3}}(\mathbb{A}\cdot\mathbb{B})K^{3}\mathbf{ d}x=\chi_{\mathbf{B}}=\int_{S^{3}}(\mathbf{A}\cdot\mathbf{B})\mathbf{d}V,\] (A.4) where \(\mathrm{curl}_{K}(\mathbb{A})=\mathbb{B}\) and \(\mathrm{curl}(K\mathbb{A})=\mathrm{curl}_{K}(\mathbb{A})=K\mathrm{curl}(K^{-3}T_{ *}\mathbb{A})+\nabla(K)\times K^{-3}T_{*}(\mathbb{A})=(\mathbb{B}-\nabla(K) \times\mathbb{A})+\nabla(K)\times\mathbb{A}\). ### The Arnold inequality The Arnold inequality [16, Theorem 1.5, Ch. 
III] for a magnetic field on \(S^{3}\) is the following: \[\int_{S^{3}}(\mathbb{B}\cdot\mathbb{B})\mathbf{d}V\geq\frac{1}{2R}\int_{S^{3}}(\mathbb{A}\cdot\mathbb{B})\mathbf{d}V=\frac{1}{2R}\chi_{\mathbb{B}},\] (A.5) where \(\frac{1}{2R}\) is the inverse of the smallest eigenvalue of the curl operator, \(R\) is the radius of the sphere, which is a scale factor, and \(\chi_{\mathbb{B}}\) is the magnetic helicity. The Arnold inequality is generalized for a non-bounded conductive domain with the variation of the magnetic permeability \(K(r)\), \[\int_{\mathbb{R}^{3}}(\mathbb{B})^{2}K^{3}\mathbf{d}x=\int_{S^{3}}(\mathbb{B})^{2}K^{-1}\mathbf{d}V\geq\max_{S^{3}}(K^{-1})\int_{S^{3}}(\mathbb{B})^{2}\mathbf{d}V\geq\frac{1}{2R}\chi_{\mathbb{B}},\] (A.6) where \(r\) is the distance to the origin and \(R\) is the scale of the inhomogeneity of the magnetic permeability.
2310.07341
Spectroscopy of hadrons with heavy quarks from lattice QCD
Lattice QCD results on hadrons with heavy quarks are briefly reviewed. The focus is on the spectrum of conventional and exotic hadrons. Structure of certain conventional hadrons is addressed as well.
Sasa Prelovsek
2023-10-11T09:38:51Z
http://arxiv.org/abs/2310.07341v1
# Spectroscopy of hadrons with heavy quarks from lattice QCD

###### Abstract

Lattice QCD results on hadrons with heavy quarks are briefly reviewed. The focus is on the spectrum of conventional and exotic hadrons. The structure of certain conventional hadrons is addressed as well.

+ Footnote †: Only a few references are cited due to the page limit. Other references are listed in the slides.

## 1 Introduction

Experiments have revealed that hadrons with the following minimal contents exist: mesons \(\bar{q}q\), baryons \(qqq\), tetraquarks \(\bar{q}q\bar{q}q\), pentaquarks \(\bar{q}qqqq\) and hybrid mesons \(\bar{q}Gq\). The first two sectors correspond to conventional hadrons, while the last three are referred to as exotic hadrons. We will briefly review lattice results on some of these states [1].

## 2 Hadron spectroscopy with Lattice QCD

The _spectrum of hadrons_ (below, near, or above threshold) is extracted from the energies \(E_{n}\) of QCD eigenstates \(|n\rangle\) on a finite and discretized lattice in Euclidean space-time. The eigen-energies \(E_{n}\) are determined from the time-dependence of two-point correlation functions \(\langle O_{i}(t_{E})O_{j}^{\dagger}(0)\rangle=\sum_{n}\langle O_{i}|n\rangle\ e^{-E_{n}t_{E}}\langle n|O_{j}^{\dagger}\rangle\), where the operators \(O\) create/annihilate the hadron system with a given quantum number of interest. The masses of strongly stable hadrons well below threshold are obtained as \(m\!=\!E_{n}|_{\vec{p}=0}\). These have already been determined and agree well with the experiment, e.g. [1, 2]. The masses of hadrons near threshold and hadronic resonances have to be inferred from the scattering of two hadrons \(H_{1}H_{2}\), which is encoded in the scattering amplitude \(T(E)\). The simplest example is one-channel scattering in partial wave \(l\), sketched in Fig. 1. Luscher has shown that the energy \(E\) of a two-hadron eigenstate in finite volume renders \(T(E)\) at that energy in infinite volume [3]. This relation leads to \(T(E)\) for real \(E\), which is then analytically continued to complex energies. A pole in \(T(E)\) indicates the presence of a state, while its position renders its mass \(m={\rm Re}(E)\) and the width \(\Gamma=-2\,{\rm Im}(E)\). A resonance corresponds to a pole away from the real axis. A bound state corresponds to a pole below the threshold: the state is referred to as a bound state if the pole occurs for positive imaginary momenta \(p=i|p|\) and as a virtual bound state if it occurs for \(p=-i|p|\), where \(p\) denotes the magnitude of the 3-momentum in the center-of-mass frame. The majority of the scattering studies are still performed at \(m_{\pi}>m_{\pi}^{phy}\) and at a single lattice spacing. The resonances that decay via several strong decay channels \(R\to H_{1}H_{2},\ H_{1}^{\prime}H_{2}^{\prime},..\) have to be extracted from the coupled-channel scattering (Fig. 2). The scattering matrix for two coupled channels contains three unknown functions of energy. It is customary to parametrize their energy dependence in order to extract them from the eigen-energies using Luscher's formalism. The formalism and progress in addressing decays \(R\to H_{1}H_{2}H_{2}\) were reviewed at the last edition of this conference [4].

## 3 Spectroscopy of various hadron sectors

The majority of the discovered exotic hadrons contain heavy quarks, as those are more likely to form quasi-bound states due to small kinetic energies. Most of them decay strongly, and the theoretical challenge to study them increases with the number of decay channels.
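As a toy illustration of the relation between the correlator of Sec. 2 and the eigen-energies, the snippet below builds a synthetic two-point function and extracts \(E_{0}\) from the effective-mass plateau \(\ln[C(t)/C(t+1)]\). The spectrum and overlap factors are invented stand-ins for actual lattice data.

```python
# Toy example: extract the ground-state energy from a Euclidean two-point
# correlator C(t) = sum_n |<O|n>|^2 exp(-E_n t), as in Sec. 2.
import numpy as np

E = np.array([0.50, 0.90, 1.40])        # eigen-energies in lattice units (synthetic)
A = np.array([1.00, 0.60, 0.30])        # |<O|n>|^2 overlap factors (synthetic)
t = np.arange(0, 25)

C = (A[None, :] * np.exp(-np.outer(t, E))).sum(axis=1)   # C(t)

# Effective mass m_eff(t) = ln[C(t)/C(t+1)] tends to E_0 at large t,
# once excited-state contributions ~ exp(-(E_n - E_0) t) have died out.
m_eff = np.log(C[:-1] / C[1:])
print("m_eff at t = 5, 10, 20:", m_eff[[5, 10, 20]].round(4), " (E_0 =", E[0], ")")
```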
**Charmonium-like states \(\bar{\bf c}\bf{c}\bf{,\bar{c}}\bar{\bf q}\bf{q^{\prime}}\bf{,\bar{c}}\bf{c}\bf{u}\bf{d}\)** The spectrum of charmonium-like states with \(I=0\) was extracted from the coupled channels \(\bar{D}D-\bar{D}_{s}D_{s}\) at \(m_{\pi}\simeq 280\) MeV [5] (Fig. 3). All the states except for two (indicated by magenta arrows) appear to be conventional charmonia \(\bar{c}c\). In addition, two exotic scalar states are predicted near both thresholds. The heavier one has a large coupling to \(\bar{D}_{s}D_{s}\) and a small coupling to \(\bar{D}D\) - it likely corresponds to a state \(X(3960)\) composed of \(\bar{c}s\bar{s}c\) recently discovered by LHCb [6]. The two additional scalars were not found at \(m_{\pi}\simeq 390\) MeV in [7]. Figure 1: Extracting resonances and near-threshold bound states from one-channel scattering. Figure 2: A resonance that decays via two channels \(R\to H_{1}H_{2},\ H_{1}^{\prime}H_{2}^{\prime}\) has to be inferred from the scattering of two coupled channels. A candidate for pentaquark \(P_{c}=\bar{c}cud\) with \(J^{P}=1/2^{-}\) was found \(6\pm 3\) MeV below threshold \(\bar{D}\Sigma_{c}\) threshold [8]. It appears as a bound state in one-channel scattering amplitude \(\bar{D}\Sigma_{c}\) (Fig. 4), where the lower-lying channel \(J/\psi p\) was omitted. **Hadrons with two heavy quarks: \(\mathbf{Q}\mathbf{Q}\bar{\mathbf{q}}\mathbf{q}^{\prime}\)** The \(bb\bar{u}\bar{d}\) and \(bb\bar{s}\bar{d}\) tetraquarks with \(J^{P}\!=\!1^{+}\) are expected to reside significantly below strong decay thresholds (Fig. 5). This is a reliable conclusion based on a number of lattice simulations and model-based calculations (listed in the slides). The hadron \(bb\bar{u}\bar{d}\) with such a deep binding is expected to have a small size, which indicates the dominance of \([bb]^{S=1}_{3}[\bar{u}\bar{d}]^{S=0}_{3c}\). So far, this the only tetraquark where lattice finds a strong support for the dominance of the diquark antidiquark Fock component. Tetraquarks \(QQ\bar{q}\bar{q}^{\prime}\) (\(Q=c,b\),\(q\!=\!u,d,s\)) with other flavors and spin-parities are expected near or above strong decay thresholds \(H_{1}H_{2}\), as suggested also by Fig. 5. In order to prove the existence of such a state, one needs to extract the scattering matrix and establish a pole in it as shown in Fig. 1. Figure 3: Poles and masses for a charmonium-like system with \(I=0\) from lattice [5]. LHCb discovery of a \(X(3960)\)[6] composed of \(\bar{c}s\bar{s}c\). Possible binding mechanism. The doubly charm tetraquark \(T_{cc}=cc\bar{u}\bar{d}\) was discovered by LHCb just 0.4 MeV below \(DD^{*}\) threshold [14], it has \(I\!=\!0\) and most likely \(J^{P}\!=\!1^{+}\). The lattice results on \(DD^{*}\) scattering from three simulations at different \(m_{\pi}\) are shown in Fig. 6. All simulations find significant attraction; the attraction decreases with increasing \(m_{\pi}\). This implies that \(T_{cc}\) would-be bound state from experiment converts to virtual bound state or a resonance at heavier \(m_{\pi}\). Virtual bound state poles are indeed found in simulations [11, 12, 13] when assuming effective range approximation. Relaxing this approximation and taking into account the effect of left-hand cut from one-pion exchange leads to a pair of virtual bound states or a resonance [15]. The long-range potential is dominated by \(\pi\pi\) exchange in [11]. The dominant attraction in this channel is attributed to \(\rho\) exchange in [13]. 
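The bound-state versus virtual-bound-state classification invoked for \(T_{cc}\) above can be made concrete within the effective range approximation, where the amplitude behaves as \(t(k)\propto 1/(k\cot\delta(k)-ik)\) with \(k\cot\delta=-1/a+(r/2)k^{2}\). The sketch below finds the poles of this textbook parametrization and classifies the near-threshold one; the scattering parameters are arbitrary illustrative numbers, not the parametrizations used in the quoted lattice analyses.

```python
import numpy as np

def ere_poles(a, r):
    # Poles of t(k) ~ 1/(k*cot(delta) - i*k) with k*cot(delta) = -1/a + (r/2) k^2,
    # i.e. the roots of the quadratic (r/2) k^2 - i k - 1/a = 0 in complex k.
    return np.roots([r / 2.0, -1j, -1.0 / a])

def classify(k, tol=1e-9):
    if abs(k.real) > tol:
        return "resonance (pole off the imaginary k-axis)"
    return "bound state (k = +i|k|)" if k.imag > 0 else "virtual bound state (k = -i|k|)"

# Illustrative effective-range parameters in arbitrary units (not lattice results):
for a, r in [(6.0, 2.0), (-6.0, 2.0)]:
    shallow = complex(min(ere_poles(a, r), key=abs))   # near-threshold pole, where the ERE is trusted
    print(f"a={a:+.1f}, r={r:+.1f} -> k_pole={shallow:.3f} -> {classify(shallow)}")
```

A positive scattering length gives a pole on the positive imaginary axis (bound state), while flipping its sign pushes the shallow pole to the negative imaginary axis (virtual state), mirroring how the \(T_{cc}\) pole moves as \(m_{\pi}\) is increased.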
**Hadrons with a single heavy quark** The charmed scalar mesons would form a \(SU(3)\) flavor triplet in Fig. 7a according to the quark model. However, a new paradigm is supported by the effective field theories based on HQET and ChPT, combined with the lattice results as well as the experimental data (see slides for many references). According to this paradigm, the spectrum features \(c\bar{q}\) as well as \(c\bar{q}\)\(\bar{q}q\) Fock components (\(q=u,d,s\)). The latter decomposes to the multiplets \(\bar{3}\oplus 6\oplus 5\) in the \(SU(3)\) flavor limit. The attractive interactions within the anti-triplet and the sextet suggest the existence of hadrons with flavors indicated by cyan circles in Fig. 7b, with two pair of poles for \(I=1/2\) charmed mesons. The lower one resides at \(2.1\!-\!2.2\) GeV in agreement with the lattice simulations and it is a natural partner of \(D_{s0}^{*}(2317)\). The heavier pole at \(2.4\!-\!2.5\) GeV is suggested by the EFT re-analysis of the lattice and experimental data. Figure 5: Left: The binding energies of \(bb\bar{u}\bar{d}\) and \(bb\bar{u}\bar{s}\) with \(J^{P}=1^{+}\) from lattice (see references in the slides). Right: The dependence of the binding energy on \(m_{b}\) and \(m_{u,d}\)[9, 10]. Figure 6: The \(DD^{*}\) scattering in \(T_{cc}\) channel and the pole locations from lattice [11, 12, 13]. The simulation of \(D^{*}\pi\) scattering finds one axial charmed meson dominated by s-wave and the other by d-wave coupling to \(D^{*}\pi\)[16] (Fig. 8). This is in line with HQET prediction and significantly different experimental decay widths. **Bottomium-like states \(\bar{b}\)b, \(\bar{b}\)Gb, \(\bar{b}\)b\(\bar{q}\)q\({}^{\prime}\)** These systems can be studied using relativistic, non-relativistic or static b-quarks. Here I focus on the last option, where two heavy quarks and additional light degrees of freedom are investigated via the Born-Oppenheimer approximation. The eigen-energies at fixed distance between heavy quarks render the potential \(V(r)\). Motion of heavy quarks within this potential is then studied with Schrodinger-type equation. The aim is to determine whether bound states or resonances exist. The static potential for \(\bar{b}b\) system with \(I=0\) that accounts also for the coupling to a pair of heavy mesons [17] is shown in Fig. 9b. Coupled-channel Schrodinger equation with analogous potential (from an earlier calculation [19]) renders the poles related to bottomonium-like states and their composition in terms of \(\bar{b}b\), \(\bar{B}B\) and \(\bar{B}_{s}B_{s}\)[18] (Figs. 9c,d). These are dominated by the conventional Fock component \(\bar{b}b\), except for state \(n\!=\!5\), which is likely related to unconventional \(\Upsilon(10750)\) discovered by Belle. The observed \(Z_{b}\) resonances with flavor content \(\bar{b}b\bar{d}u\) are challenging for rigorous treatment since the lowest decay channel is \(\Upsilon_{b}\pi\), while they reside at the higher threshold \(B\bar{B}^{*}\). This was taken into account in the extraction of the potential between \(B\) and \(\bar{B}^{*}\) in Fig. 10, which is attractive at small distances [27, 20, 28]. This attraction is likely responsible for the existence of the exotic \(Z_{b}\). Figure 8: Poles related to charmed mesons with \(J^{P}=1^{+},2^{+}\) from [16]. Figure 7: Scalar charmed mesons according to the quark model (a) and the new paradigm (b). The excited potentials for \(\bar{b}b\) with certain spin-parities in Fig. 11 are relevant for hybrid mesons \(\bar{b}Gb\). 
The masses from these potentials within the Born-Oppenheimer approach [23, 24] agree with those obtained using relativistic \(b\)-quarks [26].

## 4 Structure of hadrons with heavy quarks

The charge distribution of charmonia was probed via EM and other currents in [29], while various decay constants that probe the wave function were extracted in [2]. The mass decomposition with respect to various terms of the QCD Hamiltonian was determined for charmed baryons in [30] (Fig. 12).

## 5 Looking ahead

The drawback of spectroscopy on a Euclidean lattice is the suppressed contribution \(e^{-E_{n}t_{E}}\) of the excited states. The evolution of QCD systems in real Minkowski time \(e^{-iE_{n}t_{M}}\) is one of the long-term goals for quantum computers. Such time evolution for tetraquark and pentaquark systems has already been studied in one-dimensional QCD with one quark flavor on a quantum computer [31] (Fig. 13).

Figure 9: (a,b) The static potential of the quarkonium system that accounts also for the coupling to a pair of heavy mesons [17]. (c,d) Poles and composition of the bottomonium-like states [18] from an analogous static potential [19].

Figure 10: The attractive static potential between \(B\) and \(\bar{B}^{*}\)[20] is likely responsible for the existence of \(Z_{b}\simeq\bar{b}b\bar{d}u\)[21].

## 6 Conclusions

Experiments have provided great discoveries of new conventional as well as around thirty exotic hadrons. I have reviewed the theoretical challenge of understanding the spectroscopic properties of various hadron sectors from lattice QCD. This approach renders the masses of hadrons that are strongly stable, as well as of most hadrons that are slightly below the strong decay threshold or decay strongly via one decay channel. The theoretical challenge increases with the number of open decay channels. It seems impossible to address high-lying states like \(Z_{c}(4430)\) with current lattice methods, while many interesting physics conclusions are already available for certain lower-lying states.

**Acknowledgments** I acknowledge the support from ARRS research core funding No. P1-0035.
2305.03391
Compressing audio CNNs with graph centrality based filter pruning
Convolutional neural networks (CNNs) are commonplace in high-performing solutions to many real-world problems, such as audio classification. CNNs have many parameters and filters, with some having a larger impact on the performance than others. This means that networks may contain many unnecessary filters, increasing a CNN's computation and memory requirements while providing limited performance benefits. To make CNNs more efficient, we propose a pruning framework that eliminates filters with the highest "commonality". We measure this commonality using the graph-theoretic concept of "centrality". We hypothesise that a filter with a high centrality should be eliminated as it represents commonality and can be replaced by other filters without affecting the performance of a network much. An experimental evaluation of the proposed framework is performed on acoustic scene classification and audio tagging. On the DCASE 2021 Task 1A baseline network, our proposed method reduces computations per inference by 71\% with 50\% fewer parameters at less than a two percentage point drop in accuracy compared to the original network. For large-scale CNNs such as PANNs designed for audio tagging, our method reduces 24\% computations per inference with 41\% fewer parameters at a slight improvement in performance.
James A King, Arshdeep Singh, Mark D. Plumbley
2023-05-05T09:38:05Z
http://arxiv.org/abs/2305.03391v1
# Compressing Audio CNNs with Graph Centrality Based Filter Pruning ###### Abstract Convolutional neural networks (CNNs) are commonplace in high-performing solutions to many real-world problems, such as audio classification. CNNs have many parameters and filters, with some having a larger impact on the performance than others. This means that networks may contain many unnecessary filters, increasing a CNN's computation and memory requirements while providing limited performance benefits. To make CNNs more efficient, we propose a pruning framework that eliminates filters with the highest "commonality". We measure this commonality using the graph-theoretic concept of _centrality_. We hypothesise that a filter with a high centrality should be eliminated as it represents commonality and can be replaced by other filters without affecting the performance of a network much. An experimental evaluation of the proposed framework is performed on acoustic scene classification and audio tagging. On the DCASE 2021 Task 1A baseline network, our proposed method reduces computations per inference by 71% with 50% fewer parameters at less than a two percentage point drop in accuracy compared to the original network. For large-scale CNNs such as PANNs designed for audio tagging, our method reduces 24% computations per inference with 41% fewer parameters at a slight improvement in performance. James A King\({}^{+}\), Arshdeep Singh\({}^{+}\), Mark D. Plumbley, Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK. Convolutional Neural Network, Pruning, Audio classification, PANNs, DCASE. ## 1 Introduction Convolutional neural networks (CNNs) have shown promising results in a variety of audio tasks, including speech recognition [1], music analysis [2] and audio classification [3, 4]. Typically, CNNs have many layers, such as convolutional and pooling layers. Each layer consists of parameters, including weights, biases and filters, which are all learned through an optimisation process for the given problem. CNNs often have many parameters, requiring a large amount of memory storage. Moreover, the majority of computations during inference are performed by convolution operations, which involve sliding a set of filters over the input data to create a feature map; this computation is especially time-consuming when dealing with large input data and a large number of filters. While CNNs are highly effective in solving non-linear complex tasks [5], the requirement of high memory and heavy computations during inference is a bottleneck to deploying them on resource-constrained devices such as mobile phones or Internet of things (IoT) devices [6]. Moreover, the prolonged need for large computations in machine learning (ML) models contributes heavily to CO\({}_{2}\) emissions, making larger CNNs environmentally unfriendly. For instance, a modern GPU (e.g. NVIDIA GPU RTX-2080 Ti) used to train ML models for 48 hours generates the equivalent CO\({}_{2}\) emitted by an average car driven for 13 miles1. Thus, the issue of reducing the computations and memory requirement for CNNs has drawn a significant amount of attention in the research community. Footnote 1: Machine learning CO\({}_{2}\) estimator Recent efforts towards compressing CNNs involve filter pruning methods [7], which eliminate unimportant filters which contribute the least to performance. These methods measure the importance of the filters using either active or passive methods. 
Active methods [8, 9] use a dataset to generate feature maps from the filters and then measure the importance of the filters using various measures such as entropy, rank or the average percentage of zeros on feature maps. Some active methods even identify important filters during the training of CNNs by involving extra parameters such as soft mask to each filter and then jointly optimise the CNN parameters and soft mask [10, 11]. Conversely, passive methods are data-free, using only the filters to quantify their own importance. Therefore, passive methods are easier to apply and require significantly less storage than active filter pruning, something particularly important for larger models. Passive filter pruning methods are either norm-based [12], which computes \(l_{1}\) or \(l_{2}\) norm of the filters to define their importance, or similarity-based [13], where similar filters are removed. Norm-based methods are based on a smaller-norm-less-important criterion and eliminate filters with the smaller norm. However, eliminating smaller norm filters may ignore the diversity learned in the network and redundancy in the high-norm filters. Similarity-based pruning methods capture this diversity, eliminating redundant filters based on the pairwise similarity between filters [13]. Such similarity-based methods give better performance compared to norm-based methods. However, the pairwise similarity method eliminates redundant filters by considering only the similarity between pairs, where the closest filters might differ. Ignoring such filters may reduce the useful diverse information learned in the network. In this paper, we propose a passive filter pruning method where a filter is considered redundant if others can replace it. To measure its redundancy, we consider filters as nodes in a graph and determine the centrality of each node. A high node centrality represents a node with high commonality among any other two nodes. By ranking the centrality, we better understand the effect removing such common filters would have in the network over the previous method [13], where commonality is measured only within the closest pairs of filters without considering commonality with other filters. The rest of this paper is organised as follows. Section 2 presents a background of pruning methods. The proposed method to identify similar filters is described in Section 3. Next, Section 4 includes the experimental setup. The results and analysis are included in Section Section 5. Finally, the discussion and conclusions are presented in Section 6 and Section 7, respectively. ## 2 Background and Related Work ### The Pruning problem The filter pruning problem is one of the main ways to reduce a network size by eliminating some filters while maintaining performance. Given a CNN with \(L\) convolutional layers, each with a set of filters \(\mathcal{F}=\{F_{1},F_{2},\dots,F_{n}\}\), the aim to find a pruned CNN having layer \(L^{\prime}\) with a reduced set of filters \(\mathcal{F}^{\prime}\subseteq\mathcal{F}\) such that \(p\%\) of filters are removed. \[\min_{\mathcal{F}^{\prime}\subseteq\mathcal{F},|\mathcal{F}^{\prime}|=\left[(1- p)|\mathcal{F}|\right]}\mathcal{C}(L^{\prime})\quad\text{s.t.}\quad\mathcal{P}(L^{ \prime})\gtrapprox(1-\epsilon)\mathcal{P}(L) \tag{1}\] where \(\epsilon\) is a small tolerance value. So \(|\mathcal{F}^{\prime}|=\left\lceil(1-p)|\mathcal{F}|\right\rceil\), and the performance metric \(\mathcal{P}(L^{\prime})\) remains close to \(\mathcal{P}(L)\). 
It is also not uncommon to find \(\epsilon\) having a negative value, so that pruning improves performance. To select a few important filters, \(\mathcal{F}^{\prime}\), the importance of the filters is computed using active or passive filter pruning methods. After obtaining the pruned network, a fine-tuning process is performed to regain most of the performance. This fine-tuning process re-trains the pruned network on the original data.

### Methods to compute CNN filter importance

**Active methods:** Active filter pruning methods are data-driven, which allows the evaluation of \(\mathcal{P}(L^{\prime})\) to influence which filters are pruned or retained. For example, previous attempts compute the importance of filters during the training process by jointly learning the CNN parameters and extra parameters, such as a soft mask associated with each of the CNN parameters [11, 14]. However, these methods add up to 10 times more training time [15, 16] and are computationally expensive. Other active filter pruning methods generate features corresponding to a set of examples and then apply metrics such as rank [8], energy [9], or the average percentage of zeros [17] to quantify the importance of filters, or similarity measures such as clustering [18] on feature maps to eliminate filters corresponding to redundant feature maps. However, these methods use extra memory resources to obtain feature maps and involve complex training procedures to optimize extra parameters such as a soft mask, particularly when a large-scale pre-trained network is used for downstream tasks.

**Passive methods:** Passive approaches are data-free and do not evaluate \(\mathcal{P}(L^{\prime})\) during the filter selection process. Therefore, the passive approach is much more scalable and efficient than the active methods. For example, Li et al. [12] compute the \(l_{1}\)-norm of the filters, the absolute sum of the filter parameters, to quantify filter importance and eliminate low-norm filters to obtain the pruned network. He et al. [19] compute the geometric median of the filters and eliminate the redundant filters which are closest to the geometric median of all filters. However, these methods are based on the smaller-norm-less-important criterion, where a filter is considered less important if it has a low \(l_{1}\)/\(l_{2}\)-norm measured from the origin or from the geometric median of the filters, ignoring the redundancy in filters with high norm. Moreover, the diversity in selecting filters is also ignored, as only high-norm filters would be considered important. Other methods capture this diversity using similarity-based measures. For example, Kim et al. [20] perform clustering on filters, select a filter from each cluster as important, and eliminate the other filters. Singh et al. [13] measure similarity between filters by computing a pairwise cosine distance for all filters and then eliminating a filter from each pair of similar filters.

### Optimal Pruning Heuristic

We need a good heuristic indicating the solution's quality when solving the passive pruning problem. We use the heuristic that the filters removed should be the least similar to those kept so we resolve the most redundancy. To get the optimal set of filters \(\mathcal{F}_{\text{pruned}}\) with \(|\mathcal{F}_{\text{pruned}}|=\left\lceil(1-p)|\mathcal{F}|\right\rceil\):

\[\mathcal{F}_{\text{pruned}}=\operatorname*{argmin}_{\mathcal{F}^{\prime}\subseteq\mathcal{F},\,|\mathcal{F}^{\prime}|=\left\lceil(1-p)|\mathcal{F}|\right\rceil}\sum_{i}\sum_{j}\mathit{sim}(\mathcal{F}^{\prime})_{i,j} \tag{2}\]

where \(\mathit{sim}\) measures how similar two filters are. In this way, we have modelled the total similarity present in a matrix. Implementing an algorithm to calculate this function exactly would require iterating over all possible subsets of \(\mathcal{F}\), which is infeasible even for a relatively small number of filters.
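To make the passive importance measures above concrete, the sketch below computes three scores for the filters of a single convolutional layer: the \(l_{1}\)-norm criterion of [12], a geometric-median-style score in the spirit of [19] (sum of distances to all other filters), and a pairwise cosine-similarity score as in [13], followed by the keep-the-top-\(\lceil(1-p)|\mathcal{F}|\rceil\) selection of Eq. (1). The array shapes, the random toy filters, and the specific selection rule are illustrative assumptions, not the cited papers' exact implementations.

```python
import numpy as np

def passive_importance(filters):
    """filters: array of shape (n_filters, w, h, c) for one conv layer."""
    n = filters.shape[0]
    flat = filters.reshape(n, -1)

    # (a) l1-norm criterion [12]: smaller norm -> less important.
    l1_score = np.abs(flat).sum(axis=1)

    # (b) Geometric-median-style criterion in the spirit of [19]:
    # a filter close to all the others (small total distance) is redundant.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    gm_score = dists.sum(axis=1)

    # (c) Pairwise cosine similarity as used in [13]:
    # a filter highly similar to the others is a candidate for removal.
    unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim_score = (unit @ unit.T).sum(axis=1) - 1.0   # exclude self-similarity

    return l1_score, gm_score, sim_score

# Keep the ceil((1-p)*n) filters judged most important by a chosen score
# (here: highest l1-norm), mirroring Eqs. (1) and (2).
rng = np.random.default_rng(0)
l1, gm, sim = passive_importance(rng.normal(size=(8, 3, 3, 16)))
p = 0.25
keep = np.argsort(-l1)[: int(np.ceil((1 - p) * len(l1)))]
print("kept filter indices:", sorted(keep.tolist()))
```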
## 3 Proposed Method

The proposed method prunes a CNN layer-by-layer. Each layer contains a set of \(n\) filters \(\mathcal{F}\), where each \(F\in\mathcal{F}\) is a 3D tensor of shape (\(w\times h\times c\)). We start by flattening each filter \(F\) into \(F^{\text{flat}}\) of shape (\(wh\times c\)) with \(F_{(i-1)h+j,k}^{\text{flat}}=F_{i,j,k}\). We then perform singular value decomposition (SVD) on each flattened filter \(F^{\text{flat}}\) to find its best rank-1 approximation under the Frobenius norm; this yields the most significant singular value \(\sigma_{1}\) and the corresponding left and right singular vectors \(l_{1},r_{1}\). We can now approximate the original filter with \(\hat{F}_{i,j}=\frac{(\sigma_{1}l_{1}r_{1}^{T})_{i,j}}{\sqrt{\sum_{k}((\sigma_{1}l_{1}r_{1}^{T})_{k,j})^{2}}}\). Then we pick any column from \(\hat{F}\) as our filter representative \(f\). We can pick any column because the rank-1 approximation means they will all be identical. Finally, we compute the cosine similarity between the filter representatives \(f_{1},\dots,f_{n}\) of the layer, giving us a similarity matrix \(\mathbf{W}\in\mathbb{R}^{n\times n}\). Utilizing \(\mathbf{W}\), we perform the following steps to obtain importance scores for a given layer. After this pre-processing is complete, three further steps are taken.

Figure 1: An overall pipeline of the proposed framework. A pre-trained CNN is pruned layer-wise using centrality measures, followed by a fine-tuning process to regain most of the performance.

**1. Embed filters to graph:** We construct a graph \(G=(\mathcal{F},E)\), where \(\mathcal{F}\) is the set of vertices representing filters and \(E\) is the set of edges, with the property \(w(e)=\mathbf{W}_{u,v}\) assigning a weight to each edge \(e=(u,v)\in E\). Let \(S_{N}(G)\) be the set of all complete subgraphs of \(G\) of size \(N\), so that:

\[S_{N}(G)=\{H\mid H\subseteq G,\ |\mathcal{F}_{H}|=N,\ \text{and}\ E_{H}=\{(u,v)\in E\mid u\in\mathcal{F}_{H}\ \text{and}\ v\in\mathcal{F}_{H}\}\} \tag{3}\]

So for \(H\in S_{\left\lceil(1-p)|\mathcal{F}|\right\rceil}(G)\), \(H=(\mathcal{F}_{H},E_{H})\) is a subgraph of \(G\) with \(|\mathcal{F}_{H}|=\left\lceil(1-p)|\mathcal{F}|\right\rceil\). The subgraph \(H_{\text{pruned}}\) can hence be defined:

\[H_{\text{pruned}}=(\mathcal{F}_{\text{pruned}},E_{\text{pruned}})=\operatorname*{argmin}_{H\in S_{\left\lceil(1-p)|\mathcal{F}|\right\rceil}(G)}\sum_{e\in E_{H}}w(e) \tag{4}\]

where the computation of \(\mathcal{F}_{\text{pruned}}\) is equivalent to Equation (2).
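A minimal sketch of the pre-processing and graph-embedding step just described, on which the centrality scoring of the next step operates: each filter is reduced to a rank-1 representative via SVD, the cosine-similarity matrix \(\mathbf{W}\) is formed, and a weighted graph is built from it. The toy filter shapes and the use of networkx are assumptions for illustration, not the authors' released code.

```python
import numpy as np
import networkx as nx

def filter_representatives(filters):
    """filters: (n, w, h, c) -> (n, w*h) unit-norm rank-1 representatives."""
    reps = []
    for F in filters:
        flat = F.reshape(F.shape[0] * F.shape[1], F.shape[2])    # (wh, c)
        u, s, vt = np.linalg.svd(flat, full_matrices=False)
        rank1 = s[0] * np.outer(u[:, 0], vt[0])                  # best rank-1 approximation
        col = rank1[:, 0]                   # any column works; it is proportional to u[:, 0]
        reps.append(col / (np.linalg.norm(col) + 1e-12))
    return np.stack(reps)

def similarity_graph(filters):
    reps = filter_representatives(filters)
    W = reps @ reps.T                       # cosine similarity, since rows are unit norm
    G = nx.Graph()
    n = len(W)
    G.add_nodes_from(range(n))
    for u in range(n):
        for v in range(u + 1, n):
            G.add_edge(u, v, weight=float(W[u, v]))
    return G, W

rng = np.random.default_rng(0)
G, W = similarity_graph(rng.normal(size=(8, 3, 3, 16)))
print(G.number_of_nodes(), G.number_of_edges())   # 8 nodes, 28 edges
```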
**2. Compute centrality scores for each filter:** We utilise this graphical setup and two centrality algorithms to compute the importance of each filter.

**Weighted degree centrality (WDC) [21]:** Our first approach uses weighted degree centrality (WDC) to assign an importance to each node. Then we keep the filters corresponding to the \(\left\lceil(1-p)|\mathcal{F}|\right\rceil\) lowest-scoring nodes, since the highest-scoring filters are the ones similar to most other filters. We compute this importance with

\[C(v)=\sum_{e\in E(v)}w(e) \tag{5}\]

where \(E(v)\) denotes the edges incident to node \(v\), and so approximate Equation (4) with

\[\mathcal{F}_{\text{pruned}}=\{F_{i}\in\mathcal{F}\mid v_{i}\ \text{is one of the}\ \left\lceil(1-p)|\mathcal{F}|\right\rceil\ \text{smallest elements in}\ \{C(F)\mid F\in\mathcal{F}\}\} \tag{6}\]

This selection is much easier to solve and can trivially be computed in polynomial time, making it a feasible approximation with minimal performance loss.

**Betweenness centrality (BC) [21]:** We also explore the idea of using betweenness centrality (BC) to perform pruning. BC gives each node in a network a score based on how important it is to the network's connectivity. We quantify this by computing the shortest paths between each pair of nodes and counting how many times each node lies on such a path. Therefore, the more shortest paths that go via a node, the more central the node is, and the more likely it is that its removal will decrease the total similarity of the network. To perform these experiments, we use the same approximator as Equation (6), but we score nodes with

\[C(v)=\sum_{s,t\in V}\frac{\sigma(s,t|v)}{\sigma(s,t)}\]

where \(\sigma(s,t)\) is the total number of shortest paths from node \(s\) to node \(t\), and \(\sigma(s,t|v)\) is the number of those shortest paths that pass through the node \(v\). The sum is taken over all pairs of nodes \(s\) and \(t\) in the graph.

**3. Obtaining the pruned network and fine-tuning:** After obtaining centrality scores for each filter in a given layer, we prune the fraction \(p\) of filters with the highest centrality scores in that layer. Then, we repeat the same procedure, steps (1) and (2), for the other layers as well. After removing the filters from the various convolutional layers, a pruned network is obtained. In the end, the pruned network is retrained to regain most of the performance lost due to the elimination of filters. Figure 1 shows the overall flow of the proposed method.

## 4 Experimental Setup

We evaluate the proposed pruning framework on CNNs designed for acoustic scene classification (ASC) and audio tagging. An overview of the unpruned CNNs is given below:

**(a) DCASE21_Net:** We use the publicly available DCASE 2021 Task 1A baseline network designed for ASC to classify 10 different acoustic scenes [22] and denote it as "DCASE21_Net". DCASE21_Net consists of three convolutional layers (termed C1 to C3) and one fully connected layer. The network takes as input a log-mel spectrogram of size (40 \(\times\) 500), corresponding to a 10-second audio clip, and is trained with the Adam optimizer for 200 iterations. The network has 46,246 parameters, requires approximately \(287\)M multiply-accumulate operations (MACs) during inference per input, and gives 48.58% accuracy.

**(b) PANNs_CNN14:** PANNs [4] are large-scale pre-trained audio neural networks designed for audio tagging. PANNs are trained on AudioSet [23], which contains over 2M labelled sound events comprising 527 different sound classes. For our experiments, we use one of the PANNs models, CNN14, which consists of 12 convolutional layers (denoted C1 to C12), and denote the network as "PANNs_CNN14". PANNs_CNN14 takes a log-mel spectrogram of size (1000 \(\times\) 64) as an input.
CNN14 gives 0.431 mean average precision (mAPs) and 0.973 area under the curve (AUC) for the AudioSet evaluation dataset. CNN14 has 81M parameters and 21G MACS2 corresponding to a 10-second-length audio clip sampled at a 32KHz with a window size of 1024 samples and a hop size of 320 samples. PANNs_CNN14 is trained with data augmentation techniques such as Mixup and SpecAugment for 600k iterations. Footnote 2: MACs computation Pytorch package. **Pruning and fine-tuning:** After obtaining the importance filters across various convolutional layers using the proposed centrality based pruning method, we eliminate \(p\in\{25\%,50\%,75\%\}\) top unimportant filters from a subset of convolutional layers to obtain pruned networks. Once the pruned network is obtained, we perform fine-tuning of the pruned network with similar conditions such as loss function, batch size except for fewer iterations as used while training the unpruned network. For DCASE21_Net, we consider all convolutional layers for pruning and perform fine-tuning for 100 iterations. For PANNs_CNN14, we provide a preliminary analysis and fine-tuned the pruned network for 180k iterations by pruning only C7 to C12 layers as these layers contain approximately 99% of the parameters. **Other methods for comparison:** The proposed pruning method is compared with methods, (a) \(l_{1}\)-norm [12], (b) geometric median (GM) method [19] and (c) pair-wise similarity method (CS) [13]. We also use the active filter pruning methods, including HRank [8] and Energy-aware pruning [9] that uses feature maps for pruning. For pruning, we randomly select 500 training examples to generate feature maps corresponding to each filters. Subsequently, we follow same fine-tuning process as used in the other methods. ## 5 Performance Analysis **DCASE21_Net:** Figure 2 shows accuracy, the number of parameters and the number of MACs obtained after pruning various subsets of convolutional layers at different \(p\) using BC measure for DCASE21_Net. Pruning 25% filters from C3 layer reduces both the number parameters and MACs by 20% at approximately 1 percentage point drop in accuracy. We find that pruning 25% filters across various subset of layers result in an accuracy drop of less than 3 percentage points compared to the unpruned network at approximately 60% reduction in parameters and 40% reduction in MACs. As the number of filters pruned across various layers increases from 25% to 75%, the accuracy drop, the number of reduced parameters and MACs for the pruned network increases except for C1 layer which shows the least sensitivity towards accuracy drop and the number of MACs at different \(p\). Next, we compare the accuracy obtained using various pruning frameworks in Table 1. For a fair comparison, we obtain the pruned network at the pruning ratio as obtained using the pair-wise similarity method [13]. We find that the accuracy obtained using the centrality methods is equal to or greater than that obtained using \(l_{1}\)-norm, GM and pairwise-similarity methods. The proposed WDC pruning method gives approximately similar accuracy without using much memory and 500 examples during pruning process compared to that of the existing active pruning methods. **PANNs_CNN14:** Figure 3 shows mAPs obtained during fine-tuning of the pruned network at different pruning ratios using BC measure. 
We find that pruning 25% filters across C7 to C12 layers reduce 41% parameters and 24% MACs at a slight improvement in performance with 0.434 mAPs and 0.974 AUC compared to that of the unpruned network. Pruning 50% filters, the mAPs is 0.426, and the AUC is 0.974 with 70% fewer parameters and 36% fewer MACs. Pruning 75% filters, the mAPs is 0.399, and the AUC is 0.973 with 78% fewer parameters and 46% fewer MACs. Next, we compare the proposed pruning methods with that of the existing passive pruning methods in Figure 4 when 25% of the filters are pruned across C7 to C12 layers. The proposed method gives better mAPs compared to other methods when fine-tuning iterations are less than 15k. After fine-tuning the pruned network for 180k iterations, the proposed method gives slightly improved performance compared to other methods. Overall, the proposed WDC pruning method results in a pruned network which performs better than the unpruned PANNs_CNN14 with an advantage of reduced number of MACs and the parameters as well. ## 6 Discussion In this paper, we use the graph centrality of filters in a CNN to define their redundancy and pruning. We find that the proposed weighted degree centrality based passive filter pruning method performs better than the existing pairwise-similarity method and norm-based methods. For DCASE21_Net, our experiments reveal that the proposed passive pruning method achieves similar accuracy compared to that of the active filter pruning methods without involving any feature maps. For PANNs_CNN14, we find that the pruned network gives a slightly better performance compared to that of the unpruned network with approximately 3 times fewer iterations as used in training the unpruned network. This suggests that the existing large-scale pre-trained network can be used efficiently by first applying the proposed passive filter pruning to obtain a smaller-size pruned network, and then perform fine-tuning for few iterations less than that required for unpruned network to achieve similar performance. Hence, the underlying computational resources can be used effectively. ## 7 Conclusion This paper presents a passive filter pruning method to reduce the computational complexity and memory storage of CNNs by exploring the graph centrality of the filters. The proposed pruning method achieves similar or better performance compared to that of existing norm-based and pairwise similarity methods, showing the advantage of utilising graph-based centrality measures for defining the redundancy of filters. Compared to active filter pruning methods, the proposed passive pruning method gives a similar performance without involving feature maps during pruning. In future, we would like to improve the performance of the proposed pruning method by designing better centrality measures and reducing the fine-tuning process overhead further. 
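For reference, both centrality scores used in the experiments above can be computed directly with networkx on the similarity graph of Section 3. This is an illustrative sketch under the assumption of non-negative edge weights (e.g. absolute similarities), not the authors' implementation, and the pruning ratio below is only an example.

```python
import networkx as nx

def centrality_scores(G, measure="wdc"):
    """Score nodes of the weighted similarity graph; higher = more redundant."""
    if measure == "wdc":
        return dict(G.degree(weight="weight"))             # weighted degree centrality
    return nx.betweenness_centrality(G, weight="weight")   # betweenness centrality

def filters_to_prune(G, p=0.25, measure="wdc"):
    scores = centrality_scores(G, measure)
    k = int(round(p * G.number_of_nodes()))                # prune the p highest-centrality nodes
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy example with four filters: filter 0 is highly similar to all others.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 0.9), (0, 2, 0.8), (0, 3, 0.7),
                           (1, 2, 0.1), (1, 3, 0.2), (2, 3, 0.3)])
print(filters_to_prune(G, p=0.25, measure="wdc"))   # -> [0], the most "common" filter
```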
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Pruning Method & Active & Filter or feature map & Accuracy (\%) & Parameters & MACs \\
\hline
Baseline (no pruning) & -- & -- & 48.58 & 46,246 & 287M \\
\hline
HRank [8] & \(\checkmark\) & 1.26GB & 47.24 & -- & -- \\
Energy-aware [9] & \(\checkmark\) & 1.26GB & 47.00 & -- & -- \\
\hline
\(l_{1}\)-norm [12] & \(\times\) & 0.15MB & 44.22 & -- & -- \\
Similarity-based [13] & \(\times\) & 0.15MB & 45.54 & -- & -- \\
\hline
GM [19] & \(\times\) & 0.15MB & 45.84 & -- & -- \\
Proposed (BC) & \(\times\) & 0.15MB & 45.84 & -- & -- \\
Proposed (WDC) & \(\times\) & 0.15MB & 46.91 & -- & -- \\
\hline
\end{tabular}
\end{table}
Table 1: DCASE21_Net comparison with other pruning methods.

Figure 2: Accuracy obtained after pruning various intermediate convolutional layers using the betweenness centrality (BC) measure in DCASE21_Net at different pruning ratios. (a) shows accuracy versus parameters and (b) shows accuracy versus MACs.

Figure 3: mAPs obtained during fine-tuning of the pruned PANNs_CNN14 at different \(p\) using betweenness centrality (BC). The maximum mAPs obtained during the fine-tuning process is shown in round brackets.

Figure 4: mAPs comparison for PANNs_CNN14 with other pruning methods. The maximum mAPs obtained for each method is shown in round brackets.

## 8 Acknowledgment

This work was partly supported by a PhD studentship from the Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership EP/T518050/1 and "AI for Sound (AI4S)" grant EP/T019751/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
2310.08072
Training Generative Question-Answering on Synthetic Data Obtained from an Instruct-tuned Model
This paper presents a simple and cost-effective method for synthesizing data to train question-answering systems. For training, fine-tuning GPT models is a common practice in resource-rich languages like English, however, it becomes challenging for non-English languages due to the scarcity of sufficient question-answer (QA) pairs. Existing approaches use question and answer generators trained on human-authored QA pairs, which involves substantial human expenses. In contrast, we use an instruct-tuned model to generate QA pairs in a zero-shot or few-shot manner. We conduct experiments to compare various strategies for obtaining QA pairs from the instruct-tuned model. The results demonstrate that a model trained on our proposed synthetic data achieves comparable performance to a model trained on manually curated datasets, without incurring human costs.
Kosuke Takahashi, Takahiro Omi, Kosuke Arima, Tatsuya Ishigaki
2023-10-12T06:46:07Z
http://arxiv.org/abs/2310.08072v2
# Training Generative Question-Answering on Synthetic Data Obtained from an Instruct-tuned Model ###### Abstract This paper presents a simple and cost-effective method for synthesizing data to train question-answering systems. For training, fine-tuning GPT models is a common practice in resource-rich languages like English, however, it becomes challenging for non-English languages due to the scarcity of sufficient question-answer (QA) pairs. Existing approaches use question and answer generators trained on human-authored QA pairs, which involves substantial human expenses. In contrast, we use an instruct-tuned model to generate QA pairs in a zero-shot or few-shot manner. We conduct experiments to compare various strategies for obtaining QA pairs from the instruct-tuned model. The results demonstrate that a model trained on our proposed synthetic data achieves comparable performance to a model trained on manually curated datasets, without incurring human costs. ## 1 Introduction Fine-tuning large language models (LLMs) has been proven effective for enhancing question-answering systems Dong et al. (2019). However, extending this approach to languages other than English presents challenges due to the scarcity of adequate QA pairs for training. In this study, we specifically target Japanese as a representative non-English language. We propose a straightforward approach that synthesizes Japanese QA pairs using an instruct-tuned model.1 Footnote 1: Our experiments utilize OpenAI’s ChatAPI with the _gpt-3.5-turbo-0613_ model. Question-answering tasks can be categorized into two main settings: questions with context and without context Kurihara et al. (2022). In this study, we focus on the context-based setting as shown in Figure 1. In this setting, the system takes a question along with the accompanying context as input. The model generates an answer by utilizing the information provided within the context. On the other hand, the setting without context involves the system processing only the question as input. We present a straightforward yet cost-effective method for generating synthetic question-answer (QA) pairs. Existing QA systems are trained on either human-authored datasets or automatically generated QA pairs Sachan and Xing (2018); Tang et al. (2018), both leading to high labor costs. By contrast, this paper investigates utilizing an instruct-tuned model inspired by their reasonable ability to produce synthetic dataset Gilardi et al. (2023). We use a context as input and generate both the corresponding question and its answer. The instruct-tuned model allows us to produce QA pairs in a zero-shot or few-shot manner, eliminating the need for manual curation. Our experiments compare question-answering systems fine-tuned on synthetic data generated through various strategies. Specifically, we explore different sources of contexts, the number of shots fed into the instruct-tuned model, and the quantity of QA pairs generated. The evaluation on JSQuAD's evaluation dataset Kurihara et al. (2022) provides three findings. Firstly, employing contexts extracted from a corpus with similar characteristics to the evaluation dataset yields improved performance. Secondly, the one-shot strategy outperforms the zero-shot approach. Lastly, generating three QA pairs for each context is more effective than generating a lower number of QA pairs. The top-performing model fine-tuned on our synthetic Figure 1: The task of the generative context-aware QA. 
data exhibits comparable performance to models trained on manually curated data.

## 2 Related Work

Existing QA work focuses on two major settings: "closedQA" with context and "commonsense-QA" without context Kurihara et al. (2022). For the former, which we target, the QA systems receive a question along with a context, such as a Wikipedia article, and generate an answer. On the other hand, in the latter setting, the systems only receive a question as input. There are two types of QA systems: extractive and generative. Extractive methods extract an answer verbatim from the context using models like BERT Rajpurkar et al. (2016), while generative methods often use expressions that are not in the context, using models like T5 Raffel et al. (2020) or GPT Brown et al. (2020). Our focus is on the latter.

While several manually created datasets exist in English, such as SQuAD Rajpurkar et al. (2016) and QuALITY Pang et al. (2022), these resources do not directly apply to the Japanese language. For Japanese, JSQuAD Kurihara et al. (2022) and JAQKET2 are available. We use JSQuAD3 because the evaluation data of JAQKET is not public.

Footnote 2: [https://www.nlp.ecei.tohoku.ac.jp/projects/jaqket/#Reference](https://www.nlp.ecei.tohoku.ac.jp/projects/jaqket/#Reference)

Footnote 3: Strictly, JSQuAD is not for evaluating generative QA, but the span extraction-based setting. We use this data because there is no common evaluation data in Japanese for generative QA. Our models generate answers rather than extract spans; thus, we also conduct human evaluations.

Existing studies synthesize QA pairs by two main approaches: supervised Lee et al. (2020); Sachan and Xing (2018); Tang et al. (2018) and unsupervised Puri et al. (2020). The supervised approaches train question-answer generators using manually created datasets. Our approach generates QA pairs from contexts in a zero-shot or few-shot manner, eliminating the need to train generators. In the unsupervised approach, Puri et al. (2020) uses a named entity recognizer (NER) for answer candidate extraction, while our approach uses only an instruct-tuned model end-to-end and does not require NER.

## 3 Synthesizing QA Pairs

We describe our approach in this section. The zero-shot prompt shown in Figure 2 reads: Based on the given texts, please make a pair of answerable question and answer. Please make the answer in Japanese polite language. Please respond in the JSON format. example texts:"texts to extract the pair of question and answer" output:{"Question":"the question that can be answered from the texts", "Answer":"the answer to the question"} ## input texts:{QA context} output:

### Source Contexts and Filtering

We generate \(N\) question-answer pairs from each context. \(N\) is set to one or three in our experiments. We compare three specific sources of contexts: 1) a random sample of 6,000 Japanese Wikipedia articles (wiki), 2) a random sample of 6,000 news articles (news), and 3) contexts in JSQuAD's training dataset (JSQuAD). To collect the news articles, we gathered the most accessed articles from a search engine 4 during the period from May 2022 to May 2023. We limit each context to the first 300 characters before generating QA pairs with the instruct-tuned model.

Footnote 4: The URL of the engine/dataset is hidden to preserve the anonymity of authors, and will be shown after acceptance

### Prompts for Generating QA Pairs

We provide examples of zero-shot and one-shot prompts with the setting \(N=1\) in Figure 2 and Figure 3, respectively.

Figure 2: An example of the zero-shot prompt to generate a pair of QA.

Figure 3: A translated sample of the "## example" part in the one-shot prompt. Note that the original is in Japanese.
These prompts aim to gen Figure 3: An translated sample of the “## example” part in one-shot prompt. Note that the original is in Japanese. Figure 2: An example of zero-shot prompt to generate a pair of QA. erate QA pairs from a context. In the zero-shot prompt, we first present the task instructions, followed by an explanation of the structure oh how an input text is represented, and their desired output JSON structure as shown in the "## example" section. For the setting \(N>1\), we modify the example of the JSON structure to include more QA pairs. Then, we write an input text in the "## input" section. In the zero-shot prompt setting, we only write the format of input and output structures, without including actual texts or the expected question-answer pairs corresponding to the context. On the other hand, in the one-shot prompt, we replace the "## example" section in 2 with the prompt shown in Figure 3. Unlike the zero-shot prompt, the one-shot prompt includes actual example contexts and their corresponding expected QA pairs. To better understand the effects of prompt engineering, we compare these two prompts in our experiments. The tuples of a context and generated QA pairs are used to fine-tune a GPT by the prompt shown in Figure 4. ## 4 Experiments Evaluation Dataset and Compared Models:We use the JSQuAD Kurihara et al. (2022) for evaluation. This evaluation data contains 4,470 human-authored QA pairs given Wikipedia articles as contexts. We use whole evaluation data for the automatic evaluation while randomly sampled 500 instances are used for manual evaluation. We conduct a comprehensive comparison by exploring various combinations of contexts, the number of generated QA pairs denoted as \(N\) and prompts. Regarding contexts, we consider three options: wiki, news, JSQuAD, and, as detailed in Sec. 3.1. For \(N\), we compare \(N=1\) and \(N=3\). We compare zero-shot and one-shot prompts 5. Footnote 5: We are constrained to one-shot due to the input length limit of ChatGPT. Our proposed models are compared with two models: 1) a plain GPT model without fine-tuning and 2) a model fine-tuned on QA pairs from the JSQuAD training dataset (Human), where these QA pairs are human-authored while our proposed QA pairs are not human-authored. Fine-tuningWe use the synthesized QA pairs to fine-tune the Japanese version of GPT-NeoX Black et al. (2022)6. To achieve improved speed, we employ LoRA fine-tuning Hu et al. (2022). In generating answers, we use a prompt in the zero-shot setting (Figure 4). Footnote 6: [https://huggingface.co/cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b) Metrics:For automatic evaluation, we employ BERTScore Zhang et al. (2020) and BLEU Papineni et al. (2002). BERTScore is implemented on our own with a Japanese BERT model.7 As for BLEU, SacreBLEU library Post (2018) is used. Footnote 7: [https://huggingface.co/cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) These automatic metrics may not directly capture the correctness of an answer to a given question. To address this, we also conduct manual evaluations by human judges. We ask four judges, who are experts in natural language processing or linguistics, to assess whether the generated answer is correct or not. We showed tuples of questions, answers, and contexts to the judges. We report the accuracy obtained from the manual evaluation. 
ParametersWe conducted a grid search for tuning parameters: batch size, learning rate, the number of epochs, as well as LoRA's hyperparameters (specifically \(\alpha\) and \(r\)). The range of values explored during this search is provided in Table 1. Subsequently, the model that attained the highest BERTScore was chosen for evaluation. ## 5 Results In this section, we present the results on JSQuAD. ### Automatic Evaluation Our primary interest lies in examining the impact of each strategy for synthesizing QA pairs on the performance of the downstream question answering task. Specifically, we focus on comparisons involu \begin{table} \begin{tabular}{l} \hline Batch Size: {4, 8}, \\ Learning Rate: {0,00001, 0,00005, 0,000001}, \\ Epoch: {3, 4, 5,}, \(r\): {4, 8, 16, 64, 128}, \(\alpha\): {1, 4, 16} \\ \hline \end{tabular} \end{table} Table 1: The search range values in LoRA fine-tuning. Figure 4: The prompt to generate answers with the fine-tuned GPT-NeoX. ing different contexts, prompts, and the quantities of automatically generated QA pairs. Table 2 presents the scores of BERTScore and BLEU obtained by varying the contexts while keeping other settings, i.e., \(N\) and prompts are fixed. The table is divided into five sections. Starting from the top, the first section displays scores for QA models trained on human-authored QA pairs (Human) from the JSQuAD training dataset, along with the plain GPT model ( GPT) without fine-tuning. The second and third sections showcase scores obtained when \(N\) is fixed to one, but we vary the prompts to zero-shot and one-shot. The fourth and fifth sections represent scores when we use \(N=3\). **Impact of Context on Performance:** We observe that using contexts extracted from the news dataset yields relatively low scores, e.g., 0.713 and 0.747 in terms of BERTScore for zero-shot and one-shot settings with \(N=3\), respectively. The wiki context performs better (0.706 and 0.838) than news (0.713 and 0.747) for the same settings. Notably, the JSQuAD context achieves the highest BERTScore of 0.863 and 0.889 with \(N=1\) and \(N=3\), respectively. The results suggest that using Wikipedia as context provides an advantage, likely because the JSQuAD evaluation data is also derived from Wikipedia. **Impact of Prompts on Performance:** The one-shot prompt is more effective. As shown in Table 2, the model fine-tuned on the zero-shot QA pairs (\(N=1\)) generated from the contexts in JSQuAD training dataset achieves a BERTScore of 0.724. However, the one-shot prompts with \(N=1\) exhibit a significant performance gain, reaching a BERTScore of 0.863. **Effect of the Number of Generated QA Pairs on Performance:** As we increase the number of QA pairs for context, there is a gain of 2.6 points in BERTScore (from 0.863 to 0.889). Remarkably, the achieved BERTScore of 0.889 is comparable to that of a model trained on human-authored QA pairs (0.899), despite our approach not utilizing any human-authored QA pairs. ### Evaluation by Human Judges: We present the results of the manual evaluation. Table 3 shows the comparisons between three outputs: answers generated by 1) our best performing model (JSQuAD (\(N=3\)), and one-shot prompt) and 2) a model that is fine-tuned on human-authored QA pairs from the JSQuAD training dataset, and 3) gold answers in JSQuAD evaluation dataset. 
Remarkably, despite our approach does not use any human-authored QA pairs, the achieved accuracy is 45.4% while the model fine-tuned on human-authored QA pairs achieves only 38.4% in terms of accuracy. Gilardi et al. (2023) mention that automatic annotation with an instructor-tuning model has higher quality than annotations by crowd-workers, and our results are consistent with their claim. Note that the performance of both fine-tuned models falls significantly behind the Gold standard (90.4%), indicating ample room for improvement. ## 6 Conclusions This paper proposed to use an instruction-tuned model for synthesizing QA pairs. Our experimental results demonstrate that the models trained on automatically generated QA pairs achieve comparable or even superior performance compared to the fine-tuned model trained on human-authored QA pairs. In future studies, we plan to explore the relationship between the diversity of automatically generated QA pairs and their impact on the performance of downstream QA tasks. \begin{table} \begin{tabular}{c|c|c|c|c} context & \(N\) & prompt & BERTScore & BLEU \\ \hline \hline Human & - & - & 0.899 & 5.64 \\ GPT & - & - & 0.601 & 0.00 \\ \hline \hline news & 1 & zero & 0.697 & 0.02 \\ wiki & 1 & zero & 0.713 & 0.03 \\ JSQuAD & 1 & zero & 0.724 & 1.55 \\ \hline news & 1 & one & 0.738 & 0.11 \\ wiki & 1 & one & 0.775 & 0.09 \\ JSQuAD & 1 & one & 0.863 & 4.83 \\ \hline news & 3 & zero & 0.713 & 0.38 \\ wiki & 3 & zero & 0.706 & 0.23 \\ JSQuAD & 3 & zero & 0.740 & 1.85 \\ \hline news & 3 & one & 0.747 & 1.25 \\ wiki & 3 & one & 0.838 & 1.66 \\ JSQuAD & 3 & one & **0.889** & **6.77** \\ \hline \end{tabular} \end{table} Table 2: Performances on different contexts and numbers of generated QA pairs. \begin{table} \begin{tabular}{c|c} \hline QA Pairs & Accuracy (\%) \\ \hline \hline JSQuAD (\(N=3\), one-shot prompt) & **45.4** \\ Human & 38.4 \\ \hline Gold & 90.4 \\ \hline \end{tabular} \end{table} Table 3: Accuracy calculated as the number of correct question-context-answer tuples divided by the total 500 evaluation instances.
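To make the overall synthesis procedure of Section 3 concrete, here is a minimal sketch of turning contexts into fine-tuning tuples. The `call_llm` function is an abstract stand-in for the instruct-tuned model used in the paper (gpt-3.5-turbo-0613); its exact invocation, the abbreviated prompt text, and the error handling are assumptions for illustration.

```python
import json

ZERO_SHOT_PROMPT = (
    "Based on the given texts, please make a pair of answerable question and answer. "
    "Please make the answer in Japanese polite language. "
    "Please respond in the JSON format.\n"
    '## example\ntexts: "texts to extract the pair of question and answer"\n'
    'output: {"Question": "...", "Answer": "..."}\n'
    "## input\ntexts: {context}\noutput:"
)

def synthesize_qa(contexts, call_llm, max_chars=300):
    """contexts: iterable of str; call_llm: str -> str (the instruct-tuned model)."""
    pairs = []
    for ctx in contexts:
        ctx = ctx[:max_chars]                   # the paper truncates contexts to 300 characters
        prompt = ZERO_SHOT_PROMPT.replace("{context}", ctx)
        try:
            qa = json.loads(call_llm(prompt))   # expect {"Question": ..., "Answer": ...}
            pairs.append({"context": ctx, "question": qa["Question"], "answer": qa["Answer"]})
        except (json.JSONDecodeError, KeyError):
            continue                            # skip malformed generations
    return pairs
```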
2305.05938
V2X-Seq: A Large-Scale Sequential Dataset for Vehicle-Infrastructure Cooperative Perception and Forecasting
Utilizing infrastructure and vehicle-side information to track and forecast the behaviors of surrounding traffic participants can significantly improve decision-making and safety in autonomous driving. However, the lack of real-world sequential datasets limits research in this area. To address this issue, we introduce V2X-Seq, the first large-scale sequential V2X dataset, which includes data frames, trajectories, vector maps, and traffic lights captured from natural scenery. V2X-Seq comprises two parts: the sequential perception dataset, which includes more than 15,000 frames captured from 95 scenarios, and the trajectory forecasting dataset, which contains about 80,000 infrastructure-view scenarios, 80,000 vehicle-view scenarios, and 50,000 cooperative-view scenarios captured from 28 intersections' areas, covering 672 hours of data. Based on V2X-Seq, we introduce three new tasks for vehicle-infrastructure cooperative (VIC) autonomous driving: VIC3D Tracking, Online-VIC Forecasting, and Offline-VIC Forecasting. We also provide benchmarks for the introduced tasks. Find data, code, and more up-to-date information at \href{https://github.com/AIR-THU/DAIR-V2X-Seq}{https://github.com/AIR-THU/DAIR-V2X-Seq}.
Haibao Yu, Wenxian Yang, Hongzhi Ruan, Zhenwei Yang, Yingjuan Tang, Xu Gao, Xin Hao, Yifeng Shi, Yifeng Pan, Ning Sun, Juan Song, Jirui Yuan, Ping Luo, Zaiqing Nie
2023-05-10T07:20:51Z
http://arxiv.org/abs/2305.05938v1
# V2X-Seq: A Large-Scale Sequential Dataset for ###### Abstract Utilizing infrastructure and vehicle-side information to track and forecast the behaviors of surrounding traffic participants can significantly improve decision-making and safety in autonomous driving. However, the lack of real-world sequential datasets limits research in this area. To address this issue, we introduce V2X-Seq, the first large-scale sequential V2X dataset, which includes data frames, trajectories, vector maps, and traffic lights captured from natural scenery. V2X-Seq comprises two parts: the sequential perception dataset, which includes more than 15,000 frames captured from 95 scenarios, and the trajectory forecasting dataset, which contains about 80,000 infrastructure-view scenarios, 80,000 vehicle-view scenarios, and 50,000 cooperative-view scenarios captured from 28 intersections' areas, covering 672 hours of data. Based on V2X-Seq, we introduce three new tasks for vehicle-infrastructure cooperative (VIC) autonomous driving: VIC3D Tracking, Online-VIC Forecasting, and Offline-VIC Forecasting. We also provide benchmarks for the introduced tasks. Find data, code, and more up-to-date information at [https://github.com/AIR-THU/DAIR-V2X-Seq](https://github.com/AIR-THU/DAIR-V2X-Seq). ## 1 Introduction Although single-vehicle autonomous driving has made significant advancements in recent years, it still faces significant safety challenges due to its limited perceptual field and inability to accurately forecast the behaviors of traffic participants. These challenges hinder autonomous vehicles from making well-informed decisions and driving safer. A promising solution to address these challenges is to leverage infrastructure information via Vehicle-to-Everything (V2X) communication, which has been shown to significantly expand perception range and enhance autonomous driving safety [1, 38]. However, current research primarily focuses on utilizing infrastructure data to improve the perception ability of autonomous driving, particularly in the context of frame-by-frame 3D detection. To enable well-informed decision-making for autonomous vehicles, it is critical to also incorporate infrastructure data to track and predict the behavior of surrounding traffic participants. To accelerate the research on cooperative sequential perception and forecasting, we release a large-scale sequential V2X dataset, V2X-Seq. All elements of this dataset were captured and generated from real-world scenarios. Compared with DAIR-V2X [38], which focuses on 3D object detection tasks, V2X-Seq is specifically designed for tracking and trajectory forecasting tasks. The V2X-Seq dataset is divided into two parts: the sequential perception dataset Figure 1: Autonomous driving datasets. V2X-Seq is the first large-scale, real-world, and sequential V2X dataset. The green circle denotes the real-world dataset, and the pink triangle denotes the simulated dataset. The abscissa represents the number of sequences. and the trajectory forecasting dataset. The sequential perception dataset comprises 15,000 frames captured from 95 scenarios, which include infrastructure images, infrastructure point clouds, vehicle-side images, vehicle-side point clouds, 3D detection/tracking annotations, and vector maps. The trajectory forecasting dataset comprises 210,000 scenarios, including 50,000 cooperative-view scenarios, that were mined from 672 hours of data collected from 28 intersection areas. 
To our knowledge, V2X-Seq is the first sequential V2X dataset that includes such large-scale scenarios, making it an ideal resource for developing and testing cooperative perception and forecasting algorithms. Based on the V2X-Seq dataset, we introduce three novel tasks for vehicle-infrastructure cooperative perception and forecasting. The first task is VIC3D Tracking, which aims to cooperatively locate, identify, and track 3D objects using sequential sensor inputs from both the vehicle and infrastructure. The second task is Online-VIC trajectory forecasting, which focuses on accurately predicting the future behavior of target agents by utilizing past infrastructure trajectories, ego-vehicle trajectories, real-time traffic lights, and vector maps. The third task is Offline-VIC trajectory forecasting, which involves extracting relevant knowledge from previously collected infrastructure data to facilitate vehicle-side forecasting. These proposed tasks are accompanied by rich benchmarks. Additionally, we propose an intermediate-level framework, FF-Tracking, to effectively solve the VIC3D Tracking task. The main contributions are summarized as follows: * We release the V2X-Seq dataset, which constitutes the first large-scale sequential V2X dataset. All data are captured and generated from the real world. * Based on the V2X-Seq dataset, we introduce three tasks for the vehicle-infrastructure cooperative autonomous driving community. To enable a fair evaluation of these tasks, we have carefully designed a set of benchmarks. * We propose a middle fusion method, named FF-Tracking, for solving VIC3D Tracking; the proposed method can efficiently overcome the latency challenge. ## 2 Related Work Autonomous Driving Datasets. Public datasets have greatly facilitated the development of autonomous driving. KITTI [12] is the pioneering dataset for autonomous driving. nuScenes [3], Waymo Open [10, 28], ApolloScape [16], and ONCE [26] are large-scale, real-world datasets that support 3D object detection, tracking, and prediction tasks. Argoverse [5], Argoverse 2.0 [34], Lyft [13], and nuPlan [4] release large-scale trajectories generated from raw sensor data to support motion prediction and planning tasks. These datasets are all captured with single-vehicle sensors. Rope3D [37], WIBAM [14], and A9-Dataset [7] release infrastructure-only 3D detection datasets. HighD [17] and NGSIM [29] release drone-view or infrastructure-only trajectory datasets. OPV2V [35], V2X-Sim 2.0 [21], and Cooper(inf) [1] release small-scale, simulated sequential datasets for multi-vehicle cooperative perception. DAIR-V2X-C [38] is the first real-world V2X dataset that supports VIC3D object detection; however, it does not provide trajectory information. Compared with these existing public autonomous driving datasets, our V2X-Seq is the first large-scale sequential V2X dataset. All data are captured and generated from the real world. The dataset also includes vector maps and real-time traffic light signal data. It is well suited for studying vehicle-infrastructure cooperative sequential perception and trajectory forecasting tasks. 
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Dataset** & **Year** & **Real/Sim.** & **View** & \begin{tabular}{c} **With** \\ **Trajectory** \\ \end{tabular} & \begin{tabular}{c} **With** \\ **3D Boxes** \\ \end{tabular} & \begin{tabular}{c} **With** \\ **Maps** \\ \end{tabular} & \begin{tabular}{c} **With** \\ **Traffic Light** \\ \end{tabular} & \begin{tabular}{c} **Tracked** \\ **Objects/Scene** \\ \end{tabular} & \begin{tabular}{c} **Total** \\ **Time (hour)** \\ \end{tabular} & **Scenes** \\ \hline KITTI [12] & 2012 & Real & Single-vehicle & ✓ & ✓ & ✗ & ✗ & 43.67 & 1.5 & 50 \\ \hline nuScenes [3] & 2019 & Real & Single-vehicle & ✓ & ✓ & ✓ & ✗ & 75.75 & 5.5 & 1,000 \\ \hline Waymo Motion [10, 28] & 2021 & Real & Single-vehicle & ✓ & ✓ & ✓ & ✓ & - & 574 & 103,354 \\ \hline Argoverse [5] & 2019 & Real & Single-vehicle & ✓ & ✗ & ✓ & ✗ & 50.03 & 320 & 324,557 \\ ApolloScape [16, 25] & 2019 & Real & Single-vehicle & ✓ & ✗ & ✗ & ✗ & 50.6 & 2.5 & 103 \\ \hline HighD [7] & 2018 & Real & Drone & ✓ & ✗ & ✓ & ✗ & - & 16.5 & 5,940 \\ \hline WIBAM [14] & 2021 & Real & Infrastructure & ✗ & ✗ & ✗ & ✗ & 0 & 0.25 & 0 \\ \hline NGSIM [29] & 2016 & Sim. & Infrastructure & ✓ & ✗ & ✗ & ✗ & - & 1.5 & 540 \\ \hline V2X-Sim 2.0 [21] & 2022 & Sim. & V2X & ✓ & ✓ & ✗ & ✗ & - & 0.3 & 100 \\ \hline OPV2V [35] & 2021 & Sim. & V2X & ✓ & ✓ & ✗ & ✗ & 26.5 & 0.2 & 73 \\ \hline Cooper(inf) [1] & 2019 & Sim. & V2X & ✓ & ✓ & ✗ & ✗ & 30 & - & \textless{}100 \\ \hline DAIR-V2X-C [38] & 2021 & Real & V2X & ✗ & ✓ & ✓ & ✗ & 0 & 0.5 & 100 \\ \hline \hline **V2X-Seq/Perception** & 2023 & Real & V2X & ✓ & ✓ & ✓ & ✗ & 110 & 0.43 & 95 \\ \hline **V2X-Seq/Forecasting** & 2023 & Real & V2X & ✓ & ✓ & ✓ & ✓ & 101 & 583 & 210,000 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with the public autonomous driving dataset. ’-’ denotes that the information is not provided. ’Real/Sim.’ indicates whether the data was collected from the real world or a simulator. V2X view includes multi-vehicle cooperative view and vehicle-infrastructure cooperative view. V2X-Seq is the first large-scale sequential V2X dataset and focuses on vehicle-infrastructure cooperative view. All data elements, including the traffic light signals, are captured and generated from the real world. Cooperative Autonomous Driving.Utilizing data from the road environment to enhance the safety of autonomous driving has attracted significant research attention in recent years. Some research works have focused on multi-vehicle cooperative perception, where lightweight feature-level data is transmitted and shared for improved perception of other vehicles [6, 22, 32]. To address communication delays in multi-vehicle 3D object detection, [20] proposes a time-compensation module for latency. On the other hand, some works have explored the use of infrastructure data to improve autonomous driving. For instance, [38] formalizes the vehicle-infrastructure cooperative 3D object detection task and highlights the latency challenges in cooperative perception. [39] further proposed to use feature flow prediction to overcome the uncertain latency. Other works such as [1, 14, 23, 24] also consider transmitting feature-level data from infrastructure to the vehicle side. To empower only sharing sparse yet perceptually critical information, [15] utilizes a spatial confidence map. Moreover, [21] applies the Transformer [31] to fuse the features. Works such as [8, 27, 30] integrate the infrastructure data for control in autonomous driving. 
However, most current works on cooperative autonomous driving focus on perceptual completion, overlooking the importance of temporal perception and forecasting. In this paper, we contribute to this field by releasing the V2X-Seq dataset, which is suitable for exploring sequential perception and forecasting tasks in vehicle-infrastructure cooperative settings. ## 3 V2X-Seq Dataset To enable the exploration of the role of infrastructure in sequential perception and trajectory forecasting, we introduce the V2X-Seq dataset. This large-scale, real-world dataset contains sequential vehicle-to-everything (V2X) data. The sequential perception component of the dataset is presented in Section 3.1, while the trajectory forecasting component is detailed in Section 3.2. Additionally, we provide an overview of the vector maps and traffic lights used in the dataset in Section 3.3. ### The Sequential Perception Dataset. 3D tracking is a critical component in autonomous driving, as it provides sequential perception information that facilitates 3D detection and prediction. To enable exploration of the role of infrastructure in 3D tracking, we release the Sequential Perception Dataset (SPD). The SPD builds on the DAIR-V2X-C 3D detection dataset [38] and consists of more than 15,000 frames captured from 95 representative scenes with 10\(\sim\)20s duration sequences, comprising both vehicle sequential frames (images and point clouds) and infrastructure sequential frames (images and point clouds) sampled at 10 Hz. We provide 3D tracking annotations for each object of interest in each sequence, with unique tracking IDs shared by the same objects in each sequence, even if they are fully occluded in some frames. Additionally, for each scene, we provide an extra local vector map. Data Collection and Annotation.The SPD builds on the DAIR-V2X-C [38]. We select 95 representative scenes from this dataset, where an autonomous driving vehicle drives through intersections equipped with sensors. SPD provides high-quality 3D annotations for ten object classes in every image and point cloud frame, including category attributes, occlusion state, truncated state, and a 7-dimensional cuboid modelled as x, y, z, width, length, height, and yaw angle. The object categories include various vehicles, pedestrians, and cyclists. Building upon the DAIR-V2X-C dataset, our annotators assigned a unique tracking ID to each annotated object, except for static traffic cone objects. The same object in one sequence is assigned a unique tracking ID, even when it is completely occluded in some frames. Moreover, we provide cooperative tracking annotations for the cooperative-view sequences based on spatial and temporal matching. Specifically, for each frame in each ego-vehicle sequence, we generate an infrastructure frame with the same timestamp as the corresponding ego-vehicle frame. This frame contains the 3D boxes interpolated and estimated from the infrastructure trajectories. Next, we convert these 3D boxes into an ego-vehicle coordinate system and match and fuse the two-side 3D boxes based on the Euclidean distance measurement and the Hungarian method [18]. To account for possible calibration and interpolation precision errors that may cause spatial matching errors, we compute the similarity of the two-side trajectories corresponding to the two matched 3D boxes. We filter out the matching with low scores and manually refine them to obtain accurate cooperative tracking annotations. 
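To make the matching step above concrete, here is a minimal sketch of associating infrastructure-view and ego-vehicle-view 3D boxes by Euclidean distance with the Hungarian algorithm; the function name and the gating threshold are illustrative assumptions, not the exact values used to produce the annotations.

```python
# A minimal sketch (not the authors' exact pipeline) of Euclidean + Hungarian matching
# between infrastructure-view and ego-vehicle-view 3D box centers in a shared frame.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_boxes(infra_centers: np.ndarray, ego_centers: np.ndarray, max_dist: float = 2.0):
    """Return (i, j) index pairs of matched infrastructure/ego boxes.

    infra_centers: (N, 3) box centers from the infrastructure view.
    ego_centers:   (M, 3) box centers from the ego-vehicle view.
    max_dist: hypothetical gating threshold in meters.
    """
    if len(infra_centers) == 0 or len(ego_centers) == 0:
        return []
    # Pairwise Euclidean distances serve as the assignment cost.
    cost = np.linalg.norm(infra_centers[:, None, :] - ego_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Discard assignments that are too far apart; those boxes stay view-specific.
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```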
Figure 2: Total number and average tracking length of 3D tracked objects per category for the sequential perception dataset (SPD). The distribution of tracked objects is relatively balanced. ### The Trajectory Forecasting Dataset We are also interested in studying trajectory forecasting to predict the future locations of tracked objects. Accurately predicting the behavior of surrounding traffic participants can facilitate more rational decision-making and improve the safety of autonomous driving. However, the ego-vehicle prediction capabilities are significantly limited by the lack of sufficient perceptual information and the lack of interaction between different traffic participants. It is valuable to study Vehicle-Infrastructure Cooperation (VIC) trajectory forecasting to fully utilize the infrastructure data and improve the forecasting ability. Although the Sequential Perception Dataset (SPD) can be used to study VIC trajectory forecasting, its data scale is too small and its trajectories are not rich enough to explore diverse behaviors. Therefore, we mined interesting trajectories from 336 hours of driving data and 336 hours of infrastructure data at 28 urban intersections in the Beijing Yizhuang Area to form a large-scale trajectory dataset. Details about the data collection and trajectory mining are presented in the Appendix. The Trajectory Forecasting Dataset (TFD) is composed of about 50,000 cooperative-view, 80,000 infrastructure-view, and 80,000 ego-vehicle-view scenarios. Each scenario includes a sequence of tracked object data for 10 seconds at 10 Hz, a local vector map, and real-time traffic light signals (only provided for cooperative-view and infrastructure-view scenarios). Among them, the 50,000 cooperative-view scenarios were collected at the same time and intersection, where the ego vehicle drove through the equipped intersections. The tracked objects contain 3D boxes modeled with 7 dimensions, an object type attribute from 8 classes, and a trajectory ID. Additionally, we provide cooperative trajectory annotations for cooperative-view scenarios. The cooperative trajectory is generated in a similar way to the cooperative tracking annotation but without manual refinement. Each cooperative trajectory is marked with the source trajectories it originated from. The released dataset is diverse in terms of classes and locations. The distribution of classes is presented in Figure 3. We provide the detailed data collection and generation process in the Appendix. ### Vector Maps and Traffic Lights. We provide vector maps for the areas covering the selected 28 intersections, organized similarly to Argoverse [5]. The vector maps contain lane centerlines, crosswalks, and stoplines, represented by line segments with starting and ending points. To meet data security requirements, we add a constant offset to the coordinates of the points located in the world coordinate system. For each lane centerline, we provide attributes such as turning left or right, and we also provide the actual lane width so that the boundaries of each lane can be calculated. As traffic vehicles must follow the lane, including its centerline and boundaries, to obey traffic rules, building the spatial context between trajectories and vector maps can provide valuable hints for trajectory tracking and forecasting. Additionally, we provide real-time traffic light signals for the infrastructure portion of the Trajectory Forecasting Dataset (TFD). 
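As an illustration only, the vector-map elements described above could be organized as in the sketch below; the field names are ours and do not reflect the dataset's actual file schema.

```python
# Illustrative container types for the vector-map elements described in Section 3.3.
# Field names are assumptions for exposition, not the dataset's actual schema.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in the (offset) world coordinate system

@dataclass
class LaneCenterline:
    polyline: List[Point]   # ordered points from the starting to the ending point
    turn: str               # e.g. "left", "right", or "none"
    lane_width: float       # actual width, from which lane boundaries can be derived

@dataclass
class VectorMap:
    lane_centerlines: List[LaneCenterline] = field(default_factory=list)
    crosswalks: List[List[Point]] = field(default_factory=list)  # one polyline per crosswalk
    stoplines: List[List[Point]] = field(default_factory=list)   # one segment per stopline
```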
During the collection and storage of infrastructure sensor data, we also record traffic light data at 10 Hz. The traffic light signals include the timestamp, location, color status, shape status, and time remaining. This information can significantly influence the behavior of traffic participants. It is worth noting that although nuPlan [4] also provides traffic light data, their data is estimated offline based on traffic flow statistics, whereas our data is obtained directly from the traffic lights themselves. ## 4 VIC3D Tracking Task In this section, we detail the formalization of the Vehicle-Infrastructure Cooperative 3D (VIC3D) Tracking task, along with the corresponding evaluation metrics. Furthermore, we propose the FF-Tracking framework, which builds upon FFNet [39], to address the issue of degraded tracking performance caused by latency, thereby improving the overall efficiency of VIC3D Tracking. Task Description.VIC3D Tracking aims to cooperatively locate, identify, and track 3D objects using both infrastructure and ego-vehicle sequential data while operating under limited communication bandwidth. The input for VIC3D Tracking consists of sequential frames from both ego-vehicle and infrastructure sources: * Ego-vehicle sequential frames \(I_{v}(t_{v}^{{}^{\prime}})|t_{v}^{{}^{\prime}}\leq t_{v}\) as well as its relative pose \(M_{v}(t_{v}^{{}^{\prime}})|t_{v}^{{}^{\prime}}\leq t_{v}\): captured at and before time \(t_{v}\), where \(I_{v}(\cdot)\) denotes the capturing function of ego-vehicle sensors. * Infrastructure sequential frames \(I_{i}(t_{i}^{{}^{\prime}})|t_{i}^{{}^{\prime}}\leq t_{i}\) as well as its relative pose \(M_{i}(t_{i}^{{}^{\prime}})|t_{i}^{{}^{\prime}}\leq t_{i}\): captured at and before time \(t_{i}\), where \(I_{i}(\cdot)\) denotes the capturing function of infrastructure sensors. Here, \(t_{i}\) should be earlier than \(t_{v}\) (i.e., \(t_{i}<t_{v}\)) due to the communication delay. The outputs of VIC3D Tracking include the category, location, orientation, and unique tracking ID of each object in the area of interest surrounding the ego vehicle over time \(t_{v}\). The corresponding ground truth is the set of 3D tracked objects appearing in one of the cooperative-view sensors over time \(t_{v}\), which can be formulated as: \[GT=(GT_{v}\cup GT_{i})\cap R, \tag{1}\] Figure 3: Total number and average length per category for the trajectory forecasting dataset in a relatively uniform distribution of trajectory categories. where \(GT_{v}\) is the ground truth for ego-vehicle sensor perception, \(GT_{i}\) is the ground truth for infrastructure sensor perception, and \(R\) is the ego-vehicle interest region. Evaluation Metrics and Analysis.VIC3D Tracking has two primary objectives: achieving better tracking performance while minimizing transmission costs to reduce bandwidth consumption. To assess these objectives, we use the following metrics: * MOTA, MOTP and IDS: Multi-Object Tracking Accuracy (MOTA), Multi-Object Tracking Precision (MOTP), and ID Switch (IDS) are three commonly used evaluation metrics for 3D tracking [3, 11]. We use these metrics to measure the performance of VIC3D Tracking approach. * BPS: Byte Per Second (BPS) measures the amount of data transmitted from the infrastructure to the ego vehicle per second, taking into account the transmission frequency. However, achieving these objectives presents several challenges. 
Firstly, we need to reduce the amount of data transmitted to meet the limited communication bandwidth requirement, while ensuring that the transmitted data are valuable enough to improve the tracking performance. The intermediate form is the most likely to achieve a balance between performance and transmission among the three possible data transmission forms (raw, intermediate, and perceived data). Secondly, latency can cause significant damage to cooperative fusion due to scene changes and dynamic object movements over time. Hence, we should consider the use of prediction alignment to remove fusion errors. FF-Tracking Framework.To address the challenges of VIC3D Tracking, we propose a middle fusion framework called FF-Tracking, which is based on feature flow prediction in FFNet [39]. FF-Tracking transmits both feature and feature flow instead of the single static feature from the infrastructure to the ego vehicle. We predict the future feature to align with the ego-vehicle timestamp using the following linear estimation: \[F_{future}(t)=F_{0}+t*F_{1}, \tag{2}\] where \(F_{0}\) denotes the static feature and \(F_{1}\) denotes the feature flow. With the predicted feature, we can effectively address the fusion error and solve the latency challenges. To further reduce the transmission cost, we compress the features and feature flows before transmitting them. This approach enables us to achieve the goals of better tracking performance and lower transmission cost while meeting the limited communication bandwidth requirement. The FF-Tracking framework consists of following parts. 1) Extracting the feature and feature flow from past sequential infrastructure frames. 2) Compressing, transmitting, and decompressing the static feature and feature flow. 3) Predicting the infrastructure feature using Eq. 2. 4) Fusing the features. We transform the predicted feature into a local ego-vehicle coordinate system and then fuse it with the ego-vehicle feature. We extract the ego-vehicle feature from the ego-vehicle point clouds. 5) Generating the tracking results. We use a Single Shot Detector (SSD) [36] to generate the 3D object outputs and then use the AB3DMOT [33] to track the objects and assign a unique tracking ID for each object. The whole process is also illustrated in Fig. 4. Please refer [39] to more feature flow prediction configurations. ## 5 VIC Trajectory Forecasting Tasks In this section, we present two trajectory forecasting tasks based on the trajectory forecasting dataset: Online-VIC Forecasting and Offline-VIC Forecasting. These tasks aim to investigate how to effectively leverage real-time infrastructure information and offline behavior knowledge transfer from the infrastructure to the vehicle side. Figure 4: Overview of the FF-Tracking framework. The framework transmits compressed features and feature flows, which can effectively reduce transmission costs while removing fusion errors caused by communication delays. ### Online-VIC Forecasting Task Task Formulation.Online-VIC Forecasting can be formulated as the problem of predicting future trajectories using real-time infrastructure and vehicle-side data. The inputs for Online-VIC Forecasting are: * A set of infrastructure trajectories \(\{T_{i}^{(l)}(t_{i})\}\) and traffic light signals, where the trajectory \(T_{i}^{(l)}(t_{i})\) contains the sequential coordinates of agent \(A_{i}^{(l)}\) at and before time \(t_{i}\). * Local vector maps. 
* A set of ego-vehicle trajectories \(\{T_{v}^{(k)}(t_{v})\}\), where the trajectory \(T_{v}^{(k)}(t_{v})\) contains the sequential coordinates of agent \(A_{v}^{(k)}\) at and before time \(t_{v}\). Note that \(t_{i}\) should be earlier than \(t_{v}\) due to the latency. However, in this paper, we ignore the latency to explore how to integrate infrastructure information better and consider \(t_{i}\) equal to \(t_{v}\). The output is the specified target agent's future coordinates for time steps \(t=t_{v}+1,\cdots,t_{pred}\). To make the forecasting task more challenging, we predict longer trajectories and define the forecasting task as observing the past 50 frames (5\(s\)) and then predicting the future 50 frames (5\(s\)). Evaluation Metrics and Analysis.In autonomous driving, there are often diverse possible future behaviors of traffic participants. Therefore, we output multiple possible future trajectories for each target agent for evaluation. Similar to Argoverse [5], we use the minimum Average Displacement Error (minADE), minimum Final Displacement Error (minFDE), and Missing Rate (MR) as the metrics to measure the prediction performance. We evaluate the model with Top-\(K\) predictions as our metrics, where \(K=6\). Our approach is based on the Trajectory Forecasting Dataset (TFD), which involves receiving and fusing infrastructure data from an intersection environment with complicated traffic situations. There are several challenges to achieving better prediction performance. One of these challenges is to effectively utilize valuable infrastructure information to enhance the incomplete perception results of the vehicle side, which is limited due to the single-vehicle view. Another challenge to establish a proper social context by incorporating infrastructure-perceived agents for better reasoning about the future behaviors of the target agent. Finally, it is crucial to improve the encoding of vector maps and traffic light signals to better assist in prediction. ### Offline-VIC Forecasting Task The Offline-VIC Forecasting task aims to transfer knowledge extracted from various infrastructure sequences to predict ego-vehicle trajectories. During inference, the model can only utilize the ego-vehicle data and cannot access real-time infrastructure data, similar to the traditional trajectory forecasting task [5]. Similar to Online-VIC Forecasting, we define the prediction task as observing the past 50 frames (5 seconds) and predicting the future 50 frames (5 seconds). We measure the prediction results using minADE, minFDE, and MR metrics and evaluate the model with Top-\(K\) predictions, where \(K\)=6. The main challenge in solving this task is extracting appropriate knowledge from heterogeneous infrastructure data for transfer. ## 6 Experiments ### VIC3D Tracking Benchmarks In this section, we present the results of our extensive experiments, which include different fusion approaches, input modalities, and latency settings. The experiments are conducted on the sequential perception dataset (SPD), and the train/valid/test split ratio is set to 5:2:3. We only consider four classes of Car, Van, Bus and Truck and the objects located in a rectangular of [0, -39.68, 100, 39.68]. The results are summarized in Table 2 and visualized in Figure 5. #### 6.1.1 Baselines The VIC3D Tracking problem can be tackled using three solutions: early fusion, middle fusion, and late fusion. 
Early fusion involves fusing infrastructure raw data, middle fusion fuses intermediate-level infrastructure data like feature maps, and late fusion fuses infrastructure perception results. Raw data contains all information but requires the highest transmission cost, while perception results consume the least amount of transmission cost but lose valuable information. We conducted experiments to evaluate the performance of these fusion solutions for VIC3D Tracking. Solution with Middle Fusion.We implemented the FF-Tracking model and a simple middle fusion model to explore middle fusion with intermediate data. We first explain how to train the FF-Tracking model. We pre-trained the FF-Tracking model on the training part of the sequential perception dataset for 40 epochs without considering latency. The learning rate was set to 0.001, and the weight decay was set to 0.01. We fine-tuned the FF-Tracking model on the training part of thesequential perception dataset for 20 epochs by adding random latency. The learning rate was set to 0.001, and the weight decay was set to 0.01. We then applied V2VNet [32] as a simple middle fusion model to solve VIC3D Tracking and compared it with FF-Tracking. Compared to FF-Tracking, the V2VNet [32] only transmits a single feature and keeps the other configurations the same as FF-Tracking. We trained the model for 40 epochs with a learning rate of 0.001 and a weight decay of 0.01. Note that FF-Tracking incurs a higher transmission cost per second compared to simple middle fusion due to the requirement of transmitting additional feature flow. Furthermore, in 0\(ms\) latency, FF-Tracking degenerates into V2VNet, which suggests that FF-Tracking and V2VNet manifest equivalent tracking performance under 0\(ms\) latency conditions. **Solution with Early Fusion.** We implement early fusion with point cloud inputs. First, we convert the infrastructure point cloud into the ego-vehicle coordinate system. Then, we convert both infrastructure and ego-vehicle point clouds into pseudo-images and fused them. We used PointPillars [19] as a detector to generate 3D outputs and AB3DMOT [33] to track each object. We directly train and evaluate the detector with the fused point cloud. Additionally, we also evaluate the model with different latencies. **Solution with Late Fusion.** To investigate the fusion effect with perception results, we implement late fusion using point cloud and image inputs. Specifically, we employ PointPillars [19] to locate and identify objects from both infrastructure sequential frames and ego-vehicle sequential frames. Additionally, we use ImvoxelNet [9] to perceive 2D objects from the infrastructure and ego-vehicle sequential images. Next, we transmit the infrastructure objects to the ego vehicle and fuse them with the ego-vehicle objects based on Euclidean distance measurements. Then we use AB3DMOT [33] to track the fused objects. Finally, we evaluate the model's performance with different latencies. #### 6.1.2 Analysis **V2X view vs. Single-vehicle view.** In Table 2, we present the evaluation results for both fusion and no-fusion methods. When using point clouds as input, all fusion methods outperform the no-fusion strategy, even when there is a performance drop due to communication delay. For instance, with point cloud as input and 200\(ms\) latency, the early fusion method improves the MOTA (multiple object tracking accuracy) of vehicles by 11.96% (from 39.31% to 51.27%). 
Thus, vehicle-infrastructure cooperative perception can effectively enhance 3D tracking performance. **Middle Fusion vs. Early Fusion & Late Fusion.** We compared the performance of middle fusion, early fusion, and late fusion techniques using point cloud as input and with 0\(ms\) latency. Our results indicate that early fusion achieves higher tracking performance than middle fusion (56.03% vs. 54.75% MOTA), while middle fusion requires less transmission cost (6.2\(\times 10^{5}\) Byte/s and 1.2\(\times 10^{6}\) Byte/s vs. 1.3\(\times 10^{7}\) Byte/s). Although late fusion requires the least transmission cost with 3.3\(\times 10^{3}\) Byte/s, it still achieves lower tracking performance than middle fusion (53.18% vs. 54.75% MOTA). Our findings suggest that the middle fusion technique can achieve a better balance between transmission cost and tracking performance. **FF-Tracking can overcome the latency challenge.** We present evaluation results of different fusion methods with a 200\(ms\) latency, as shown in Table 2. \begin{table} \begin{tabular}{c|c c c|c c|c|c} \hline \hline Modality & Latency (\(ms\)) & Fusion Type & Fusion Method & MOTA \(\uparrow\) & MOTP & IDS & BPS (Byte/s) \(\downarrow\) \\ \hline \hline \multirow{2}{*}{Image} & 0 & Vehicle Only & - & 10.96 & 58.69 & 2 & 0 \\ & 0 & Late Fusion & Hungarian [18] & 22.27 & 57.25 & 194 & 3.3\(\times 10^{3}\) \\ \hline \hline \multirow{4}{*}{PointCloud} & 0 & Vehicle Only & - & 39.31 & 67.28 & 109 & 0 \\ & 0 & Early Fusion & Concat & **56.03** & 70.17 & 296 & 1.3\(\times 10^{7}\) \\ & 0 & Late Fusion & Hungarian [18] & 53.18 & 72.35 & 273 & 3.3\(\times 10^{3}\) \\ & 0 & Middle Fusion & V2VNet [32] & 54.75 & 69.76 & 222 & 6.2\(\times 10^{5}\) \\ \hline \hline PointCloud & 0 & Middle Fusion & **FF-Tracking** & 54.75 & 69.76 & 222 & 6.2\(\times 10^{5}\) \\ \hline \hline \multirow{4}{*}{PointCloud} & 200 & Early Fusion & Concat & 51.27 & 69.67 & 234 & 1.3\(\times 10^{7}\) \\ & 200 & Late Fusion & Hungarian [18] & 50.32 & 71.58 & 260 & 3.3\(\times 10^{3}\) \\ \cline{1-1} & 200 & Middle Fusion & V2VNet [32] & 48.38 & 68.99 & 231 & 6.2\(\times 10^{5}\) \\ \hline \hline PointCloud & 200 & Middle Fusion & **FF-Tracking** & **52.26** & 69.64 & 225 & 1.2\(\times 10^{6}\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Evaluation Results for VIC3D Tracking on SPD at Different Latency Levels.** The "Vehicle Only" approach utilizes only ego-vehicle data, while "Concat fusion" combines pseudo images generated from point clouds. The evaluation of tracking performance employs three metrics: MOTA, MOTP, and IDS. Additionally, the transmission cost per second is assessed using the BPS metric. Notably, **in this experiment we only compare MOTA scores for the evaluation** and do not consider the MOTP and IDS scores for comparison. Figure 5: Comparison of VIC3D Tracking Baseline Models with Varying Latencies. Our proposed FF-Tracking model demonstrates greater robustness to latency when compared to the early fusion, late fusion, and simple middle fusion models. All the fusion methods show a performance drop as the latency increases. For example, the early fusion has a 4.76% MOTA drop, and the simple middle fusion has a 6.37% MOTA drop when the latency is increased from \(0ms\) to \(200ms\). In comparison, our FF-Tracking model only has a 2.49% MOTA drop. We also present additional evaluation results at different latencies in Fig. 5. 
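For intuition about why FF-Tracking degrades more gracefully, below is a minimal sketch of the latency compensation of Eq. 2; the tensor shapes, the concatenation-based fusion, and the variable names are simplifying assumptions rather than the actual FFNet implementation.

```python
# A simplified sketch of Eq. 2: the ego side receives a decompressed static feature F0
# and a feature flow F1 from the infrastructure and extrapolates to its own timestamp.
import torch

def predict_infra_feature(f0: torch.Tensor, f1: torch.Tensor, latency_s: float) -> torch.Tensor:
    """Linearly extrapolate the infrastructure feature map by the observed latency."""
    return f0 + latency_s * f1

def fuse_features(ego_feat: torch.Tensor, infra_feat: torch.Tensor) -> torch.Tensor:
    """Toy fusion by channel concatenation; the real framework first warps the predicted
    feature into the ego-vehicle coordinate system and applies a learned fusion module."""
    return torch.cat([ego_feat, infra_feat], dim=1)

# Usage with (B, C, H, W) BEV feature maps and a 200 ms communication delay:
# fused = fuse_features(ego_feat, predict_infra_feature(f0, f1, latency_s=0.2))
```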
Our FF-Tracking model remains robust to all latencies and, importantly, outperforms early fusion by up to 4% MOTA at 300\(ms\) latency. Additionally, our FF-Tracking model achieves the best tracking performance when the latency reaches 200\(ms\). ### Trajectory Forecasting Benchmarks This section provides the baselines for solving the Online-VIC and Offline-VIC forecasting tasks on the trajectory forecasting dataset (TFD) with a train/val/test split of 5:2:3. The evaluation results are presented in Table 3. #### 6.2.1 Baselines We choose TNT [40] and HiVT [41] as base models and train them with different configurations. We encode only the trajectories and vector maps that are within 50m of the ego vehicle. We evaluate the models on the validation part of the 50,000 cooperative-view dataset. Specifically: * **Baseline 1:** We only use ego-vehicle data and vector maps from the 50,000 cooperative-view data. We train the TNT [40] and HiVT [41] models for 30 epochs, and the other settings remain the same as the original. * **Baseline 2:** We use vector maps and both ego-vehicle and infrastructure trajectories. We propose the PP-VIC framework, a simple yet effective hierarchical perception-prediction method for solving the Online-VIC Forecasting task. First, we use CBMOT [2] to fuse the infrastructure and ego-vehicle trajectories. We only fuse or add infrastructure trajectories that are relatively complete or have very high detection scores. Then, we apply TNT [40] and HiVT [41], respectively, to encode the trajectories and vector maps and generate future trajectories. We train the PP-VIC model for 30 epochs, and the other settings remain the same as the original. * **Baseline 3:** We use ego-vehicle data and vector maps from the 50,000 cooperative-view data and additionally use the 80,000 infrastructure-view trajectories. We pre-train TNT [40] on the 80,000 infrastructure trajectories and then fine-tune it, initialized from the pre-trained model. We train HiVT [41] in the same way. #### 6.2.2 Analysis **Online infrastructure trajectories are useful.** Compared to baselines that do not use any infrastructure information, PP-VIC achieves lower minADE, minFDE, and MR. PP-VIC with TNT [40] achieves a minADE that is 3.74 lower than the TNT [40] model that does not use infrastructure trajectory information, and PP-VIC with HiVT [41] achieves a minADE that is 0.28 lower than the HiVT [41] model that does not use infrastructure trajectory information. These results suggest that online utilization of infrastructure trajectories can improve forecasting performance. **Offline infrastructure trajectories are useful.** TNT [40] pretrained on extra infrastructure trajectories achieves a 7.65 minADE reduction compared to TNT [40] without the use of any infrastructure data. HiVT [41] pretrained on extra infrastructure trajectories achieves a 0.03 minADE reduction compared to HiVT [41] without the use of any infrastructure data. The experimental results demonstrate that extracting knowledge from infrastructure trajectories can effectively improve forecasting performance. ## 7 Conclusion This paper presents a large-scale sequential V2X dataset, where all the data elements, including data frames, trajectories, vector maps, and traffic lights, are captured and generated from real-world scenarios. The paper introduces three new tasks for the vehicle-infrastructure cooperative autonomous driving community to better study how to utilize infrastructure information to improve sequential perception and trajectory forecasting ability. 
Several benchmarks are carefully designed for the fair evaluation of the introduced tasks. The experimental results demonstrate that infrastructure data can improve tracking and trajectory forecasting ability. Moreover, this paper proposes a novel FF-Tracking approach to solve the VIC3D Tracking problem. ## Acknowledgements This work was supported by Baidu Inc. through the Apollo-AIR Joint Research Center, and partially supported by the General Research Fund of HK under Grants No. 27208720 and No. 17200622. The authors would like to express their gratitude to the Beijing High-level Autonomous Driving Demonstration Area and Beijing Academy of Artificial Intelligence for their support throughout the dataset construction and release process. \begin{table} \begin{tabular}{c c|c c c} \hline \hline Using Infrastructure Trajectories & Prediction Model & minADE \(\downarrow\) & minFDE \(\downarrow\) & MR \(\downarrow\) \\ \hline \hline ✗ & TNT [40] & 12.01 & 24.15 & 0.84 \\ Online & TNT [40] & 8.27 & 17.25 & 0.76 \\ Offline & TNT [40] & 4.36 & 9.23 & 0.62 \\ \hline \hline ✗ & HiVT [41] & 1.55 & 2.59 & 0.36 \\ Online & HiVT [41] & 1.27 & 2.36 & 0.35 \\ Offline & HiVT [41] & 1.52 & 2.27 & 0.30 \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation results for different baselines (K = 6). Using infrastructure trajectories can improve forecasting performance.
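As a companion to Table 3, the sketch below spells out the Top-K (K = 6) forecasting metrics used in Section 6.2; the 2 m miss threshold follows common practice for MR and is an assumption here, as is the per-scenario interface.

```python
# A minimal sketch of minADE, minFDE, and the miss indicator for one scenario.
import numpy as np

def topk_forecasting_metrics(preds: np.ndarray, gt: np.ndarray, miss_thresh: float = 2.0):
    """preds: (K, T, 2) candidate future trajectories; gt: (T, 2) ground-truth future.

    Returns (minADE, minFDE, missed). The Missing Rate reported in Table 3 is the
    fraction of scenarios for which `missed` is True.
    """
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) per-step displacement errors
    min_ade = float(dists.mean(axis=1).min())          # best average displacement over K candidates
    min_fde = float(dists[:, -1].min())                # best final displacement over K candidates
    return min_ade, min_fde, bool(min_fde > miss_thresh)
```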
2307.13923
GrammarGPT: Exploring Open-Source LLMs for Native Chinese Grammatical Error Correction with Supervised Fine-Tuning
Grammatical error correction aims to correct ungrammatical sentences automatically. Recently, some work has demonstrated the excellent capabilities of closed-source Large Language Models (LLMs, e.g., ChatGPT) in grammatical error correction. However, the potential of open-source LLMs remains unexplored. In this paper, we introduced GrammarGPT, an open-source LLM, to preliminary explore its potential for native Chinese grammatical error correction. The core recipe of GrammarGPT is to leverage the hybrid dataset of ChatGPT-generated and human-annotated. For grammatical errors with clues, we proposed a heuristic method to guide ChatGPT to generate ungrammatical sentences by providing those clues. For grammatical errors without clues, we collected ungrammatical sentences from publicly available websites and manually corrected them. In addition, we employed an error-invariant augmentation method to enhance the ability of the model to correct native Chinese grammatical errors. We ultimately constructed about 1k parallel data and utilized these data to fine-tune open-source LLMs (e.g., Phoenix, released by The Chinese University of Hong Kong, Shenzhen) with instruction tuning. The experimental results show that GrammarGPT outperforms the existing SOTA system significantly. Although model parameters are 20x larger than the SOTA baseline, the required amount of data for instruction tuning is 1200x smaller, illustrating the potential of open-source LLMs on native CGEC. Our GrammarGPT ranks $3^{rd}$ on NLPCC2023 SharedTask1, demonstrating our approach's effectiveness. The code and data are available at \url{https://github.com/FreedomIntelligence/GrammarGPT}.
Yaxin Fan, Feng Jiang, Peifeng Li, Haizhou Li
2023-07-26T02:45:38Z
http://arxiv.org/abs/2307.13923v2
# GrammarGPT: Exploring Open-Source LLMs ###### Abstract Grammatical error correction aims to correct ungrammatical sentences automatically. Recently, some work has demonstrated the excellent capabilities of closed-source Large Language Models (LLMs, e.g., ChatGPT) in grammatical error correction. However, the potential of open-source LLMs remains unexplored. In this paper, we introduced GrammarGPT, an open-source LLM, to preliminary explore its potential for native Chinese grammatical error correction. The core recipe of GrammarGPT is to leverage the hybrid dataset of ChatGPT-generated and human-annotated. For grammatical errors with clues, we proposed a heuristic method to guide ChatGPT to generate ungrammatical sentences by providing those clues. For grammatical errors without clues, we collected ungrammatical sentences from publicly available websites and manually corrected them. In addition, we employed an error-invariant augmentation method to enhance the ability of the model to correct native Chinese grammatical errors. We ultimately constructed about 1k parallel data and utilized these data to fine-tune open-source LLMs (e.g., Phoenix, released by The Chinese University of Hong Kong, Shenzhen) with instruction tuning. The experimental results show that GrammarGPT outperforms the existing SOTA system significantly. Although model parameters are 20x larger than the SOTA baseline, the required amount of data for instruction tuning is 1200x smaller, illustrating the potential of open-source LLMs on native CGEC. Our GrammarGPT ranks \(3^{rd}\) on NLPCC2023 SharedTask1, demonstrating our approach's effectiveness. The code and data are available at [https://github.com/FreedomIntelligence/GrammarGPT](https://github.com/FreedomIntelligence/GrammarGPT). Keywords:Native Chinese grammatical error correction Large language models ChatGPT Instruction tuning. ## 1 Introduction Grammatical Error Correction (GEC) aims to automatically correct ungrammatical sentences without changing their meaning [26, 10, 27]. Previous works [28, 13, 14, 26] in Chinese Grammatical Error Correction (CGEC) mainly study the errors from foreign Chinese learners, which are very obvious and naive. Therefore, recent works [27, 10] shift to the grammatical errors made by native speakers, which are more subtle and challenging. Table 1 shows the six main types of grammatical errors made by native speakers, which can be divided into two types, e.g., with (w/) and without (w/o) clues. We can find that the incorrect sentences are fluent and in line with the habits of native Chinese. However, they do not conform to Chinese grammar, which is more difficult to correct. Previous studies in GEC mainly adopted both Seq2edit [5, 26, 9, 10] and Seq2seq [7, 29, 15] paradigms and have achieved impressive performance on various GEC benchmarks. With the emergence of LLMs, Fang et al. [4] evaluated the performance of closed-source LLMs (e.g., ChatGPT 1) on GEC and revealed its excellent capabilities for error detection and correction. However, the potential of open-source LLMs remains unexplored. Footnote 1: [https://chat.openai.com/](https://chat.openai.com/) In this paper, we introduce GrammarGPT, a novel model for studying the potential of open-source LLMs architectures in addressing Native Chinese Grammatical Error Correction (CGEC) through supervised fine-tuning. The key challenge in fine-tuning LLMs for CGEC is obtaining high-quality parallel data comprising grammatical errors made by native speakers. 
However, manually annotating such data is not only time-consuming but also expensive, necessitating the exploration of automatic data annotation methods. Recent works [25, 22] have successfully leveraged distilled data from ChatGPT and real-world datasets to fine-tune LLMs for specific domains, effectively reducing costs while achieving superior performance. Inspired by this line of research, we propose a hybrid dataset that incorporates different types of native Chinese grammatical errors. Specifically, we first proposed a heuristic method for the grammatical errors with clues as shown in Fig. 1 that guides ChatGPT to generate ungrammatical sentences by providing those clues. Then, for those errors without clues, we collected the ungrammatical sentences from the public website and corrected them manually. In addition, we proposed an error-invariant data augmentation method to enhance the diversity of the data by substituting the named entities in parallel data with similar ones, which can improve the ability of the model to correct native Chinese grammatical errors. We ultimately constructed 1k parallel data and utilized these data to fine-tune LLMs with instruction tuning. The experimental results show that GrammarGPT can significantly outperform state-of-the-art (SOTA) systems. Although the size of model parameters is 20x larger than the SOTA baseline, the data for fine-tuning is 1200x smaller, which demonstrated the potential of open-source LLMs on Chinese grammatical error correction. Our contributions are as follows: * To the best of our knowledge, we are the first to explore the potential of open-source LLMs with instruction tuning for native Chinese grammatical error correction. * We have constructed a hybrid dataset generated by ChatGPT and manual annotation, which can effectively cover native Chinese grammatical errors for taming the LLMs into an excellent grammar detector. * We designed an error-invariant data augmentation method to substitute the named entities in parallel data with similar ones, making the model more accurate in correcting grammatical errors. * The experimental results show that GrammarGPT can outperform the SOTA system significantly, and the data size for instruction tuning is only 1/1200 of the SOTA system. 
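To make the clue-guided generation mentioned in the contributions concrete (it is detailed in Section 3.1.1), here is a rough sketch of how a generation request could be assembled from a collected clue; the prompt wording and function name are hypothetical and not the exact prompt used in this work.

```python
# A hypothetical sketch of assembling a clue-guided generation prompt: given a pair of
# expressions that must not co-occur (a collected "clue") and a grammatical seed sentence,
# ChatGPT is asked to produce an ungrammatical variant containing both expressions.
def build_generation_prompt(clue: tuple, seed_sentence: str) -> str:
    a, b = clue  # e.g. a redundant pair such as ("more than", "about")
    return (
        f"Rewrite the following sentence so that it uses both '{a}' and '{b}' together, "
        f"keeping the original meaning but making the sentence ungrammatical. "
        f"Sentence: {seed_sentence}"
    )

# The model's output is paired with the original seed sentence to form one
# (ungrammatical, grammatical) training example.
```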
Table 1: Examples of the six main types of grammatical errors made by native Chinese speakers, divided into errors with clues (redundant component, structural confusion, improper collocation) and errors without clues; each row pairs an incorrect sentence with its corrected counterpart. [Chinese example sentences omitted.]
## 2 Related Work ### Grammatical Error Correction The works in grammatical error correction can be divided into two paradigms: the Seq2edit paradigm and the Seq2seq paradigm. Seq2edit paradigm. The Seq2edit paradigm aims to predict a modification label, including insertion, deletion, and substitution, for each position of the sentence iteratively. Hinson et al. [5] proposed a heterogeneous approach to CGEC, composed of an NMT-based model, a sequence editing model, and a spell checker. Liang et al. [9] introduced and transferred the BERT-fused NMT model and sequence tagging model into the CGEC field. Zhang et al. [26] proposed a multi-reference multi-source evaluation dataset for CGEC and adopted a Seq2edit method enhanced with large pre-trained language models. Ma et al. [10] proposed a linguistic rules-based approach to construct large-scale CGEC training corpora with automatically generated grammatical errors and adopted the Seq2edit method for evaluation. Seq2seq paradigm. This paradigm treats CGEC as a monolingual translation task. Katsumata and Komachi [7] explored the utility of bidirectional and auto-regressive transformers (BART) as a generic pre-trained encoder-decoder model for GEC. Zhao and Wang [29] proposed a simple yet effective method to improve NMT-based GEC models by dynamic masking, which can generate more diverse instances to enhance model generalization. Rothe et al. [15] proposed a language-agnostic method to generate a large number of synthetic examples and then fine-tune large-scale multilingual language models. In addition, several works [9, 5, 8, 26] observe the complementary strengths of the above two paradigms and thus promote performance through model ensembling. In this paper, we adopt the Seq2seq paradigm to fine-tune LLMs with instruction tuning. ### Instruction Tuning for LLMs Instruction tuning [21, 16] can improve model generalization by learning from a large number of tasks guided by instructions, and it has been successfully applied to fine-tune LLMs on specific tasks. 
The work on task-specific instruction tuning can be categorized into three types by data source: ChatGPT-generated, human-annotated, and hybrid datasets of ChatGPT-generated and human-annotated data. ChatGPT-generated data. Several works adopted data generated by ChatGPT to fine-tune LLMs in the form of instructions. Ho et al. [6] proposed Fine-tune-CoT, a method that generates reasoning samples from LLMs to fine-tune smaller models, which enables substantial reasoning capability in small models. Wang et al. [19] proposed SCOTT, a faithful knowledge distillation method to learn a small, self-consistent CoT model from a teacher model that is orders of magnitude larger. Chen et al. [1] explored distilling the reasoning ability of LLMs into a more compact student model for multimodal named entity and multimodal relation extraction. Chen et al. [1] proposed a data synthesis framework built upon data generation functions parameterized by LLMs and prompts and used the synthesized data to fine-tune LLaMA. Human-annotated data. Some works directly convert supervised data into the format of instructions to fine-tune LLMs. Zhang et al. [24] proposed to fine-tune LLaMA [18] on financial sentiment analysis with a small portion of supervised financial sentiment analysis data. Wang et al. [20] proposed a unified information extraction framework based on instruction tuning to model various information extraction tasks and capture the inter-task dependency. Hybrid dataset of ChatGPT and human. Recently, some works utilized hybrid data from humans and ChatGPT/GPT-4 to fine-tune LLMs. Zhang et al. [25] proposed to leverage both distilled data from ChatGPT and real-world data from doctors to fine-tune Bloom [17]. Yu et al. [22] adopted hybrid data of Chinese education and general-domain instructions [12] generated by GPT-4 to fine-tune LLaMA [18]. In this paper, we follow this line and fine-tune LLMs on native CGEC with a hybrid dataset of ChatGPT-generated and human-annotated data via instruction tuning. ## 3 Methods Fig. 1 illustrates the framework of our method, which involves the construction of parallel data comprising six types of native Chinese grammatical errors to facilitate the fine-tuning of open-source Large Language Models (LLMs). While human-annotated data offer high-quality samples, the associated high cost remains a significant concern. To address this, we adopt a compromise approach. We first guide ChatGPT to generate ungrammatical sentences with clues by providing those clues collected from the Internet. Then, we annotate the ungrammatical sentences without clues collected from the Internet. Additionally, we propose an error-invariant augmentation technique to substitute named entities in the parallel data with similar ones, further enhancing the model's capability to correct native Chinese grammatical errors. Finally, we convert the parallel data into instructions, which are then utilized for fine-tuning LLMs. Detailed explanations of these steps are provided in the following subsections. Figure 1: The framework of our method. ### Hybrid Dataset Construction #### 3.1.1 ChatGPT-generated Data As shown in the first three lines of Table 1, the grammatical errors with clues are easy to detect and correct by recognizing the specific clues. For example, _"more than"_ and _"about"_ are used together, leading to a **redundant component**; _"The cause"_ and _"caused by"_ are used together, leading to **structural confusion**; and _"prompting"_ and _"pace"_ are used together, leading to an **improper collocation**. 
Conversely, we can construct the ungrammatical sentences by inserting these cues into grammatical sentences. Thanks to the strong capabilities of ChatGPT, we can instruct ChatGPT to generate the ungrammatical sentences that meet our requirements by providing these clues collected from public websites 6. An example is as shown in Fig. 2. Footnote 6: [https://wenku.baidu.com](https://wenku.baidu.com) #### 3.1.2 Human-annotated Data Some types of native ungrammatical errors are hard to recognize, as shown in the last three lines of Table 1. We can find that those ungrammatical sentences are fluent and with no obvious clues of grammatical errors can help us to recognize them. For these types of grammatical errors, we mainly collected ungrammatical sentences from publicly available websites7 and then manually annotated them. Figure 3: An example of error-invariant augmentation. Figure 2: Process of ungrammatical sentences generated by ChatGPT. ### Error-invariant Data Augmentation To prioritize the model's focus on native grammar errors and improve its robustness, we have devised an error-invariant augmentation method, as shown in Fig. 3. Native Chinese grammatical errors are often subtle and infrequently found in the position of named entities. To address this, we adopt a strategy of substituting the named entities in the parallel data with similar ones8. By employing this augmentation method, the model can concentrate on identifying unchanged errors rather than specific nouns, thereby improving its performance in correcting subtle and imperceptible grammar errors. Footnote 8: [https://github.com/chatopera/Synonyms](https://github.com/chatopera/Synonyms) ### Instruction Tuning Instruction tuning[21, 16] has emerged as the mainstream approach for fine-tuning LLMs by providing explicit instructions to enhance model comprehension. In this paper, we followed this mainstream trend and fine-tuned LLMs with instruction tuning. Instruction details are as shown in Table 2, which mainly consists of four components. 1. **Task prefix**: This component guides LLMs to assume the role of an AI assistant. 2. **Task description**: Here, the specific task that LLMs are required to accomplish is outlined. 3. **Input**: This corresponds to ungrammatical sentences that are used as input during the fine-tuning process. 4. **Output**: This represents grammatical sentences, which serve as the expected output during fine-tuning. \begin{table} \begin{tabular}{c l} \hline \hline \multirow{2}{*}{Instruction} & \{**Task Prefix**\} \\ & \multicolumn{1}{c}{Human:} & \multicolumn{1}{c}{**Task Description**\} & \multicolumn{1}{c}{**Input**} & Assistant :\{**Output**\} \\ \hline \multirow{3}{*}{Task Prefix} & A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, & \\ & detailed, and polite answers to the human’s questions. & \\ \cline{1-1} & \multicolumn{1}{c}{Evaluate this sentence for grammar mistake} \\ \cline{1-1} & \multicolumn{1}{c}{_Ungrammatical sentence_} \\ \cline{1-1} & \multicolumn{1}{c}{_Grammatical sentence_} \\ \hline \hline \end{tabular} \end{table} Table 2: Components of an instruction. 
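For concreteness, below is a minimal sketch of rendering one parallel pair into the instruction format of Table 2; the prefix and task-description strings are taken from the table, while the function name is ours.

```python
# Render one (ungrammatical, grammatical) pair into the instruction format of Table 2:
# {Task Prefix} Human: {Task Description} {Input} Assistant: {Output}
TASK_PREFIX = ("A chat between a curious human and an artificial intelligence assistant. "
               "The assistant gives helpful, detailed, and polite answers to the human's questions.")
TASK_DESCRIPTION = "Evaluate this sentence for grammar mistake"

def build_instruction(ungrammatical: str, grammatical: str) -> str:
    return f"{TASK_PREFIX} Human: {TASK_DESCRIPTION} {ungrammatical} Assistant: {grammatical}"
```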
\begin{table}
\begin{tabular}{c|c|ccc|ccc}
\hline \hline
\multirow{3}{*}{Dataset} & \multirow{3}{*}{Number} & \multicolumn{6}{c}{Percentage of Different Grammatical Errors (\%)} \\
 & & \multicolumn{3}{c|}{ChatGPT-generated} & \multicolumn{3}{c}{Human-annotated} \\
 & & RC & SC & IC & IWO & IL & MC \\
\hline
training set & 1061 & 23.54 & 28.25 & 13.70 & 6.50 & 13.18 & 15.07 \\
validating set & 500 & - & - & - & - & - & - \\
\hline \hline
\end{tabular}
\end{table} Table 3: Statistics of the dataset.

## 4 Experiments

### 4.1 Datasets

We constructed a total of 1061 parallel data samples for training, and the data statistics are provided in Table 3. Roughly 35% of the data were manually annotated, while the remaining 65% were generated using ChatGPT. To evaluate the performance of our model, we utilized the validating set available on the NLPCC2023 SharedTask1 website9, which consists of 500 parallel data samples. We report the model's performance on this validating set for all the experiments conducted.

Footnote 9: [https://github.com/masr2000/NaCGEC](https://github.com/masr2000/NaCGEC)

### 4.2 Metrics

The evaluation of a grammatical error correction system relies on the extent to which its proposed corrections or edits align with the gold-standard edits [11]. In line with previous research [10, 26], we adopt the word-level and char-level MaxMatch (M2) Scorer [3] for evaluation10. This scorer computes Precision, Recall, and F\({}_{0.5}\) scores, comparing the gold edit set with the system edit set.

Footnote 10: [https://github.com/HillZhang1999/MuCGEC/tree/main/scores/ChERRANT](https://github.com/HillZhang1999/MuCGEC/tree/main/scores/ChERRANT)

### 4.3 Hyper-parameters

The models are implemented in PyTorch using the Huggingface Transformers library11. We used phoenix-inst-chat-7b12 [2] as the backbone. We set the max sequence length to 256. The model is trained with the AdamW optimizer, where the batch size and the number of epochs are set to 64 and 3, respectively. We set the learning rate and the learning-rate schedule type to 2e-5 and 'linear', respectively. The warmup step is set to 5. The hyper-parameters are shown in Table 4.

Footnote 12: [https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b](https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b)

\begin{table}
\begin{tabular}{c|c}
\hline
Backbone & phoenix-inst-chat-7b \\
\hline
Max length & 256 \\
\hline
Optimizer & AdamW \\
\hline
Batch size & 64 \\
\hline
Epoch & 1 \\
\hline
Learning rate & 2e-5 \\
\hline
Lr schedule type & Linear \\
\hline
Warmup steps & 5 \\
\hline
\end{tabular}
\end{table} Table 4: Details of hyper-parameters.

### 4.4 Experimental Results

To validate the effectiveness of our method, we conducted a comparison between our GrammarGPT and the state-of-the-art (SOTA) baseline, S2S\_BART [26]. S2S\_BART utilizes Chinese BART as the pre-trained model and fine-tunes it on the Lang8 [28] and HSK [23] datasets, which consist of approximately 1.2 million parallel data samples. We also fine-tuned S2S\_BART on the hybrid dataset that we constructed, and the results are presented in Table 5. Remarkably, we observed that S2S\_BART trained on our 1k hybrid dataset achieved 17.57 and 18.16 \(F_{0.5}\) at the word level and char level, respectively, which is comparable to the baseline model trained on the 1.2M data samples from foreign language speakers.
We attribute this to the significant discrepancy between the grammatical errors made by foreign language speakers and those made by native Chinese speakers, which makes it challenging to effectively improve the performance of native CGEC by relying solely on data from foreign language speakers. These results further highlight the effectiveness of our method in constructing a hybrid dataset that contains native Chinese grammatical errors. Furthermore, our GrammarGPT exhibited substantial improvement with only about 1k data samples for fine-tuning, achieving 32.56 and 35.84 \(F_{0.5}\) at the word level and char level, respectively. This is almost double the performance of the baseline models, showcasing the remarkable potential of open-source LLMs in native CGEC. The final result on the official test set shows that our GrammarGPT ranks 3\({}^{rd}\)13.

\begin{table}
\begin{tabular}{c|c|c|c|ccc|ccc}
\hline \hline
\multirow{2}{*}{Model} & \multirow{2}{*}{\#Param.} & \multirow{2}{*}{Data} & \multirow{2}{*}{Data size} & \multicolumn{3}{c|}{Word-level} & \multicolumn{3}{c}{Char-level} \\
 & & & & Prec & Rec & F\({}_{0.5}\) & Prec & Rec & F\({}_{0.5}\) \\
\hline
S2S\_BART & 375M & Lang8 + HSK & 1.2M & 22.31 & 10.14 & 17.99 & 22.13 & 9.66 & 17.59 \\
S2S\_BART & 375M & Ours & 1061 & 21.08 & 10.54 & 17.57 & 22.09 & 10.62 & 18.16 \\
GrammarGPT & 7B & Ours & 1061 & **42.42** & **16.87** & **32.56** & **46.67** & **18.58** & **35.84** \\
\hline \hline
\end{tabular}
\end{table} Table 5: Performance comparison between GrammarGPT and the SOTA baseline.

\begin{table}
\begin{tabular}{c|c|ccc|ccc}
\hline \hline
 & \multirow{2}{*}{Data} & \multicolumn{3}{c|}{Word-level} & \multicolumn{3}{c}{Char-level} \\
 & & Prec & Rec & F\({}_{0.5}\) & Prec & Rec & F\({}_{0.5}\) \\
\hline
\multirow{3}{*}{w/o Augmentation} & Human-annotated & 12.20 & 1.51 & 5.04 & 13.89 & 1.48 & 5.19 \\
 & ChatGPT-generated & 30.38 & 7.21 & 18.49 & 30.86 & 7.35 & 18.83 \\
 & Hybrid dataset & 41.76 & 11.45 & 27.30 & 44.32 & 11.50 & 28.22 \\
\hline
\multirow{3}{*}{w/ Augmentation} & Human-annotated & 15.46 & 4.52 & 10.42 & 16.48 & 4.44 & 10.68 \\
 & ChatGPT-generated & 43.75 & 6.33 & 20.04 & 44.90 & 6.49 & 20.56 \\
 & Hybrid dataset & 42.42 & 16.87 & 32.56 & 46.87 & 18.58 & 35.84 \\
\hline \hline
\end{tabular}
\end{table} Table 6: Ablation study of our method.

### 4.5 Ablation Study

In our analysis of the impact of our contributions, namely the construction of a hybrid dataset and the error-invariant augmentation method, we present the results in Table 6. Notably, the model trained on ChatGPT-generated data consistently outperforms that trained on the human-annotated data, irrespective of whether data augmentation is applied. We attribute this observation to two primary reasons. First, the quantity of human-annotated data is smaller than that of the data generated by ChatGPT due to the high cost of human annotation. Second, grammatical errors without clues are more challenging to correct. Additionally, our hybrid dataset demonstrates the potential for enhancing the performance of native CGEC. This finding substantiates the effectiveness of our approach in constructing a hybrid dataset consisting of native Chinese grammatical errors. Moreover, by employing the error-invariant augmentation method, we observe that our model trained on the hybrid dataset achieves significant improvements in the Recall and F\({}_{0.5}\) metrics but only minor improvements in Precision.
This indicates that our augmentation technique enhances the model's ability to detect grammatical errors by forcing the model to pay more attention to grammatical errors in the augmented data.

## 5 Conclusion

In this paper, we introduce GrammarGPT, an open-source Large Language Model (LLM) specifically designed for native Chinese grammatical error correction. We first construct a hybrid dataset containing approximately 1k parallel data samples. It comprises both ChatGPT-generated data and human-annotated data for dealing with grammatical errors with and without clues. Additionally, we introduce an error-invariant augmentation method that improves the model's capabilities in native Chinese grammatical error correction by forcing the model to pay more attention to grammatical errors in the augmented data. We then fine-tune the open-source large language model on the constructed dataset. Experimental results and in-depth analysis demonstrate the effectiveness of our GrammarGPT in native Chinese grammatical error correction.

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 62271432) and the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen (Grant No. B10120210117).
2306.03819
LEACE: Perfect linear concept erasure in closed form
Concept erasure aims to remove specified features from a representation. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provably prevents all linear classifiers from detecting a concept while changing the representation as little as possible, as measured by a broad class of norms. We apply LEACE to large language models with a novel procedure called "concept scrubbing," which erases target concept information from every layer in the network. We demonstrate our method on two tasks: measuring the reliance of language models on part-of-speech information, and reducing gender bias in BERT embeddings. Code is available at https://github.com/EleutherAI/concept-erasure.
Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, Stella Biderman
2023-06-06T16:07:24Z
http://arxiv.org/abs/2306.03819v3
# LEACE: Perfect linear concept erasure in closed form ###### Abstract Concept erasure aims to remove specified features from a representation. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provably prevents all linear classifiers from detecting a concept while changing the representation as little as possible, as measured by a broad class of norms. We apply LEACE to large language models with a novel procedure called "concept scrubbing," which erases target concept information from _every_ layer in the network. We demonstrate our method on two tasks: measuring the reliance of language models on part-of-speech information, and reducing gender bias in BERT embeddings. Code is available at [https://github.com/EleutherAI/concept-erasure](https://github.com/EleutherAI/concept-erasure). ## 1 Introduction The ability to prevent a machine learning system from using a specified concept is important for fairness and interpretability. Popular notions of fairness require that protected attributes should not causally affect predictions [25; 29], and interpretability research often estimates the causal effect of a concept by attempting to remove it from a model's internal representations [11; 35; 28; 6; 19]. What it means for a model \(\mathcal{M}\) to "use" a concept \(\mathrm{Z}\) is often vague and application-specific, but a necessary condition is that its outputs--and therefore its inputs and hidden states--should have significant _mutual information_ with \(\mathrm{Z}\).1 **Concept erasure** leverages this fact to limit \(\mathcal{M}\)'s use of \(\mathrm{Z}\)_without_ finetuning or inspecting its parameters. Instead, we edit the input or hidden states \(\mathrm{X}\) used by \(\mathcal{M}\) to minimize the predictive \(\mathcal{V}\)-information \(I_{\mathcal{V}}(\mathrm{X}\rightarrow\mathrm{Z})\)[50], a tractable lower bound on the mutual information \(I(\mathrm{X};\mathrm{Z})\) which measures the degree to which classifiers from the family \(\mathcal{V}\) can predict \(\mathrm{Z}\). Intuitively, if no classifier in \(\mathcal{V}\) can outperform a constant function at predicting \(\mathrm{Z}\)--a condition known as **guardedness**--then \(\mathcal{M}\) can't use \(\mathrm{Z}\) either, at least if \(\mathcal{V}\) is expressive enough relative to \(\mathcal{M}\). Footnote 1: This follows from the fact that causal dependence is a special kind of statistical dependence [31]. By the data processing inequality, \(\mathcal{M}\)’s output can’t have any more information about \(\mathrm{Z}\) than its input or hidden states. In this work, we improve upon existing concept erasure techniques using a theory-driven approach. We focus on the case where \(\mathcal{V}\) is the set of linear classifiers, and prove a previously unnoticed equivalence: a classification task is linearly guarded _if and only if_ every class has exactly the same mean feature vector (SS 3). Leveraging this equivalence, we derive a simple necessary and sufficient condition for an affine transformation to produce linearly guarded features. We then identify the unique _surgical_ transformation in this family--the one that minimizes the mean squared distance from the original features with respect to _all_ norms induced by inner products, including the popular Euclidean and Mahalanobis norms. 
We name it **LEAst-squares Concept Erasure (LEACE)** (SS 4). While prior work has focused on preventing linear models from leveraging \(\mathrm{Z}\), we aim to erase concepts from deep neural networks as well. Interpretability research has shown that networks can be usefully described as encoding features in linear subspaces [12; 27; 48], suggesting that fundamentally nonlinear methods may not be necessary for successful erasure in DNNs.2 In light of this, we introduce a simple procedure called **concept scrubbing** (SS 6), which sequentially applies LEACE to the intermediate representations at each layer of a deep network. Footnote 2: We do not wish to make metaphysical claims about whether neural networks “truly” encode information linearly or nonlinearly. Following Cao [4], we take a pragmatist stance: what matters is that tools built under a linear feature assumption are often useful in practice [1; 20; 46]. We empirically validate our proposals, demonstrating the superiority of LEACE for erasing gender bias from BERT representations (SS 5.2), and using concept scrubbing to measure the extent to which large language models use part-of-speech information (SS 6). ## 2 Preliminaries Consider a \(k\)-class classification task over jointly defined random vectors \(\mathrm{X}\) (the input data) and \(\mathrm{Z}\) (the one-hot labels), taking values in \(\mathbb{R}^{d}\) and \(\mathcal{Z}=\{(z_{1},\ldots z_{k})\in\{0,1\}^{k}\;\big{|}\;\sum_{j=1}^{k}z_{j}= 1\}^{3}\) respectively, with \(\mathbb{E}[\mathrm{X}]<\infty\) and each \(\mathbb{P}(\mathrm{Z}=j)>0\), and a predictor \(\eta(\cdot;\mathbf{\theta}):\mathbb{R}^{d}\to\mathbb{R}^{k}\), chosen from a function class \(\mathcal{V}=\{\eta(\cdot;\mathbf{\theta})\;|\;\mathbf{\theta}\in\Theta\}\) (presumed to contain all constant functions) so as to minimize the expectation \(\mathbb{E}\big{[}\mathcal{L}(\eta(\mathrm{X}),\mathrm{Z})\big{]}\) of some \(\mathcal{L}:\mathbb{R}^{k}\times\mathcal{Z}\to[0,\infty)\) in a class \(\mathfrak{L}\) of loss functions. ### Guardedness We borrow the concept of **guardedness** from Ravfogel et al. [36], who define it in terms of \(\mathcal{V}\)-information [50]. We opt for a slightly more general definition here, which is equivalent to theirs in the case of cross-entropy loss (see Appendix F). **Definition 2.1** (Guardedness).: _Let \(\mathrm{X}\), \(\mathrm{Z}\), \(\mathcal{V}\), and \(\mathfrak{L}\) be as defined as above, and let \(\chi\) be the set of all jointly defined random vectors of finite first moment taking values in \(\mathbb{R}^{d}\). 
We say \(\mathrm{X}\;(\mathcal{V},\mathfrak{L})\)-**guards** \(\mathrm{Z}\) if, for all losses \(\mathcal{L}\in\mathfrak{L}\), it maximizes the minimum expected loss:_ \[\mathrm{X}\in\operatorname*{argmax}_{\mathrm{X}^{\prime}\in\chi}\;\inf_{\mathbf{\theta}\in\Theta}\;\mathbb{E}\Big{[}\mathcal{L}(\eta(\mathrm{X}^{\prime};\mathbf{\theta}),\mathrm{Z})\Big{]}.\] _In other words, its conditional distribution \(\mathbb{P}(\mathrm{X}|\mathrm{Z}=\cdot)\) is among the worst possible distributions for predicting \(\mathrm{Z}\) from \(\mathrm{X}\) using a predictor of the form \(\eta(\cdot;\mathbf{\theta})\in\mathcal{V}\) and a loss function in \(\mathfrak{L}\)._

**Definition 2.2** (Trivially Attainable Loss).: _The **trivially attainable loss** for labels \(\mathrm{Z}\) and loss \(\mathcal{L}\) is the lowest possible expected loss available to a constant predictor \(\eta(\mathbf{x})=\mathbf{b}\):_ \[L_{\tau}=\inf_{\mathbf{b}\in\mathbb{R}^{k}}\mathbb{E}[\mathcal{L}(\mathbf{b},\mathrm{Z})]\] _We will sometimes write it \(L_{\tau}^{(\mathcal{Z},\mathcal{L})}\) in cases of possible ambiguity. If there is a specific constant predictor actually achieving this loss, we call it the **trivial predictor** \(\eta_{\tau}=\eta_{\tau}^{(\mathcal{Z},\mathcal{L})}\)._

We examine this problem in the important case of loss functions \(\mathcal{L}:\mathbb{R}^{k}\times\mathcal{Z}\to[0,\infty)\) which are convex in the prediction \(\eta(\mathbf{x})\), and linear predictors that take the functional form \(\eta(\mathbf{x};\mathbf{b},\mathbf{W})=\mathbf{b}+\mathbf{W}\mathbf{x}\), for some bias \(\mathbf{b}\in\mathbb{R}^{k}\) and weight matrix \(\mathbf{W}\in\mathbb{R}^{k\times d}\).

**Definition 2.3** (Linear Guardedness).: _If \(\mathrm{X}\;(\mathcal{V},\mathfrak{L})\)-guards \(\mathrm{Z}\), where \(\mathfrak{L}\) is the class of nonnegative loss functions which are convex in their first argument, and \(\mathcal{V}\) is the class of linear predictors \(\eta(\mathbf{x})=\mathbf{b}+\mathbf{W}\mathbf{x}\), we say that \(\mathrm{X}\) **linearly guards** \(\mathrm{Z}\)._

### Statistical Parity

To measure the effect of linear guardedness on main-task classifiers, we use the following minimal definition of "fairness" with respect to an attribute, adapted from Edwards and Storkey [9].

**Definition 2.4** (Statistical Parity).: _Let \(\mathrm{X}\) and \(\mathrm{Z}\) be defined as above, and let \(f\) be a function with domain \(\mathbb{R}^{d}\). Then \(f\) exhibits **statistical parity** with respect to \(\mathrm{Z}\) when evaluated on \(\mathrm{X}\) if_ \[\forall z\in\mathcal{Z}:\mathbb{E}[f(\mathrm{X})|\mathrm{Z}=z]=\mathbb{E}[f(\mathrm{X})].\]

## 3 Theoretical Results

Our primary theoretical result is that the following conditions are all equivalent:

1. The data \(\mathrm{X}\) linearly guards the labels \(\mathrm{Z}\). (Definition 2.3)
2. For all convex losses \(\mathcal{L}\), the trivially attainable loss is optimal on \((\mathrm{X},\mathrm{Z})\). (Definition 2.2)
3. The class-conditional mean vectors \(\mathbb{E}[\mathrm{X}|\mathrm{Z}=i]\) are equal to the unconditional mean \(\mathbb{E}[\mathrm{X}]\).
4. Every component of \(\mathrm{X}\) has zero covariance with every component of \(\mathrm{Z}\).
5. Every linear classifier evaluated on \(\mathrm{X}\) exhibits statistical parity w.r.t. \(\mathrm{Z}\) (Definition 2.4).

The equivalence of conditions 1 and 2 is relatively straightforward to show, and the relevant theorems can be found in Appendix B. The other equivalences are proven below.
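Before the proofs, a small numerical illustration of these equivalences may be helpful. The snippet below is our own sketch, not part of the paper: the two classes share a mean but differ in covariance, so the cross-covariance with the labels vanishes and a linear classifier cannot beat the majority baseline, while a nonlinear feature map still separates them. The data-generating choices and the use of scikit-learn are assumptions.

```python
# Illustrative check of conditions 2-4 (ours, not the paper's code): equal class means
# imply zero cross-covariance and no linearly available label information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

x0 = rng.normal(0.0, 1.0, size=(n, 2))           # class 0: isotropic Gaussian
x1 = rng.normal(0.0, [3.0, 0.3], size=(n, 2))    # class 1: same mean, different covariance
X = np.vstack([x0, x1])
z = np.repeat([0, 1], n)

# Condition 4: cross-covariance between features and one-hot labels is ~0.
Z = np.eye(2)[z]
sigma_xz = (X - X.mean(0)).T @ (Z - Z.mean(0)) / len(X)
print("max |Sigma_XZ|:", np.abs(sigma_xz).max())

# Condition 2: a linear classifier cannot beat the trivial (majority) predictor.
print("linear accuracy:", LogisticRegression(max_iter=1000).fit(X, z).score(X, z))

# The label information is still present nonlinearly: squared features separate the classes.
print("accuracy on squared features:",
      LogisticRegression(max_iter=1000).fit(X**2, z).score(X**2, z))
```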
### Equality of Class Centroids Implies Linear Guardedness The following result establishes the implication from condition 3 to condition 2. **Theorem 3.1**.: _Suppose \(\mathcal{L}\) is convex in the linear prediction \(\eta\). Then if each class-conditional mean \(\mathbb{E}\big{[}\mathrm{X}|\mathrm{Z}=i\big{]}\) is equal to \(\mathbb{E}\big{[}\mathrm{X}\big{]}\), the trivially attainable loss cannot be improved upon._ Proof.: Let \(\eta(\mathbf{x})=\mathbf{b}+\mathbf{W}\mathbf{x}\) be any linear predictor. By Jensen's inequality4, the loss with \(\eta\) evaluated on \(\mathrm{X}\) is lower bounded by the loss with \(\eta\) evaluated on the unconditional mean of the data \(\mathbb{E}\big{[}\mathrm{X}\big{]}\): Footnote 4: Specifically, its generalization to convex functions over \(\mathbb{R}^{k}\). See [13] p. 76. \[\mathbb{E}\Big{[}\mathcal{L}(\eta,\mathrm{Z})\Big{]} =\mathbb{E}_{\mathbb{Z}}\Big{[}\mathbb{E}\Big{[}\mathcal{L}(\eta, \mathrm{Z})|\mathrm{Z}\Big{]}\Big{]}\] \[\geq\mathbb{E}_{\mathbb{Z}}\Big{[}\mathcal{L}\Big{(}\mathbb{E} \big{[}\eta|\mathrm{Z}\big{]},\mathrm{Z}\Big{)}\Big{]}\] (Jensen's inequality) \[=\mathbb{E}_{\mathbb{Z}}\Big{[}\mathcal{L}\Big{(}\mathbf{b}+ \mathbf{W}\mathbb{E}\big{[}\mathrm{X}|\mathrm{Z}\big{]},\mathrm{Z}\Big{)}\Big{]}\] (linearity of \[\eta\] ) \[=\mathbb{E}_{\mathbb{Z}}\Big{[}\mathcal{L}\Big{(}\mathbf{b}+ \mathbf{W}\mathbb{E}\big{[}\mathrm{X}\big{]},\mathrm{Z}\Big{)}\Big{]}.\] (by assumption) This in turn is the loss of the constant predictor \(\eta^{\prime}(\mathbf{x})=\mathbf{b}+\mathbf{W}\mathbb{E}\big{[}\mathrm{X}\big{]}\). Since the trivially attainable loss is the best that can be done by a constant predictor, and we have just seen that _every_ predictor's loss is lower bounded by that of some constant predictor, we cannot improve upon the trivially attainable loss. Intuitively, this shows that the classifier's expected loss is lower-bounded by the loss it would receive if each data point were replaced with the centroid of its class. But, if these centroids are all equal, the loss can't be any lower than what we'd get if every data point were replaced with the _global_ mean \(\mathbb{E}[\mathrm{X}]\). In that case, the data points are indistinguishable and we can't do better than \(\mathbf{W}=\mathbf{0}\). ### Linear Guardedness Implies Equality of Class Centroids We now prove the implication from condition 2 to condition 3. Condition 2 applies when the trivially attainable loss is optimal for _all_ convex losses, including cross-entropy loss in particular. And if it holds for cross-entropy loss, we now show that condition 3--the class centroids are equal--must follow. First a more general lemma: **Lemma 3.2**.: _Suppose \(\mathcal{L}\) has bounded partial derivatives, which when off-category never vanish and do not depend on the category, i.e. \(\partial\mathcal{L}(\eta,z_{1})/\partial\eta_{i}=\partial\mathcal{L}(\eta,z_{2 })/\partial\eta_{i}\neq 0\) for all categories \(z_{1},z_{2}\neq i\). 
If \(\mathbb{E}\left[\mathcal{L}(\eta,\mathrm{Z})\right]\) is minimized among linear predictors by the constant predictor \(\eta(\mathbf{x})=\mathbf{b}^{*}+\mathbf{W}^{*}\mathbf{x}\) with \(\mathbf{W}^{*}=\mathbf{0}\), then each class-conditional mean \(\mathbb{E}\big{[}\mathrm{X}|\mathrm{Z}=i\big{]}\) is equal to \(\mathbb{E}\big{[}\mathrm{X}\big{]}\)._

Proof.: The first-order optimality condition on the \(i^{\text{th}}\) component of our parameters \(\mathbf{b}\) and \(\mathbf{W}\) yields the equations:

\[\mathbb{E}\Bigg{[}\frac{\partial\mathcal{L}(\eta,\mathrm{Z})}{\partial\eta_{i}}\cdot\frac{\partial\eta_{i}}{\partial b_{i}}\Bigg{]}=0\quad\text{and}\quad\mathbb{E}\Bigg{[}\frac{\partial\mathcal{L}(\eta,\mathrm{Z})}{\partial\eta_{i}}\cdot\frac{\partial\eta_{i}}{\partial\mathbf{W}_{i}}\Bigg{]}=\mathbf{0}, \tag{1}\]

where we have used the boundedness of \(\mathcal{L}\)'s partial derivative and the finite first moment of \(\frac{\partial\eta_{i}}{\partial b_{i}}=1\) and \(\frac{\partial\eta_{i}}{\partial\mathbf{W}_{i}}=\mathrm{X}\) to justify (via the Dominated Convergence Theorem) interchanging the derivative with the expectation. Since \(\eta\) is constant over all values of \(\mathrm{X}\), and \(\frac{\partial\eta_{i}}{\partial b_{i}}=1\), the first equation in (1) reduces to:

\[\mathbb{P}(\mathrm{Z}=i)\frac{\partial\mathcal{L}(\eta,i)}{\partial\eta_{i}}+\mathbb{P}(\mathrm{Z}\neq i)\frac{\partial\mathcal{L}(\eta,\neq i)}{\partial\eta_{i}}=0, \tag{2}\]

where \(\frac{\partial\mathcal{L}(\eta,\neq i)}{\partial\eta_{i}}\) is an abuse of notation denoting the off-category partial derivative, emphasizing its independence of the category \(\mathrm{Z}\). Similarly, the constancy of \(\eta\) and the fact that \(\frac{\partial\eta_{i}}{\partial\mathbf{W}_{i}}=\mathrm{X}\) reduces the second equation in (1) to:

\[\mathbb{P}(\mathrm{Z}=i)\frac{\partial\mathcal{L}(\eta,i)}{\partial\eta_{i}}\mathbb{E}\big{[}\mathrm{X}\big{|}\mathrm{Z}=i\big{]}+\mathbb{P}(\mathrm{Z}\neq i)\frac{\partial\mathcal{L}(\eta,\neq i)}{\partial\eta_{i}}\mathbb{E}\big{[}\mathrm{X}\big{|}\mathrm{Z}\neq i\big{]}=\mathbf{0}. \tag{3}\]

Solving for \(\mathbb{P}(\mathrm{Z}=i)\frac{\partial\mathcal{L}(\eta,i)}{\partial\eta_{i}}\) in (2) and substituting in (3) gives us:

\[\mathbb{P}(\mathrm{Z}\neq i)\frac{\partial\mathcal{L}(\eta,\neq i)}{\partial\eta_{i}}\cdot\Bigg{(}\mathbb{E}\big{[}\mathrm{X}\big{|}\mathrm{Z}\neq i\big{]}-\mathbb{E}\big{[}\mathrm{X}\big{|}\mathrm{Z}=i\big{]}\Bigg{)}=\mathbf{0}.\]

If \(\mathbb{P}(\mathrm{Z}\neq i)=0\), then \(\mathbb{E}[\mathrm{X}]=\mathbb{E}[\mathrm{X}|\mathrm{Z}=i]\) is trivially true. Otherwise, using the non-vanishingness of the off-category partial derivative \(\frac{\partial\mathcal{L}(\eta,\neq i)}{\partial\eta_{i}}\), division yields the equivalence of \(\mathbb{E}\big{[}\mathrm{X}\big{|}\mathrm{Z}=i\big{]}\) to \(\mathbb{E}\big{[}\mathrm{X}\big{|}\mathrm{Z}\neq i\big{]}\), and hence to the unconditional mean \(\mathbb{E}\big{[}\mathrm{X}\big{]}\).
Furthermore, \(\mathcal{L}\) has on-category partial derivative \(\partial\mathcal{L}(\eta,i)/\partial\eta_{i}=\exp(\eta_{i})/\!\sum_{j=1}^{k} \exp(\eta_{j})-1\in(-1,0]\), and nonvanishing off-category partial derivative \(\partial\mathcal{L}(\eta,\neq i)/\partial\eta_{i}=\exp(\eta_{i})/\!\sum_{j=1}^ {k}\exp(\eta_{j})\in(0,1]\), both bounded, so the conditions of Lemma 3.2 apply. ### Linearly Guarded Labels Have Zero Covariance with the Features The next theorem establishes the equivalence of conditions 3 and 4. **Theorem 3.4**.: _Let \(\mathrm{X}\) be a random vector taking values in \(\mathbb{R}^{d}\) with finite first moment, and \(\mathrm{Z}\) a random vector taking values in \(\{0,1\}^{k}\) with one-hot encoding, with each class probability \(\mathbb{P}(\mathrm{Z}=j)\) being nonzero. Then the class-conditional means \(\mathbb{E}[\mathrm{X}|\mathrm{Z}=j]\) are all equal to the unconditional mean \(\mathbb{E}[\mathrm{X}]\) if and only if every component of \(\mathrm{X}\) has zero covariance with every component of \(\mathrm{Z}\), i.e. the cross-covariance matrix \(\mathbf{\Sigma}_{\mathrm{XZ}}\), whose \((i,j)^{\text{th}}\) entry is \(\mathrm{Cov}(\mathrm{X}_{i},\mathrm{Z}_{j})\), is the zero matrix._ Proof.: Since \(\mathrm{Z}\) is one-hot, we can rewrite the \((i,j)^{\text{th}}\) entry of \(\mathbf{\Sigma}_{\mathrm{XZ}}\) as: \[\mathbb{E}[\mathrm{X}_{i}\mathrm{Z}_{j}]-\mathbb{E}[\mathrm{X}_{i}]\mathbb{E}[ \mathrm{Z}_{j}]=\mathbb{P}(\mathrm{Z}=j)\Big{(}\mathbb{E}[\mathrm{X}_{i}| \mathrm{Z}=j]-\mathbb{E}[\mathrm{X}_{i}]\Big{)}.\] As \(\mathbb{P}(\mathrm{Z}=j)>0\), it follows that \(\mathbb{E}[\mathrm{X}_{i}|\mathrm{Z}=j]=\mathbb{E}[\mathrm{X}_{i}]\) if and only if \(\mathrm{Cov}(\mathrm{X}_{i},\mathrm{Z}_{j})=0\). ### Linear Guardedness is Equivalent to Linear Statistical Parity This last theorem establishes the equivalence of conditions 3 and 5. **Theorem 3.5**.: _Let \(\mathrm{X}\) and \(\mathrm{Z}\) be defined as above. Then every linear predictor \(f(\mathbf{x})=\mathbf{b}+\mathbf{W}\mathbf{x}\) exhibits statistical parity w.r.t. \(\mathrm{Z}\) when evaluated on \(\mathrm{X}\) if and only if each class-conditional mean \(\mathbb{E}\big{[}\mathrm{X}|\mathrm{Z}=z\big{]}\) is equal to \(\mathbb{E}\big{[}\mathrm{X}\big{]}\)._ Proof.: Suppose each class-conditional mean \(\mathbb{E}\big{[}\mathrm{X}|\mathrm{Z}=z\big{]}\) is equal to \(\mathbb{E}\big{[}\mathrm{X}\big{]}\). Then by the linearity of expectation, we have for all \(z\in\mathcal{Z}\): \[\mathbb{E}[f(\mathrm{X})|\mathrm{Z}=z]=\mathbb{E}[\mathbf{W}\mathrm{X}+ \mathbf{b}|\mathrm{Z}=z]=\mathbf{W}\mathbb{E}[\mathrm{X}|\mathrm{Z}=z]+ \mathbf{b}=\mathbf{W}\mathbb{E}[\mathrm{X}]+\mathbf{b}=\mathbb{E}[f(\mathrm{ X})].\] This matches the definition of statistical parity provided in Definition 2.4. Conversely, suppose every linear predictor \(f(\mathbf{x})=\mathbf{b}+\mathbf{W}\mathbf{x}\) exhibits statistical parity w.r.t. \(\mathrm{Z}\) when evaluated on \(\mathrm{X}\). Then this holds for the identity function \(\mathrm{id}(\mathbf{x})=\mathbf{x}\), and thus for all \(z\in\mathcal{Z}\): \[\mathbb{E}[\mathrm{X}|\mathrm{Z}=z]=\mathbb{E}[\mathrm{id}(\mathrm{X})|\mathrm{ Z}=z]=\mathbb{E}[\mathrm{id}(\mathrm{X})]=\mathbb{E}[\mathrm{X}].\] We have thus established the equivalence of all five conditions stated earlier. ## 4 Least-Squares Concept Erasure In Section 3 we saw that \(\mathrm{X}\) linearly guards \(\mathrm{Z}\) if and only if each component of \(\mathrm{X}\) has zero covariance with each component of \(\mathrm{Z}\). 
We will now characterize the set of affine transformations \(r(\mathbf{x})=\mathbf{P}\boldsymbol{x}+\mathbf{b}\) such that \(r(\mathrm{X})\) linearly guards \(\mathrm{Z}\). **Theorem 4.1**.: _Let \(\mathrm{X}\) and \(\mathrm{Z}\) be random vectors taking values in \(\mathbb{R}^{d}\) and \(\mathbb{R}^{k}\) respectively, with \(\mathrm{X}\) of finite first moment. Then given some affine function \(r(\boldsymbol{x})=\mathbf{P}\boldsymbol{x}+\mathbf{b}\), the modified random vector \(r(\mathrm{X})\) linearly guards \(\mathrm{Z}\) if and only if the columns of the cross-covariance matrix \(\boldsymbol{\Sigma}_{\mathrm{XZ}}\) are contained in the null space of \(\mathbf{P}\)._ Proof.: From Theorem 3.4 we know that \(r(\mathrm{X})\) linearly guards \(\mathrm{Z}\) if and only if \(\mathrm{Cov}(r(\mathrm{X}),\mathrm{Z})\) is the zero matrix. By the linearity property of cross-covariance, we have: \[\mathbf{0}=\mathrm{Cov}(r(\mathrm{X}),\mathrm{Z})=\mathrm{Cov}(\mathbf{P} \mathrm{X}+\mathbf{b},\mathrm{Z})=\mathbf{P}\mathrm{Cov}(\mathrm{X},\mathrm{Z })=\mathbf{P}\boldsymbol{\Sigma}_{\mathrm{XZ}}.\] Therefore, \(r(\mathrm{X})\) linearly guards \(\mathrm{Z}\) if and only if \(\mathrm{ker}(\mathbf{P})\supseteq\mathrm{colsp}(\boldsymbol{\Sigma}_{\mathrm{XZ}})\). **Implications for prior work.** Notably, the above theorems imply that three previously proposed methods in the literature, Spectral Attribute Removal (SAL) [41], Mean Projection [18], and Fair PCA [23], are guaranteed to achieve linear guardedness given suitable hyperparameters. See Appendix C for further discussion. ### Derivation of LEACE Theorem 4.1 is a very weak condition, which is far from identifying unique values for \(\mathbf{P}\) and \(\mathbf{b}\). In most applications, however, we'd like to make a "small" edit to \(\mathrm{X}\) so that useful information contained in \(\mathrm{X}\) is maximally preserved. We operationalize the notion of a small edit in terms of the mean squared norm \(\mathbb{E}\|r(\mathrm{X})-\mathrm{X}\|_{\mathbf{M}}^{2}\) defined by some positive-definite inner product \(\mathbf{M}\).5 While we are primarily interested in the Euclidean (\(\mathbf{M}=\mathbf{I}\)) and Mahalanobis (\(\mathbf{M}=\boldsymbol{\Sigma}_{\mathrm{XX}}^{+}\)) norms, it will turn out that there is a _single_ erasure function that minimizes _all_ such norms simultaneously. We will see in Section 6 that ensuring edits are small in this sense provides substantial benefit to downstream task performance as compared to other methods which also guard the labels \(\mathrm{Z}\). Footnote 5: Our proofs also include degenerate “inner products” where \(\mathbf{M}\) is singular, and the associated seminorms. Below, we derive the optimal eraser under the assumption that \(\mathrm{X}\) and \(\mathrm{Z}\) are centered. In Appendix D, we derive an alternative, almost surely equivalent formulation, in the setting of the Hilbert space of centered random variables. **Theorem 4.2**.: _Let \(X\) and \(Z\) be centered random vectors taking values in \(\mathbb{R}^{d}\) and \(\mathbb{R}^{k}\) respectively, each of finite second moment. Let \(\mathbf{M}\in\mathbb{R}^{d\times d}\) be a p.s.d. matrix defining a (possibly degenerate) inner product on \(\mathbb{R}^{d}\): \(\langle\mathbf{x},\mathbf{y}\rangle_{\mathbf{M}}=\mathbf{x}^{T}\mathbf{M} \mathbf{y}\). Let \(\mathbf{\Sigma}_{\mathrm{XX}}\in\mathbb{R}^{d\times d}\) be \(X\)'s covariance matrix, and \(\mathbf{\Sigma}_{\mathrm{XZ}}\in\mathbb{R}^{d\times k}\) be the cross-covariance matrix of \(X\) and \(Z\). 
Let \(\mathbf{A}^{+}\) denote the Moore-Penrose pseudoinverse of a matrix \(\mathbf{A}\), and let \(\mathbf{A}^{1/2}\) be the p.s.d. square root of a p.s.d. matrix \(\mathbf{A}\). Then the objective_ \[\operatorname*{argmin}_{\mathbf{P}\in\mathbb{R}^{d\times d}}\mathbb{E}\Big{[} \big{\|}\mathbf{P}\mathbf{X}-\mathbf{X}\big{\|}_{\mathbf{M}}^{2}\Big{]} \quad\operatorname*{subject\;to}\;\operatorname*{Cov}(\mathbf{P}\mathbf{X}, \mathrm{Z})=\mathbf{0}\] _has the following solution:_ \[\mathbf{P}^{*}=\mathbf{I}-\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma} _{\mathrm{XZ}}}\mathbf{W},\] _where \(\mathbf{W}\) is the whitening transformation \((\mathbf{\Sigma}_{\mathrm{XX}}^{1/2})^{+}\) and \(\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}=(\mathbf{W}\mathbf{ \Sigma}_{\mathrm{XZ}})(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\) is the orthogonal projection matrix onto \(\operatorname*{colsp}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})\)._ Proof.: First note that for any centered random vector \(\mathrm{A}\) with covariance matrix \(\mathbf{\Sigma}_{\mathrm{AA}}\) we have by linearity \(\mathbb{E}\|\mathbf{A}\|_{2}^{2}=\sum_{i}\mathbb{E}[a_{i}^{2}]=\sum_{i} \mathrm{Var}[a_{i}]=\operatorname*{tr}(\mathbf{\Sigma}_{\mathrm{AA}})\). Also recall that any p.s.d. inner product \(\langle\mathbf{x},\mathbf{y}\rangle_{\mathbf{M}}\) is equivalent to the Euclidean inner product between \(\mathbf{M}^{1/2}\mathbf{x}\) and \(\mathbf{M}^{1/2}\mathbf{y}\); that is \(\langle\mathbf{x},\mathbf{y}\rangle_{\mathbf{M}}=\mathbf{x}^{T}\mathbf{M} \mathbf{y}=(\mathbf{M}^{1/2}\mathbf{x})^{T}\mathbf{M}^{1/2}\mathbf{y}\). We use these facts to rewrite the objective as \[\mathbb{E}\|(\mathbf{P}-\mathbf{I})\mathbf{X}\|_{\mathbf{M}}^{2}=\mathbb{E}\| \mathbf{M}^{1/2}(\mathbf{P}-\mathbf{I})\mathbf{X}\|_{2}^{2}=\operatorname*{tr }\Big{[}\mathbf{M}^{1/2}(\mathbf{P}-\mathbf{I})\mathbf{\Sigma}_{\mathrm{XX}} (\mathbf{P}-\mathbf{I})^{T}\mathbf{M}^{1/2}\Big{]}.\] We enforce the constraint \(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{0}\) using a \(d\times k\) matrix of Lagrange multipliers \(\mathbf{\Lambda}\), adding the Frobenius inner product \(\langle\mathbf{\Lambda},\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}\rangle_{F}= \sum_{i}\sum_{j}(\mathbf{\Lambda})_{ij}(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ} })_{ij}=\operatorname*{tr}(\mathbf{\Lambda}^{T}\mathbf{P}\mathbf{\Sigma}_{ \mathrm{XZ}})\) to the objective. We now have the Lagrangian \[\mathcal{L}(\mathbf{P}) =\operatorname*{tr}\bigl{(}\mathbf{M}^{1/2}(\mathbf{P}-\mathbf{I })\mathbf{\Sigma}_{\mathrm{XX}}(\mathbf{P}-\mathbf{I})^{T}\mathbf{M}^{1/2} \bigr{)}+\operatorname*{tr}(\mathbf{\Lambda}^{T}\mathbf{P}\mathbf{\Sigma}_{ \mathrm{XZ}})\] \[=\operatorname*{tr}\Bigl{(}\bigl{(}\mathbf{M}^{1/2}\mathbf{P} \mathbf{\Sigma}_{\mathrm{XX}}^{1/2}-\mathbf{M}^{1/2}\mathbf{\Sigma}_{ \mathrm{XX}}^{1/2}\bigr{)}\bigl{(}\mathbf{M}^{1/2}\mathbf{P}\mathbf{\Sigma}_{ \mathrm{XX}}^{1/2}-\mathbf{M}^{1/2}\mathbf{\Sigma}_{\mathrm{XX}}^{1/2}\bigr{)} ^{T}\Bigr{)}+\operatorname*{tr}(\mathbf{\Lambda}^{T}\mathbf{P}\mathbf{\Sigma}_{ \mathrm{XZ}}).\] To differentiate \(\mathcal{L}\), we use trace derivative formulas from Petersen et al. [32]. 
For the first term we use

\[\frac{d}{d\mathbf{X}}\mathrm{tr}\bigl{[}(\mathbf{A}\mathbf{X}\mathbf{B}+\mathbf{C})(\mathbf{A}\mathbf{X}\mathbf{B}+\mathbf{C})^{T}\bigr{]}=2\mathbf{A}^{T}(\mathbf{A}\mathbf{X}\mathbf{B}+\mathbf{C})\mathbf{B}^{T},\]

where \(\mathbf{A}=\mathbf{M}^{1/2}\), \(\mathbf{X}=\mathbf{P}\), \(\mathbf{B}=\mathbf{\Sigma}_{\mathrm{XX}}^{1/2}\), \(\mathbf{C}=-\mathbf{M}^{1/2}\mathbf{\Sigma}_{\mathrm{XX}}^{1/2}\). This yields

\[2\mathbf{M}^{1/2}(\mathbf{M}^{1/2}\mathbf{P}\mathbf{\Sigma}_{\mathrm{XX}}^{1/2}-\mathbf{M}^{1/2}\mathbf{\Sigma}_{\mathrm{XX}}^{1/2})\mathbf{\Sigma}_{\mathrm{XX}}^{1/2}=2(\mathbf{M}\mathbf{P}\mathbf{\Sigma}_{\mathrm{XX}}-\mathbf{M}\mathbf{\Sigma}_{\mathrm{XX}})=2\mathbf{M}(\mathbf{P}-\mathbf{I})\mathbf{\Sigma}_{\mathrm{XX}}.\]

For the second term, we use the cyclic property of trace to rewrite it as \(\operatorname*{tr}(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}\mathbf{\Lambda}^{T})\). Then we apply the identity \(\frac{d}{d\mathbf{A}}\mathrm{tr}(\mathbf{A}\mathbf{B})=\mathbf{B}^{T}\), with \(\mathbf{A}=\mathbf{P},\mathbf{B}=\mathbf{\Sigma}_{\mathrm{XZ}}\mathbf{\Lambda}^{T}\), yielding \(\mathbf{\Lambda}\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\). We now solve for \(\mathbf{P}\) as a function of the Lagrange multipliers \(\mathbf{\Lambda}\). When \(\frac{d\mathcal{L}}{d\mathbf{P}}\) vanishes, we have

\[\mathbf{0}=2\mathbf{M}(\mathbf{P}-\mathbf{I})\mathbf{\Sigma}_{\mathrm{XX}}+\mathbf{\Lambda}\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\quad\Longleftrightarrow\quad\mathbf{M}\mathbf{\Sigma}_{\mathrm{XX}}-\frac{1}{2}\mathbf{\Lambda}\mathbf{\Sigma}_{\mathrm{XZ}}^{T}=\mathbf{M}\mathbf{P}\mathbf{\Sigma}_{\mathrm{XX}}.\]

**Full rank case.** In the case where \(\mathbf{M}\), \(\mathbf{\Sigma}_{\mathrm{XX}}\), and \(\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\mathbf{\Sigma}_{\mathrm{XX}}^{+}\mathbf{\Sigma}_{\mathrm{XZ}}\) are full rank, we have

\[\mathbf{P}=\mathbf{I}-\frac{1}{2}\mathbf{M}^{+}\mathbf{\Lambda}\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\mathbf{\Sigma}_{\mathrm{XX}}^{+}, \tag{4}\]

because we can cancel the r.h.s. \(\mathbf{M}\) and \(\mathbf{\Sigma}_{\mathrm{XX}}\) terms with \(\mathbf{M}^{+}\) and \(\mathbf{\Sigma}_{\mathrm{XX}}^{+}\). Plugging Eq. 4 into the constraint \(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{0}\) we get \(\mathbf{\Sigma}_{\mathrm{XZ}}=\frac{1}{2}\mathbf{M}^{+}\mathbf{\Lambda}\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\mathbf{\Sigma}_{\mathrm{XX}}^{+}\mathbf{\Sigma}_{\mathrm{XZ}}\), which has the unique solution

\[\mathbf{\Lambda}=2\mathbf{M}\mathbf{\Sigma}_{\mathrm{XZ}}(\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\mathbf{\Sigma}_{\mathrm{XX}}^{+}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}. \tag{5}\]

Plugging \(\mathbf{\Lambda}\) back into the equation for \(\mathbf{P}\) yields the solution:

\[\mathbf{P}^{*}=\mathbf{I}-\mathbf{M}^{+}\mathbf{M}\mathbf{\Sigma}_{\mathrm{XZ}}(\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\mathbf{\Sigma}_{\mathrm{XX}}^{+}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{\Sigma}_{\mathrm{XZ}}^{T}\mathbf{\Sigma}_{\mathrm{XX}}^{+}. \tag{6}\]

Importantly, since \(\mathbf{M}^{+}\mathbf{M}=\mathbf{I}\) when \(\mathbf{M}\) is full rank, this expression does not depend on \(\mathbf{M}\), so \(\mathbf{P}^{*}\) is optimal for _any_ choice of \(\mathbf{M}\).

**Equivalent formulations.** There is an equivalent formula for \(\mathbf{P}^{*}\) that provides more intuitive insight about what LEACE does.
To see this, we first rewrite \(\mathbf{\Sigma}^{+}_{\mathrm{XX}}=\mathbf{W}\mathbf{W}\) and rearrange slightly:

\[\mathbf{P}^{*}=\mathbf{I}-\mathbf{\Sigma}_{\mathrm{XZ}}\big{(}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{T}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})\big{)}^{+}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{T}\mathbf{W},\]

then apply the identity \(\mathbf{A}^{+}=(\mathbf{A}^{T}\mathbf{A})^{+}\mathbf{A}^{T}\) with \(\mathbf{A}=\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}\):

\[\mathbf{P}^{*}=\mathbf{I}-\mathbf{\Sigma}_{\mathrm{XZ}}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{W}. \tag{7}\]

We now left-multiply \(\mathbf{\Sigma}_{\mathrm{XZ}}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{W}\) by the orthogonal projection matrix \(\mathbf{W}^{+}\mathbf{W}\), which does not change the result since its row space consists of exactly the dimensions along which \(\mathrm{X}\) has nonzero variance, which must contain the column space of \(\mathbf{\Sigma}_{\mathrm{XZ}}\), i.e. the dimensions along which \(\mathrm{X}\) has nonzero covariance with \(\mathrm{Z}\):

\[\mathbf{P}^{*}=\mathbf{I}-\mathbf{W}^{+}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{W}.\]

This allows us to see \((\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\) as an orthogonal projection onto \(\mathrm{colsp}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})\), which we denote \(\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\). This leads to our final solution:

\[\mathbf{P}^{*}=\mathbf{I}-\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}. \tag{8}\]

Intuitively, \(\mathbf{P}^{*}\) whitens \(\mathrm{X}\), orthogonally projects onto \(\mathrm{colsp}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{\perp}\), then unwhitens the result. See Fig. 1 for a visualization.

**Extension to the singular case.** Equation 6 is unique when \(\mathbf{M}\), \(\mathbf{\Sigma}_{\mathrm{XX}}\), and \(\mathbf{\Sigma}^{T}_{\mathrm{XZ}}\mathbf{\Sigma}^{+}_{\mathrm{XX}}\mathbf{\Sigma}_{\mathrm{XZ}}\) are all full rank. When any of these are singular, there are infinitely many equivalent solutions, but Eq. 6 is always one of them. Defining \(\mathbf{\Lambda}\) as in Eq. 5, we see Eq. 6 satisfies first-order optimality, for:

\[2\mathbf{M}\big{(}\mathbf{P}^{*}-\mathbf{I}\big{)}\mathbf{\Sigma}_{\mathrm{XX}}+\mathbf{\Lambda}\mathbf{\Sigma}^{T}_{\mathrm{XZ}}=-2\mathbf{M}\mathbf{\Sigma}_{\mathrm{XZ}}(\mathbf{\Sigma}^{T}_{\mathrm{XZ}}\mathbf{\Sigma}^{+}_{\mathrm{XX}}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{\Sigma}^{T}_{\mathrm{XZ}}\mathbf{\Sigma}^{+}_{\mathrm{XX}}\mathbf{\Sigma}_{\mathrm{XX}}+2\mathbf{M}\mathbf{\Sigma}_{\mathrm{XZ}}(\mathbf{\Sigma}^{T}_{\mathrm{XZ}}\mathbf{\Sigma}^{+}_{\mathrm{XX}}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{\Sigma}^{T}_{\mathrm{XZ}}(\mathbf{\Sigma}^{+}_{\mathrm{XX}}\mathbf{\Sigma}_{\mathrm{XX}})=\mathbf{0},\]

where we used \(\mathbf{M}\mathbf{M}^{+}\mathbf{M}=\mathbf{M}\) in the first term, and the insertion of \(\mathbf{\Sigma}^{+}_{\mathrm{XX}}\mathbf{\Sigma}_{\mathrm{XX}}\) in the second term is justified by the fact that \(\mathrm{X}\in\mathrm{colsp}(\mathbf{\Sigma}_{\mathrm{XX}})\) almost surely and therefore \(\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbb{E}[\mathrm{X}\mathrm{Z}^{T}]=\mathbb{E}[(\mathbf{\Sigma}_{\mathrm{XX}}\mathbf{\Sigma}^{+}_{\mathrm{XX}})\mathrm{X}\mathrm{Z}^{T}]=(\mathbf{\Sigma}_{\mathrm{XX}}\mathbf{\Sigma}^{+}_{\mathrm{XX}})\mathbf{\Sigma}_{\mathrm{XZ}}\), so the two terms cancel. Using the formula from Eq.
7, we also see the constraint \(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{0}\) is satisfied in the singular case: \[\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}=\big{(}\mathbf{I}-\mathbf{\Sigma}_{ \mathrm{XZ}}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{W}\big{)} \mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{\Sigma}_{\mathrm{XZ}}-\mathbf{\Sigma}_{ \mathrm{XZ}}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{\pm}\mathbf{W}\mathbf{ \Sigma}_{\mathrm{XZ}}=\mathbf{0}.\] Canceling \((\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{+}\mathbf{W}\mathbf{\Sigma}_{ \mathrm{XZ}}\) is valid because it is an orthogonal projection onto \(\mathrm{rowsp}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})=\ker(\mathbf{W}\mathbf{ \Sigma}_{\mathrm{XZ}})^{\perp}\). This is identical to \(\ker(\mathbf{\Sigma}_{\mathrm{XZ}})^{\perp}\) since \(\ker(\mathbf{W})=\ker(\mathbf{\Sigma}_{\mathrm{XX}})=\mathrm{colsp}(\mathbf{ \Sigma}_{\mathrm{XX}})^{\perp}\) by construction and \(\mathrm{colsp}(\mathbf{\Sigma}_{\mathrm{XZ}})\cap\mathrm{colsp}(\mathbf{ \Sigma}_{\mathrm{XX}})^{\perp}=\emptyset\). Intuitively, \(\mathbf{W}\) doesn't zero out anything in the range of \(\mathbf{\Sigma}_{\mathrm{XZ}}\), so it doesn't expand the nullspace. The above theorem assumes that the variables \(\mathrm{X}\) and \(\mathrm{Z}\) are centered, and does not include a bias term. Below we extend our results to the uncentered case, and derive the least squares-optimal bias \(\mathbf{b}^{*}\). **Theorem 4.3**.: _Let \(\mathrm{X}\) and \(\mathrm{Z}\) be random vectors taking values in \(\mathbb{R}^{d}\) and \(\mathbb{R}^{k}\) respectively, each of finite second moment. Define \(\mathbf{M}\) and \(\mathbf{P}^{*}\) as in Theorem 4.2 and \(\mathbf{b}^{*}=\mathbb{E}[\mathrm{X}]-\mathbf{P}^{*}\mathbb{E}[\mathrm{X}]\). Then \((\mathbf{P}^{*},\mathbf{b}^{*})\) minimizes \(\mathbb{E}\big{\|}\mathbf{P}\mathrm{X}+\mathbf{b}-\mathrm{X}\big{\|}^{2}\), subject to \(\mathrm{Cov}(\mathbf{P}\mathrm{X}+\mathbf{b},\mathrm{Z})=\mathbf{0}\)._ Proof.: Let \(\mathbf{P}\in\mathbb{R}^{d\times d}\) and define \(\tilde{\mathrm{X}}=\mathrm{X}-\mathbb{E}[\mathrm{X}]\) and \(\mathbf{c}=\mathbf{P}\mathbb{E}[\mathrm{X}]+\mathbf{b}-\mathbb{E}[\mathrm{X}]\). Then, \[\mathbb{E}\big{\|}\mathbf{P}\mathrm{X}+\mathbf{b}-\mathrm{X} \big{\|}^{2}_{\mathbf{M}} =\mathbb{E}\big{\|}(\mathbf{P}\tilde{\mathrm{X}}-\tilde{\mathrm{X}})+ \mathbf{c}\big{\|}^{2}_{\mathbf{M}}\] \[=\mathbb{E}\big{\|}\mathbf{P}\tilde{\mathrm{X}}-\tilde{\mathrm{X}} \big{\|}^{2}_{\mathbf{M}}+2\mathbb{E}\big{[}\mathbf{P}\tilde{\mathrm{X}}-\tilde{ \mathrm{X}}\big{]}^{T}\mathbf{M}\mathbf{c}+\mathbf{c}^{T}\mathbf{M}\mathbf{c}\] \[=\mathbb{E}\big{\|}\mathbf{P}\tilde{\mathrm{X}}-\tilde{\mathrm{X}} \big{\|}^{2}_{\mathbf{M}}+\mathbf{c}^{T}\mathbf{M}\mathbf{c},\] where we have eliminated the middle term because \(\mathbf{P}\) is linear and \(\mathbb{E}[\tilde{\mathrm{X}}]=0\). Since \(\mathbf{M}\) is p.s.d., our objective is minimized for \(\mathbf{c}=\mathbf{0}\), i.e. \(\mathbf{b}=\mathbb{E}[\mathrm{X}]-\mathbf{P}\mathbb{E}[\mathrm{X}]\). The problem thus reduces to choosing \(\mathbf{P}\) so as to minimize \(\mathbb{E}\big{\|}\mathbf{P}\tilde{\mathrm{X}}-\tilde{\mathrm{X}}\big{\|}^{2}_ {\mathbf{M}}\) subject to \(\mathrm{Cov}(\mathbf{P}\mathrm{X}+\mathbf{b},\mathrm{Z})=\mathrm{Cov}( \mathbf{P}\tilde{\mathrm{X}},\mathrm{Z})=\mathbf{0}\), which Theorem 4.2 shows occurs when \(\mathbf{P}=\mathbf{P}^{*}\). 
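In practice, the closed-form eraser of Theorems 4.2 and 4.3 only requires the empirical mean of \(\mathrm{X}\) and the (cross-)covariance matrices. The sketch below is our own NumPy rendering of that recipe, with an eigendecomposition-based whitening and an ad hoc rank tolerance; it is not the authors' released implementation (which is linked in the abstract).

```python
# A minimal NumPy sketch (ours) of the least-squares concept eraser: fit (P*, b*) from
# samples and return the affine erasure function r(x) = P*(x - mu_x) + mu_x.
import numpy as np

def fit_leace(X: np.ndarray, Z: np.ndarray):
    """X: (n, d) features; Z: (n, k) one-hot (or real-valued) concept labels."""
    mu_x = X.mean(axis=0)
    Xc, Zc = X - mu_x, Z - Z.mean(axis=0)
    n, d = X.shape
    sigma_xx = Xc.T @ Xc / n                      # (d, d) covariance of X
    sigma_xz = Xc.T @ Zc / n                      # (d, k) cross-covariance of X and Z

    # Whitening W = (Sigma_XX^{1/2})^+ and its pseudoinverse, via eigendecomposition.
    evals, evecs = np.linalg.eigh(sigma_xx)
    keep = evals > 1e-8 * evals.max()             # rank tolerance (our choice)
    V = evecs[:, keep]
    W = V @ np.diag(evals[keep] ** -0.5) @ V.T
    W_pinv = V @ np.diag(evals[keep] ** 0.5) @ V.T

    # Oblique projection P* = I - W^+ P_{W Sigma_XZ} W  (Eq. 8).
    A = W @ sigma_xz
    P = np.eye(d) - W_pinv @ (A @ np.linalg.pinv(A)) @ W

    return lambda x: (x - mu_x) @ P.T + mu_x      # erased features

# Usage sketch: erase = fit_leace(X, Z_onehot); X_clean = erase(X)
# Afterwards the cross-covariance of X_clean with Z_onehot is numerically zero,
# i.e. X_clean linearly guards Z.
```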
Putting together Theorems 4.2 and 4.3 and rearranging, we arrive at the LEACE formula:

\[r_{\mathrm{LEACE}}(\mathbf{x})=\mathbf{x}-\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}\big{(}\mathbf{x}-\mathbb{E}[\mathrm{X}]\big{)} \tag{1}\]

Intuitively, LEACE de-means and whitens \(\mathbf{x}\), projects onto the subspace responsible for correlations between \(\mathrm{X}\) and \(\mathrm{Z}\), then unwhitens the result. Finally, it subtracts this value from \(\mathbf{x}\), thereby surgically removing the linearly available information about \(\mathrm{Z}\).

### Oblique Projections are Least-Squares Optimal

Prior work on linear concept erasure has assumed that erasure functions should be orthogonal projections [34; 38; 41], appealing to the well-known fact that an orthogonal projection of a point \(\mathbf{x}\) onto a subspace \(U\) yields the nearest point in \(U\) to \(\mathbf{x}\). But even in the case where \(\mathrm{X}\) is centered, \(r_{\mathrm{LEACE}}\) is _not_ an orthogonal projection in general. Orthogonal projection matrices are symmetric, and \(\mathbf{I}-\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}\) is only symmetric in the special case where \(\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\) and \(\mathbf{W}\) commute. It is an _oblique_ projection however, since applying \(\mathbf{P}^{*}\) twice yields the same result as applying it once: using \(\mathbf{W}\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}=\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\), we have \((\mathbf{P}^{*})^{2}=\mathbf{I}-2\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}+\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}=\mathbf{I}-\mathbf{W}^{+}\mathbf{P}_{\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}}}\mathbf{W}=\mathbf{P}^{*}\). Orthogonal projections are generally not least-squares optimal for concept erasure because the necessary and sufficient condition for linear guardedness, \(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{0}\), is a constraint on the _nullspace_ of \(\mathbf{P}\), and not on its range. We may freely choose the range of the projection to minimize the mean squared distance, as long as we zero out \(\mathrm{colsp}(\mathbf{\Sigma}_{\mathrm{XZ}})\). In Figure 1, an orthogonal projection would map all points onto the dashed line, thereby preserving less of the variance of the original data than LEACE does (green line). See Appendix E for a numerical example and further discussion.

### Connection with Canonical Correlation Analysis

LEACE is closely related to canonical correlation analysis (CCA), a statistical tool introduced by Hotelling [22] in 1936. CCA takes two correlated random vectors \(\mathrm{X}\in\mathbb{R}^{m}\) and \(\mathrm{Z}\in\mathbb{R}^{n}\) and produces ordered orthonormal bases \(A=(\mathbf{a}_{1}^{*},\mathbf{a}_{2}^{*},\ldots)\) and \(B=(\mathbf{b}_{1}^{*},\mathbf{b}_{2}^{*},\ldots)\) defining the scalar projections of \(\mathrm{X}\) and \(\mathrm{Z}\) which have maximum correlation with one another:

\[(\mathbf{a}_{1}^{*},\mathbf{b}_{1}^{*})=\operatorname*{argmax}_{(\mathbf{a},\mathbf{b})\;\in\;\mathbb{R}^{m}\times\mathbb{R}^{n}}\mathrm{corr}(\mathbf{a}^{T}\mathrm{X},\mathbf{b}^{T}\mathrm{Z}) \tag{9}\]

The projections \((\mathbf{a}_{1}^{T}\mathrm{X},\mathbf{b}_{1}^{T}\mathrm{Z})\) are known as the first pair of canonical variables, and their correlation is the first canonical correlation. There are \(\mathrm{rank}(\mathbf{\Sigma}_{\mathrm{XZ}})\leq\min(m,n)\) such pairs, defined recursively as maximizing Equation 9 subject to the constraint that they are uncorrelated with all previous pairs.
All pairs of canonical variables can be computed efficiently by performing SVD on the cross-covariance matrix of the _whitened versions_ of \(\mathrm{X}\) and \(\mathrm{Z}\), that is, \(\mathbf{\Sigma}_{\mathrm{XX}}^{-1/2}\mathbf{\Sigma}_{\mathrm{XZ}}\mathbf{\Sigma}_{\mathrm{ZZ}}^{-1/2}\). The canonical correlations are the singular values of this matrix, and the canonical variables are defined by its singular vectors rotated back into the original basis.6

Footnote 6: See [https://en.wikipedia.org/wiki/Canonical_correlation](https://en.wikipedia.org/wiki/Canonical_correlation) for a proof sketch. Press [33] offers a slightly different, yet equivalent construction.

Figure 1: LEACE projection in 3 steps. First the data is whitened, ensuring equal variance in all directions. It is then orthogonally projected onto \(\mathrm{colsp}(\mathbf{W}\mathbf{\Sigma}_{\mathrm{XZ}})^{\perp}\), guaranteeing linear guardedness. Finally, we unwhiten the data so that its covariance structure mimics the original.

LEACE can be viewed as a projection of \(\mathrm{X}\) away from the span of the \(\min(d,k-1)\) canonical basis vectors that characterize its correlations with \(\mathrm{Z}\). While not an orthogonal projection in the original basis (Section 4.2), it _is_ orthogonal in the whitened basis, after applying \(\mathbf{W}=\mathbf{\Sigma}_{\mathrm{XX}}^{-1/2}\) (Equation 1).

### Extension to Continuous \(\mathrm{Z}\)

While not a focus of this work, it's worth noting that LEACE can also be applied to the setting where \(\mathrm{Z}\) takes arbitrary values in \(\mathbb{R}^{k}\), as long as we restrict ourselves to the ordinary least squares regression loss \(\mathcal{L}(\eta,\mathbf{z})=\|\eta-\mathbf{z}\|_{2}^{2}\). In particular, the proofs of equivalence between conditions 1 and 2 given in Appendix B make no categorical assumption on \(\mathrm{Z}\), and the equivalence between the optimality of a zero weight matrix (condition 2) and zero cross-covariance (condition 4) is well known in the OLS setting. We can then apply Theorems 4.2 and 4.3, which also make no categorical assumption, to derive the same optimal affine eraser as in the categorical case.

## 5 Evaluation

### Intrinsic Evaluation

Following Ravfogel et al. [37], we evaluate the ability of our method to remove gender information from the last hidden layer of a frozen BERT model. We use the biographies dataset of De-Arteaga et al. [7], composed of short biographies annotated by both binary gender and profession. We embed each biography with the [CLS] representation in the last layer of BERT, enforce the same-conditional-mean constraint to remove gender information from the [CLS], and then evaluate the performance of the model, after the intervention, on the main task of profession prediction. We compare our intervention with RLACE [37], which uses gradient-based optimization to solve a linear concept-erasure adversarial game.

**Concept erasure results.** First, we evaluate the ability of logistic regression classifiers to recover the removed information. The results, presented in Fig. 2, show that both RLACE and our method, but not INLP, are able to achieve near-random accuracy with a rank-1 edit. At the same time, our method is around 2 orders of magnitude faster, and does not require gradient-based optimization.

**Converged solution.** Inspecting the cosine similarity between the first eigenvector of our projection matrix and the RLACE projections of different rank, we observe that they are essentially identical, with cosine similarity greater than 0.999. This suggests that the adversarial game proposed in Ravfogel et al.
[37] converges to our linearly optimal solution, modulo multiplicative terms. Note however that our approach is many times faster due to its closed-form solution. In addition, their approach does not limit the magnitude of destructive perturbations to the original representation.

Figure 2: Gender prediction accuracy after bias-removal projection against the dimensionality of the neutralized subspace for INLP, RLACE, and LEACE on BERT representations.

### Downstream Fairness

How does our intervention affect the behavior of the model on the main classification task of profession prediction? We fit a logistic regression profession-prediction classifier over the projected [CLS] representations. To measure the bias in a classifier, we follow De-Arteaga et al. [7] and use the TPR-GAP measure, which quantifies the bias in a classifier by considering the difference (GAP) in the true positive rate (TPR) between individuals with different protected attributes (e.g., race or gender). We use the notation \(\mathrm{GAP}_{z,y}^{\mathrm{TPR}}\) to denote the TPR-gap in some main-class label \(y\) (e.g., "nurse" prediction) for some protected group \(z\) (e.g., "female"). We also consider \(\mathrm{GAP}_{z}^{\mathrm{TPR,RMS}}\), the RMS of the TPR-gap across all professions for a protected group \(z\):

\[\mathrm{GAP}_{z}^{\mathrm{TPR,RMS}}=\sqrt{\frac{1}{|C|}\sum_{y\in C}(\mathrm{GAP}_{z,y}^{\mathrm{TPR}})^{2}}\]

To calculate the relation between the bias the model exhibits and the bias in the data, we also calculate \(\sigma_{(\mathrm{GAP}^{TPR},\%\mathrm{Women})}\), the correlation between the TPR gap in a given profession and the percentage of women in that profession.

**Results.** The main-task classifier achieves profession-prediction accuracy of 77.3% on the projected representations (compared with 79.3% over the original representations), indicating that the intervention minimally affects the ability to predict the profession of a person from the representation of their biography. At the same time, the TPR gap drops significantly from 0.198 to 0.084, indicating a sharp drop in the biased behavior of the profession classifier. Indeed, inspecting the correlation \(\sigma_{(\mathrm{GAP}^{TPR},\%\mathrm{Women})}\) between the gap (per profession) and the representation of women in this profession, we see that this correlation plummets from 0.867 to 0.392 after erasure. Re-fitting the main-task logistic regression classifier over the projected representations yields a slightly higher main-task accuracy of 78.1%, at the price of significantly increasing the TPR gap to 0.158.7

Footnote 7: The softmax probabilities of a multiclass logistic regression classifier can leak the removed information if _another_ classifier is stacked on top of it [36], though this setup is not linear.

### Revisiting Amnesic Probing

Elazar et al. [11] have introduced the idea of _amnesic probing_ as a causal intervention that aims to test the importance of a given concept (e.g., part-of-speech tag) to some main task (e.g., language modeling). They applied Iterative Nullspace Projection (INLP) to remove different concepts from the hidden representations of the model, and assessed the degree to which its behavior changed when performing masked language modeling. Since INLP often requires dozens of iterations to completely erase the concept, its usage in this context raises concerns of collateral damage due to the magnitude of the intervention and the non-exhaustive nature of INLP removal.
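Mechanically, an amnesic-probing intervention amounts to applying a fitted eraser to a single layer's hidden states during the forward pass and letting the computation continue. The PyTorch sketch below illustrates the idea with a forward hook; the layer index, the identity placeholder for the eraser parameters, and the example input are our own illustrative choices, not the paper's code.

```python
# Sketch (ours): apply a fitted linear eraser (P, b) to one BERT layer's hidden states
# via a forward hook, then continue the masked-LM forward pass from that layer.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")

d = model.config.hidden_size
P = torch.eye(d)    # placeholder: substitute a fitted erasure projection
b = torch.zeros(d)  # placeholder: substitute the fitted bias

def erase_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    erased = hidden @ P.T + b
    return (erased,) + output[1:] if isinstance(output, tuple) else erased

layer_idx = 11  # which transformer block to intervene on (illustrative)
handle = model.bert.encoder.layer[layer_idx].register_forward_hook(erase_hook)

with torch.no_grad():
    batch = tok("The nurse said [MASK] would be back soon.", return_tensors="pt")
    logits = model(**batch).logits  # MLM distribution computed from the erased layer onward

handle.remove()
```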
Here, we replicate their experiments on the bert-base-uncased model with our interventions.

**Experimental setup.** We use part-of-speech (POS) tags as our concept of interest. We collect sentences and their coarse POS tags ("Noun", "Verb" etc.; 18 in total) from the English Universal Dependencies dataset [30]. We tokenize the sentences with the BERT tokenizer and map each wordpiece to the POS tag of the word to which it belongs. We collect the unmasked BERT representations for each layer, intervene to linearly erase the POS concept from that layer, and continue the forward pass until the last layer, from which we compute the distribution of the MLM over the vocabulary. Note that in each experiment we intervene on a single layer. We quantify the decrease in accuracy following the intervention, as well as the increase in the loss. We compare with a baseline intervention of a random orthogonal projection whose null space has the same rank as the label space (18). For INLP, we perform 20 iterations. This is needed because INLP does not effectively remove the concept; even after 20 iterations, classification accuracy is above majority accuracy. As a result, INLP reduces the rank of the representation by 360. By contrast, our method decreases the rank just by 18.

Figure 3: The correlation between \(GAP_{female,y}^{TPR}\) and the relative proportion of women in profession \(y\), for BERT representations, before (left; R=0.867) and after (right; R=0.392) the projection.

**Results.** The results are shown in Fig. 3(b). Our intervention only mildly changes BERT LM accuracy and loss until layer 8, with the highest drop recorded in layer 11. INLP, in contrast, shows maximum effect at layer 6. Since it removes hundreds of dimensions, it is difficult to attribute this effect to the erasure of the concept. These results suggest that the _causal_ effect of the POS concept on the language model is concentrated in layer 11. Interestingly, this stands in contrast with POS linear probing results, which are optimal at earlier layers [44]. As Elazar et al. [11] have noted, probing does not generally correlate with intervention-based analysis techniques.

Concept erasure can be used to understand the kinds of information that neural networks use internally. The intuition is that, if a model "uses" a concept like gender or syntactic structure, then intervening on its hidden states to erase this information should cause its performance to degrade considerably. This approach was pioneered by Elazar et al. [11], who use Iterative Nullspace Projection (INLP) to erase part-of-speech information from the final layer hidden states of BERT. This technique is based on training multiple classifiers to predict the concept of interest, and projecting the representation to the null space of the classifier coefficient vectors. While they found that the effect of a concept erasure intervention was often not much larger than a _random_ intervention of the same size, we show that these results are highly dependent on the concept erasure method used.

## 6 Concept Scrubbing

Unfortunately, Elazar et al. [11] were forced to limit their interventions to a single layer due to the limitations of INLP. INLP often requires the deletion of several dozen dimensions before linear guardedness is achieved--as demonstrated in Figure 2. Kumar et al. [24] show empirically and theoretically that INLP causes needless "collateral damage" to useful parts of the representation that are orthogonal to the concept being erased.
Because of this collateral damage, it's impossible to apply INLP to multiple layers of a transformer without causing its outputs to collapse into gibberish. ``` 0: Model with \(\ell\) layers \(f=f_{\ell}\circ\ldots\circ f_{1}\) 0: Design matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) 0: Label matrix \(\mathbf{Z}\in\mathbb{R}^{n\times k}\) 0: LEACE parameters for each layer in \(f\) 1:\(\mathbf{H}_{1}\leftarrow\mathrm{Embed}(\mathbf{X})\) 2:\(L\leftarrow\)list() 3:for\(l\in 1\ldots\ell\)do 4: Fit \((\mathbf{P},\mathbf{b})\) on \(\mathbf{H}_{l}\) and \(\mathbf{Z}\) (Alg.??) 5: Append \((\mathbf{P},\mathbf{b})\) to \(L\) 6:\(\mathbf{H}_{l}\leftarrow\mathbf{P}(\mathbf{H}_{l}-\mu_{\mathbf{H}_{l}})+\mu_{ \mathbf{H}_{l}}\) (Eq. 1) 7:\(\mathbf{H}_{l+1}\gets f_{l}(\mathbf{H}_{l})\) 8:return\(L\) ``` **Algorithm 1** Concept scrubbing Figure 4: Amnesic probing results on bert-base-uncased. Instead, we would like to erase all linear information about a concept in _every_ intermediate representation, which we term **concept scrubbing**. LEACE makes concept scrubbing possible and eminently practical. It causes minimal collateral damage, induces little computational overhead, and the covariance statistics it relies on can be computed in a _streaming_ fashion, without ever storing all the hidden states in memory or on disk. **Algorithm.** Any intervention on the model at layer \(\ell\) changes the distribution of hidden states at layers \(\ell^{\prime}>\ell\). Because of this, the naive approach of independently fitting LEACE parameters \((\mathbf{P},\mathbf{b})\) for all layers of the clean model, then applying them all at once, may fail to fully erase the target concept. Instead, we fit LEACE parameters _sequentially_, starting from the first layer and proceeding to the final layer. After we compute \((\mathbf{P},\mathbf{b})\) for a layer, we immediately use them to scrub the hidden states for that layer, then feed these scrubbed representations to the next layer (Algorithm 1). ### Experimental details **Dataset.** For each model family, we use a sample from the respective pretraining distribution: the validation split of the Pile [14] for the Pythia models [2], and the RedPajama replication of the LLaMA pretraining corpus for the LLaMA family [45]. sampling a slice of \(2^{22}\) tokens for fitting the LEACE parameters and another slice of \(2^{22}\) tokens for evaluation. Since neither corpus comes with part-of-speech tags, we use the model from the SpaCy library [21] to automatically generate Universal Dependency tags [26]. **Baseline method.** We also run concept scrubbing using full-rank SAL [41], which is similar to our method but lacks a bias term and does not adjust for correlations between features (Appendix C). **Architecture.** We focus on autoregressive language models. We evaluate our method on EleutherAI's Pythia 160M, 1.4B, 6.9B, and 12B models [2], and Meta's LLaMA 7B, 13B, and 30B [45]. We apply concept erasure to the input of each transformer block, immediately after normalization is applied (LayerNorm or RMSNorm). **Randomized erasure.** Almost any intervention on a neural network will cause its performance to degrade to some extent. Following Elazar et al. [11], we isolate the effect of the concept erasure by comparing it to a control condition in which we orthogonally project onto a _random_ linear subspace of the same rank as the cross-covariance matrix. 
To reduce the variance of our results, we sample a fresh subspace for each minibatch, and erase that subspace at each layer, reporting the cross-entropy loss averaged over subspaces. **Constraining norm growth.** In early experiments, we found that at specific layers in some models, concept scrubbing with LEACE would cause the norm of the representation to diverge, leading to NaN outputs. By contrast, SAL never caused divergence, even though it causes a larger disruption to model performance on average (Table 1). This is because SAL uses an orthogonal projection \(\mathbf{Q}\), whose eigenvalues are thus all in \(\{0,1\}\), so the norm of the hidden state can never increase after erasure, while LEACE's oblique projection matrix \(\mathbf{P}\) does generally have singular values greater than 1. To combine the superior average-case MSE of LEACE with the stability of SAL, we adopt a simple regularization heuristic. After constructing \(\mathbf{P}\), we analytically compute the trace of the covariance matrix of the hidden states after applying \(\mathbf{P}\). If \(\operatorname{tr}(\mathbf{P}\mathbf{\Sigma}_{\mathrm{XX}}\mathbf{P}^{\mathrm{T}})>\operatorname{tr}(\mathbf{\Sigma}_{\mathrm{XX}})\), we solve a quadratic equation to find the convex combination \(\mathbf{P}^{\prime}=\alpha\mathbf{P}+(1-\alpha)\mathbf{Q}\) such that \(\operatorname{tr}(\mathbf{\Sigma}_{\mathrm{XX}})=\operatorname{tr}(\mathbf{P}^{\prime}\mathbf{\Sigma}_{\mathrm{XX}}(\mathbf{P}^{\prime})^{\mathrm{T}})\). By Theorem 4.1, the set of matrices which ensure linear guardedness is convex,8 so \(\mathbf{P}^{\prime}\) is guaranteed to be in the feasible set. Furthermore, since our mean squared error objective is convex, \(\mathbf{P}^{\prime}\) is guaranteed to have no worse MSE than \(\mathbf{Q}\). We find this solves the divergence issue in practice. Footnote 8: In fact, it is a subspace of \(\mathbb{R}^{d\times d}\). For any matrices \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{d\times d}\) such that \(\mathbf{A}\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{0}\) and \(\mathbf{B}\mathbf{\Sigma}_{\mathrm{XZ}}=\mathbf{0}\), we have by linearity \((\alpha\mathbf{A}+\beta\mathbf{B})\mathbf{\Sigma}_{\mathrm{XZ}}=\alpha\mathbf{A}\mathbf{\Sigma}_{\mathrm{XZ}}+\beta\mathbf{B}\mathbf{\Sigma}_{\mathrm{XZ}}=\alpha\mathbf{0}+\beta\mathbf{0}=\mathbf{0}\) for any scalars \(\alpha\) and \(\beta\). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{LLaMA} & \multicolumn{4}{c}{Pythia} \\ \cline{2-8} Condition & 7B & 13B & 30B & 160M & 1.4B & 6.9B & 12B \\ \hline No intervention & 0.69 & 0.66 & 0.62 & 0.90 & 0.70 & 0.64 & 0.62 \\ Random erasure & 0.69 & 0.66 & 0.62 & 0.99 & 0.72 & 0.66 & 0.63 \\ \hline LEACE & 1.73 & 1.84 & 1.96 & 2.79 & 2.25 & 3.57 & 3.20 \\ SAL & 3.24 & 3.26 & 3.16 & 3.53 & 3.44 & 4.17 & 4.69 \\ \hline unigram entropy & 2.90 & 2.90 & 2.90 & 2.66 & 2.66 & 2.66 & 2.66 \\ \hline \hline \end{tabular} \end{table} Table 1: Perplexity in autoregressive language models when removing linearly available part-of-speech information from the input to each transformer layer. Units are bits per UTF-8 byte. The unigram baseline assigns probabilities to tokens based only on their frequency and not on the context. **Training efficiency.** Algorithm 1 avoids redundant computation by caching the layer \(i\) hidden states for _every_ data point, then using them to run layer \(i+1\). This approach has the downside of requiring a large amount of memory or disk space during training (up to 500GB in our experiments).
It's possible to avoid caching any hidden states and instead recompute them as needed, at the expense of increasing the total compute cost from \(O(\ell)\) to \(O(\ell^{2})\). ### Results We find strong evidence that autoregressive language models heavily rely on linearly encoded part-of-speech information. While erasing a randomly selected subspace has little to no effect on language modeling performance, scrubbing away part-of-speech information induces a large increase in perplexity across all models (Table 1). The specific numbers, however, depend on the erasure method used: SAL induces significantly larger increases in perplexity for all models we tested. We take this to mean that SAL inflicts more collateral damage on other useful features in the representation than LEACE does. In other words, interventions made with LEACE are more _surgical_ than those made with prior work; they more closely approximate the ideal of a perfect intervention which only erases the target concept and keeps everything else fixed [47, 16]. If this experiment were conducted with SAL alone, we would have _overestimated_ the causal effect of part-of-speech. ## 7 Limitations and Future Work Much work remains to be done to validate concept scrubbing. Specifically, we'd like to see experiments that target concepts much narrower than part-of-speech, and use behavioral metrics to determine whether scrubbing changes the network in the ways we'd intuitively expect. If these experiments succeed, an exciting next step would be the incorporation of concept scrubbing into the pretraining and/or finetuning process. This may make it possible to train deep neural networks subject to _conceptual constraints_. It remains to be seen if gradient-based optimizers will be able to "circumvent" such constraints by learning completely nonlinear representations of protected attributes. In this work, we focused exclusively on _linear_ concept erasure due to its simplicity and tractability. Some authors have proposed nonlinear concept erasure techniques based on kernel methods, but have found that erasure functions fit using one kernel do not generalize well to other kernels [38, 41]. We conjecture that it is intractable to nondestructively edit X so as to prevent a general nonlinear adversary from recovering Z, unless the data generating process for X is known in detail.9 Footnote 9: We suspect erasing a concept is at least as hard as extracting it from the original representation. But in the worst case, information about Z could be encoded _cryptographically_ in X, which would be intractable to decode given standard computational complexity assumptions. If the data is generated by a known algorithm, however, it may be possible to efficiently eliminate mutual information between Z and X by simply breaking the links in the causal graph that connect them. A major motivation of concept erasure is that it promises to prevent models from using a concept in a _post hoc_, model-agnostic fashion. But if our concept scrubbing procedure turns out to yield unsatisfactory results in practical use cases, the most promising research direction might then be to improve model-_specific_ techniques, such as those that modify the training procedure [9, 10, 15]. ## 8 Acknowledgements We are grateful to CoreWeave for providing the compute resources used in Section 6. Shauli Ravfogel is grateful to be supported by the Bloomberg Data Science PhD Fellowship.
2308.15269
Simulation of 3+1D glasma in Milne coordinates I: Development of the framework
We propose a new numerical method for $3+1$D glasma simulation using Milne coordinates. We formulate the classical Yang-Mills field and $3$D classical color current on a lattice at the initial proper time, specified as a moment just before the collision of the two nuclei. By solving the evolution equations, we extract observables of the $3$D glasma at later times. We demonstrate the efficiency of our method in terms of numerical cost and apply it to the central collisions of Au-Au. We also discuss possible further improvements of our method.
Hidefumi Matsuda, Xu-Guang Huang
2023-08-29T14:38:59Z
http://arxiv.org/abs/2308.15269v2
# Simulation of 3+1D glasma in Milne coordinates I: Development of the framework ###### Abstract We propose a new numerical method for \(3+1\)D glasma simulation using Milne coordinates. We formulate the classical Yang-Mills field and 3D classical color current on a lattice at the initial proper time, specified as a moment just before the collision of the two nuclei. By solving the evolution equations, we extract observables of the 3D glasma at later times. We demonstrate the efficiency of our method in terms of numerical cost and apply it to the central collisions of Au-Au. We also discuss possible further improvements of our method. ## I Introduction The experiment of relativistic heavy-ion collisions provides us the unique way to create deconfined quantum chromodynamics (QCD) matter of extraordinarily high temperature and density. Over the past few decades, many experimental results have indicated the emergence of a new state of matter, referred to as the quark-gluon plasma (QGP), in heavy-ion collisions, where quarks and gluons behave as a hydrodynamic fluid. The analysis of a relativistic heavy-ion collision requires different kinds of descriptions of the spacetime evolution of the system since the matter produced in the collision experiences varied stages in its evolution. The classical Yang-Mills (CYM) theory offers one of such descriptions. It can well describe the non-equilibrium evolution of the highly occupied gluonic system, called glasma, that appears immediately after the collision [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. The glasma simulation with the CYM field plays an important role in understanding the non-equilibrium stage between the moment of the collision and the onset of the hydrodynamic evolution of the QGP. In fact, the glasma simulation is widely used to establish the initial conditions for subsequent hydrodynamic evolution in the analysis of experimental data [16]. The theoretical background for why the CYM theory is a good description of the initial gluonic matter is based on the color glass condensate (CGC) picture, which stands as a valid description of the high-energy nucleus [1; 2]. In such a high-energy nucleus, the dominant degrees of freedom are soft gluons emitted from hard partons. The McLerran-Venugopalan (MV) model in the CGC effective theory describes the soft gluons as the CYM fields and the hard partons as their color sources. Consequently, the glasma generated in the collision of such high-energy nuclei can also be described well by the CYM field. However, such a success of the glasma simulation has been largely limited by the boost invariance assumption, which shows good agreement with experimental data only around the midrapidity region. Recently, much attention has been paid to the \(3+1\)D glasma simulation beyond the boost invariance assumption that is necessary to understand observables across a broader region of rapidity [17; 18; 19; 20; 21; 22; 23]. Different approaches for incorporating the rapidity dependence and their implementations have been considered. In Refs. [17; 18], the authors consider the rapidity-dependent distribution of the classical color charges inside a single nucleus by numerically solving the JIMWLK equation [24; 25; 26; 27; 28; 29; 30], while assuming that the color sources are static in time. In Refs [19; 20; 21; 22], the authors propose numerical simulation methods that focus on the recoil effect of the nuclei and track the dynamical evolution of the CYM field with the dynamical 3D color current. 
The analytic analysis for the 3+1D glasma with the dynamical color current is also performed, employing the weak field approximation [23]. The purpose of this study is to propose a new numerical simulation method for the \(3+1\)D glasma with the incorporation of the recoil effect, in which the classical color current is treated as a 3D dynamical object. The initial conditions for the CYM field and classical color current are provided on a lattice before the collision occurs, and their discretized evolution equations are subsequently solved to determine their values at later times. Numerical simulations are performed in Milne coordinates, albeit with a difference from the usual Milne coordinates. Usually, the proper time \(\tau=\sqrt{t^{2}-z^{2}}\) is introduced such that the collision of the two nuclei occurs at \(\tau=0\). In contrast, we employ a modified Milne coordinates, \((\tilde{\tau}=\sqrt{\tilde{t}^{2}-z^{2}},x,y,\tilde{\eta}=(1/2)\ln[(\tilde{t}+ z)/(\tilde{t}-z)])\), where \(\tilde{t}\) is shifted from \(t\) by a positive constant as \(\tilde{t}=t+\text{positive constant}\). Consequently, the two nuclei are still apart at the initial proper time \(\tilde{\tau}_{\text{ini}}\) which is taken as a sufficiently small number. We evaluate the physical quantities in the modified Milne coordinates and then transform the results to the usual Milne coordinates via a general coordinate transformation. The above strategy of giving initial conditions on a lattice, evolving them in time, and transforming the results into the usual Milne coordinates is analogous with that in Ref. [22], where the simulations are performed in Minkowski coordinates and the results are then transformed into the usual Milne coordinates. The advantage of using the Milne coordinates is the following. The numerical simulations on a finite lattice in Milne coordinates correspond to a longitudinally expanding system in terms of Minkowski coordinates due to the relation \(z=\tilde{\tau}\sinh\tilde{\eta}\). As a result, numerical simulations in the Milne coordinates do not require a large lattice size in the \(\tilde{\eta}\) direction, while numerical simulations in Minkowski coordinates require a system size in the \(z\) direction large enough to include the outgoing nuclei within the lattice. Therefore, our new method is expected to cost lower numerical resources, which is important in actual applications since tracking the dynamical evolution of the 3D glasma requires a lot of numerical resources. In Sec. II, we present the formulation of the \(3+1\)D glasma on the lattice. In Sec. III, we present the numerical results. This section is divided into two parts. In the first part, we test the effectiveness of our numerical method. We check whether the continuity equations are violated under the evolution of the glasma and check the consistency with the method proposed in Ref. [22] by comparing our results for the transverse pressure and the energy density in the local rest frame with theirs. In the second part, we simulate the dynamical evolution of the \(3+1\)D glasma using the setup that mimics the central collisions of Au-Au at \(\sqrt{s}=200\) GeV. We show the dynamical evolution of the energy density and address discussions about the obtained results. In Sec. IV, we summarize our main results. ## II Method We develop the numerical method for the \(3+1\)D glasma simulation in Milne coordinates in this section. 
This method is an extension of the description of the \(2+1\)D glasma using the MV model in the CGC effective theory. This section is organized as follows. In Sec. II.1, we give a brief review of the description of the boost-invariant (\(2+1\)D) glasma using the MV model. In Sec. II.2, we explain how to extend the \(2+1\)D glasma description to the \(3+1\)D glasma with the dynamical classical color current in continuous spacetime. In Sec. II.3, we show the formulation of the \(3+1\)D glasma on a discretized space and continuous proper time. In Sec. II.4, we define the energy-momentum tensor on a lattice. ### \(2+1\)D glasma in continuous spacetime Here we briefly review how the \(2+1\)D glasma is described using the MV model in the CGC effective theory. According to the CGC picture, the dominant degrees of freedom inside a relativistic nucleus are the soft gluons that are emitted from partons with large momenta. In the MV model, the soft and hard partons are separately treated in the classical approximation [1; 2]: The soft partons are described by the CYM field \(A_{\mu}\) and the hard partons are described by the classical color current \(J^{\mu}\), the source of the soft partons. Here, the classical color current \(J^{\mu}\) moving toward the positive \(z\) direction is given by the density of the sum of the color charges carried by hard partons located around \(x\), \[J^{\mu}(x)=\frac{1}{g}\delta^{\mu,+}\rho(x^{-},\mathbf{x}_{\perp})\, \tag{1}\] where \(x^{\mp}=(t\mp z)/\sqrt{2}\) are the light-cone coordinates and \(\rho\) is the classical color charge density, randomly given according to a probability density \(P[\rho]\) for each event. This color charge density \(\rho\) is also assumed to be static, namely independent of \(x^{+}\), which is reflected by the fact that the lifetime of the hard partons is much longer than that of the soft partons due to the time dilation. The soft CYM field emitted from the static sources is given by the solution to the classical equations of motion \([D_{\mu},F^{\mu\nu}]=J^{\nu}\), and under the gauge condition \(A_{-}=0\), it has the following form [1], \[A_{\pm}=0\,\ \ A_{i}=\frac{i}{g}V\partial_{i}V^{\dagger}\, \tag{2}\] where \(A_{i}\) is the transverse gauge field and \(V^{\dagger}\) is the Wilson line, formally given by \[V^{\dagger}(x^{-},\mathbf{x}_{\perp})=P_{x^{-}}\exp\left[-i\int_{- \infty}^{x^{-}}dx^{\prime-}\partial_{\perp}^{-2}\rho_{\rm cov}(x^{\prime-}, \mathbf{x}_{\perp})\right]\,. \tag{3}\] Here, \(\rho_{\rm cov}\) is the color charge density in the covariant gauge condition and is related to the color charge density in the \(A_{-}=0\) gauge condition through the gauge transformation, \[\rho_{\rm cov}=V^{\dagger}\rho_{{}_{(A_{-}=0)}}V. \tag{4}\] As shown in Eq. (2) and Eq. (3), the solution to the equation of motion is a functional of \(\rho\), and thus the event average of a given observable \(\mathcal{O}\) is obtained as the ensemble average over \(P[\rho]\), \[\langle\mathcal{O}\rangle_{\rm eve}=\int\mathcal{D}\rho\mathcal{O}[\rho]P[\rho ]. \tag{5}\] Fortunately, in the high-energy limit, \(P[\rho]\) can be well approximated by the normal distribution function, no matter what gauge condition is chosen, \[P[\rho(x^{-},\mathbf{x}_{\perp})]\propto\exp\left[-\frac{{\rm Tr}[\rho(x^{-},\mathbf{x }_{\perp})]^{2}}{2[g^{2}\mu(x^{-},\mathbf{x}_{\perp})]^{2}}\right]\,. \tag{6}\] where \([g^{2}\mu(x^{-},\mathbf{x}_{\perp})]^{2}\) is the squared color charge density per unit volume \(dx^{-}dxdy\). 
Therefore, in the high-energy limit, the event average in Eq.(5) can be estimated numerically using the Gaussian random number \(\rho_{\rm cov}\) that satisfies the following event average, \[\langle\rho_{\rm cov}^{a}(x^{-},\mathbf{x}_{\perp})\rho_{\rm cov}^{b}( x^{\prime-},\mathbf{x}_{\perp}^{\prime})\rangle_{\rm eve}\] \[=\delta^{a,b}\left(g^{2}\mu(x^{-},\mathbf{x}_{\perp})\right)^{2}\delta (x^{-}-x^{\prime-})\delta^{2}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime}). \tag{7}\] It should be mentioned that solving the JIMWLK equation [24; 25; 26; 27; 28; 29; 30], the evolution equation for momentum rapidity, yields the Wilson line at the energy of interest beyond the high-energy limit approximation. Using the MV model with the high-energy limit approximation, the glasma created in the collision of two nuclei can be obtained as a boost-invariant CYM field. In this approximation, hard partons are assumed to be recoilless, and thus the total classical color current is given by the incoherent sum of the two static color currents, \[J^{\mu}(x)=\frac{1}{g}\delta^{\mu+}\delta(x^{-})\rho^{(1)}( \mathbf{x}_{\perp})+\frac{1}{g}\delta^{\mu-}\delta(x^{+})\rho^{(2)}(\mathbf{x}_{\perp} )\, \tag{8}\] where the color charge density from each nucleus is assumed to be distributed on an infinitely thin sheet due to the Lorentz contraction, \(\rho^{(1/2)}(x^{\mp},\mathbf{x}_{\perp})\propto\delta(x^{\mp})\). Then, solving the classical equation of motion \([D_{\mu},F^{\mu\nu}]=J^{\nu}\) with the Fock-Schwinger (FS) gauge condition \(A^{\tau}=0\) yields the initial condition of the glasma at \(\tau=0^{+}\) as a regular solution, \[A_{i} =A_{i}^{(1)}+A_{i}^{(2)}\,\ \ A_{\eta}=0\, \tag{9}\] \[E^{i} =0\,\ \ E^{\eta}=ig[A_{i}^{(1)},A_{i}^{(2)}]\, \tag{10}\] where the transverse and longitudinal electric fields are defined as \(E^{i}=\tau\partial_{\tau}A_{i}\) and \(E^{\eta}=\partial_{\tau}A_{\eta}/\tau\), respectively, and \(A_{i}^{(1/2)}\) is the transverse gauge field emitted from the single nucleus 1 or 2, respectively, \[A_{i}^{(1/2)}=\frac{i}{g}V_{\rm 2D}^{(1/2)}\partial_{i}V_{\rm 2D}^{(1/2)\dagger}. \tag{11}\] In this paper, the index \(i\) denotes the transverse directions, 1 and 2, unless otherwise stated. The Wilson line \(V_{\rm 2D}^{(1/2)\dagger}\) is independent of \(x^{\mp}\) and is given by \[V_{\rm 2D}^{(1/2)\dagger}(\mathbf{x}_{\perp})=P_{x^{\mp}}\exp\left[-i \int_{-\infty}^{\infty}dx^{\prime\mp}\partial_{\perp}^{-2}\rho_{\rm cov}^{(1/2 )}(x^{\prime\mp},\mathbf{x}_{\perp})\right]\,. \tag{12}\] Here, since two classical color charges are only located on the light-cone, \(\rho^{(1/2)}\propto\delta(x^{\mp})\), the upper limit of the integration in Eq. (12) can be taken as infinity. Therefore, the solutions shown in Eq.(9) and Eq.(10) are boost invariant. To study the boost-invariant glasma at a late time, we have to evolve the CYM field starting from the boost-invariant initial condition by solving the classical equation of motion inside the light-cone (\(J=0\)), \([D_{\mu},F^{\mu\nu}]=0\). ### \(3+1\)D glasma in continuous spacetime We explain how to extend the \(2+1\)D glasma description to the \(3+1\)D glasma description with the dynamical 3D classical color current that represents two colliding nuclei with finite longitudinal thickness. As an example, we consider the situation where the two colliding nuclei have the same radius \(R\) and Lorentz gamma factor \(\gamma\). 
The generalization to the collisions of two nuclei with different radiuses and gamma factors can be carried out straightforwardly. Let us first revisit the total classical color current to get the initial condition for the \(3+1\)D glasma. Our \(3+1\)D glasma method considers the setup where the two nuclei are still far apart at the initial proper time \(\tilde{\tau}=\tilde{\tau}_{\rm ini}\) in the Milne coordinate defined as \((\tilde{\tau}=\sqrt{2x^{-}x^{+}},\tilde{\eta}=(1/2)\ln(x^{+}/x^{-}))\). It should be noted that we distinguish the Milne coordinates defined here from the usual Milne coordinates \((\tau=\sqrt{2(x^{-}-x_{\rm c})(x^{+}-x_{\rm c})},\eta=(1/2)\ln([x^{+}-x_{\rm c }]/[x^{-}-x_{\rm c}]))\) in which central positions of the two nuclei coincide at \(\tau=0\). The center positions of the nuclei (1) and (2) in the Milne coordinates are initially taken as a sufficiently large negative and positive value, \(-|\tilde{\eta}_{\rm ini}|\) and \(|\tilde{\eta}_{\rm fini}|\). The corresponding center positions in the light-cone coordinates are given by \(x^{\mp}=x_{\rm c}=\tilde{\tau}_{\rm ini}e^{|\tilde{\eta}_{\rm ini}|}/\sqrt{2}\), and thus the nuclei (1) and (2) exist within \(x^{-}=[x_{\rm c}-R/(\gamma\sqrt{2}),x_{\rm c}+R/(\gamma\sqrt{2})]\) and \(x^{+}=[x_{\rm c}-R/(\gamma\sqrt{2}),x_{\rm c}+R/(\gamma\sqrt{2})]\), respectively, as shown in Fig. 1. The initial proper time \(\tau_{\rm ini}\) is set so small that the two nuclei do not overlap at \(\tilde{\tau}=\tilde{\tau}_{\rm ini}\), which requires the relation, \(x^{\mp}|_{\tilde{\eta}=0,\tilde{\tau}=\tilde{\tau}_{\rm ini}}=\tau_{\rm ini}/ \sqrt{2}<x_{\rm c}-R/(\gamma\sqrt{2})\). Then, the classical color current at \(\tilde{\tau}_{\rm ini}\) can be assumed to be the incoherent sum of the two classical color currents, \[J^{\mu}(x)=\frac{1}{g}\delta^{\mu+}\rho^{(1)}(x^{-},\mathbf{x}_{\perp} )+\frac{1}{g}\delta^{\mu-}\rho^{(2)}(x^{+},\mathbf{x}_{\perp}). \tag{13}\] The transverse gauge field and electric field at \(\tilde{\tau}=\tilde{\tau}_{\rm ini}\) are also assumed to be the incoherent sum of those from each nucleus, \[A_{i}(x) =A_{i}^{(1)}(x)+A_{i}^{(2)}(x)\, \tag{14}\] \[E^{i}(x) =E^{(1)i}(x)+E^{(2)i}(x)\] \[=x^{-}\partial_{-}A_{i}^{(1)}(x)+x^{+}\partial_{+}A_{i}^{(2)}(x)\, \tag{15}\] where the transverse gauge field from a single nucleus, \(A_{i}^{(1/2)}(x)\), is given by the Wilson line as given in Eq. (3), Figure 1: Spacetime picture of a collision of relativistic nuclei with the finite longitudinal extension \(R/\gamma\). At initial proper time \(\tilde{\tau}_{\rm ini}\), the two nuclei are still apart from each other. and the electric gauge field, \(E^{(1/2)i}=x^{\mp}\partial_{\mp}A_{i}^{(1/2)}\), is obtained by using the relation \(\tau\partial_{\tau}=x^{-}\partial_{-}+x^{+}\partial_{+}\). The longitudinal components of the gauge field and electric field are given in the same form as those shown in Eq.(9) and Eq.(10), \[A_{\tilde{\eta}}=0\,\quad E^{\tilde{\eta}}=ig[A_{i}^{(1)},A_{i}^{(2)}]. \tag{16}\] We have used the modified FS gauge condition \(A^{\tilde{\tau}}=0\), and then this set of initial conditions satisfies Gauss's law \([D_{\mu},F^{\mu\tilde{\tau}}]=J^{\tilde{\tau}}\). It should be noted that \(E^{\tilde{\eta}}\) is negligibly small since the two nuclei are largely apart from each other at \(\tilde{\tau}=\tilde{\tau}_{\rm ini}\). In the actual calculation, we present the initial condition given above on a lattice and evolve them by solving their evolution equations numerically. 
The evolution equation for the CYM field is the classical equation of motion \([D_{\mu},F^{\mu\nu}]=J^{\nu}\) and the evolution equation for the classical current is the continuity equation \([D_{\mu},J^{\mu}]=0\). Since \(J^{\mu}\) has two degrees of freedom as \(J^{\mu}=\delta^{\mu}+J^{(1)}+\delta^{\mu}-J^{(2)}\), an additional assumption is required so that \([D_{\mu},F^{\mu\nu}]=J^{\nu}\) and \([D_{\mu},J^{\mu}]=0\) form a closed system of equations. In this study, we assume that \(J^{(1/2)}\) obeys the continuity equation for each nucleus, \([D_{\pm},J^{(1/2)}]=0\). This assumption is valid if at least two nuclei do not overlap, e.g., before the collision or after the two nuclei have passed. To more accurately track the evolution of the color current beyond this assumption, it is necessary to solve the equations of motion for the microscopic degrees of freedom that carry the color charge. For convenience, we introduce the current defined as \(\tilde{J}^{(1/2)}\equiv x^{\mp}J^{(1/2)}\) and rewrite the continuity equation in the Milne coordinates as, \[(\tilde{\tau}\partial_{\tilde{\tau}}\pm D_{\tilde{\eta}})\tilde{J}^{(1/2)}=0. \tag{17}\] The classical equation of motion and Gauss's law in the Milne coordinates are written as \[\partial_{\tilde{\tau}}E^{1} =-\tilde{\tau}[D_{2},B^{3}]+\frac{1}{\tilde{\tau}}[D_{3},B^{2}]\, \tag{18}\] \[\partial_{\tilde{\tau}}E^{2} =\tilde{\tau}[D_{1},B^{3}]-\frac{1}{\tilde{\tau}}[D_{3},B^{1}]\,\] (19) \[\partial_{\tilde{\tau}}E^{\tilde{\eta}} =-\frac{1}{\tilde{\tau}}\epsilon^{\tilde{\eta}ik}[D_{j},B^{k}]- \frac{1}{\tilde{\tau}}\left[\tilde{J}^{(1)}-\tilde{J}^{(2)}\right]\,\] (20) \[[D_{i},E^{i}] =\tilde{J}^{(1)}+\tilde{J}^{(2)}\, \tag{21}\] where \(\epsilon_{\tilde{\eta}ik}\) is Levi-Civita symbol, and the magnetic field \(B\) is defined as \(B^{i}=\epsilon^{ijk}F_{jk}/2\ \ (i=1,2,\tilde{\eta})\). ### \(3+1\)D glasma in discretized space and continuous proper time For numerical calculations, we discretize the CYM field and classical current on the \(L_{\perp}^{2}\times L_{\tilde{\eta}}\) lattice, whose grid positions are labeled by a set of integers, \((i_{x}=0,1,\cdots,L_{\perp}-1,i_{y}=0,1,\cdots,L_{\perp}-1,i_{\tilde{\eta}}=0,1,\cdots,L_{\tilde{\eta}}-1)\). These integers are related to spatial coordinates as \(x=a_{\perp}(i_{x}-(L_{\perp}-1)/2)\), \(y=a_{\perp}(i_{y}-(L_{\perp}-1)/2)\) and \(\tilde{\eta}=a_{\tilde{\eta}}(i_{\tilde{\eta}}-L_{\tilde{\eta}}/2)\) with the lattice spacings, \(a_{\perp}\) and \(a_{\tilde{\eta}}\). All the quantities shown in this and later sections are made dimensionless normalizing with the transverse spatial lattice spacing \(a_{\perp}\), and the \(\tilde{\eta}\) component of the gauge field \(A_{\tilde{\eta}}\) is normalized by the longitudinal lattice spacing \(a_{\tilde{\eta}}\). We first consider the initial condition, the equation of motion, and the Gauss's law for the discretized CYM field. The gauge field and longitudinal electric field at the initial proper time, shown in Eq. (15) and Eq. (16), are discretized in a way that has been done in many papers (first in Ref. 
[3]), \[U_{i,x}=\left(U_{i,x}^{(1)}+U_{i,x}^{(2)}\right)\left(U_{i,x}^{(1) \dagger}+U_{i,x}^{(2)\dagger}\right)^{-1}\,\quad U_{\tilde{\eta},x} =I\, \tag{22}\] and \[E_{x}^{\tilde{\eta}} =\frac{i}{4g}\sum_{i}\Bigl{[}\left(U_{i,x+\hat{\eta}/2}-I\right) \left(U_{i,x+\hat{\eta}/2}^{(2)\dagger}-U_{i,x+\hat{\eta}/2}^{(1)\dagger} \right)\] \[+\left(U_{i,x+\hat{\eta}/2-\hat{\imath}}^{\dagger}-I\right) \left(U_{i,x+\hat{\eta}/2-\hat{\imath}}^{(2)}-U_{i,x+\hat{\eta}/2-\hat{\imath }}^{(1)}\right)-\ {\rm h.c.}\Bigr{]}\, \tag{23}\] where \(U_{i,x}^{(1/2)}=V_{x}^{(1/2)}V_{x+\hat{\imath}}^{(1/2)\dagger}\) is the link variable for nucleus \(1/2\), respectively, and the way of evaluating the Wilson line on the lattice is given in Appendix A. The initial transverse electric field on the lattice is given by \[E_{x}^{i} =\frac{a_{\tilde{\eta}}x^{-}}{g}V_{x}^{(1)}\left[\partial_{i}^{ \rm F}\partial_{\perp}^{-2}\rho_{\rm cov}^{(1)}(x^{-},\mathbf{x}_{\perp})\right]V_{x }^{(1)\dagger}\] \[+\frac{a_{\tilde{\eta}}x^{+}}{g}V_{x}^{(2)}\left[\partial_{i}^{ \rm F}\partial_{\perp}^{-2}\rho_{\rm cov}^{(2)}(x^{+},\mathbf{x}_{\perp})\right]V_{x }^{(2)\dagger}\, \tag{24}\] where \(\partial^{\rm F}\) is a forward difference, and \(\partial_{\perp}^{-2}\rho_{\rm cov}^{(1/2)}\) is obtained through discretized Fourier transformation in the transverse directions, \[\partial_{\perp}^{-2}\rho_{\rm cov}^{(1/2)}(x^{\mp},\mathbf{x}_{\perp})=\frac{1}{L_{ \perp}^{2}}\sum_{k_{1},k_{2}=0}^{L_{\perp}}\frac{\tilde{\rho}_{\rm cov}^{(1/2)}(x ^{\mp},\mathbf{k}_{\perp})}{k_{\rm lat,\perp}^{2}}e^{i\mathbf{x}_{\perp}\cdot\mathbf{k}_{ \perp}}. \tag{25}\] where \(\mathbf{k}_{\perp}=(k_{1},k_{2})=2\pi/L(n_{1},n_{2})\) (\(n_{1},n_{2}=0,1,\cdots,L_{\perp}-1\)) is a wave number on the lattice, \(k_{\rm lat,\perp}=2\sqrt{\sin^{2}\frac{k_{1}}{2}+\sin^{2}\frac{k_{2}}{2}}\) is the transverse momentum on the lattice, and \(\tilde{\rho}_{\rm cov}^{(1/2)}\) is the discrete Fourier transform of \(\rho_{\rm cov}^{(1/2)}\) in the transverse direction, \(\tilde{\rho}_{\rm cov}^{(1/2)}(x^{\mp},\mathbf{k}_{\perp})=\sum_{x^{1}x^{2}}\rho_{ \rm cov}^{(1/2)}(x^{\mp},\mathbf{x}_{\perp})e^{-i\mathbf{x}_{\perp}\cdot\mathbf{k}_{ \perp}}\). As will be explained in the later sections, we introduce the infrared regulator in \(\rho\), and as a result, the classical color charge vanishes at \(k_{\rm lat,\perp}=0\). To obtain the equation of motion and Gauss's law for the discretized CYM field with the dynamical current, we begin with the case in the absence of the dynamical current [9], \[\partial_{\tilde{\tau}}U_{i,x} =ig\frac{g_{ii}E^{i}_{x}}{a_{\tilde{\tau}}\tilde{\tau}}U_{i,x}\, \tag{26}\] \[\partial_{\tilde{\tau}}E^{i}_{x} =-\frac{ia_{\tilde{\eta}}\tilde{\tau}}{2g}\sum_{j\neq i}g^{ii}g^{jj }\left[W_{ij,x}-U^{\dagger}_{j,x-\hat{j}}W_{ij,x-\hat{j}}U_{j,x-\hat{j}}\right]\,\] (27) \[\sum_{i=1,2,\tilde{\eta}}\left(E^{i}_{x}-U^{\dagger}_{i,x-\hat{i} }E^{i}_{x-\hat{i}}U_{i,x-\hat{i}}\right)=0\, \tag{28}\] where the index \(i\) runs over \(1,2\) and \(\tilde{\eta}\), and \(g^{\mu\nu}=\mathrm{diag}(1,-1,-1,-(a_{\tilde{\eta}}\tilde{\tau})^{-2})\) is the metric of the Milne coordinates on the lattice, and \(W_{ij,x}\equiv U_{ij,x}-U^{\dagger}_{ij,x}\) is the difference of the plaquette, \(U_{ij,x}=U_{i,x}U_{j,x+\hat{i}}U^{\dagger}_{i,x+\hat{j}}U^{\dagger}_{j,x}\), and its Hermite conjugate. In below we discuss the definition of the electric field on the lattice. 
There are two ways to define the electric field on the lattice, the left and right electric fields, connected by the relation \(E^{i}_{\mathrm{R},x}=-U^{\dagger}_{i,x-\hat{i}}E^{i}_{\mathrm{L},x-\hat{i}}U_ {i,x-\hat{i}}\). The electric field we consider here is the left one, \(E^{i}_{x}=E^{i}_{\mathrm{L},x}\). Therefore, we have to add the left current, \(\tilde{J}^{(1/2)}_{\mathrm{L},x}\), to the equation of motion for \(E^{\tilde{\eta}}\) as shown in Eq. (20), \[\partial_{\tilde{\tau}}E^{\tilde{\eta}}_{x} =-\frac{i}{2ga_{\tilde{\eta}}\tilde{\tau}}\sum_{j\neq\tilde{\eta} }\left[W_{\tilde{\eta}j,x}-U^{\dagger}_{j,x-\hat{j}}W_{\tilde{\eta}j,x-\hat{j} }U_{j,x-\hat{j}}\right]\] \[-\frac{1}{\tilde{\tau}}\left[\tilde{J}^{(1)}_{\mathrm{L},x}- \tilde{J}^{(2)}_{\mathrm{L},x}\right]. \tag{29}\] where \(\tilde{J}^{(1/2)}_{\mathrm{L},x}\) is located on \(x+\hat{\tilde{\eta}}/2\) as well as the left electric field \(E^{\tilde{\eta}}_{x}\). On the other hand, the Gauss's law, which is shown in Eq. (28), is independent of the choice of the electric field on the lattice since the left-hand side of Eq. (28) is nothing but the sum of the left and right electric fields, \[\sum_{i=1,2,\tilde{\eta}}\left(E^{i}_{x}-U^{\dagger}_{i,x-\hat{i }}E^{i}_{x-\hat{i}}U_{i,x-\hat{i}}\right)=\sum_{i=1,2,\tilde{\eta}}\left(E^{i }_{\mathrm{L},x}+E^{i}_{\mathrm{R},x}\right). \tag{30}\] Therefore, we add the current \(\tilde{J}^{(1/2)}\), which is independent of the left/right choice, to Gauss's law as shown in Eq.(21), \[\sum_{i=1,2,\tilde{\eta}}\left(E^{i}_{x}-U^{\dagger}_{i,x-\hat{i }}E^{i}_{x-\hat{i}}U_{i,x-\hat{i}}\right)=a_{\tilde{\eta}}\left(\tilde{J}^{(1 )}_{x}+\tilde{J}^{(2)}_{x}\right). \tag{31}\] Next, we consider the initial condition for the discretized currents \(\tilde{J}^{(1/2)}_{x}\) and \(\tilde{J}^{(1/2)}_{\mathrm{L},x}\), and the continuity equations for them. The initial condition for \(\tilde{J}^{(1/2)}\) in the modified Fock-Schwinger gauge condition, shown in Eq. (13), is discretized as \[\tilde{J}^{(1/2)}_{x}=\frac{x^{\mp}}{g}V^{(1/2)}_{\mathrm{cov},x}V^{(1/2)}_{x} . \tag{32}\] Here we employ the gauge transformations for the color charge density shown in Eq. (4). The initial condition for the left current is assumed to have the same expression as that for \(\tilde{J}^{(1/2)}_{x}\), \[\tilde{J}^{(1/2)}_{\mathrm{L},x}=\frac{x^{\mp}}{g}V^{(1/2)}_{x+\frac{\hat{ \tilde{\eta}}}{2}}\rho^{(1/2)}_{\mathrm{cov},x+\frac{\hat{\tilde{\eta}}}{2}}V^{( 1/2)\dagger}_{x+\frac{\hat{\tilde{\eta}}}{2}}. \tag{33}\] Since \(U_{\tilde{\eta}}=I\) at \(\tilde{\tau}_{\mathrm{ini}}\), the longitudinal electric field \(E^{\tilde{\eta}}\) is initially independent of the left/right choice, \(E^{\tilde{\eta}}_{\mathrm{L}}=-E^{\tilde{\eta}}_{\mathrm{R}}\). Thus, it is reasonable to assume that the color current is also independent of the left/right choice at the initial proper time. Then, to get the continuity equations, we perform \(\tilde{\tau}\) derivative on the left and right hands of the Gauss's law given in Eq. (31), \[a_{\tilde{\eta}}\partial_{\tilde{\tau}}\left(\tilde{J}^{(1)}_{x}+ \tilde{J}^{(2)}_{x}\right)=-\frac{1}{\tilde{\tau}}\left[\tilde{J}^{(1)}_{ \mathrm{L},x}-\tilde{J}^{(2)}_{\mathrm{L},x}\right]\] \[+\frac{1}{\tilde{\tau}}U^{\dagger}_{\tilde{\eta},x-\hat{\tilde{ \eta}}}\left[\tilde{J}^{(1)}_{\mathrm{L},x-\hat{\tilde{\eta}}}-\tilde{J}^{(2)}_{ \mathrm{L},x-\hat{\tilde{\eta}}}\right]U_{\tilde{\eta},x-\hat{\tilde{\eta}}}. \tag{34}\] In accordance with the discussions in the continuum limit (see discussions above Eq. 
(17)), we assume that the color currents, \(\tilde{J}^{(1)}\) and \(\tilde{J}^{(2)}\), evolve according to \[\tilde{\tau}\partial_{\tilde{\tau}}\tilde{J}^{(1/2)}_{x}=\mp\frac{1}{a_{ \tilde{\eta}}}\left[\tilde{J}^{(1/2)}_{\mathrm{L},x}-U^{\dagger}_{\tilde{\eta},x- \hat{\tilde{\eta}}}\tilde{J}^{(1/2)}_{\mathrm{L},x-\hat{\tilde{\eta}}}U_{\tilde{ \eta},x-\hat{\tilde{\eta}}}\right]. \tag{35}\] This equation agrees with Eq. (17) in the continuum limit. In addition, following Ref. [22], we assume that the evolution equation for the left current is given as, \[\tilde{\tau}\partial_{\tilde{\tau}}\tilde{J}^{(1/2)}_{\mathrm{L},x}=\mp\frac{1}{ a_{\tilde{\eta}}}\left[U_{\tilde{\eta},x}\tilde{J}^{(1/2)}_{x+\hat{\tilde{\eta}}}U^{ \dagger}_{\tilde{\eta},x}-\tilde{J}^{(1/2)}_{x}\right]. \tag{36}\] This equation also agrees with Eq. (17) in the continuum limit. ### Energy-momentum tensor in discretized space We define the energy-momentum (EM) tensor of the CYM field on the lattice. In principle, the EM tensor on discrete spacetime cannot be defined as Noether current due to translational symmetry breaking by the lattice. To define an appropriate "EM tensor" on the grid point for the real-time lattice simulation, we translate the expression of the EM tensor in continuous spacetime onto lattice, \[T^{\mu\nu}_{x} =-g^{\kappa\sigma}F_{(\mathrm{grid})\mu\kappa,x}F_{(\mathrm{grid}) \nu\sigma,x}\] \[+\frac{1}{4}g_{\mu\nu}g^{\alpha\beta}g^{\gamma\omega}F_{(\mathrm{ grid})\alpha\gamma,x}F_{(\mathrm{grid})\beta\omega,x}\, \tag{37}\] where the field strength on the grid point can be written with the electric and magnetic field on the grid point, \[F^{i\tilde{\pi}}_{(\mathrm{grid})x} =\frac{E^{i}_{(\mathrm{grid})x}}{\sqrt{-\mathrm{det}g_{\mu\nu}}}\, \tag{38}\] \[F^{ij}_{(\mathrm{grid})x} =\epsilon^{ijk}B^{k}_{(\mathrm{grid})x}. \tag{39}\] The electric field on the grid point is defined as the distance between the left and right electric field, \[E^{i}_{(\text{grid})x}\equiv\frac{1}{2}\left[E^{i}_{\text{L},x}-E^{ i}_{\text{R},x+\hat{i}}\right]=\frac{1}{2}\left[E^{i}_{x}+U^{\dagger}_{i,x}E^{i}_{x}U_{ i,x}\right]\, \tag{40}\] and the magnetic field on the grid point is defined using the 4 plaquettes in the neighborhood of the grid point, \[B^{i}_{(\text{grid})x}\equiv\frac{1}{8} \Big{[}\Big{(}U_{i,x}U_{j,x+\hat{i}}U^{\dagger}_{i,x+\hat{j}}U^{ \dagger}_{j,x}\] \[+U^{\dagger}_{j,x-\hat{j}}U_{i,x-\hat{j}}U_{j,x+\hat{i}-\hat{j}}U^ {\dagger}_{i,x}\] \[+U^{\dagger}_{i,x-\hat{i}}U^{\dagger}_{j,x-\hat{i}-\hat{j}}U_{i, x-\hat{i}-\hat{j}}U_{j,x-\hat{j}}\] \[+U_{j,x}U^{\dagger}_{i,x-\hat{i}+\hat{j}}U^{\dagger}_{j,x-\hat{i }}U_{i,x-\hat{i}}\Big{)}-(\text{h.c.})\Big{]}. \tag{41}\] This discretized EM tensor should agree with the continuous one in the continuum limit. To confirm that the \(3+1\)D glasma simulation performed on the lattice works without problems, in the following section, we check the two continuity equations. After that we will apply our method the the central Au-Au collisions. ## III Numerical results We show the numerical results of the \(3+1\)D glasma simulations using the SU(2) CYM field in the Milne coordinates. Our numerical simulations are performed on the \(L_{\perp}^{2}\times L_{\bar{\eta}}\) lattice. The discretized evolution equations on the lattice are given in Appendix B and solved with the leap-flog method. The boundary condition in the transverse directions is periodic, and the CYM field and classical color current are imposed to vanish on the boundary of the \(\bar{\eta}\) direction. 
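Before turning to the results, a minimal NumPy sketch (ours, for illustration only) of the elementary lattice objects used above: the plaquette \(U_{ij,x}\), the combination \(W_{ij,x}=U_{ij,x}-U_{ij,x}^{\dagger}\) entering Eqs. (27)-(29), and the four-plaquette ("clover") average behind the grid magnetic field of Eq. (41). Links are stored as complex \(2\times 2\) matrices on a small periodic lattice; metric and normalization factors are omitted.

```python
import numpy as np

L = 8  # small periodic lattice, purely for illustration
# SU(2) link variables U[mu][x1, x2, x3] as 2x2 complex matrices; here the trivial field U = 1
U = np.tile(np.eye(2, dtype=complex), (3, L, L, L, 1, 1))

def dag(M):
    """Hermitian conjugate of a field of matrices."""
    return np.conj(np.swapaxes(M, -1, -2))

def shift(field, mu, n=1):
    """Field evaluated at x + n*mu_hat (periodic boundaries)."""
    return np.roll(field, -n, axis=mu)

def plaquette(U, i, j):
    """U_{ij,x} = U_{i,x} U_{j,x+i} U^dag_{i,x+j} U^dag_{j,x}."""
    return U[i] @ shift(U[j], i) @ dag(shift(U[i], j)) @ dag(U[j])

def W(U, i, j):
    """W_{ij,x} = U_{ij,x} - U^dag_{ij,x}, as used in the discretized equations of motion."""
    P = plaquette(U, i, j)
    return P - dag(P)

def clover(U, i, j):
    """Average of the four plaquettes around x in the (i, j) plane, minus h.c. (cf. Eq. (41))."""
    Ui, Uj = U[i], U[j]
    leaf1 = plaquette(U, i, j)
    leaf2 = dag(shift(Uj, j, -1)) @ shift(Ui, j, -1) @ shift(shift(Uj, i), j, -1) @ dag(Ui)
    leaf3 = (dag(shift(Ui, i, -1)) @ dag(shift(shift(Uj, i, -1), j, -1))
             @ shift(shift(Ui, i, -1), j, -1) @ shift(Uj, j, -1))
    leaf4 = Uj @ dag(shift(shift(Ui, i, -1), j)) @ dag(shift(Uj, i, -1)) @ shift(Ui, i, -1)
    S = leaf1 + leaf2 + leaf3 + leaf4
    return (S - dag(S)) / 8.0

# For the trivial field every plaquette is the identity, so W and the clover vanish.
print(np.abs(W(U, 0, 1)).max(), np.abs(clover(U, 0, 1)).max())
```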
This section is organized as follows: In Sec. III.1, we test two relations that should hold in continuous spacetime and check the consistency with calculations performed in Ref. [22]. In Sec. III.2, we show the evolution of some observables using the setup that corresponds to the central collisions of Au-Au at \(\sqrt{s}=200\) GeV. ### Check of our calculations We first confirm that our simulations do not violate two relations derived from the continuity equation. Then, we calculate the transverse pressure and energy density in the local rest frame and check their consistency with those calculated in Ref. [22]. The paper [22] simulates the \(3+1\)D glasma evolution on a lattice using the Minkowski coordinates. The initial color charge density considered here is assumed to be the multiplication of the 1-dimensional normal distribution function \(N_{\text{1D}}\) with the variance \(R/(\sqrt{2})\), which represents the longitudinal shape of the nucleus, and the random number \(\Gamma^{(1/2)}\) \[\rho^{(1/2)}(x^{\mp},\mathbf{x}_{\perp})=N_{\text{1D}}(x^{\mp}-x_{ \text{c}},\frac{R}{\gamma\sqrt{2}})\Gamma^{(1/2)}(\mathbf{x}_{\perp})\, \tag{42}\] where \(x_{\text{c}}\) is the center position of the nuclei in the \(x^{\mp}\) direction, and \(R\) and \(\gamma\) are the radius and the gamma factor of the nuclei. The random number \(\Gamma^{(1/2)}\) satisfies the following event average, \[\langle\Gamma^{(1/2)a}(\mathbf{x}_{\perp})\Gamma^{(1/2)b}(\mathbf{x}^{ \prime}_{\perp})\rangle_{\text{ave}}=\delta^{a,b}Q_{s}^{2}N_{\text{2D}}(\mathbf{x}_ {\perp}-\mathbf{x}^{\prime}_{\perp},\sigma_{\perp})\, \tag{43}\] where \(Q_{s}\) is the saturation scale. To introduce an ultraviolet cutoff for the transverse momentum of \(\rho\), we use the 2-dimensional normal distribution function with the variance \(\sigma_{\perp}\), \(N_{\text{2D}}(\mathbf{x}_{\perp}-\mathbf{x}^{\prime}_{\perp},\sigma_{\perp})\), in Eq. (43) instead of the delta function shown in Eq. (7). The transverse ultraviolet cutoff is necessary to regulate divergence in the local operator of the gauge fields [31; 32]. In addition, we also introduce an infrared cutoff \(m\) by multiplying the regulation factor by the color charge density in the transverse momentum space, \[\tilde{\rho}^{(1/2)}(x^{\mp},\mathbf{k}_{\perp})\to\frac{k_{\text{lat}, \perp}^{2}}{m^{2}+k_{\text{lat},\perp}^{2}}\tilde{\rho}^{(1/2)}(x^{\mp},\mathbf{k} _{\perp})\, \tag{44}\] which means that the contribution of scale less than \(m\) is suppressed. In the calculations in this section, the parameters shown in Table 1 are used, which is consistent with the previous calculations in Ref. [22]. The system size in the longitudinal direction, \(a_{\bar{\eta}}\times L_{\bar{\eta}}\), is taken such that both nuclei are included in the lattice. The lattice spacing in the longitudinal direction, \(a_{\bar{\eta}}\), is small enough not to affect the results. The center position of the nuclei in the light-cone coordinates, \(x^{\mp}=x_{\text{c}}\), is taken such that the overlap of the two incoming nuclei is negligibly small at the initial proper time, \(\tilde{\tau}_{\text{ini}}\). 
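As a concrete illustration of how such a color charge density can be generated (a simplified sketch under our own conventions, not the production code; widths and normalizations are only schematic), one can draw Gaussian white noise on the transverse lattice, apply an ultraviolet smearing of width \(\sigma_{\perp}\) together with the infrared regulator of Eq. (44) in Fourier space, and multiply by the longitudinal profile of Eq. (42):

```python
import numpy as np

L_perp, Qs = 128, 1.0
a_perp = 1.0 / (8 * Qs)                   # transverse lattice spacing (Table 1)
m, sigma_perp = Qs, np.sqrt(2.0) / (10 * Qs)
R_over_gamma = 1.0 / (16 * Qs)            # nuclear thickness R/gamma

def gaussian_1d(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Lattice momenta (cf. the definition of k_lat below Eq. (25)) and regulators
k = 2 * np.pi * np.fft.fftfreq(L_perp)
kx, ky = np.meshgrid(k, k, indexing="ij")
k_lat2 = 4 * (np.sin(kx / 2) ** 2 + np.sin(ky / 2) ** 2) / a_perp**2
uv = np.exp(-0.5 * k_lat2 * sigma_perp**2)   # schematic UV smearing of width ~ sigma_perp
ir = k_lat2 / (m**2 + k_lat2)                # infrared regulator, Eq. (44); vanishes at k = 0

def sample_rho(n_colors=3, n_slices=64):
    """One event: rho^(1/2)[a, i_xminus, i_x, i_y]; three adjoint colors for SU(2)."""
    xm = (np.arange(n_slices) - n_slices / 2) * (4 * R_over_gamma / n_slices)
    profile = gaussian_1d(xm, R_over_gamma / np.sqrt(2.0))       # longitudinal shape, Eq. (42)
    white = np.random.normal(size=(n_colors, n_slices, L_perp, L_perp))
    filt = np.fft.ifft2(np.fft.fft2(white, axes=(-2, -1)) * uv * ir, axes=(-2, -1)).real
    return Qs * profile[None, :, None, None] * filt              # overall normalization schematic

rho = sample_rho()
print(rho.shape)   # (3, 64, 128, 128)
```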
Here we consider the continuity equations in the Milne coordinates, \[[D_{\mu},T^{\mu\bar{\tau}}]=-E^{\bar{\eta}}J^{\bar{\eta}}\, \tag{45}\] \[[D_{\mu},T^{\mu\bar{\eta}}]=0\, \tag{46}\] \begin{table} \begin{tabular}{c|c} \hline \hline \(L_{\perp}\) & \(128\) \\ \hline \(L_{\bar{\eta}}\) & \(224,448,896\) \\ \(a_{\perp}\) & \(1/(8Q_{s})\) \\ \(a_{\bar{\eta}}\) & \(10/L_{\bar{\eta}}\) \\ \(\eta\) & \(Q_{s}\) \\ \(\sigma_{\perp}\) & \(\sqrt{2}/(10Q_{s})\) \\ \(R/\gamma\) & \(1/(2Q_{s}),1/(4Q_{s}),1/(8Q_{s}),1/(16Q_{s})\) \\ \(\tilde{\tau}_{\text{ini}}\) & \(0.1a_{\perp}\) \\ \(x_{\text{c}}\) & \(\tilde{\tau}_{\text{ini}}/\sqrt{2}+3R/\gamma\) \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters used in Sec. III.1 or expanded explicitly as \[\frac{1}{\tilde{\tau}}\left\{\partial_{\tilde{\tau}}\left[\tilde{ \tau}T^{\tilde{\tau}\tilde{\tau}}\right]+\tilde{\tau}^{2}T^{\tilde{\eta}\tilde{ \eta}}\right\}+\partial_{1}T^{1\tilde{\tau}}+\partial_{2}T^{2\tilde{\tau}}+ \partial_{\tilde{\eta}}T^{\tilde{\eta}\tilde{\tau}}\] \[=-E^{\tilde{\eta}}J^{\tilde{\eta}}\, \tag{47}\] \[\left(\partial_{\tilde{\tau}}T^{\tilde{\eta}\tilde{\tau}}+\frac{3 T^{\tilde{\eta}\tilde{\tau}}}{\tilde{\tau}}\right)+\partial_{1}T^{1\tilde{\eta}}+ \partial_{2}T^{2\tilde{\eta}}+\partial_{\tilde{\eta}}T^{\tilde{\eta}\tilde{ \eta}}=0. \tag{48}\] By integrating the left-hand and right-hand sides of these equations over space and dropping the surface terms, we obtain \[\tilde{\tau}\partial_{\tilde{\tau}}\tau^{\tilde{\tau}\tilde{\tau} }=-\left(\tau^{\tilde{\tau}\tilde{\tau}}+\tilde{\tau}^{2}\tau^{\tilde{\eta} \tilde{\eta}}+\kappa\right)\, \tag{49}\] \[\tilde{\tau}\partial_{\tilde{\tau}}\left(\tilde{\tau}^{3}\tau^{ \tilde{\eta}\tilde{\tau}}\right)=0. \tag{50}\] where \(\tau^{\mu\nu}\equiv\int dxdyd\tilde{\eta}T^{\mu\nu}/V\) and \(\kappa\equiv\int dxdyd\tilde{\eta}E^{\tilde{\eta}}J^{\tilde{\eta}}/V\). To measure the violation of these relations in actual simulations, we use the following quantities, \[C_{1} \equiv-\frac{2a_{\theta}\left(\tau^{\tilde{\tau}\tilde{\tau}}+ \tilde{\tau}^{2}\tau^{\tilde{\eta}\tilde{\eta}}+\kappa\right)|_{\theta=\theta _{\text{ini}}+na_{\theta}}}{\tau^{\tilde{\tau}\tilde{\tau}}|_{\theta=\theta_{ \text{ini}}+(n+1)a_{\theta}}-\tau^{\tilde{\tau}\tilde{\tau}}|_{\theta=\theta_{ \text{ini}}+(n-1)a_{\theta}}}\, \tag{51}\] \[C_{2} \equiv\frac{\tilde{\tau}^{3}\tau^{\tilde{\eta}\tilde{\tau}}|_{ \theta=\theta_{\text{ini}}+na_{\theta}}}{\tilde{\tau}^{3}\tau^{\tilde{\eta} \tilde{\tau}}|_{\theta=\theta_{\text{ini}}=\ln\tau_{\text{ini}}}}\, \tag{52}\] where \(\theta=\ln\tilde{\tau}\) is the time variable used for solving the evolution equations numerically by the difference method, \(\theta_{\text{ini}}=\ln\tilde{\tau}_{\text{ini}}\) is \(\theta\) at the initial proper time, \(n\) is the time step, and \(a_{\theta}\) is the step size (See Appendix B for details.). Both quantities are normalized in such a way that they approach 1 when the violations of the continuity equations are smaller. In the upper and lower panels of Fig. 3, we show the evolution of \(C_{1}-1\) and \(C_{2}-1\), respectively, calculated with \(N_{\tilde{\eta}}=224\), \(448\) and \(896\) using the common random number \(\Gamma^{(1/2)}\) from the same seed. The deviations of \(C_{1}-1\) and \(C_{2}-1\) from \(0\) are found to be small and stable in changes of \(N_{\tilde{\eta}}\). Thus, the effect of the discretization on the dynamics is considered tiny in our calculations with large \(N_{\tilde{\eta}}\) and small \(a_{\tilde{\eta}}\). 
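For reference, both monitors follow directly from the stored volume averages; a minimal sketch (array names are ours) of Eqs. (51) and (52), assuming \(\tau^{\tilde{\tau}\tilde{\tau}}\), \(\tau^{\tilde{\eta}\tilde{\eta}}\), \(\tau^{\tilde{\eta}\tilde{\tau}}\) and \(\kappa\) have been recorded at every step \(\theta_{n}=\theta_{\rm ini}+na_{\theta}\):

```python
import numpy as np

def c1(tau_tt, tau_ee, kappa, tilde_tau, a_theta, n):
    """Eq. (51): C1 at step n from volume-averaged tau^{tau tau}, tau^{eta eta} and kappa."""
    num = -2.0 * a_theta * (tau_tt[n] + tilde_tau[n] ** 2 * tau_ee[n] + kappa[n])
    den = tau_tt[n + 1] - tau_tt[n - 1]      # centered difference in theta = ln(tilde_tau)
    return num / den

def c2(tau_et, tilde_tau, n):
    """Eq. (52): C2 at step n, normalized by the initial value of tilde_tau^3 tau^{eta tau}."""
    return (tilde_tau[n] ** 3 * tau_et[n]) / (tilde_tau[0] ** 3 * tau_et[0])

# Dummy time series just to exercise the functions (illustrative only)
steps, a_theta = 100, 1e-3
theta = np.log(0.1) + a_theta * np.arange(steps)
tilde_tau = np.exp(theta)
tau_tt = 1.0 / tilde_tau
tau_ee = np.zeros(steps)
tau_et = np.ones(steps)
kappa = np.zeros(steps)
print(c1(tau_tt, tau_ee, kappa, tilde_tau, a_theta, 50), c2(tau_et, tilde_tau, 50))
```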
Figure 3: Test of the violation of relations (49) and (50), which should hold in the continuum limit as a result of the continuity equations for the EM tensor. The upper and lower panels show the quantities defined in Eq. (51) and Eq. (52) that are introduced to measure the violations of Eq. (49) and Eq. (50), respectively. Next, we calculate the transverse pressure and energy density in the local rest frame on an \(L_{\perp}^{2}\times L_{\tilde{\eta}}=128^{2}\times 448\) lattice. To focus only on the EM tensor that the glasma has, we define the subtracted EM tensor as \[T^{\mu\nu}_{\text{sub}}\equiv T^{\mu\nu}-T^{\mu\nu}_{(1)}-T^{\mu\nu}_{(2)}\, \tag{53}\] where \(T^{\mu\nu}\) is the total EM tensor and \(T^{\mu\nu}_{(1/2)}\) is the EM tensor of the nucleus (1/2), respectively. To evaluate \(T^{\mu\nu}_{(1/2)}\), we run two additional simulations in parallel, each containing only one nucleus. If the color charge densities change little in the collision, the subtracted EM tensor \(T^{\mu\nu}_{\text{sub}}\) can be considered as the EM tensor of the glasma. This subtraction method has been used in Ref. [22] as well. To check the consistency of our method with theirs, we calculate the subtracted transverse pressure and subtracted energy density in the local rest frame, averaged over the transverse plane, which are also calculated in Ref. [22], \[P_{\perp}\equiv\frac{\int d^{2}x_{\perp}\left[T_{\rm sub}^{11}+T_{\rm sub}^{22}\right]}{2V_{\perp}}\, \tag{54}\] \[\varepsilon_{\rm LRF}\equiv\frac{\int d^{2}x_{\perp}\left[T_{\rm sub}^{11}+T_{\rm sub}^{22}+\sqrt{\left(T_{\rm sub}^{\tilde{\tau}\tilde{\tau}}+T_{\rm sub}^{\tilde{\eta}\tilde{\eta}}\right)^{2}-4(T_{\rm sub}^{\tilde{\eta}\tilde{\tau}})^{2}}\right]}{2V_{\perp}} \tag{55}\] where \(V_{\perp}=\int d^{2}\mathbf{x}_{\perp}\). In Ref. [22], these quantities are calculated using Minkowski coordinates, and thus the expression of the energy density in the local rest frame is different, \[\varepsilon_{\rm LRF}\equiv\frac{\int d^{2}x_{\perp}\left[T_{\rm sub}^{11}+T_{\rm sub}^{22}+\sqrt{\left(T_{\rm sub}^{00}+T_{\rm sub}^{33}\right)^{2}-4\left(T_{\rm sub}^{30}\right)^{2}}\right]}{2V_{\perp}}. \tag{56}\] The consistency between Eq. (55) and Eq. (56) can be checked using a general coordinate transformation. Figure 4 shows the \(\eta\) dependence of the transverse pressures normalized by the proper time and the saturation scale, \(\tau P_{\perp}/Q_{s}^{3}\), for different thicknesses, \(Q_{s}R/\gamma=1/2,1/4,1/8\) and \(1/16\). These results are event averages of 50 independent simulations, each given a different random number \(\Gamma^{(1/2)}\). Here \(\tau=\sqrt{2(x^{+}-x_{c})(x^{-}-x_{c})}\) and \(\eta=\frac{1}{2}\ln\frac{x^{+}-x_{c}}{x^{-}-x_{c}}\) are the usual Milne coordinates, in which the central positions of the two nuclei coincide at \(\tau=0\). Since the two Milne coordinates are in one-to-one correspondence as \((\tau=\tau(\tilde{\tau},\tilde{\eta}),\eta=\eta(\tilde{\tau},\tilde{\eta}))\), numerical simulations with discrete \((\tilde{\tau},\tilde{\eta})\) can only provide observations at a large number of discrete points spread across the \((\tau,\eta)\) plane. Therefore, the result at a fixed proper time \(\tau\) shown in Fig. 4 (and the following figures) is actually sampled from results within \([0.99\tau,1.01\tau]\). The upper panel of Fig.
4 shows that the curves of \(\tau P_{\perp}/Q_{s}^{3}\) for \(Q_{s}R/\gamma=1/16\) at \(Q_{s}\tau=1.5,3.0,4.5\) and \(6.0\) agree within the margin of error, which indicates that \(P_{\perp}\) decreases as \(\tau^{-1}\) at \(1.5\leq Q_{s}\tau\leq 6.0\). While this scaling behavior is imposed as an assumption in Ref. [22], our results show that it is established dynamically as time elapses after the collision. We present a discussion about the scaling behavior in Appendix D. The lower panel of Fig. 4 shows results for different thicknesses at late times, where \(P_{\perp}\) falls as \(\tau^{-1}\). It is found that the transverse pressure has a similar peak around \(\eta=0\) regardless of the thickness of the nucleus. This peak around \(\eta=0\) becomes milder as the nucleus becomes thinner. This behavior is understandable since the glasma becomes boost-invariant when a nucleus is infinitely thin, as explained in Sec. II.1. These results reproduce well Fig. 8 in Ref. [22]. The most important point to note is that we can reproduce these results using a number of grid points in the longitudinal direction that is about 4.5 times smaller. The number of grid points in the \(z\) direction in the calculation in Ref. [22] is 2048, while the number of grid points in the \(\tilde{\eta}\) direction in our calculation is 448. Figure 4: The \(\eta\) dependence of the transverse pressure normalized by the proper time and the saturation scale, \(\tau P_{\perp}/Q_{s}^{3}\). All results shown here are event averages of 50 independent simulations. The upper panel shows \(\tau P_{\perp}/Q_{s}^{3}\) for \(Q_{s}R/\gamma=1/16\) at \(Q_{s}\tau=1.5,3.0,4.5\) and \(6.0\). The lower panel shows \(\tau P_{\perp}/Q_{s}^{3}\) for \(Q_{s}R/\gamma=1/16,1/8,1/4\) and \(1/2\), which are calculated at \(Q_{s}\tau=1.5,3.0,4.5\) and \(6.0\), respectively. Figure 5 shows the \(\eta\) dependence of the energy density in the local rest frame normalized by the proper time and the saturation scale, \(\tau\varepsilon_{\rm LRF}/Q_{s}^{3}\). The upper and lower panels of Fig. 5 are obtained from the same simulations as shown in the upper and lower panels of Fig. 4, respectively. The upper panel of Fig. 5 shows that, in the late-time region where \(P_{\perp}\) decreases as \(\tau^{-1}\), \(\varepsilon_{\rm LRF}\) also decreases as \(\tau^{-1}\). The lower panel of Fig. 5 shows that \(\varepsilon_{\rm LRF}\) is about 2 times \(P_{\perp}\) at late times, which means that the transverse pressure \(P_{\perp}\) is much larger than the longitudinal pressure in the local rest frame, defined as \(P_{\rm LRF,L}\equiv\varepsilon_{\rm LRF}-2P_{\perp}\). This definition of \(P_{\rm LRF,L}\) follows from the tracelessness of the EM tensor, \(T_{\mu}^{\mu}=0\), which is a consequence of conformal symmetry. It must be mentioned here that the lower panel of Fig. 5 is inconsistent with the lower figure of Fig. 9 in Ref. [22], and the discrepancy of \(\varepsilon_{\rm LRF}\) between our and their results becomes larger as \(Q_{s}R/\gamma\) becomes smaller. Given that our and their calculations for the transverse pressure completely agree, this discrepancy in the local energy density is not understood, and we leave the investigation of its cause to a future task.
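For completeness, both diagnostics compared with Ref. [22] are simple functionals of the transversely averaged, subtracted EM tensor; a short sketch (array names are ours) of Eqs. (54) and (55) as written:

```python
import numpy as np

def transverse_pressure(T_sub):
    """Eq. (54): P_perp from the subtracted EM tensor, averaged over the transverse plane.

    T_sub is a dict of lattice fields of shape (L_perp, L_perp) at fixed (tau, eta),
    with keys '11', '22', 'tautau', 'etaeta', 'etatau' (Milne components)."""
    return 0.5 * np.mean(T_sub["11"] + T_sub["22"])

def energy_density_lrf(T_sub):
    """Eq. (55): local-rest-frame energy density from the tau-eta block of T_sub."""
    root = np.sqrt((T_sub["tautau"] + T_sub["etaeta"]) ** 2 - 4.0 * T_sub["etatau"] ** 2)
    return 0.5 * np.mean(T_sub["11"] + T_sub["22"] + root)

# Dummy fields (illustrative numbers only)
L = 128
T_sub = {
    "11": np.full((L, L), 0.5), "22": np.full((L, L), 0.5),
    "tautau": np.full((L, L), 1.0), "etaeta": np.zeros((L, L)),
    "etatau": np.zeros((L, L)),
}
print(transverse_pressure(T_sub), energy_density_lrf(T_sub))
```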
Since the continuity equations are not violated in our simulations, and our calculations are stable for varying lattice sizes and spacings, we believe that there is no fatal problem in our method, at least with respect to the initial condition and the dynamical evolution of the \(3+1\)D glasma. ### Central collisions of Au-Au We show the numerical results using the initial conditions that describe the central Au-Au collisions at \(\sqrt{s}=200\) GeV. The color charge density at the initial proper time \(\tilde{\tau}_{\rm ini}\) is given as the incoherent sum of the color charge density of each nucleon. The color charge density of \(i-\)th nucleon with a radius \(R_{\rm n}\) is assumed to have the Gaussian shape whose center position is \((b_{i}^{1},b_{i}^{2},b_{i}^{\mp})\), \[\rho_{i}^{(1/2)}(x^{\mp},\mathbf{x}_{\perp})\] \[=N_{\rm 1D}(x^{\mp}-b_{i}^{\mp},\frac{R_{\rm n}}{\sqrt{6}\gamma})N_{ \rm 2D}(\mathbf{x}_{\perp}-\mathbf{b}_{\perp,i},\frac{R_{\rm n}}{\sqrt{3}})\Gamma_{i }^{(1/2)}(x^{\mp},\mathbf{x}_{\perp}). \tag{57}\] The random number \(\Gamma_{i}^{(1/2)}\) satisfies the following event average, \[\langle\Gamma_{i}^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\Gamma_{i}^{(1/ 2)b}(x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\rm eve}\] \[=\delta^{a,b}2\pi\left(\frac{2R_{\rm n}^{2}}{3}+\sigma_{\perp}^{2 }\right)\sqrt{2\pi\left(\frac{R_{\rm n}^{2}}{3\gamma^{2}}+\sigma_{\mp}^{2} \right)}\left(g^{2}\bar{\mu}\right)^{2}\] \[\qquad\times N_{\rm 1D}(x^{\mp}-x^{\prime\mp},\sigma_{\mp})N_{ \rm 2D}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},\sigma_{\perp})\, \tag{58}\] where \(\sigma_{\perp}\) and \(\sigma_{\mp}\) are the correlation lengths of \(\Gamma^{(1/2)}\) in the transverse and longitudinal direction, respectively, and \(g^{2}\bar{\mu}\) is the parameter controlling the strength of the color charge density. Then, we can obtain the following relation (detailed derivation is presented in Appendix E), \[\langle\rho_{i}^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\rho_{i}^{(1/2)b} (x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\rm eve}\] \[=\delta^{a,b}\left(g^{2}\bar{\mu}\right)^{2}N_{\rm 1D}(\frac{x^{ \mp}+x^{\prime\mp}}{2}-b_{i}^{\mp},\frac{R_{\rm n}}{\sqrt{3}\gamma})N_{\rm 1D}(x^{ \mp}-x^{\prime\mp},l_{\mp})\] \[\times N_{\rm 2D}(\frac{\mathbf{x}_{\perp}+\mathbf{x}_{\perp}^{\prime}}{2}- \mathbf{b}_{\perp,i},\sqrt{\frac{2}{3}}R_{\rm n})N_{\rm 2D}(\mathbf{x}_{\perp}-\mathbf{x}_{ \perp}^{\prime},l_{\perp})\, \tag{59}\] where \(l_{\perp}\) and \(l_{\mp}\) are the correlation lengths of \(\rho^{(1/2)}\), which are related to \(\sigma_{\perp}\) and \(\sigma_{\mp}\) by \[l_{\perp}^{-2} =\sigma_{\perp}^{-2}+\left(\sqrt{\frac{2}{3}}R_{\rm n}\right)^{- 2}\, \tag{60}\] \[l_{\mp}^{-2} =\sigma_{\mp}^{-2}+\left(\frac{R_{\rm n}}{\sqrt{3}\gamma}\right) ^{-2}. \tag{61}\] We can define the squared color charge density of the nucleon per unit volume \(dx^{-}dxdy\) as \[\left(g^{2}\mu(x^{\mp},\mathbf{x}_{\perp})\right)^{2}\equiv\left(g^{2}\bar{\mu} \right)^{2}N_{\rm 1D}(x^{\mp},\frac{R_{\rm n}}{\sqrt{3}\gamma})N_{\rm 2D}(\mathbf{x}_{ \perp},\sqrt{\frac{2}{3}}R_{\rm n}). \tag{62}\] and its integration over \(x^{\mp}\) at \(\mathbf{x}_{\perp}=0\) is assumed to be proportional to the nucleon saturation scale, \(g^{2}\mu_{\rm 2D,c}=\int dx^{\mp}g^{2}\mu(x^{\mp},0)\propto Q_{\rm n,s}\). 
The center position of the \(i\)-th nucleon, \((\mathbf{b}_{\perp,i},b_{i}^{\mp})\), is sampled according to the Woods-Saxon distribution, \[f_{\rm ws}(x^{\mp},\mathbf{x}_{\perp})\propto\frac{1}{1+\exp\left[\left(\sqrt{(x-b_{\rm imp}/2)^{2}+y^{2}+2\gamma^{2}(x^{\mp}-x_{\rm c})^{2}}-R\right)/a\right]} \tag{63}\] with a nucleus radius \(R\), a surface thickness \(a\) and an impact parameter \(b_{\rm imp}\).

Figure 5: The \(\eta\) dependence of the energy density in the local rest frame normalized by the proper time and the saturation scale, \(\tau\varepsilon_{\rm LRF}/Q_{s}^{3}\). The upper and lower panels are obtained from the same simulations as shown in the upper and lower panels of Fig. 4, respectively.

Then, the color charge density of a nucleus with an atomic number \(A\) has the following event average, \[\langle\rho^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\rho^{(1/2)b}(x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\rm eve}\] \[=\delta^{a,b}\sum_{i=1}^{A}\left[g^{2}\mu(\frac{x^{\mp}+x^{\prime\mp}}{2}-b_{i}^{\mp},\frac{\mathbf{x}_{\perp}+\mathbf{x}_{\perp}^{\prime}}{2}-\mathbf{b}_{\perp,i})\right]^{2}\] \[\times N_{\rm 2D}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},l_{\perp})N_{\rm 1D}(x^{\mp}-x^{\prime\mp},l_{\mp}). \tag{64}\] In the limit where \(R_{\rm n}\) is infinitely small and \(A\) is infinitely large, Eq. (64) becomes \[\langle\rho^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\rho^{(1/2)b}(x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\rm eve}\] \[=\delta^{a,b}Af_{\rm ws}(\mathbf{x}_{\perp},x^{\prime\mp})\left(g^{2}\bar{\mu}\right)^{2}\delta(x^{\mp}-x^{\prime\mp})\delta(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime}). \tag{65}\] This way of determining the color charge density of each nucleon is simple and does not take into account the fluctuations of \(\rho\) expected from high-energy QCD. There are more sophisticated prescriptions that give a more realistic determination of the color charge density. The most widely used is the IP-glasma model [16], in which the color charge density of each nucleon is determined by the saturation scale based on the IP-sat model [33; 34]. Alternatively, the color charge density of each nucleon can be determined from the transverse momentum-dependent (TMD) gluon distribution parametrized by the GBW model [22]. We use the set of parameters listed in Table 2, chosen to describe Au-Au collisions at \(\sqrt{s}=200\) GeV. To simulate a central collision, we choose the impact parameter to be zero, \(b_{\rm imp}=0\). The parameter \(g^{2}\bar{\mu}\) is taken such that \(g^{2}\mu_{\rm 2D,c}=10Q_{\rm n,s}\). The infrared cutoff \(m\) is introduced in the same manner as in the previous section and is taken to be \(0.2\) GeV, of the order of the QCD scale. In this setup, the energy density of the glasma generated in the central collision at \(\tau=1\) fm/c and \(\eta=0\) in the \(x\sim y\sim 0\) region is found to be about \(800\) GeV/fm\({}^{3}\). The longitudinal correlation length \(l_{\mp}\) is taken to be the maximal value \(R_{\rm n}/(\sqrt{3}\gamma)\), which means that the longitudinal correlation comes only from the longitudinal shape of a nucleon. This paper focuses on studying the recoil effect of the dynamical current on the glasma, and we leave a detailed study of the effect of the longitudinal correlation on the \(3+1\)D glasma evolution for future work. Note that the subtraction method used in the previous section is not used in this section.
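A simple way to realize the sampling in Eq. (63) is to draw nucleon centers from a spherical Woods-Saxon profile in the nucleus rest frame and then contract the longitudinal coordinate by \(1/\gamma\). The sketch below illustrates this with rejection sampling; the radius, surface thickness, \(\gamma\) and \(A\) are the values of Table 2, while the function names, the choice \(x_{\rm c}=0\) and the overall setup are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
R, a = 6.38, 0.535        # Woods-Saxon radius and surface thickness (fm)
gamma, A = 108.0, 197     # Lorentz factor and mass number for Au
b_imp, x_c = 0.0, 0.0     # impact parameter and longitudinal center (illustrative)

def woods_saxon(r):
    return 1.0 / (1.0 + np.exp((r - R) / a))

def sample_center():
    """Rejection-sample one nucleon center in the rest frame, then contract the
    longitudinal coordinate and convert it to the light-cone variable x^-/+."""
    while True:
        X, Y, Z = rng.uniform(-R - 5 * a, R + 5 * a, size=3)
        if rng.uniform() < woods_saxon(np.sqrt(X**2 + Y**2 + Z**2)):
            return X + b_imp / 2, Y, x_c + Z / (np.sqrt(2) * gamma)

centers = [sample_center() for _ in range(A)]
```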
In contrast to the calculations in the previous section, the change of the CYM field of the nucleus before and after the collision is not negligibly small in this section. As a result, the subtracted EM tensor defined in Eq. (53) cannot be regarded as the EM tensor consisting of only the glasma contribution. We show the dynamical evolution of the energy density in the central collision. Here the energy density in the Milne coordinates is given by the energy-momentum tensor \(T^{\tau\tau}\), and it is obtained from \(T^{\tilde{\tau}\tilde{\tau}}\), \(T^{\tilde{\eta}\tilde{\eta}}\) and \(T^{\tilde{\tau}\tilde{\eta}}\) using the general coordinate transformation, \[T^{\tau\tau} =\frac{1}{\tilde{\tau}^{2}-2\tilde{\tau}\bar{x}_{\rm c}\cosh\tilde{\eta}+\bar{x}_{\rm c}^{2}}\Big{[}\left(\tilde{\tau}-\bar{x}_{\rm c}\cosh\tilde{\eta}\right)^{2}T^{\tilde{\tau}\tilde{\tau}}\] \[+\left(\bar{x}_{\rm c}\sinh\tilde{\eta}\right)^{2}\tilde{\tau}^{2}T^{\tilde{\eta}\tilde{\eta}}-\bar{x}_{\rm c}\sinh\tilde{\eta}\left(\tilde{\tau}-\bar{x}_{\rm c}\cosh\tilde{\eta}\right)\tilde{\tau}T^{\tilde{\tau}\tilde{\eta}}\Big{]}. \tag{66}\] In Fig. 6, we show the \(\eta\) dependence of the energy density averaged over the transverse plane, \[\varepsilon\equiv\frac{\int dxdyT^{\tau\tau}}{V_{\perp}}\, \tag{67}\] at different proper times in the central collision. The results shown in Fig. 6 are averaged over \(10\) events, and they are normalized by the proper time and the saturation scale of the nucleon. It is found that the normalized energy density \(\tau\varepsilon/Q_{\rm n,s}^{4}\) rises in the large \(\eta\) region. This behavior can be interpreted as the contribution from the outgoing nuclei. On the other hand, \(\tau\varepsilon/Q_{\rm n,s}^{4}\) is found to be nearly a constant function of \(\eta\) in the rapidity region through which the two colliding nuclei have already passed, which indicates that the glasma created in this simulation is nearly boost-invariant. It should be noted that, in contrast to the calculations in the previous section, it is less clear whether the scaling law \(\varepsilon\propto\tau^{-1}\) is established.

\begin{table} \begin{tabular}{c|c} \hline \hline \(L_{\perp}\) & \(674\) \\ \(L_{\tilde{\eta}}\) & \(1792\) \\ \(a_{\perp}\) & \(134/R\) \\ \(a_{\tilde{\eta}}\) & \(8/L_{\tilde{\eta}}\) \\ \(\gamma\) & \(108\) \\ \(A\) & \(197\) \\ \(R\) & \(6.38\) (fm) \\ \(a\) & \(0.535\) (fm) \\ \(R_{\rm n}\) & \(1.01\) (fm) \\ \(Q_{\rm n,s}\) & \(0.5\) (GeV) \\ \(g^{2}\mu_{\rm 2D,c}\) & \(10Q_{\rm n,s}\) \\ \(l_{\perp}\) & \(2.5/Q_{\rm n,s}\) \\ \(l_{\mp}\) & \(R_{\rm n}/(\sqrt{3}\gamma)\) \\ \(m\) & \(0.2\) (GeV) \\ \(b_{\rm imp}\) & \(0,R\) \\ \(\tilde{\tau}_{\rm ini}\) & \(0.1a_{\perp}\) \\ \(x_{\rm c}\) & \(\tilde{\tau}_{\rm ini}/\sqrt{2}+1.8R/\gamma\) \\ \hline \hline \end{tabular} \end{table} Table 2: Parameters used in Sec. III.2

## IV Summary We have proposed a new numerical method for \(3+1\)D glasma simulation in Milne coordinates. In this method, the initial condition of the classical Yang-Mills (CYM) field and the 3D classical color current is prepared at a time before the collision of the two nuclei occurs. Then, the dynamical evolution of the CYM field and the classical color current is tracked, during the process in which the two nuclei collide and pass through each other, by solving the discretized evolution equations. Our numerical calculation is performed in the Milne coordinates \((\tilde{\tau},\tilde{\eta})\), in which the collision has not yet occurred at \(\tilde{\tau}=\tilde{\tau}_{\rm ini}\).
Thus, the Milne coordinates we use differ from the usual Milne coordinates \((\tau,\eta)\), in which the center positions of the two nuclei coincide at \(\tau=0\). However, physical quantities in the usual Milne coordinates, such as \(T^{\tau\tau}\), can be obtained from those in our modified Milne coordinates by a general coordinate transformation. Our method is a new simulation method for the \(3+1\)D glasma evolution with a dynamical color current. The difference between our method and previous methods [19; 20; 21; 22; 23] is that our glasma simulation is performed in the modified Milne coordinates mentioned above, while the previous simulations are performed in Minkowski coordinates. Since numerical simulations on a finite lattice in Milne coordinates correspond to a longitudinally expanding system in terms of the Minkowski coordinates because \(z=\tau\sinh\eta\), numerical simulations in our Milne coordinates require far fewer numerical resources than simulations in Minkowski coordinates. This reduction of numerical cost is important in practical applications, since tracking the dynamical evolution of the 3D glasma requires huge numerical resources. In Sec. III.1, we first confirmed that two relations derived from the continuity equation of the EM tensor are not violated in the actual simulations, which indicates that the discretization effect on the dynamics is tiny and well under control. Then, we checked the consistency of our results with the results shown in Ref. [22], using the same setup as Ref. [22]. As a result, the transverse pressure \(P_{\perp}\) calculated with our method completely agrees with their result. The most important point is that we can reproduce their result using about \(4.5\) times fewer grid points in the longitudinal direction: the number of grid points in the \(z\) direction in their calculation is \(2048\), while the number of grid points in the \(\tilde{\eta}\) direction in our calculation is \(448\). In addition, we have explicitly shown that the transverse pressure decreases as \(P_{\perp}\propto\tau^{-1}\), which is treated as an assumption in Ref. [22]. On the other hand, the energy density in the local rest frame, \(\varepsilon_{\rm LRF}\), calculated with our method does not fully agree with the results shown in Ref. [22]. Our calculation has shown that it is about \(2\) times the transverse pressure, which means that the transverse pressure \(P_{\perp}\) is much larger than the longitudinal pressure in the local rest frame, \(P_{\rm LRF,L}\equiv\varepsilon_{\rm LRF}-2P_{\perp}\).

## Appendix A Evaluation of the Wilson line In this appendix, we describe the numerical evaluation of the Wilson line entering the initial condition. Specifically, \[V_{x}^{(1/2)\dagger}=P_{x\mp}\exp\left[-i\int_{-\infty}^{x^{\mp}}dx^{\prime\mp}\partial_{\perp}^{-2}\rho_{\rm cov}^{(1/2)}(x^{\prime\mp},\mathbf{x}_{\perp})\right]\,.
\tag{11}\] In our setup, the color charge density \(\rho_{\rm cov}^{(1/2)}\) doesn't exist at the sufficiently small \(x^{\mp}\), and thus we can replace the lower bound of the integral in Eq. (11) with a small value \(x_{\rm low}\) such that \(\rho^{(1/2)}|_{x^{\mp}=x_{\rm low}}\sim 0\). Then, for the numerical evaluation of Eq. (11), we divide the exponential function in Eq.(11) into many small parts, \[V_{x}^{(1/2)\dagger}= W_{x}^{(1/2)}\Big{|}_{x^{\tilde{\eta}}=\tilde{\eta}\pm\frac{1}{ 2}\Delta\tilde{\eta}}W_{x}^{(1/2)}\Big{|}_{x^{\tilde{\eta}}=\tilde{\eta}\pm \frac{3}{2}\Delta\eta}\] \[\cdots\ W_{x}^{(1/2)}\Big{|}_{x^{\tilde{\eta}}=\mp\ln\frac{\sqrt{ \pi}a_{\rm ini}}{\tau_{\rm ini}}}, \tag{12}\] where \(W^{(1/2)}\) is spaced with interval \(\Delta\tilde{\eta}\) in \(\tilde{\eta}\) coordinates and is given in the following expression, \[W_{x}^{(1/2)}\] \[=\exp\!\left[-i\,\Big{|}x^{\mp}\big{|}_{x^{\tilde{\eta}}=\tilde{ \eta}+\frac{1}{2}\Delta\tilde{\eta}}-x^{\mp}\big{|}_{x^{\tilde{\eta}}=\tilde{ \eta}-\frac{1}{2}\Delta\tilde{\eta}}\big{|}\,\partial_{\perp}^{-2}\rho_{\rm cov }^{(1/2)}|_{x^{\tilde{\eta}}=\tilde{\eta}}\right]\,. \tag{13}\] Here \(\partial_{\perp}^{-2}\rho_{\rm cov}^{(1/2)}\) is obtained by the discrete Fourier transform as shown in Eq. (25). The interval \(\Delta\tilde{\eta}\) should be small enough to converge the evaluated Wilson line. ## Appendix B Discretization of time direction We show here how to solve the evolution equations shown in Sec. II.3 by the difference method. In the actual calculations, we use the time variable \(\theta=\ln\tilde{\tau}\) instead of \(\tilde{\tau}\). Because of the relation \(\partial_{\theta}=\tilde{\tau}\partial_{\tilde{\tau}}\), we can solve the evolution equations efficiently in the small \(\tau\) region where the numerical calculation is more severe. The discretized classical equation of motion with the step size \(a_{\theta}\) is given by, \[U_{i,x}\big{|}_{\theta=\theta_{\rm ini}+(n+2)a_{\theta}}=e^{\frac{ ia_{\theta}y_{\rm Li}E^{i}}{a_{\theta}}}|_{\theta=\theta_{\rm ini}+(n+1)a_{ \theta}}U_{i,x}|_{\theta=\theta_{\rm ini}+na_{\theta}}\, \tag{14}\] \[E^{i}_{x}|_{\theta=\theta_{\rm ini}+(n+2)a_{\theta}}=E^{i}_{x}|_ {\theta=\theta_{\rm ini}+na_{\theta}}\] \[-a_{\theta}a_{\tilde{\eta}}\Big{\{}\frac{i\tilde{\tau}}{2g}\sum_ {i}g^{ii}g^{jj}\left[W_{ij,x}-U^{\dagger}_{j,x-\tilde{j}}W_{ij,x-\tilde{j}}U_{ j,x-\tilde{j}}\right]\] \[+\delta^{i\tilde{\eta}}\left[\tilde{J}^{(1)}_{\rm L,x}-\tilde{J} ^{(2)}_{\rm L,x}\right]\Big{\}}\Big{|}_{\theta=\theta_{\rm ini}+(n+1)a_{ \theta}}\, \tag{15}\] where \(\theta_{\rm ini}=\ln\tilde{\tau}_{\rm ini}\) is \(\theta\) at the initial proper time and \(n\) is the time step. The continuity equations are discretized as, \[\tilde{J}^{(1/2)}_{x}|_{\theta=\theta_{\rm ini}+(n+2)a_{\theta}}= \tilde{J}^{(1/2)}_{x}|_{\tau=na_{\theta}}\] \[\mp\frac{a_{\theta}}{a_{\tilde{\eta}}}\left[\tilde{J}^{(1/2)}_{ \rm L,x}-U^{\dagger}_{\tilde{\eta},x-\tilde{\eta}}\tilde{J}^{(1/2)}_{\rm L,x- \tilde{\eta}}U_{\tilde{\eta},x-\tilde{\eta}}\right]|_{\theta=\theta_{\rm ini}+ (n+1)a_{\theta}}\, \tag{16}\] \[\tilde{J}^{(1/2)}_{\rm L,x}|_{\theta=\theta_{\rm ini}+(n+2)a_{ \theta}}=\tilde{J}^{(1/2)}_{\rm L,x}|_{\theta=\theta_{\rm ini}+na_{\theta}}\] \[\mp\frac{a_{\theta}}{a_{\tilde{\eta}}}\left[U_{\tilde{\eta},x} \tilde{J}^{(1/2)}_{x+\tilde{\eta}}U^{\dagger}_{\tilde{\eta},x}-\tilde{J}^{(1/2 )}_{x}\right]|_{\theta=\theta_{\rm ini}+(n+1)a_{\theta}}. 
\tag{17}\] In the actual calculations, we solve these discretized evolution equations by the leap-frog method, which is convenient for describing Hamiltonian dynamics. ## Appendix C Longitudinal pressure in the setup for Sec. III.1 In this appendix, we show that the longitudinal pressure is negligibly small compared to the transverse pressure shown in Fig. 4, which means that the whole system expands like a rarefied gas in the longitudinal direction. The longitudinal pressure is defined as \[P_{\rm L}\equiv\frac{\int d^{2}x_{\perp}\tau^{2}T_{\rm sub}^{\eta\eta}}{2V_{\perp}}. \tag{18}\] Here the energy-momentum tensor \(T_{\rm sub}^{\eta\eta}\) is calculated via the general coordinate transformation, \[\tau^{2}T_{\rm sub}^{\eta\eta}=\frac{1}{\tilde{\tau}^{2}-2\tilde{\tau}\bar{x}_{\rm c}\cosh\tilde{\eta}+\bar{x}_{\rm c}^{2}}\Big{[}\left(\bar{x}_{\rm c}\sinh\tilde{\eta}\right)^{2}T_{\rm sub}^{\tilde{\tau}\tilde{\tau}}\] \[+\left(\tilde{\tau}-\bar{x}_{\rm c}\cosh\tilde{\eta}\right)^{2}\tilde{\tau}^{2}T_{\rm sub}^{\tilde{\eta}\tilde{\eta}}-\bar{x}_{\rm c}\sinh\tilde{\eta}\left(\tilde{\tau}-\bar{x}_{\rm c}\cosh\tilde{\eta}\right)\tilde{\tau}T_{\rm sub}^{\tilde{\tau}\tilde{\eta}}\Big{]}. \tag{19}\] Figure 7 shows the \(\eta\) dependence of \(P_{\rm L}\) normalized by the proper time and the saturation scale, \(\tau P_{\rm L}/Q_{s}^{3}\), for different thicknesses, \(Q_{s}R/\gamma=1/2,1/4,1/8\) and \(1/16\). Comparing Fig. 4 and Fig. 7, it is found that the longitudinal pressure is much smaller than the transverse pressure in the wide rapidity range \(-2<\eta<2\). ## Appendix D Scaling behavior In this appendix, we discuss the scaling behavior observed in Fig. 4, \(P_{\perp}\propto\tau^{-1}\). First, consider the continuity equation for \(T^{\tau\tau}\) without the dynamical current, \[\frac{1}{\tau}\left\{\partial_{\tau}\left[\tau T^{\tau\tau}\right]+\tau^{2}T^{\eta\eta}\right\}+\partial_{1}T^{1\tau}+\partial_{2}T^{2\tau}+\partial_{\eta}T^{\eta\tau}=0. \tag{20}\] The current term is neglected here since the nuclei have already passed through in the time region shown in Fig. 4 and Fig. 5. By integrating over the transverse plane, the continuity equation leads to the evolution equation for the sum of the transverse and longitudinal pressures, \[\partial_{\tau}\left[\tau(P_{\perp}+P_{\mathrm{L}})\right]=-P_{\mathrm{L}}-\tau\partial_{\eta}\tau^{\eta\tau}\, \tag{100}\] where \(\tau^{\eta\tau}\equiv\int dxdyT^{\eta\tau}/V_{\perp}\) and the longitudinal pressure is defined as \(P_{\mathrm{L}}=\int dxdy\,\tau^{2}T^{\eta\eta}/V_{\perp}\). To obtain Eq. (100), we use the relation \(T^{\tau\tau}=T^{11}+T^{22}+T^{\eta\eta}\) resulting from the conformal symmetry of the CYM theory. Since the longitudinal pressure is much smaller than the transverse pressure, as shown in Appendix C, we can drop \(P_{\mathrm{L}}\) and obtain the evolution equation of the transverse pressure, \[\partial_{\tau}\left[\tau P_{\perp}\right]=-\tau\partial_{\eta}\tau^{\eta\tau}. \tag{101}\] Therefore, the realization of the scaling behavior indicates that the derivative term \(\tau\partial_{\eta}\tau^{\eta\tau}\) in Eq. (101), as well as \(P_{\mathrm{L}}\), is negligible. ## Appendix E Derivation of the \(2\)-point correlation function of the color charge density of a single nucleon In this appendix, we show the detailed calculation leading to Eq. (59). First, let us substitute Eq. (58) into Eq.
(57), \[\langle\rho_{i}^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\rho_{i}^{(1/2)b}(x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\mathrm{eve}}\] \[=\delta^{a,b}\left(g^{2}\bar{\mu}\right)^{2}2\pi\left(\frac{2R_{\mathrm{n}}^{2}}{3}+\sigma_{\perp}^{2}\right)\sqrt{2\pi\left(\frac{R_{\mathrm{n}}^{2}}{3\gamma^{2}}+\sigma_{\mp}^{2}\right)}\] \[\times N_{\mathrm{1D}}(x^{\mp}-b_{i}^{\mp},\frac{R_{\mathrm{n}}}{\sqrt{6}\gamma})N_{\mathrm{2D}}(\mathbf{x}_{\perp}-\mathbf{b}_{\perp,i},\frac{R_{\mathrm{n}}}{\sqrt{3}})\] \[\times N_{\mathrm{1D}}(x^{\prime\mp}-b_{i}^{\mp},\frac{R_{\mathrm{n}}}{\sqrt{6}\gamma})N_{\mathrm{2D}}(\mathbf{x}_{\perp}^{\prime}-\mathbf{b}_{\perp,i},\frac{R_{\mathrm{n}}}{\sqrt{3}})\] \[\times N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},\sigma_{\mp})N_{\mathrm{2D}}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},\sigma_{\perp}). \tag{102}\] The product of Gaussian functions of \(x^{\mp}\) and \(x^{\prime\mp}\) on the right-hand side of Eq. (102) can be transformed into a product of Gaussian functions of \(x^{\mp}+x^{\prime\mp}\) and \(x^{\mp}-x^{\prime\mp}\) as \[\langle\rho_{i}^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\rho_{i}^{(1/2)b}(x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\mathrm{eve}}\] \[=\delta^{a,b}\left(g^{2}\bar{\mu}\right)^{2}2\pi\left(\frac{2R_{\mathrm{n}}^{2}}{3}+\sigma_{\perp}^{2}\right)\sqrt{2\pi\left(\frac{R_{\mathrm{n}}^{2}}{3\gamma^{2}}+\sigma_{\mp}^{2}\right)}\] \[\times N_{\mathrm{1D}}(\frac{x^{\mp}+x^{\prime\mp}}{2}-b_{i}^{\mp},\frac{R_{\mathrm{n}}}{2\sqrt{3}\gamma})N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},\frac{R_{\mathrm{n}}}{\sqrt{3}\gamma})\] \[\times N_{\mathrm{2D}}(\frac{\mathbf{x}_{\perp}+\mathbf{x}_{\perp}^{\prime}}{2}-\mathbf{b}_{\perp,i},\frac{R_{\mathrm{n}}}{\sqrt{6}})N_{\mathrm{2D}}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},\sqrt{\frac{2}{3}}R_{\mathrm{n}})\] \[\times N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},\sigma_{\mp})N_{\mathrm{2D}}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},\sigma_{\perp}). \tag{103}\] Here we use the following relations, \[N_{\mathrm{1D}}(x^{\mp}-b_{i}^{\mp},\frac{R_{\mathrm{n}}}{\sqrt{6}\gamma})N_{\mathrm{1D}}(x^{\prime\mp}-b_{i}^{\mp},\frac{R_{\mathrm{n}}}{\sqrt{6}\gamma})\] \[=N_{\mathrm{1D}}(\frac{x^{\mp}+x^{\prime\mp}}{2}-b_{i}^{\mp},\frac{R_{\mathrm{n}}}{2\sqrt{3}\gamma})N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},\frac{R_{\mathrm{n}}}{\sqrt{3}\gamma})\] \[N_{\mathrm{2D}}(\mathbf{x}_{\perp}-\mathbf{b}_{\perp,i},\frac{R_{\mathrm{n}}}{\sqrt{3}})N_{\mathrm{2D}}(\mathbf{x}_{\perp}^{\prime}-\mathbf{b}_{\perp,i},\frac{R_{\mathrm{n}}}{\sqrt{3}})\] \[=N_{\mathrm{2D}}(\frac{\mathbf{x}_{\perp}+\mathbf{x}_{\perp}^{\prime}}{2}-\mathbf{b}_{\perp,i},\frac{R_{\mathrm{n}}}{\sqrt{6}})N_{\mathrm{2D}}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},\sqrt{\frac{2}{3}}R_{\mathrm{n}}). \tag{104}\] The number of Gaussian functions on the right-hand side of Eq. (103) can be reduced by using the relations, \[N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},\frac{R_{\mathrm{n}}}{\sqrt{3}\gamma})N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},\sigma_{\mp})\] \[=\frac{1}{\sqrt{2\pi(R_{\mathrm{n}}^{2}/(3\gamma^{2})+\sigma_{\mp}^{2})}}N_{\mathrm{1D}}(x^{\mp}-x^{\prime\mp},l_{\mp})\] \[N_{\mathrm{2D}}(x_{\perp}-x_{\perp}^{\prime},\sqrt{\frac{2}{3}}R_{\mathrm{n}})N_{\mathrm{2D}}(x_{\perp}-x_{\perp}^{\prime},\sigma_{\perp})\] \[=\frac{1}{2\pi(2R_{\mathrm{n}}^{2}/3+\sigma_{\perp}^{2})}N_{\mathrm{2D}}(x_{\perp}-x_{\perp}^{\prime},l_{\perp})\, \tag{105}\] where the correlation lengths \(l_{\mp}\) and \(l_{\perp}\) are defined in Eq. (61) and Eq. (60), respectively.
Figure 7: The \(\eta\) dependence of the longitudinal pressure normalized by the proper time and the saturation scale, \(\tau P_{\mathrm{L}}/Q_{s}^{3}\). All results shown here are calculated from the same simulations as shown in Fig. 4 and Fig. 5.

Then, Eq. (103) reads Eq. (59), \[\langle\rho_{i}^{(1/2)a}(x^{\mp},\mathbf{x}_{\perp})\rho_{i}^{(1/2)b}(x^{\prime\mp},\mathbf{x}_{\perp}^{\prime})\rangle_{\text{eve}}\] \[=\delta^{a,b}\left(g^{2}\bar{\mu}\right)^{2}N_{\text{1D}}(\frac{x^{\mp}+x^{\prime\mp}}{2}-b_{i}^{\mp},\frac{R_{\text{n}}}{\sqrt{3}\gamma})N_{\text{1D}}(x^{\mp}-x^{\prime\mp},l_{\mp})\] \[\times N_{\text{2D}}(\frac{\mathbf{x}_{\perp}+\mathbf{x}_{\perp}^{\prime}}{2}-\mathbf{b}_{\perp,i},\sqrt{\frac{2}{3}}R_{\text{n}})N_{\text{2D}}(\mathbf{x}_{\perp}-\mathbf{x}_{\perp}^{\prime},l_{\perp})\;. \tag{100}\]
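The one-dimensional Gaussian product identity used in Eq. (104) can also be verified symbolically. A minimal sketch with sympy is given below; the symbol \(s\) stands for the width \(R_{\rm n}/(\sqrt{6}\gamma)\), and the prefactors \(1/(2\pi s^{2})\) agree trivially on both sides.

```python
import sympy as sp

x, xp, b = sp.symbols("x xp b", real=True)
s = sp.symbols("s", positive=True)

# Exponents of N_1D(x-b, s) * N_1D(x'-b, s) and of
# N_1D((x+x')/2 - b, s/sqrt(2)) * N_1D(x-x', sqrt(2)*s).
lhs_exp = ((x - b) ** 2 + (xp - b) ** 2) / (2 * s ** 2)
rhs_exp = (((x + xp) / 2 - b) ** 2) / (2 * (s / sp.sqrt(2)) ** 2) \
          + ((x - xp) ** 2) / (2 * (sp.sqrt(2) * s) ** 2)

print(sp.expand(lhs_exp - rhs_exp))  # prints 0, confirming the identity
```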
2303.00270
Stability and energy identity for Yang-Mills-Higgs pairs
In this paper, we study the properties of the critical points of Yang-Mills-Higgs functional, which are called Yang-Mills-Higgs pairs. We first consider the properties of weakly stable Yang-Mills-Higgs pairs on a vector bundle over S^n (n > 3). When n > 3, we prove that the norm of its Higgs field is 1 and the connection is actually Yang-Mills. More precisely, its curvature vanishes when n > 4. We also use the bubble-neck decomposition to prove the energy identity of a sequence of Yang-Mills-Higgs pairs over a 4-dimensional compact manifold with uniformly bounded energy. We show there is a subsequence converges smoothly to a Yang-Mills-Higgs pair up to gauge modulo finitely many 4-dimensional spheres with Yang-Mills connections.
Xiaoli Han, Xishen Jin, Yang Wen
2023-03-01T06:45:48Z
http://arxiv.org/abs/2303.00270v1
# Stability and energy identity for Yang-Mills-Higgs pairs Xiaoli Han\({}^{1}\), Xishen Jin\({}^{2}\), Yang Wen\({}^{3^{*}}\) \({}^{1}\)Department of Mathematics, Tsinghua University, Beijing, 100084, China [email protected] \({}^{2}\)School of Mathematics, Renmin University of China, Beijing, 100872, China [email protected] \({}^{3^{*}}\)Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China [email protected] \({}^{*}\)Corresponding author. This is the revised version submitted on 2023.1.7 **Abstract:** In this paper, we study the properties of the critical points of the Yang-Mills-Higgs functional, which are called Yang-Mills-Higgs pairs. We first consider the properties of weakly stable Yang-Mills-Higgs pairs on a vector bundle over \(S^{n}\) (\(n\geq 4\)). When \(n\geq 4\), we prove that the norm of the Higgs field is \(1\) and the connection is actually Yang-Mills. More precisely, the curvature vanishes when \(n\geq 5\). We also use the bubble-neck decomposition to prove the energy identity for a sequence of Yang-Mills-Higgs pairs over a \(4\)-dimensional compact manifold with uniformly bounded energy. We show that a subsequence converges smoothly, up to gauge, to a Yang-Mills-Higgs pair modulo finitely many \(4\)-dimensional spheres carrying Yang-Mills connections. **Keywords:** Yang-Mills-Higgs pairs ## 1 Introduction Let \((M,g)\) be an \(n\)-dimensional closed Riemannian manifold and \(E\) be a vector bundle of rank \(r\) over \(M\) with structure group \(G\), where \(G\) is a compact Lie group. Let \(\mathfrak{g}_{E}\) be the adjoint bundle of \(E\). The classical Yang-Mills functional defined on the space of connections of \(E\) is given by \[YM\left(\nabla\right)=\int_{M}|R^{\nabla}|^{2}dV\] where \(\nabla\) is a connection on \(E\), \(R^{\nabla}\) denotes its curvature and \(dV\) is the volume form of \(g\). We denote by \(d^{\nabla}\) the exterior differential induced by \(\nabla\) and by \(\delta^{\nabla}\) the formal adjoint of \(d^{\nabla}\). The critical points of the Yang-Mills functional are called Yang-Mills connections and they satisfy \[\delta^{\nabla}R^{\nabla}=0.\] We are interested in the Yang-Mills connections which minimize the Yang-Mills functional locally. At such a connection \(\nabla\), the second variation of the Yang-Mills functional should be non-negative, i.e. \[\left.\frac{d^{2}}{dt^{2}}YM\left(\nabla^{t}\right)\right|_{t=0}\geq 0\] where \(\nabla^{t}\) is a curve of connections with \(\nabla^{0}=\nabla\). Such connections are called stable. Considering the second variation of the Yang-Mills functional with respect to deformations generated by special vector fields, J. Simons announced, in Tokyo in September 1977, that every stable Yang-Mills connection on \(S^{n}\) is flat if \(n>4\). Bourguignon-Lawson [1] gave a detailed proof of this result. The Ginzburg-Landau equations are the Euler-Lagrange equations of the Ginzburg-Landau functional \[E_{\varepsilon}\left(u\right)=\int_{M}\left(\frac{\left|\nabla u\right|^{2}}{2}+\frac{\left(1-\left|u\right|^{2}\right)^{2}}{4\varepsilon^{2}}\right)dV\] where \(u\) is a complex-valued function on \(M\). In [2], Cheng proved that every stable solution of the Ginzburg-Landau equation on \(S^{n}\) for \(n\geq 2\) is a constant with absolute value \(1\).
In this paper, we consider the following Yang-Mills-Higgs functional with self-interaction parameter \(\lambda\geq 0\), a combination of the Yang-Mills functional and the Ginzburg-Landau functional, \[\mathscr{A}(\nabla,u)=\frac{1}{2}\int_{M}|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2}+\frac{\lambda}{4}(1-|u|^{2})^{2}dV \tag{1}\] where \(u\in\Omega^{0}(E)\) is a Higgs field. The Yang-Mills-Higgs functional on \(\mathbb{R}^{3}\) with structure group \(SU\left(2\right)\) was first introduced by P. Higgs in [6]. A. Jaffe and C. Taubes [9] extended the Yang-Mills-Higgs functional to \(\mathbb{R}^{n}\) and to general manifolds. Its critical point is the so-called magnetic monopole (we also call it a Yang-Mills-Higgs pair), i.e. a pair \((\nabla,u)\) satisfying \[\begin{split}\delta^{\triangledown}R^{\triangledown}&=-\frac{1}{2}(d^{\triangledown}u\otimes u^{*}-u\otimes(d^{\triangledown}u)^{*}),\\ \delta^{\triangledown}d^{\triangledown}u&=\frac{\lambda}{2}(1-|u|^{2})u,\end{split} \tag{2}\] where for any \(u\in\Omega^{0}(E)\) and \(\phi\in\Omega^{p}(E)\), \(\frac{1}{2}(u\otimes\phi^{*}-\phi\otimes u^{*})\in\Omega^{p}(\mathfrak{g}_{E})\) such that for any \(\varphi\in\Omega^{p}(\mathfrak{g}_{E})\), we have \(\langle\frac{1}{2}(u\otimes\phi^{*}-\phi\otimes u^{*}),\varphi\rangle=-\langle\phi,\varphi u\rangle\). Similar to the case of stable Yang-Mills connections, a Yang-Mills-Higgs pair \((\nabla,u)\) is called stable if for any curve \((\nabla^{t},u^{t})\) such that \(\nabla^{0}=\nabla\) and \(u^{0}=u\), there holds \[\left.\frac{d^{2}}{dt^{2}}\mathscr{A}(\nabla^{t},u^{t})\right|_{t=0}\ \geq 0. \tag{3}\] Furthermore, we can define the notion of weakly stable Yang-Mills-Higgs pairs (cf. Definition 2.1). The purpose of the present work is to extend some of the results of J. Simons and Bourguignon-Lawson [1] about weakly stable Yang-Mills connections on \(S^{n}\) to weakly stable Yang-Mills-Higgs pairs. We prove the following theorem. **Theorem 1.1**: _Assume \((\nabla,u)\) is a weakly stable Yang-Mills-Higgs pair on \(S^{n}\). Then_ 1. _If_ \(n\geq 5\)_, then_ \(R^{\triangledown}=0\)_,_ \(d^{\triangledown}u=0\) _and_ \(|u|=1\)_._ 2. _If_ \(n=4\)_, then_ \(d^{\triangledown}u=0\)_,_ \(|u|=1\) _and_ \(\nabla\) _is a Yang-Mills connection (i.e._ \(\delta^{\triangledown}R^{\triangledown}=0\)_)._ Higgs fields taking values in \(\Omega^{0}(\mathfrak{g}_{E})\) are also of interest in the physics literature. The corresponding Yang-Mills-Higgs functional is \[\mathscr{A}(\nabla,\Phi)=\frac{1}{2}\int_{M}|R^{\triangledown}|^{2}+|d^{\triangledown}\Phi|^{2}+\frac{\lambda}{4}(1-|\Phi|^{2})^{2}dV,\] where \(\Phi\in\Omega^{0}(\mathfrak{g}_{E})\). The Euler-Lagrange equations of \(\mathscr{A}\) are \[\begin{split}\delta^{\triangledown}R^{\triangledown}&=[d^{\triangledown}\Phi,\Phi],\\ \delta^{\triangledown}d^{\triangledown}\Phi&=\frac{\lambda}{2}(1-|\Phi|^{2})\Phi.\end{split} \tag{4}\] The stable Yang-Mills-Higgs pairs \((\nabla,\Phi)\) on \(S^{n}\) have similar properties, which are discussed in Section 4. The energy identity was first established in [3] for sequences of anti-self-dual Yang-Mills fields on \(4\)-dimensional manifolds. In [16], Tian proved that the defect measure of sequences of Yang-Mills fields on a Riemannian manifold \((M,g)\) of dimension \(n\) (\(n\geq 4\)) is carried by an \((n-4)\)-rectifiable subset \(S\) of \(M\).
Riviere [13] proved that, in dimension \(4\), at any point of \(S\) the defect measure is given by a sum of \(L^{2}\) energies of Yang-Mills fields on \(S^{4}\), and that this result holds in any dimension under an additional assumption on the \(W^{2,1}\) norm of the curvature. Moreover, in [11], Naber-Valtorta proved that the \(W^{2,1}\)-norm is automatically bounded for a sequence of stationary Yang-Mills fields with bounded energy. In [14], Song proved the energy identity for a sequence of Yang-Mills-Higgs pairs on a fiber bundle with curved fiber spaces over a compact Riemannian surface, and showed that the blow-up only occurs in the Higgs part in the \(2\)-dimensional case. In Section \(5\), we assume that \(M\) is a \(4\)-dimensional compact Riemannian manifold and \(\{(\nabla_{i},u_{i})\}\) is a sequence of Yang-Mills-Higgs pairs over \(M\) with uniformly bounded energy \(\mathscr{A}(\nabla_{i},u_{i})\leq K\). Unlike the case of \(2\)-dimensional manifolds, it will be shown that there is no energy concentration point for the Higgs field over \(4\)-dimensional manifolds and the blow-up only occurs in the curvature part. We prove the following theorem. **Theorem 1.2**: _Assume \(\{(\nabla_{i},u_{i})\}\) is a family of Yang-Mills-Higgs pairs which satisfy the equation (4) and \(\mathscr{A}(\nabla_{i},u_{i})\leq K\). Then there is a finite subset \(\Sigma=\{x_{1},...,x_{l}\}\subset M\), a Yang-Mills-Higgs pair \((\nabla_{\infty},u_{\infty})\) on \(M\setminus\Sigma\) and Yang-Mills connections \(\{\widetilde{\nabla}_{jk}\mid 1\leq j\leq l,1\leq k\leq K_{j}\}\) over \(S^{4}\), such that a subsequence of \(\{(\nabla_{i},u_{i})\}\) converges to \((\nabla_{\infty},u_{\infty})\) in \(C^{\infty}_{loc}(M\setminus\Sigma)\) under gauge transformations and_ \[\lim_{i\to\infty}\mathscr{A}(\nabla_{i},u_{i})=\mathscr{A}(\nabla_{\infty},u_{\infty})+\sum_{j=1}^{l}\sum_{k=1}^{K_{j}}YM(\widetilde{\nabla}_{jk}). \tag{5}\] ## 2 Preliminary Let \((M,g)\) be a compact Riemannian manifold and \(D\) be its Levi-Civita connection. Let \(E\to M\) be a rank \(r\) vector bundle over \(M\) with a compact Lie group \(G\subset SO\left(r\right)\) as its structure group. We also assume \(\langle\,\ \rangle\) is a Riemannian metric on \(E\) compatible with the action of \(G\). Let \(\mathfrak{g}_{E}\) be the adjoint bundle of \(E\). Assume \(\nabla:\Omega^{0}(E)\to\Omega^{1}(E)\) is a connection on \(E\) compatible with the metric \(\langle\,\ \rangle\). Locally, \(\nabla\) takes the form \[\nabla=d+A\] where \(A\in\Omega^{1}\left(\mathfrak{g}_{E}\right)\) is the connection \(1\)-form. For any connection \(\nabla\) of \(E\), the curvature \(R^{\nabla}=\nabla^{2}\in\Omega^{2}\left(\mathfrak{g}_{E}\right)\) measures the extent to which \(\nabla\) fails to commute. Locally, \(R^{\nabla}\) is given by \[R^{\nabla}=dA+\frac{1}{2}[A\wedge A]\] where the bracket of \(\mathfrak{g}_{E}\)-valued \(1\)-forms \(\varphi\) and \(\psi\) is defined to be \[[\varphi\wedge\psi]_{X,Y}=[\varphi_{X},\psi_{Y}]-[\varphi_{Y},\psi_{X}]\] as in [1]. The connection \(\nabla\) on \(E\) induces a natural connection on \(\mathfrak{g}_{E}\). Indeed for \(\phi\in\Omega^{0}(\mathfrak{g}_{E})\) we define \[\nabla(\phi)=[\nabla,\phi],\] i.e. \(\nabla(\phi)(\sigma)=\nabla(\phi(\sigma))-\phi(\nabla\sigma)\) for any section \(\sigma\) of \(E\).
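For instance, one can check directly from this definition that \(\nabla_{X}\phi\) is \(C^{\infty}(M)\)-linear in \(\sigma\), so that it indeed defines a section of the endomorphism bundle (skew-symmetry is likewise preserved, since \(\nabla\) is compatible with the metric): for \(f\in C^{\infty}(M)\),
\[
(\nabla_{X}\phi)(f\sigma)=\nabla_{X}\big{(}f\phi(\sigma)\big{)}-\phi\big{(}\nabla_{X}(f\sigma)\big{)}=X(f)\phi(\sigma)+f\nabla_{X}\big{(}\phi(\sigma)\big{)}-X(f)\phi(\sigma)-f\phi(\nabla_{X}\sigma)=f(\nabla_{X}\phi)(\sigma).
\]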
By direct calculation, for any \(X\in T(M)\) we have \[\nabla_{X}(\frac{1}{2}(u\otimes\phi^{*}-\phi\otimes u^{*}))=\frac{1}{2}( \nabla_{X}u\otimes\phi^{*}-\phi\otimes(\nabla_{X}u)^{*})+\frac{1}{2}(u\otimes( \nabla_{X}\phi)^{*}-\nabla_{X}\phi\otimes u^{*}).\] Similarly, the curvature of this connection on \(\mathfrak{g}_{E}\) is given by the formula \[R_{X,Y}^{\nabla}(\phi)=[R_{X,Y}^{\nabla},\phi],\] where \(R^{\nabla}\) on the right denotes the curvature of \(E\). We define an exterior differential \(d^{\nabla}:\Omega^{p}(\mathfrak{g}_{E})\rightarrow\Omega^{p+1}(\mathfrak{g}_{ E}),p\geq 0\) as follows. For each real-valued differential \(p\)-form \(\alpha\) and each section \(\sigma\) of \(E\), we set \[d^{\nabla}(\alpha\otimes\sigma)=(d\alpha)\wedge\sigma+(-1)^{p}\alpha\otimes\nabla\sigma\] and extend the definition to general \(\psi\in\Omega^{p}(E)\) by linearity. Note that \(d^{\nabla}=\nabla\) on \(\Omega^{0}(E)\) and \((d^{\nabla}(d^{\nabla}\sigma))_{X,Y}=R_{X,Y}^{\nabla}(\sigma)\). The inner product of \(\Omega^{0}\left(\mathfrak{g}_{E}\right)\) induced by the trace inner product metric on \(\mathfrak{so}\left(r\right)\) is given by \[\langle\phi,\varphi\rangle=\frac{1}{2}\operatorname{Tr}\left(\phi^{T}\varphi \right),\text{ where }\varphi,\phi\in\Omega^{0}\left(\mathfrak{g}_{E}\right).\] Then for any \(\phi,\varphi,\rho\in\Omega^{0}(\mathfrak{g}_{E})\), we have \[\langle[\phi,\varphi],\rho\rangle=\langle\phi,[\varphi,\rho]\rangle.\] Combining with Riemannian metric \(g\), the inner product of \(\Omega^{p}(\mathfrak{g}_{E})\) can be defined by \[\langle\phi,\varphi\rangle=\frac{1}{p!}\sum_{1\leq i_{1},...,i_{p}\leq n} \langle\phi(e_{i_{1}},...e_{i_{p}}),\varphi(e_{i_{1}},...,e_{i_{p}})\rangle \tag{6}\] where \(\phi,\varphi\in\Omega^{p}(\mathfrak{g}_{E})\) and \(\{e_{i}\ |\ 1\leq i\leq n\}\) is an orthogonal basis of \(TM\). Integrating the inner product \(\langle\,\ \rangle\) over \(M\), we get a global inner product \((\,\ )\) in \(\Omega^{p}\left(\mathfrak{g}_{E}\right)\), i.e. \[(\varphi,\psi)=\int_{M}\langle\varphi,\psi\rangle dV,\text{ for any }\varphi,\psi\in\Omega^{p}\left(\mathfrak{g}_{E}\right).\] Define the operator \(\delta^{\nabla}:\Omega^{p+1}(\mathfrak{g}_{E})\rightarrow\Omega^{p}( \mathfrak{g}_{E}),p\geq 0\), to be the formal adjoint of the operator \(d^{\nabla}\). In local coordinates, for any \(\phi\in\Omega^{p}(\mathfrak{g}_{E})\), \[(d^{\nabla}\phi)_{X_{0},\cdots,X_{p}}=\sum_{k=0}^{p}(-1)^{k}(\nabla_{X_{k}} \phi)_{X_{0},\cdots,\widehat{X_{k}},\cdots,X_{p}},\] \[(\delta^{\nabla}\phi)_{X_{1},\cdots,X_{p-1}}=-\sum_{j=1}^{n}(\nabla_{e_{j}} \phi)_{e_{j},X_{1},\cdots,X_{p-1}},\] where \(\{e_{i}\}\) is an orthonormal basis of \(TM\). We can define the Laplace-Beltrami operator \(\Delta^{\nabla}\) by \[\Delta^{\triangledown}=d^{\triangledown}\delta^{\triangledown}+\delta^{ \triangledown}d^{\triangledown}\] and the rough Laplacian operator \(\nabla^{*}\nabla\) by \[\nabla^{*}\nabla=-\sum_{j}(\nabla_{e_{j}}\nabla_{e_{j}}-\nabla_{D_{e_{j}}e_ {j}}).\] For \(\psi\in\Omega^{1}(\mathfrak{g}_{E})\) and \(\varphi\in\Omega^{2}(\mathfrak{g}_{E})\), we recall the following operator \(\mathfrak{R}^{\triangledown}\) defined in [1] \[\mathfrak{R}^{\triangledown}(\psi)(X)=\sum_{j}[R^{\triangledown}(e_{j},X), \psi(e_{j})], \tag{7}\] \[\mathfrak{R}^{\triangledown}(\varphi)(X,Y)=\sum_{j}[R^{\triangledown}(e_{j},X),\varphi(e_{j},Y)]-[R^{\triangledown}(e_{j},Y),\varphi(e_{j},X)]. \tag{8}\] Then we have the following Bochner-Weizenbock formula first introduced in [1]. 
**Theorem 2.1**: _For any \(\psi\in\Omega^{1}(\mathfrak{g}_{E})\) and \(\varphi\in\Omega^{2}(\mathfrak{g}_{E})\), we have_ \[\Delta^{\triangledown}\psi =\nabla^{*}\nabla\psi+\psi\circ\mathrm{Ric}+\mathfrak{R}^{ \triangledown}(\psi), \tag{9}\] \[\Delta^{\triangledown}\varphi =\nabla^{*}\nabla\varphi+\varphi\circ\left(\mathrm{Ric}\wedge \mathrm{Id}+2R_{M}\right)+\mathfrak{R}^{\triangledown}(\varphi), \tag{10}\] _where_ * \(\mathrm{Ric}:TM\to TM\) _is the Ricci transformation defined by_ \[\mathrm{Ric}\left(X\right)=\sum_{j}R(X,e_{j})e_{j},\] * \(\psi\circ\mathrm{Ric}\in\Omega^{1}\left(\mathfrak{g}_{E}\right)\) _and_ \(\left(\psi\circ\mathrm{Ric}\right)_{X}=\psi_{\mathrm{Ric}\left(X\right)}\in \Omega^{0}\left(\mathfrak{g}_{E}\right),\)__ * \(\mathrm{Ric}\wedge\mathrm{Id}\) _is the extension of the Ricci transformation_ \(\mathrm{Ric}\) _to_ \(\wedge^{2}TM\) _given by_ \[\left(\mathrm{Ric}\wedge\mathrm{Id}\right)_{X,Y}=\left(\mathrm{Ric}\wedge \mathrm{Id}\right)\left(X\wedge Y\right)=\mathrm{Ric}(X)\wedge Y+X\wedge \mathrm{Ric}(Y),\] * \(R_{M}\) _is the curvature of_ \(TM\)_,_ * _For any map_ \(\omega:\wedge^{2}TM\rightarrow\wedge^{2}TM\)_, the composite map_ \(\varphi\circ\omega:\wedge^{2}TM\rightarrow\Omega^{0}\left(\mathfrak{g}_{E}\right)\) _is defined by_ \[\left(\varphi\circ\omega\right)_{X,Y}=\left(\varphi\circ\omega\right)\left(X \wedge Y\right)=\frac{1}{2}\sum_{j=1}^{n}\varphi_{e_{j},\omega_{X,Y}e_{j}}.\] **Remark 2.1**: _In particular, on the standard sphere \(S^{n}\),_ \[\mathrm{Ric}\left(X\right)=\left(n-1\right)X\] _and_ \[\left(R_{M}\right)_{X,Y}Z=-\left(X\wedge Y\right)\left(Z\right)=\left\langle Y,Z\right\rangle X-\left\langle X,Z\right\rangle Y.\] _Thus, we have_ \[\psi\circ\mathrm{Ric}=\left(n-1\right)\psi\] _and_ \[\varphi\circ\left(\mathrm{Ric}\wedge\mathrm{Id}+2R_{M}\right)=2\left(n-2 \right)\varphi.\] Note that for any \(B\in\Omega^{1}\left(\mathfrak{g}_{E}\right)\) and \(w\in\Omega^{0}\left(E\right)\), \[R^{\triangledown+tB}=R^{\nabla}+td^{\triangledown}B+t^{2}B\wedge B \tag{11}\] and \[d^{\triangledown+tB}\left(u+tw\right)=d^{\triangledown}u+t(B\cdot u+d^{ \triangledown}w)+t^{2}B\cdot w. \tag{12}\] Assume \(\left(\nabla,u\right)\) is a Yang-Mills-Higgs pair satisfying the equation (2) and \(\left(\nabla^{t},u^{t}\right)\) is a curve on \[\tilde{\mathcal{A}}=\{\left(\widetilde{\nabla},\tilde{u}\right)\mid \widetilde{\nabla}\text{ is a connection of }E,\text{ }\tilde{u}\in\Omega^{0}(E)\}\] such that \(\nabla^{0}=\nabla\) and \(u^{0}=u\). If we assume \(\frac{d}{dt}\left(\nabla^{t},u^{t}\right)\mid_{t=0}=\left(B,w\right)\), the second variation of \(\mathscr{A}\) is \[\begin{split}\frac{d^{2}}{dt^{2}}\mathscr{A}(\nabla^{t},u^{t}) \mid_{t=0}=\int_{M}&\langle\delta^{\triangledown}d^{\triangledown }B+\mathfrak{R}^{\triangledown}(B)+\frac{1}{2}(Bu\otimes u^{*}-u\otimes(Bu)^{ *}),B\rangle+2\langle d^{\triangledown}u,Bw\rangle\\ &+2\langle d^{\triangledown}w,Bu\rangle+\langle\delta^{\triangledown }d^{\triangledown}w+\lambda\langle u,w\rangle u-\frac{\lambda}{2}(1-|u|^{2} )w,w\rangle dV.\end{split} \tag{13}\] For any gauge transformation \(g\in\mathcal{G}\), \(g\) acts on \((\nabla,u)\) such that \((\nabla^{g},u^{g})=(g\circ\nabla\circ g^{-1},gu)\). 
Then for any \(\sigma\in\Omega^{0}(\mathfrak{g}_{E})\), let \(g_{t}=\exp(t\sigma)\) be a family of gauge transformations, the variation of \((\nabla,u)\) along \(\sigma\) is \[\frac{d}{dt}(\nabla^{g_{t}},u^{g_{t}})\mid_{t=0}=(-d^{\triangledown}\sigma, \sigma u)\in T_{(\nabla,u)}\tilde{\mathcal{A}}=\Omega^{1}\left(\mathfrak{g}_{E }\right)\times\Omega^{0}\left(E\right).\] In addition, the Yang-Mills-Higgs functional \(\mathscr{A}\) is invariant under the action of gauge group. So it is interesting to consider the variation \((B,w)\) perpendicular to the direction of gauge transformation with respect to the global inner product on \(\Omega^{1}\left(\mathfrak{g}_{E}\right)\times\Omega^{0}\left(\mathfrak{g}_{E}\right)\). Define \[\zeta:\Omega^{0}(\mathfrak{g}_{E}) \rightarrow\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}( \mathfrak{g}_{E})\] \[\sigma \mapsto(-d^{\triangledown}\sigma,\sigma u),\] then \((B,w)\in\mathrm{Im}(\zeta)^{\perp}\) if and only if for any \(\sigma\), \[0=\int_{M}\langle-d^{\triangledown}\sigma,B\rangle+\langle\sigma u,w\rangle dV =\int_{M}\langle-\delta^{\triangledown}B+\frac{1}{2}(w\otimes u^{*}-u \otimes w^{*}),\sigma\rangle dV.\] Thus the space of admissible variation at \((\nabla,u)\) is \[\mathcal{C}=T_{(\nabla,u)}\left(\tilde{\mathcal{A}}/\mathcal{G}\right)= \mathrm{Im}(\zeta)^{\perp}=\{(B,w)\in\Omega^{1}(\mathfrak{g}_{E})\times\Omega ^{0}(E)\mid\delta^{\triangledown}B=\frac{1}{2}(w\otimes u^{*}-u\otimes w^{*})\}. \tag{14}\] Using \(\delta^{\triangledown}(Bu)=(\delta^{\triangledown}B)u-B_{\vdash}d^{\triangledown}u\), where \[B_{\vdash}d^{\triangledown}u=\sum_{j}B(e_{j})\nabla_{e_{j}}u,\] and for \((B,w)\in\mathcal{C}\), we have \[\int_{M}2\langle d^{\triangledown}w,Bu\rangle dV=\int_{M}2\langle w,\delta^{ \triangledown}(Bu)\rangle dV=\int_{M}\langle(w\otimes u^{*}-u\otimes w^{*}), \delta^{\triangledown}B\rangle-2\langle w,B_{\vdash}d^{\triangledown}u\rangle dV.\] For any \(x\in M\), let \(\{e_{i}\mid 1\leq i\leq n\}\) be an orthogonal basis of \(T_{x}M\). Since \(B(e_{i})\in\mathfrak{so}(E_{x})\), we have \[-2\langle w,B_{\vdash}d^{\triangledown}u\rangle=\sum_{i}-2\langle w,B(e_{i}) \nabla_{e_{i}}u\rangle=\sum_{i}2\langle B(e_{i})w,\nabla_{e_{i}}u\rangle=2 \langle Bw,d^{\triangledown}u\rangle.\] Hence for \((B,w)\in\mathcal{C}\), the second variation of \(\mathscr{A}\) is \[\frac{d^{2}}{dt^{2}}\mathscr{A}(\nabla^{t},u^{t})\mid_{t=0}= \int_{M} \langle\Delta^{\triangledown}B+\mathfrak{R}^{\triangledown}(B) +\frac{1}{2}(Bu\otimes u^{*}-u\otimes(Bu)^{*})+d^{\triangledown}u\otimes w^{* }-w\otimes(d^{\triangledown}u)^{*},B\rangle\] \[+\langle\delta^{\triangledown}d^{\triangledown}w+\frac{1}{2}(w \otimes u^{*}-u\otimes w^{*})u-2B_{\vdash}d^{\triangledown}u+\lambda\langle u,w\rangle u-\frac{\lambda}{2}(1-|u|^{2})w,w\rangle dV. \tag{15}\] Define an operator \[\mathscr{S}^{(\nabla,u)}:\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}(E) \rightarrow\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}(E), \tag{16}\] where \[\mathscr{S}^{(\nabla,u)}(B,w)= (\Delta^{\triangledown}B+\mathfrak{R}^{\triangledown}(B)+\frac {1}{2}(Bu\otimes u^{*}-u\otimes(Bu)^{*})+d^{\triangledown}u\otimes w^{*}-w \otimes(d^{\triangledown}u)^{*},\] \[\delta^{\triangledown}d^{\triangledown}w+\frac{1}{2}(w\otimes u ^{*}-u\otimes w^{*})u-2B_{\vdash}d^{\triangledown}u+\lambda\langle u,w \rangle u-\frac{\lambda}{2}(1-|u|^{2})w).\] It is easy to see that \(\mathscr{S}^{(\nabla,u)}\) is a self-adjoint operator on \(\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}(E)\). 
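For instance, the symmetry of the off-diagonal terms of \(\mathscr{S}^{(\nabla,u)}\) can be seen from the pairing relation \(\langle\frac{1}{2}(u\otimes\phi^{*}-\phi\otimes u^{*}),\varphi\rangle=-\langle\phi,\varphi u\rangle\) recalled above: for \((B,w)\) and \((B^{\prime},w^{\prime})\),
\[
\langle d^{\triangledown}u\otimes w^{*}-w\otimes(d^{\triangledown}u)^{*},B^{\prime}\rangle=2\langle d^{\triangledown}u,B^{\prime}w\rangle,\qquad\langle-2B_{\vdash}d^{\triangledown}u,w^{\prime}\rangle=2\langle d^{\triangledown}u,Bw^{\prime}\rangle,
\]
so the associated bilinear form is symmetric under \((B,w)\leftrightarrow(B^{\prime},w^{\prime})\); the diagonal terms are handled in the same way.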
Furthermore, we can prove that \(\mathscr{S}^{(\nabla,u)}\) is a self-adjoint operator on \(\mathcal{C}\). **Lemma 2.2**: \(\mathscr{S}^{(\nabla,u)}(\mathcal{C})\subset\mathcal{C}\)_._ **Proof** Denote \(\mathscr{S}^{(\nabla,u)}=(\mathscr{S}_{1},\mathscr{S}_{2})\). We only need to prove that for any \((B,w)\in\mathcal{C}\) and \(\sigma\in\Omega^{0}(\mathfrak{g}_{E})\), we have \[\int_{M}\langle\mathscr{S}_{1}(d^{\triangledown}\sigma,-\sigma u),B\rangle+ \langle\mathscr{S}_{2}(d^{\triangledown}\sigma,-\sigma u),w\rangle dV=0.\] First we have \[\Delta^{\triangledown}d^{\triangledown}\sigma=d^{\triangledown}\delta^{ \triangledown}d^{\triangledown}\sigma+\delta^{\triangledown}d^{\triangledown}d ^{\triangledown}\sigma=d^{\triangledown}\Delta^{\triangledown}\sigma+\delta^{ \triangledown}[R^{\triangledown},\sigma]=d^{\triangledown}\Delta^{\triangledown }\sigma+[\delta^{\triangledown}R^{\triangledown},\sigma]-\mathfrak{R}^{ \triangledown}(d^{\triangledown}\sigma).\] The equation (2) implies that for any \(\phi\in\Omega^{1}(\mathfrak{g}_{E})\), we have \[\langle d^{\triangledown}u,\phi u\rangle=\langle\frac{1}{2}(d^{\triangledown }u\otimes u^{*}-u\otimes(d^{\triangledown}u)^{*}),\phi\rangle=-\langle\delta^ {\triangledown}R^{\triangledown},\phi\rangle.\] By direct calculation, we have \[\int_{M}\langle\frac{1}{2}((d^{\triangledown}\sigma\cdot u) \otimes u^{*}-u\otimes(d^{\triangledown}\sigma\cdot u)^{*})+\sigma u\otimes( d^{\triangledown}u)^{*}-d^{\triangledown}u\otimes(\sigma u)^{*},B\rangle dV\] \[= \int_{M}\langle d^{\triangledown}\sigma\cdot u,Bu\rangle-2 \langle d^{\triangledown}u,B\sigma u\rangle dV\] \[= \int_{M}\langle\sigma u,\delta^{\triangledown}Bu\rangle-\langle \sigma\cdot d^{\triangledown}u,Bu\rangle-2\langle d^{\triangledown}u,B \sigma u\rangle dV\] \[= \int_{M}\langle\sigma u,\delta^{\triangledown}B\cdot u\rangle+ \langle d^{\triangledown}u,[\sigma,B]u\rangle dV\] \[= \int_{M}\langle\sigma u,\delta^{\triangledown}B\cdot u\rangle- \langle\delta^{\triangledown}R^{\triangledown},[\sigma,B]\rangle dV.\] Thus \[\int_{M}\langle\mathscr{S}_{1}(d^{\triangledown}\sigma,-\sigma u),B\rangle dV =\int_{M}\langle\delta^{\triangledown}d^{\triangledown}\sigma, \delta^{\triangledown}B\rangle+\langle\sigma u,\delta^{\triangledown}B \cdot u\rangle dV.\] On the other hand, \(\sigma\in\Omega^{0}(\mathfrak{g}_{E})\) implies \(\langle u,\sigma u\rangle=0\). By the equation (2) we have \[-\delta^{\triangledown}d^{\triangledown}(\sigma u)=-\delta^{\triangledown }d^{\triangledown}\sigma\cdot u+2d^{\triangledown}\sigma\cdot d^{\triangledown }u-\frac{\lambda}{2}(1-|u|^{2})\sigma u.\] Hence by \((B,w)\in\mathcal{C}\), we have \[\int_{M}\langle\mathscr{S}_{2}(d^{\triangledown}\sigma,-\sigma u ),w\rangle dV\] \[= \int_{M}-\langle\delta^{\triangledown}d^{\triangledown}\sigma \cdot u,w\rangle-\langle\sigma u,\frac{1}{2}(w\otimes u^{*}-u\otimes w^{*})u \rangle dV\] \[= \int_{M}-\langle\delta^{\triangledown}d^{\triangledown}\sigma, \delta^{\triangledown}B\rangle-\langle\sigma u,Bu\rangle dV\] Then we have \(\int_{M}\langle\mathscr{S}^{(\nabla,u)}(d^{\triangledown}\sigma,-\sigma u),( B,w)\rangle dV=0\) and we finish the proof. \(\square\) Then \(\mathscr{S}^{(\nabla,u)}\mid_{\mathcal{C}}:\mathcal{C}\rightarrow\mathcal{C}\) is a self adjoint and elliptic operator. The eigenvalues of \(\mathscr{S}^{(\nabla,u)}\) are given by \[\lambda_{1}\leq\lambda_{2}\leq\cdots\rightarrow+\infty.\] Similar as in [1], we define the weakly stability of Yang-Mills-Higgs functional at \((\nabla,u)\) as following. 
**Definition 2.1**: _Assume \((\nabla,u)\) satisfies (2), then it is called weakly stable if \(\lambda_{1}\geq 0\) and stable if \(\lambda_{1}>0\)._ Stability of Yang-Mills-Higgs pairs on \(S^{n}\) In this section, we prove Theorem 1.1. Now, we assume \((M,g)\) is the standard Euclidean sphere \(S^{n}\). In [10], the conformal Killing vector fields of \(S^{n}\) play an important role in studying the non-existence of stable varifolds or currents. Similar methods have been applied to study weakly stable Yang-Mills connections on \(S^{n}\) in [1]. The conformal Killing vector fields of \(S^{n}\) are the gradients of eigenfunctions corresponding to the first non-zero eigenvalue of the Laplace operator. Let us summarize the properties of these vector fields as follow. **Proposition 3.1**: _For any \(v=(v^{1},...,v^{n+1})\in\mathbb{R}^{n+1}\), let \(F_{v}(x):\mathbb{R}^{n+1}\to\mathbb{R}\), \(x\mapsto v\cdot x\) be the inner product of \(v\) and \(x\). Define \(f_{v}=F_{v}\mid_{S^{n}}\) to be the restriction of \(F_{v}\) to \(S^{n}\). Then_ \[V=\operatorname{grad}(f_{v})=\sum_{i=1}^{n+1}(v^{j}-(v\cdot x)x^{j})\frac{ \partial}{\partial x^{j}}=v-(v\cdot x)\,x \tag{17}\] _is a conformal Killing vector fields on \(S^{n}\) and satisfies_ 1. \(D_{X}V=-f_{v}X,\)__ 2. \(D^{*}DV=V.\)__ **Remark 3.1**: _In fact, the space \(\mathscr{V}\) of all \(V\) defined above is the orthogonal complement to the Killing vector fields in the space of all conformal vector fields on \(S^{n}\), i.e._ \[\operatorname{\mathsf{conf}}\left(S^{n}\right)=\operatorname{\mathsf{isom}} \left(S^{n}\right)\oplus\mathscr{V}.\] Similar as in [1], we choose the variation of connection to be \(B_{v}=i_{V}R^{\triangledown}\), where \(i_{V}\) is the contraction about \(V\). The corresponding variation of the Higgs field \(w_{v}\) satisfies \[\frac{1}{2}(w_{v}\otimes u^{*}-u\otimes w_{v}^{*})=\delta^{\triangledown}i_{ V}R^{\triangledown}=-\delta^{\triangledown}R^{\triangledown}(V)=\frac{1}{2}( \nabla_{V}u\otimes u^{*}-u\otimes(\nabla_{V}u)^{*}).\] Hence, \(w_{v}=\nabla_{V}u\) satisfies \((B_{v},w_{v})\in\mathcal{C}\). If we define a quadratic form \(\mathscr{L}^{(\nabla,u)}\) on \(\mathcal{C}\) by setting \[\mathscr{L}^{(\nabla,u)}(B,w)=\int_{M}\langle\mathscr{S}^{(\nabla,u)}(B,w),( B,w)\rangle dV, \tag{18}\] then \(\left.\frac{d^{2}}{dt^{2}}\mathscr{S}(\nabla^{t},u^{t})\right|_{t=0}=\mathscr{ L}^{(\nabla,u)}(B,w)\) for any \((B,w)\in\mathcal{C}\). \(Q(v_{1},v_{2})=\mathscr{L}^{(\nabla,u)}(B_{v_{1}},w_{v_{2}})\) can be viewed as a quadratic form. **Lemma 3.2**: _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair on \(S^{n}\). Then for any \(v\in\mathbb{R}^{n+1}\), we have_ \[\mathscr{L}^{(\nabla,u)}(i_{V}R^{\triangledown},\nabla_{V}u)=\int_{S^{n}}(4- n)|i_{V}R^{\triangledown}|^{2}+(2-n)|\nabla_{V}u|^{2}-2f_{v}(\langle i_{V}R^{ \triangledown},\delta^{\triangledown}R^{\triangledown}\rangle+\langle \delta^{\triangledown}d^{\triangledown}u,\nabla_{V}u\rangle)dV.\] **Proof** Denote \(\mathscr{S}^{(\nabla,u)}=(\mathscr{S}_{1},\mathscr{S}_{2})\). For any \(x\in S^{n}\), let \(\{e_{i}\mid 1\leq i\leq n\}\) be a local orthonormal frame of \(S^{n}\) near \(x\). 
According to (9), we have \[\Delta^{\triangledown}i_{V}R^{\triangledown}(e_{k})=\nabla^{*}\nabla i_{V}R ^{\triangledown}(e_{k})+(n-1)R^{\triangledown}(V,e_{k})+\mathfrak{R}^{ \triangledown}(i_{V}R^{\triangledown})(e_{k}).\] Since \(De_{k}(x)=0\), at \(x\) \[\nabla^{*}\nabla i_{V}R^{\triangledown}(e_{k}) =-\sum_{j}\nabla_{e_{j}}\nabla_{e_{j}}i_{V}R^{\triangledown}(e_{k})\] \[=-\sum_{j}\nabla_{e_{j}}(\nabla_{e_{j}}i_{V}R^{\triangledown}(e_ {k}))\] \[=-\sum_{j}\nabla_{e_{j}}(\nabla_{e_{j}}(R^{\triangledown}(V,e_{k} ))-R^{\triangledown}(V,D_{e_{j}}e_{k}))\] \[=-\sum_{j}\nabla_{e_{j}}(\nabla_{e_{j}}R^{\triangledown}(V,e_{k })-f_{v}R^{\triangledown}(e_{j},e_{k}))\] \[=\nabla^{*}\nabla R^{\triangledown}(V,e_{k})-2f_{v}\delta^{ \triangledown}R^{\triangledown}(e_{k})+R^{\triangledown}(V,e_{k})\] \[=\Delta^{\triangledown}R^{\triangledown}(V,e_{k})+(5-2n)R^{ \triangledown}(V,e_{k})-\mathfrak{R}^{\triangledown}(R^{\triangledown})(V,e_ {k})-2f_{v}\delta^{\triangledown}R^{\triangledown}(e_{k}),\] where we have used (1) of Proposition 3.1. By \(2\mathfrak{R}^{\triangledown}(i_{V}R^{\triangledown})(e_{k})-\mathfrak{R}^{ \triangledown}(R^{\triangledown})(V,e_{k})=0\), we have \[\Delta^{\triangledown}i_{V}R^{\triangledown}(e_{k})+\mathfrak{R}^{\triangledown }(i_{V}R^{\triangledown})(e_{k})=\Delta^{\triangledown}R^{\triangledown}(V,e_ {k})+(4-n)R^{\triangledown}(V,e_{k})-2f_{v}\delta^{\triangledown}R^{\triangledown }(e_{k}).\] Similarly, by the definition of curvature and \([X,Y]=D_{X}Y-D_{Y}X\), we have \[\delta^{\triangledown}d^{\triangledown}\nabla_{V}u= -\sum_{j}\nabla_{e_{j}}\nabla_{e_{j}}\nabla_{V}u\] \[= -\sum_{j}\nabla_{e_{j}}(R^{\triangledown}(e_{j},V)u+\nabla_{V} \nabla_{e_{j}}u+\nabla_{[e_{j},V]}u)\] \[= \delta^{\triangledown}R^{\triangledown}(V)u-2\mathfrak{R}^{ \triangledown}(d^{\triangledown}u)(V)\] \[-\sum_{j}\nabla_{V}\nabla_{e_{j}}\nabla_{e_{j}}u-2\nabla_{[e_{j },V]}\nabla_{e_{j}}u-[R^{\triangledown}(e_{j},[e_{j},V]),u]-\nabla_{[e_{j},[e _{j},V]]}u.\] At \(x\), we have \[R^{\triangledown}(e_{j},[e_{j},V])=0,\] \[-\sum_{j}\nabla_{V}\nabla_{e_{j}}\nabla_{e_{j}}u =\nabla_{V}\delta^{\triangledown}d^{\triangledown}u-\sum_{j} \nabla_{V}\nabla_{D_{e_{j}}e_{j}}u\] \[=\nabla_{V}\delta^{\triangledown}d^{\triangledown}u-\sum_{j} \Big{(}R^{\triangledown}(V,D_{e_{j}}e_{j})u+\nabla_{D_{e_{j}}e_{j}}\nabla_{V}u +\nabla_{[V,D_{e_{j}}e_{j}]}u\Big{)}\] \[=\nabla_{V}\delta^{\triangledown}d^{\triangledown}u-\sum_{j} \nabla_{[V,D_{e_{j}}e_{j}]}u\] \[=\nabla_{V}\delta^{\triangledown}d^{\triangledown}u-\sum_{j} \nabla_{D_{V}D_{e_{j}}e_{j}}u,\] \[-2\sum_{j}\nabla_{[e_{j},V]}\nabla_{e_{j}}u=-2f_{v}\delta^{\triangledown}d^{ \triangledown}u,\] and \[-\sum_{j}[e_{j},[e_{j},V]]= -\sum_{j}D_{e_{j}}[e_{j},V]\] \[= V+\sum_{j}D_{e_{j}}D_{V}e_{j}\] \[= V+\sum_{j}\big{(}R(e_{j},V)e_{j}+D_{V}D_{e_{j}}e_{j}\big{)}\] \[= (2-n)V+\sum_{j}D_{V}D_{e_{j}}e_{j},\] where we use \(\sum_{j}R(e_{j},X)e_{j}=(1-n)X\) on \(S^{n}\). 
Hence \[\delta^{\triangledown}d^{\triangledown}\nabla_{V}u=\delta^{\triangledown} R^{\triangledown}(V)u-2\mathfrak{R}^{\triangledown}(d^{\triangledown}u)(V)+ \nabla_{V}\delta^{\triangledown}d^{\triangledown}u+(2-n)\nabla_{V}u-2f_{v} \delta^{\triangledown}d^{\triangledown}u.\] By equation (2), at \(x\) we have \[\sum_{k}\langle\nabla_{V}(\delta^{\triangledown}R^{\triangledown }(e_{k})),R^{\triangledown}(V,e_{k})\rangle\] \[= \sum_{k}\langle\frac{1}{2}\nabla_{V}(u\otimes(\nabla_{e_{k}}u)^{ *}-\nabla_{e_{k}}u\otimes u^{*}),R^{\triangledown}(V,e_{k})\rangle\] \[= \sum_{k}\langle\nabla_{V}u,R^{\triangledown}(V,e_{k})\nabla_{e_{ k}}u\rangle+\langle u,R^{\triangledown}(V,e_{k})\cdot\nabla_{V}\nabla_{e_{k}}u\rangle\] \[= -\langle i_{V}R^{\triangledown},\frac{1}{2}(d^{\triangledown}u \otimes(\nabla_{V}u)^{*}-\nabla_{V}u\otimes(d^{\triangledown}u)^{*})\rangle- \sum_{k}\langle R^{\triangledown}(V,e_{k})u,\nabla_{V}\nabla_{e_{k}}u\rangle.\] \[\sum_{k}\langle\nabla_{e_{k}}(\delta^{\triangledown}R^{\triangledown}(V)),R^{ \triangledown}(V,e_{k})\rangle=\langle i_{V}R^{\triangledown},\frac{1}{2}(d^{ \triangledown}u\otimes(\nabla_{V}u)^{*}-\nabla_{V}u\otimes(d^{\triangledown}u)^ {*})\rangle-\sum_{k}\langle R^{\triangledown}(V,e_{k})u,\nabla_{e_{k}}\nabla_{ V}u\rangle.\] Note that at \(x\), by the equation (2) we have \[\sum_{k}\langle\nabla_{[V,e_{k}]}u,R^{\triangledown}(V,e_{k})u\rangle=\sum_{k }f_{v}\langle u,R^{\triangledown}(e_{k},V)\nabla_{e_{k}}u\rangle=-f_{v} \langle i_{V}R^{\triangledown},\delta^{\triangledown}R^{\triangledown}\rangle.\] Hence by \(R^{\triangledown}(V,e_{k})u=\nabla_{V}\nabla_{e_{k}}u-\nabla_{e_{k}}\nabla_{ V}u-\nabla_{[V,e_{k}]}u\), we have \[\langle i_{V}\Delta^{\triangledown}R^{\triangledown},i_{V}R^{ \triangledown}\rangle\] \[= \sum_{k}\langle\nabla_{V}(\delta^{\triangledown}R^{\triangledown }(e_{k}))-\delta^{\triangledown}R^{\triangledown}(D_{V}e_{k})-\nabla_{e_{k} }(\delta^{\triangledown}R^{\triangledown}(V))+\delta^{\triangledown}R^{ \triangledown}(D_{e_{k}}V),R^{\triangledown}(V,e_{k})\rangle\] \[= -\langle i_{V}R^{\triangledown},(d^{\triangledown}u\otimes( \nabla_{V}u)^{*}-\nabla_{V}u\otimes(d^{\triangledown}u)^{*})\rangle-\langle i _{V}R^{\triangledown},\frac{1}{2}((i_{V}R^{\triangledown}\cdot u)\otimes u^{ *}-u\otimes(i_{V}R^{\triangledown}\cdot u)^{*})\rangle.\] And thus \[\langle\mathscr{S}_{1}(i_{V}R^{\triangledown},\nabla_{V}u),i_{V}R^{\triangledown }\rangle=(4-n)|i_{V}R^{\triangledown}|^{2}-2f_{v}\langle\delta^{\triangledown }R^{\triangledown},i_{V}R^{\triangledown}\rangle.\] On the other hand, by equation (2) and note that \(-2\mathfrak{R}^{\triangledown}(d^{\triangledown}u)(V)-2i_{V}R^{\triangledown }_{\cup}d^{\triangledown}u=0\), we have \[\mathscr{S}_{2}(i_{V}R^{\triangledown},\nabla_{V}u)=(2-n)\nabla_{V}u-2f_{v} \delta^{\triangledown}d^{\triangledown}u.\] Then we finish the proof. \(\square\) **Lemma 3.3**: _Let \(\{v_{i}\mid 1\leq i\leq n+1\}\in\mathbb{R}^{n+1}\) be an orthogonal vector in \(\mathbb{R}^{n+1}\) and the corresponding Killing field is \(V_{i}\). Then_ \[\sum_{i=1}^{n+1}\mathscr{L}^{(\nabla,u)}(i_{V_{i}}R^{\triangledown},\nabla_{ V_{i}}u)=2(4-n)\int_{S^{n}}|R^{\triangledown}|^{2}dx+(2-n)\int_{M}|d^{ \triangledown}u|^{2}dV. 
\tag{19}\] **Proof** Assume \(v_{i}\) is the vector in \(\mathbb{R}^{n+1}\) such that the \(i\)-th component is \(1\) and the others are \(0\), then at \(x=(x^{1},...,x^{n+1})\), we have \(f_{v_{i}}(x)=x^{i}\) and \[\sum_{i}f_{v_{i}}(x)V_{i}=\sum_{i}x^{i}\frac{\partial}{\partial x^{i}}-\sum_{i, j}(x^{i})^{2}x^{j}\frac{\partial}{\partial x^{j}}=0.\] Then according to Lemma 3.2, \[\sum_{i}\mathscr{L}^{(\nabla,u)}(i_{V_{i}}R^{\triangledown},\nabla_{V_{i}}u)= \sum_{i}\int_{S^{n}}q\left(v_{i},v_{i}\right)dV\] where at any \(x\in S^{n}\), \(q\) is a quadratic form on \(\mathbb{R}^{n+1}\) defined by \[q(v,w)=(4-n)\langle i_{V}R^{\triangledown},i_{W}R^{\triangledown}\rangle+(2- n)\langle\nabla_{V}u,\nabla_{W}u\rangle \tag{20}\] and \(V,W\in TS^{n}\) with respect to \(v,w\in\mathbb{R}^{n+1}\) defined in Proposition 3.1. Now, we compute the value \(q\left(v,w\right)\) at \(x\in S^{n}\). Since \(q\) is a quadratic form on \(\mathbb{R}^{n+1}\), the trace \(\sum\limits_{i=1}q\left(v_{i},v_{i}\right)\) is independent of the choice of basis of \(\mathbb{R}^{n+1}\). We assume that \(\{e_{j}\}_{j=1}^{n}\) is any orthonormal basis for \(T_{x}S^{n}\). Then \(\{e_{0}=x,e_{1},e_{2},\cdots,e_{n}\}\) forms an orthonormal basis for \(\mathbb{R}^{n+1}\). In particular, at \(x\in S^{n}\), we have \[\sum_{i=1}^{n+1}q(v_{i},v_{i})=\sum_{j=0}^{n}q(e_{j},e_{j}).\] From Proposition 3.1, at \(x\), the Killing vector fields \(\varepsilon_{0},\varepsilon_{1},\cdots,\varepsilon_{n}\) with respect to \(e_{0},e_{1},e_{2},\cdots,e_{n}\) are \[\varepsilon_{0}=0,\varepsilon_{1}=e_{1},\cdots,\varepsilon_{n}=e_{n}.\] Since \(\left\{e_{j}\right\}_{j=1}^{n}\) forms an orthonormal basis for \(T_{x}S^{n}\), we have \[\sum_{j=1}^{n}\left|i_{e_{j}}R^{\nabla}\right|=2\left|R^{\nabla}\right|^{2}.\] Hence, at \(x\), \[\sum_{i=1}^{n+1}q(v_{i},v_{i})=\sum_{j=1}^{n}q(e_{j},e_{j})=(4-n)\sum_{j=1}^{n} \left|i_{e_{j}}R^{\nabla}\right|^{2}+(2-n)\sum_{j=1}^{n}\left|\nabla_{e_{j}}u \right|^{2}=2\left(4-n\right)\left|R^{\nabla}\right|^{2}+(2-n)\left|d^{\nabla} u\right|^{2}\] and we complete the proof of this lemma. \(\square\) From the lemma above, we can immediately prove Theorem 1.1 on \(S^{n}(n\geq 5)\). **Theorem 3.4**: _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair on \(S^{n}\) for \(n\geq 5\), then \((\nabla,u)\) is weakly stable if and only if \(R^{\triangledown}=0\), \(d^{\triangledown}u=0\) and \(|u|=1\)._ * If \((\nabla,u)\) satisfies \(R^{\triangledown}=0\), \(d^{\triangledown}u=0\) and \(|u|=1\), then \(\mathscr{A}(\nabla,u)=0\) is a minimum of \(\mathscr{A}\) and \((\nabla,u)\) is obviously weakly stable. On the other hand, assume \((\nabla,u)\) is weakly stable, then for any \(v\in\mathbb{R}^{n+1}\), \(\mathscr{L}^{(\nabla,u)}(i_{V}R^{\triangledown},\nabla_{V}u)\geq 0\). By Lemma 3.3, we have \(R^{\triangledown}=0\) and \(d^{\triangledown}u=0\). From Equation (2), we have \(u=0\) or \(|u|=1\). However, \((\nabla,0)\) can not be weakly stable due to the expression of \(\mathscr{A}\). In fact, we choose a nonzero section \(w\in\Omega^{0}(E)\) such that \(d^{\triangledown}w=0\) and perturb \((\nabla,u)\) along \((0,w)\in\mathcal{C}\). Then \[\mathscr{L}^{(\nabla,0)}\left(0,w\right)=-\frac{\lambda}{2}\int_{S^{n}}|w|^{2 }dV<0\] which contradicts with the weakly stable condition. Thus \(|u|=1\). \(\square\) The case when \(n=4\) is similar as the proof above. First, we can get \(d^{\triangledown}u=0\). Then the Yang-Mills condition can be obtained according to Equation (2). 
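For completeness, we record the short verification of property (1) in Proposition 3.1, which was used repeatedly above. Writing \(\bar{D}\) for the flat connection of \(\mathbb{R}^{n+1}\) and using that \(D_{X}V\) is the tangential projection of \(\bar{D}_{X}V\) at \(x\in S^{n}\), we have for any \(X\in T_{x}S^{n}\)
\[
D_{X}V=\bar{D}_{X}V-\langle\bar{D}_{X}V,x\rangle x=-\langle v,X\rangle x-(v\cdot x)X+\langle v,X\rangle x=-f_{v}X,
\]
and property (2) follows by differentiating once more and taking the trace.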
## 4 The Higgs fields of \(\Omega^{0}(\mathfrak{g}_{E})\) In this section, we consider the Yang-Mills-Higgs functional \[\mathscr{A}(\nabla,\Phi)=\frac{1}{2}\int_{M}|R^{\triangledown}|^{2}+|d^{ \triangledown}\Phi|^{2}+\frac{\lambda}{4}(1-|\Phi|^{2})^{2}dV, \tag{21}\] where \(\Phi\in\Omega^{0}(\mathfrak{g}_{E})\). Assume that \((\nabla,\Phi)\) is a Yang-Mills-Higgs pair satisfying the equation (4) and \((\nabla^{t},\Phi^{t})\) is a curve on \[\tilde{\mathcal{A}}=\{(\tilde{\nabla},\tilde{\Phi})\mid\tilde{\nabla}\text{ is a connection of }E,\ \tilde{\Phi}\in\Omega^{0}(\mathfrak{g}_{E})\}\] such that \(\nabla^{0}=\nabla\) and \(\Phi^{0}=\Phi\). If we assume \(\frac{d}{dt}(\nabla^{t},\Phi^{t})\mid_{t=0}=(B,\phi)\), then the second variation of \(\mathscr{A}\) is \[\begin{split}\frac{d^{2}}{dt^{2}}\mathscr{A}(\nabla^{t},\Phi^{t })\bigg{|}_{t=0}&=\int_{M}\langle\delta^{\triangledown}d^{ \triangledown}B+\mathfrak{R}^{\triangledown}(B)-[[B,\Phi],\Phi],B\rangle+2 \langle d^{\triangledown}\Phi,[B,\phi]\rangle\\ &\qquad+2\langle[B,\Phi],d^{\triangledown}\phi\rangle+\langle \delta^{\triangledown}d^{\triangledown}\phi+\lambda\langle\Phi,\phi\rangle \Phi-\frac{\lambda}{2}(1-|\Phi|^{2})\phi,\phi\rangle dV.\end{split} \tag{22}\] For any gauge transformation \(g\in\mathcal{G}\), \(g\) acts on \((\nabla,\Phi)\) such that \[g\cdot(\nabla,\Phi)=(\nabla^{g},\Phi^{g})=(g\circ\nabla\circ g^{-1},g\circ \Phi\circ g^{-1}).\] Then for any \(\sigma\in\Omega^{0}(\mathfrak{g}_{E})\), assume \(g_{t}=exp(t\sigma)\) is a family of gauge transformations. The variation direction of \((\nabla,u)\) along \(\sigma\) is \[\frac{d}{dt}(\nabla^{g_{t}},\Phi^{g_{t}})\mid_{t=0}=(-d^{\triangledown}\sigma,[ \sigma,\Phi]).\] Similar to the case when the Higgs fields take values in \(\Omega^{0}(E)\), define \[\zeta:\Omega^{0}(\mathfrak{g}_{E}) \rightarrow\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}( \mathfrak{g}_{E}),\] \[\sigma \mapsto(-d^{\triangledown}\sigma,[\sigma,\Phi]),\] and \(\mathcal{C}=Im(\zeta)^{\perp}\). By direct calculation, we have \[\mathcal{C}=\{(B,\phi)\in\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}( \mathfrak{g}_{E})\mid\delta^{\triangledown}B=[\Phi,\phi]\}. \tag{23}\] On \(\mathcal{C}\), we have \[2\int_{M}\langle[B,\Phi],d^{\triangledown}\phi\rangle dV =2\int_{M}\langle[\Phi,d^{\triangledown}\phi],B\rangle dV=2\int_{ M}\langle d^{\triangledown}\left[\Phi,\phi\right]-\left[d^{\triangledown}\Phi, \phi\right],B\rangle dV\] \[=2\int_{M}\langle\delta^{\triangledown}B,[\Phi,\phi]\rangle- \langle B,[d^{\triangledown}\Phi,\phi]\rangle dV\] \[=\int_{M}\langle[[\Phi,\phi],\Phi],\phi\rangle+\langle d^{ \triangledown}\delta^{\triangledown}B,B\rangle-2\langle B,[d^{\triangledown} \Phi,\phi]\rangle dV.\] Then if \((B,\phi)\in\mathcal{C}\), the second variation formula of \(\mathscr{A}\) can be written as \[\begin{split}\frac{d^{2}}{dt^{2}}\mathscr{A}(\nabla^{t},\Phi^{t}) \bigg{|}_{t=0}=\int_{M}\langle\Delta^{\triangledown}B+\mathfrak{R}^{ \triangledown}(B)-[[B,\Phi],\Phi]-2[d^{\triangledown}\Phi,\phi],B\rangle\\ \hskip 113.811024pt+\langle\delta^{\triangledown}d^{\triangledown }\phi+[[\Phi,\phi],\Phi]+2d^{\triangledown}\Phi_{\vdash}B+\lambda\langle\Phi,\phi\rangle\Phi-\frac{\lambda}{2}(1-|\Phi|^{2})\phi,\phi\rangle dV,\end{split} \tag{24}\] where \(d^{\triangledown}\Phi_{\vdash}B=\sum\limits_{i=1}^{n}[\nabla_{e_{i}}\Phi,B(e _{i})]\). 
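Before introducing the corresponding operator, we record the direct calculation behind (23): it is nothing but integration by parts together with the ad-invariance of the inner product on \(\mathfrak{g}_{E}\).

```latex
% For every \sigma\in\Omega^{0}(\mathfrak{g}_{E}) and every pair (B,\phi):
\int_{M}\bigl\langle(B,\phi),\zeta(\sigma)\bigr\rangle\,dV
  =\int_{M}\langle B,-d^{\triangledown}\sigma\rangle+\langle\phi,[\sigma,\Phi]\rangle\,dV
  =\int_{M}\bigl\langle[\Phi,\phi]-\delta^{\triangledown}B,\ \sigma\bigr\rangle\,dV,
% which vanishes for all \sigma exactly when \delta^{\triangledown}B=[\Phi,\phi].
```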
Hence, we can define an operator \[\mathscr{S}^{(\nabla,\Phi)}:\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}( \mathfrak{g}_{E})\rightarrow\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}( \mathfrak{g}_{E})\] where \[\begin{split}\mathscr{S}^{(\nabla,\Phi)}(B,\phi)&=( \Delta^{\triangledown}B+\mathfrak{R}^{\triangledown}(B)-[[B,\Phi],\Phi]-2[d^ {\triangledown}\Phi,\phi],\\ &\delta^{\triangledown}d^{\triangledown}\phi+[[\Phi,\phi], \Phi]+2d^{\triangledown}\Phi_{\vdash}B+\lambda\langle\Phi,\phi\rangle\Phi- \frac{\lambda}{2}(1-|\Phi|^{2})\phi).\end{split} \tag{25}\] **Lemma 4.1**: \(\mathscr{S}^{(\nabla,\Phi)}(\mathcal{C})\subset\mathcal{C}\)_._ **Proof** Denote \(\mathscr{S}^{(\nabla,\Phi)}=(\mathscr{S}_{1},\mathscr{S}_{2})\). Note that \(\mathscr{S}^{(\nabla,\Phi)}\) is self-adjoint on \(\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}(\mathfrak{g}_{E})\), we only need to prove that for any \((B,\phi)\in\mathcal{C}=\mathrm{Im}^{\perp}\left(\zeta\right)\) and \(\sigma\in\Omega^{0}(\mathfrak{g}_{E})\), we have \[\int_{M}\langle B,\mathscr{S}_{1}(\zeta(\sigma))\rangle+\langle\phi,\mathscr{ S}_{2}(\zeta(\sigma))\rangle dV=0.\] Since \(\zeta\left(\sigma\right)=\left(-d^{\triangledown}\sigma,[\sigma,\Phi]\right)\), we have \[\begin{split}\mathscr{S}_{1}\left(\zeta\left(\sigma\right) \right)&=-\triangle^{\triangledown}\left(d^{\triangledown} \sigma\right)-\mathfrak{R}^{\triangledown}\left(d^{\triangledown}\sigma \right)+\left[\left[d^{\triangledown}\sigma,\Phi\right],\Phi\right]-2\left[d^ {\triangledown}\Phi,[\sigma,\Phi]\right],\\ \mathscr{S}_{2}\left(\zeta\left(\sigma\right)\right)& =\delta^{\triangledown}d^{\triangledown}\left[\sigma,\Phi \right]+\left[\left[\Phi,[\sigma,\Phi]\right],\Phi\right]-2\sum\limits_{i=1}^{n }\left[\nabla_{e_{i}}\Phi,\nabla_{e_{i}}\sigma\right]+\lambda\langle\Phi,[ \sigma,\Phi]\rangle\Phi-\frac{\lambda}{2}\left(1-|\Phi|^{2}\right)\left[ \sigma,\Phi\right]\\ &=\delta^{\triangledown}d^{\triangledown}\left[\sigma,\Phi \right]+\left[\left[\Phi,[\sigma,\Phi]\right],\Phi\right]-2d^{\triangledown} \Phi_{\vdash}d^{\triangledown}\sigma-\frac{\lambda}{2}\left(1-|\Phi|^{2} \right)\left[\sigma,\Phi\right]\end{split}\] where we use the fact that \(\langle\Phi,\left[\sigma,\Phi\right]\rangle=0\) in the last equality. Note that \[\delta^{\nabla}d^{\gamma}d^{\gamma}\sigma=\delta^{\nabla}[R^{\gamma},\sigma]=[ \delta^{\gamma}R^{\gamma},\sigma]-\mathfrak{R}^{\gamma}(d^{\gamma}\sigma),\] we have \[\Delta^{\gamma}d^{\gamma}\sigma+\mathfrak{R}^{\gamma}(d^{\gamma}\sigma)=d^{ \gamma}\delta^{\gamma}d^{\gamma}\sigma+[\delta^{\gamma}R^{\gamma},\sigma].\] For the third term in \(\mathscr{S}_{1}\left(\zeta\left(\sigma\right)\right)\), we have \[[[d^{\gamma}\sigma,\Phi],\Phi] =[d^{\gamma}[\sigma,\Phi],\Phi]-[[\sigma,d^{\gamma}\Phi],\Phi]\] \[=d^{\gamma}[[\sigma,\Phi],\Phi]-[[\sigma,\Phi],d^{\gamma}\Phi]-[ [\sigma,d^{\gamma}\Phi],\Phi]\] \[=d^{\gamma}[[\sigma,\Phi],\Phi]+2[d^{\gamma}\Phi,[\sigma,\Phi]]+[ [d^{\gamma}\Phi,\Phi],\sigma]\] \[=d^{\gamma}[[\sigma,\Phi],\Phi]+2[d^{\gamma}\Phi,[\sigma,\Phi]]+[ \delta^{\nabla}R^{\nabla},\sigma],\] where we use the Jacobi identity in the third equality to \(\left[\left[\sigma,d^{\nabla}\Phi\right],\Phi\right]\) and apply Equation (4) in the last equality. Combining the equations above, we have \[\mathscr{S}_{1}(\zeta(\sigma))=d^{\gamma}(-\delta^{\gamma}d^{\gamma}\sigma+[[ \sigma,\Phi],\Phi]).\] In the following we deal with \(\mathscr{S}_{2}(\zeta(\sigma))\). 
By direct computation, we have \[\delta^{\gamma}d^{\gamma}[\sigma,\Phi]=[\sigma,\delta^{\gamma}d^{\gamma}\Phi]+2 d^{\gamma}\Phi_{\vdash}d^{\gamma}\sigma+[\delta^{\gamma}d^{\gamma}\sigma,\Phi].\] Inserting it to \(\mathscr{S}_{2}(\zeta(\sigma))\) and applying the second equation in Equation (4), we obtain \[\mathscr{S}_{2}(\zeta(\sigma))=[\delta^{\gamma}d^{\gamma}\sigma+[[\Phi,\sigma ],\Phi],\Phi].\] Hence, \[\int_{M}\langle B,\mathscr{S}_{1}(\zeta(\sigma))\rangle+\langle \phi,\mathscr{S}_{2}(\zeta(\sigma))\rangle dV\] \[= \int_{M}\langle B,d^{\gamma}(-\delta^{\gamma}d^{\gamma}\sigma+[[ \sigma,\Phi],\Phi])\rangle+\langle\phi,[\delta^{\gamma}d^{\gamma}\sigma+[[ \Phi,\sigma],\Phi],\Phi]\rangle dV\] \[= \int_{M}\langle[\Phi,\phi],-\delta^{\gamma}d^{\gamma}\sigma+[[ \sigma,\Phi],\Phi]\rangle+\langle\phi,[\delta^{\gamma}d^{\gamma}\sigma+[[\Phi, \sigma],\Phi],\Phi]\rangle dV=0,\] where we use the condition \((B,\Phi)\in\mathcal{C}\), i.e. \(\delta^{\nabla}B=[\Phi,\phi]\) in the second equality. Hence \(\mathscr{S}^{(\nabla,\Phi)}\mid_{\mathcal{C}}\colon\mathcal{C}\to\mathcal{C}\) is an elliptic self-adjoint operator. If we assume the minimum eigenvalue of \(\mathscr{S}^{(\nabla,u)}\mid_{\mathcal{C}}\) is \(\lambda_{1}\), then \((\nabla,u)\) is called weakly stable if \(\lambda_{1}\geq 0\) and stable if \(\lambda_{1}>0\). For any \((B,\phi)\in\mathcal{C}\), define \[\mathscr{L}^{(\nabla,\Phi)}(B,\phi)=\int_{M}\langle\mathscr{S}^{(\nabla,\Phi) }(B,\phi),(B,\phi)\rangle dV, \tag{26}\] Then \(\frac{d^{2}}{dt^{2}}\mathscr{S}(\nabla^{t},u^{t})\mid_{t=0}=\mathscr{L}^{( \nabla,u)}(B,\phi)\). Now assume \(M=S^{n}\) and \((\nabla,\Phi)\) is a Yang-Mills-Higgs pairs satisfying (4). For any \(v\in\mathbb{R}^{n+1}\), let \(f_{v}(x)=v\cdot x\in C^{\infty}(S^{n})\) and \(V=grad(f_{v})\) be the Killing field (17) on \(S^{n}\). Similar to the case when the Higgs fields in \(\Omega^{0}(E)\), assume \(B_{v}=i_{V}R^{\gamma}\) and \(\phi_{v}=\nabla_{V}\Phi\). Then \((B_{v},\phi_{v})\in\mathcal{C}\). Similarly, we have **Lemma 4.2**: _Assume \((\nabla,\Phi)\) is a Yang-Mills-Higgs pair on \(S^{n}\). Then for any \(v\in\mathbb{R}^{n+1}\), we have_ \[\mathscr{L}^{(\nabla,\Phi)}(i_{V}R^{\gamma},\nabla_{V}\Phi)=\int_{S^{n}}(4-n) |i_{V}R^{\gamma}|^{2}+(2-n)|\nabla_{V}\Phi|^{2}-2f_{v}(\langle\delta^{\gamma }R^{\gamma},i_{V}R^{\gamma}\rangle+\langle\delta^{\gamma}d^{\gamma}\Phi, \nabla_{V}\Phi\rangle)dV. \tag{27}\] **Proof** Similar to the proof of Lemma 3.2, we have \[\Delta^{\triangledown}i_{V}R^{\triangledown}+\mathfrak{R}^{\triangledown}(i_{V} R^{\triangledown})=i_{V}\Delta^{\triangledown}R^{\triangledown}+(4-n)i_{V}R^{ \triangledown}-2f_{v}\delta^{\triangledown}R^{\triangledown}\] and \[\delta^{\triangledown}d^{\triangledown}\nabla_{V}\Phi=[\delta^{\triangledown }R^{\triangledown}(V),\Phi]-2\mathfrak{R}^{\triangledown}(d^{\triangledown} \Phi)(V)+\nabla_{V}\delta^{\triangledown}d^{\triangledown}\Phi+(2-n)\nabla_{V }\Phi-2f_{v}\delta^{\triangledown}d^{\triangledown}\Phi.\] where \(\mathfrak{R}^{\triangledown}(d^{\triangledown}\Phi)(V)=\sum_{j}[R^{\triangledown }(e_{j},V),\nabla_{e_{j}}\Phi]\) and \(\{e_{1},...,e_{n}\}\) is an orthogonal basis of \(TS^{n}\). 
Furthermore, by the Equation (4), \[[\delta^{\triangledown}R^{\triangledown}(V),\Phi]=[[\nabla_{V}\Phi,\Phi], \Phi]\text{ and }\nabla_{V}\delta^{\triangledown}d^{\triangledown}\Phi=- \lambda\langle\nabla_{V}\Phi,\Phi\rangle\Phi+\frac{\lambda}{2}(1-|\Phi|^{2} )\nabla_{V}\Phi.\] Also noting that \(d^{\triangledown}\Phi\bot i_{V}R^{\triangledown}=\mathfrak{R}^{\triangledown }(d^{\triangledown}\Phi)(V)\), we have \[\mathscr{L}^{(\nabla,\Phi)}(i_{V}R^{\triangledown},\nabla_{V} \Phi)=\int_{S^{n}} \left(\langle i_{V}\Delta^{\triangledown}R^{\triangledown}+(4-n)i _{V}R^{\triangledown}-2f_{v}\delta^{\triangledown}R^{\triangledown}-[[i_{V}R^ {\triangledown},\Phi],\Phi]-2[d^{\triangledown}\Phi,\nabla_{V}\Phi],i_{V}R^{ \triangledown}\right)\] \[+\langle(2-n)\nabla_{V}\Phi-2f_{v}\delta^{\triangledown}d^{ \triangledown}\Phi,\nabla_{V}\Phi\rangle\rangle dV.\] By Equation (4), we have at \(x\in S^{n}\) \[i_{V}\Delta^{\triangledown}R^{\triangledown}(e_{k})= d^{\triangledown}[d^{\triangledown}\Phi,\Phi](V,e_{k})\] \[= \nabla_{V}[\nabla_{e_{k}}\Phi,\Phi]-\nabla_{e_{k}}[\nabla_{V}\Phi,\Phi]-\left[\nabla_{[V,e_{k}]}\Phi,\Phi\right]\] \[= [\nabla_{V}\nabla_{e_{k}}\Phi,\Phi]+[\nabla_{e_{k}}\Phi,\nabla_{V} \Phi]-[\nabla_{e_{k}}\nabla_{V}\Phi,\Phi]-[\nabla_{V}\Phi,\nabla_{e_{k}}\Phi]- \left[\nabla_{[V,e_{k}]}\Phi,\Phi\right]\] \[= [[R^{\triangledown}(V,e_{k}),\Phi],\Phi]+2[\nabla_{e_{k}}\Phi, \nabla_{V}\Phi]\] \[= \left[\left[i_{V}R^{\triangledown},\Phi\right],\Phi\right](e_{k}) +2\left[d^{\triangledown}\Phi,\nabla_{V}\Phi\right](e_{k})\,.\] Thus the second variation of \(\mathscr{A}\) at \((\nabla,\Phi)\) along \(\left(i_{V}R^{\triangledown},\nabla_{V}\Phi\right)\) is \[\mathscr{L}^{(\nabla,\Phi)}(i_{V}R^{\triangledown},\nabla_{V}\Phi)=\int_{S^{ n}}(4-n)|i_{V}R^{\triangledown}|^{2}+(2-n)|\nabla_{V}\Phi|^{2}-2f_{v}(\langle \delta^{\triangledown}R^{\triangledown},i_{V}R^{\triangledown}\rangle+ \langle\delta^{\triangledown}d^{\triangledown}\Phi,\nabla_{V}\Phi\rangle)dV.\] We finish the proof. \(\square\) For an orthogonal basis \(\{v_{i}\mid 1\leq i\leq n+1\}\) of \(\mathbb{R}^{n+1}\), we assume \(\{V_{i}\}\) is the corresponding Killing fields. We have proved that \(\sum_{i}f_{v_{i}}V_{i}=0\). Hence by adding \(\mathscr{L}^{(\nabla,\Phi)}(i_{V_{i}}R^{\triangledown},\nabla_{V_{i}}\Phi)\) together, we can prove the following lemma. **Lemma 4.3**: _Assume \(\{v_{i}\}\) and \(\{V_{i}\}\) defined as above, then we have_ \[\sum_{i}\mathscr{L}^{(\nabla,\Phi)}(i_{V_{i}}R^{\triangledown},\nabla_{V_{i}} \Phi)=\int_{S^{n}}2(4-n)|R^{\triangledown}|^{2}+(2-n)|d^{\triangledown}\Phi|^{ 2}dV.\] From the lemma above, we can immediately prove the stability theorem. **Theorem 4.4**: _Assume \((\nabla,\Phi)\) is a weakly stable Yang-Mills-Higgs pair on \(S^{n}\) for \(n\geq 5\), then \(R^{\triangledown}=0\), \(d^{\triangledown}\Phi=0\) and \(|\Phi|\equiv 1\). And if \(n=4\), then \(d^{\triangledown}\Phi=0\), \(|\Phi|\equiv 1\) and \(R^{\triangledown}\) is a Yang-Mills connection._ ## 5 The energy identity for a sequence of Yang-Mills-Higgs pairs In this section, we assume \(M\) is a four-dimensional compact Riemannian manifold. Let \(\{\nabla_{i},u_{i}\}\) be a sequence of Yang-Mills-Higgs pairs satisfying the Yang-Mills-Higgs equation (2) with uniformly bounded energy \(\mathscr{A}(\nabla_{i},u_{i})\leq K\). ### There is no energy concentration point for the Higgs field. Assume \((\nabla,u)\) satisfies the equation (2). In this part, we will give the estimate of \(\|u\|_{L^{\infty}}\) and \(\|d^{\triangledown}u\|_{L^{2}}\). 
In fact, we will show that if a sequence of Yang-Mills-Higgs \(\{(\nabla_{i},u_{i})\}\) weakly converges to \((\nabla,u)\) in \(W^{1,2}(\mathfrak{g}_{E})\times W^{1,2}(E)\), then \(u_{i}\) converges to \(u\) smoothly. **Lemma 5.1**: _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair, then \(\|u\|_{L^{\infty}}\leq 1\)._ * Assume \(|u|^{2}\) attains the maximum at \(x_{0}\in M\). Then at \(x_{0}\), we have \[0\leq\Delta|u|^{2}(x_{0})=(2\langle\delta^{\triangledown}d^{ \triangledown}u,u\rangle-2|d^{\triangledown}u|^{2})(x_{0})\leq\lambda(1-|u|^{ 2})|u|^{2}(x_{0}).\] Hence \(|u|^{2}(x_{0})\leq 1\) and this implies the lemma. \(\Box\) **Lemma 5.2**: _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair and \(\mathscr{A}(\nabla,u)\leq K\). Then for any \(B_{r}=B_{r}(x_{0})\subset M\), we have_ \[\|d^{\triangledown}u\|_{L^{2}(B_{r})}^{2}\leq C(r+r^{4}), \tag{28}\] _where \(C=C(K,\lambda)\)._ * Choose a cut off function \(\eta\) such that \(\eta\mid_{B_{r}}\equiv 1\), \(supp(\eta)\subset B_{2r}\) and \(|d\eta|\leq Cr^{-1}\). Then by the equation (2), we have \[\int_{B_{r}}|d^{\triangledown}u|^{2}dV \leq\int_{B_{2r}}\langle d^{\triangledown}u,\eta\cdot d^{ \triangledown}u\rangle dV\] \[=\int_{B_{2r}}\langle u,\delta^{\triangledown}(\eta\cdot d^{ \triangledown}u)\rangle dV\] \[=\int_{B_{2r}}\langle u,\eta\cdot\frac{\lambda}{2}(1-|u|^{2})u+d \eta\#d^{\triangledown}u\rangle dV\] \[\leq Cr^{4}+C\|d\eta\|_{L^{2}(B_{2r})}\|d^{\triangledown}u\|_{L ^{2}(B_{2r})}\] \[\leq C(r+r^{4}).\ \Box\] The above lemmas imply the following theorem immediately. **Theorem 5.3**: _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair and \(\mathscr{A}(\nabla,u)\leq K\), then for any \(x\in M\), we have_ \[\int_{B_{r}(x)}|d^{\triangledown}u|^{2}+\frac{\lambda}{4}(1-|u|^{2})^{2}dV\leq C (K,\lambda)(r+r^{4}).\] _In particular, we have_ \[\lim_{r\to 0}\int_{B_{r}(x)}|d^{\triangledown}u|^{2}+\frac{\lambda}{4}(1-|u|^{ 2})^{2}dV=0.\] ### The \(\epsilon\)-regularity The \(\epsilon\)-regularity theorem is proved by Uhlenbeck [17] for Yang-Mills connections, Struwe [15] for Yang-Mills flows and Hong-Fang [4] for Yang-Mills-Higgs flows. For completeness, we first give the proof of the \(\epsilon\)-regularity of Yang-Mills-Higgs pairs here. Let \(i(M)\) be the injective radius of \(M\). Then for any \(r<i(M)\) and \(x\in M\), there is a trivialization \(\nabla=d+A\) in \(B_{r}(x)\). **Lemma 5.4** (\(\epsilon\)-regularity): _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair. There exists \(\epsilon_{0}=\epsilon_{0}(M)\) such that if for some \(R<i(M)\) and \(x_{0}\in M\),_ \[\int_{B_{R}(x_{0})}|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2}dV\leq \epsilon_{0},\] _then for any \(r_{1}<R\), we have_ \[\sup_{B_{\frac{r_{1}}{2}}(x_{0})}(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2} )\leq Cr_{1}^{-4}\int_{B_{r_{1}}(x_{0})}|R^{\triangledown}|^{2}+|d^{\triangledown}u| ^{2}dV,\] _where \(C=C(M)\)._ * There exists \(r_{0}<r_{1}\), such that \[(r_{1}-r_{0})^{4}\sup_{D_{r_{0}}(x_{0})}(|R^{\triangledown}|^{2}+|d^{\triangledown }u|^{2})=\sup_{0\leq r\leq r_{1}}((r_{1}-r)^{4}\sup_{D_{r}(x_{0})}(|R^{\triangledown }|^{2}+|d^{\triangledown}u|^{2}))\] where \(D_{r}\left(x\right)=\partial B_{r}\left(x\right)\). Then there exists \(x_{1}\in D_{r_{0}}(x_{0})\) such that \[(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2})(x_{1})=\sup_{D_{r_{0}}(x _{0})}(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2}).\] Define \(e_{0}=(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2})(x_{1})\). We claim that \(e_{0}\leq 16(r_{1}-r_{0})^{-4}\). 
In fact, if we suppose \(e_{0}>16(r_{1}-r_{0})^{-4}\), then \(\rho_{0}=e_{0}^{-\frac{1}{2}}<\frac{r_{1}-r_{0}}{2}\). Define \[\rho:B_{1}(0)\to B_{\rho_{0}}(x_{1})\] such that \(\rho(x)=x_{1}+\rho_{0}x\), and \[\tilde{A}(x)=\rho^{*}A(x)=\rho_{0}A(x_{1}+\rho_{0}x),\] \[\tilde{u}(x)=\rho^{*}u(x)=u(x_{1}+\rho_{0}x).\] Let \(\widetilde{\nabla}=d+\tilde{A}\) be a connection of \(\rho^{*}E\) over \(B_{1}(0)\) and \(e_{\rho_{0}}=|R^{\triangledown}|^{2}+\rho_{0}^{2}|d^{\triangledown}\tilde{u} |^{2}=\rho_{0}^{4}(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2})\). Note that \(B_{\rho_{0}}(x_{1})\subset B_{\frac{r_{0}+r_{1}}{2}}(x_{0})\), we have \[1=e_{\rho_{0}}(0) \leq\sup_{B_{1}(0)}e_{\rho_{0}}\] \[=\rho_{0}^{4}\sup_{B_{\rho_{0}}(x_{1})}(|R^{\triangledown}|^{2}+ |d^{\triangledown}u|^{2})\] \[\leq\rho_{0}^{4}\sup_{B_{\frac{r_{1}+r_{0}}{2}}(x_{0})}(|R^{ \triangledown}|^{2}+|d^{\triangledown}u|^{2})\] \[=\rho_{0}^{4}(\frac{r_{1}-r_{0}}{2})^{-4}(r_{1}-\frac{r_{0}+r_{1} }{2})^{4}\sup_{B_{\frac{r_{0}+r_{1}}{2}}(x_{0})}(|R^{\triangledown}|^{2}+|d^{ \triangledown}u|^{2})\] \[\leq\rho_{0}^{4}(\frac{r_{1}-r_{0}}{2})^{-4}(r_{1}-r_{0})^{4}e_{0}\] \[=16,\] which implies \(|R^{\triangledown}|^{2}+\rho_{0}^{2}|d^{\triangledown}\tilde{u}|^{2}\leq 16\) on \(B_{1}(0)\). By equation (2) and the Bochner formula, we have \[\nabla^{*}\nabla R^{\triangledown} =d^{\triangledown}\delta^{\triangledown}R^{\triangledown}-R^ {\triangledown}\circ(Ric\wedge I+2R_{M})+R^{\triangledown}\#R^{\triangledown}\] \[=d^{\triangledown}(d^{\triangledown}\#u)-R^{\triangledown} \circ(Ric\wedge I+2R_{M})+R^{\triangledown}\#R^{\triangledown}\] \[=R^{\triangledown}\#R^{\triangledown}\#u+d^{\triangledown}u\#d^{ \triangledown}u-R^{\triangledown}(Ric\wedge I+2R_{M})+R^{\triangledown}\#R^{ \triangledown},\] and \[\nabla^{*}\nabla d^{\triangledown}u =d^{\triangledown}\delta^{\triangledown}d^{\triangledown}u+ \delta^{\triangledown}d^{\triangledown}d^{\triangledown}u-d^{\triangledown} u\circ Ric+R^{\triangledown}\#d^{\triangledown}u\] \[=d^{\triangledown}(\frac{\lambda}{2}(1-|u|^{2})u)+\delta^{ \triangledown}(R^{\triangledown}\cdot u)-d^{\triangledown}u\circ Ric+R^{ \triangledown}\#d^{\triangledown}u\] \[=d^{\triangledown}u\#u\#u+R^{\triangledown}\#d^{\triangledown}u -d^{\triangledown}u\circ Ric.\] Hence \[\Delta e_{\rho_{0}} =\rho_{0}^{6}\Delta(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2})\] \[=2\rho_{0}^{6}(\langle\nabla^{*}\nabla R^{\triangledown},R^{ \triangledown})-|\nabla R^{\triangledown}|^{2}+\langle\nabla^{*}\nabla d^{ \triangledown}u,d^{\triangledown}u\rangle-|\nabla d^{\triangledown}u|^{2})\] \[\leq C\rho_{0}^{6}(|R^{\triangledown}|^{3}+|R^{\triangledown} ||d^{\triangledown}u|^{2}+|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2})\] \[\leq C(|R^{\triangledown}|^{3}+\rho_{0}^{2}|R^{\triangledown} ||d^{\triangledown}\tilde{u}|^{2}+\rho_{0}^{2}|R^{\triangledown}|^{2}+\rho_{0 }^{4}|d^{\triangledown}\tilde{u}|^{2})\] \[\leq Ce_{\rho_{0}}.\] By Harnack inequality and note that \(B_{\rho_{0}}(x_{1})\subset B_{R}(x_{0})\), we have \[1=e_{\rho_{0}}(0) \leq C\int_{B_{1}(0)}e_{\rho_{0}}dV\] \[=C\int_{B_{\rho_{0}}(x_{1})}|R^{\triangledown}|^{2}+|d^{\triangledown }u|^{2}dV\] \[\leq C\epsilon_{0}.\] It is a contradiction if \(\epsilon_{0}\) small enough and we prove the claim. 
Then we have \[\sup_{B_{\frac{3r_{1}}{4}}(x_{0})}(|R^{\triangledown}|^{2}+|d^{ \triangledown}u|^{2}) =(\frac{r_{1}}{4})^{-4}(r_{1}-\frac{3r_{1}}{4})^{4}\sup_{B_{\frac {3r_{1}}{4}}(x_{0})}(|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2})\] \[\leq(\frac{r_{1}}{4})^{-4}(r_{1}-r_{0})^{4}e_{0}\] \[\leq Cr_{1}^{-4},\] where, more precisely, \(C=2^{12}\). For any \(x_{2}\in B_{\frac{r_{1}}{4}}(x_{0})\), define \[\hat{A}(x) =\frac{r_{1}}{4}A(x_{2}+\frac{r_{1}}{4}x),\] \[\hat{u}(x) =u(x_{2}+\frac{r_{1}}{4}x).\] Similarly, \(\hat{e}=|R^{\triangledown}|^{2}+(\frac{r_{1}}{4})^{2}|d^{\triangledown} \tilde{u}|^{2}=(\frac{r_{1}}{4})^{4}(|R^{\triangledown}|^{2}+|d^{\triangledown }u|^{2})\) satisfies \(\hat{e}\leq C\) and hence \(\Delta\hat{e}\leq C\hat{e}\) in \(B_{1}(0)\). By Harnack inequality we have \[\hat{e}(0)\leq C\int_{B_{1}(0)}\hat{e}(x)dV=C^{\prime}\int_{B_{ \frac{r_{1}}{4}}(x_{2})}|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2}dV\leq C \int_{B_{r_{1}}(x_{0})}|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2}dV.\] Then we prove the lemma. In order to obtain the \(\epsilon\)-regularity of high order derivatives, we need the following lemma. **Lemma 5.5** ([17], Theorem 1.3): _Assume \(\nabla=d+A\) is a connection over \(B_{1}\). There exists \(\kappa\) and \(c\) such that if \(\|R^{\triangledown}\|_{L^{2}(B_{1})}\leq\kappa\), then there exists a gauge \(g\) such that for any \(p\geq 2\), there is \(d^{*}A^{g}=0\) and \(\|A^{g}\|_{W^{1,\pi}(B_{1})}\leq c\|R^{\triangledown}\|_{L^{\pi}(B_{1})}\), where \(\kappa\) and \(c\) is only determined by \(dim(M)\)._ **Remark 5.1**: By letting \(p\to\infty\) we have \(\|A^{g}\|_{L^{\infty}(B_{1})}\leq c\|R^{\triangledown}\|_{L^{\infty}(B_{1})}\). This is consist with Theorem 2.7 in [18]. **Lemma 5.6** (\(\epsilon\)-regularity of high order derivative): _Assume \((\nabla,u)\) is a Yang-Mills-Higgs pair. There exists \(\epsilon_{1}=\epsilon_{1}(M)\) such that if for some \(R<i(M)\) and \(x_{0}\in M\),_ \[\int_{B_{R}(x_{0})}|R^{\triangledown}|^{2}+|d^{\triangledown}u|^{2}dV\leq \epsilon_{1},\] _then there exists a gauge transformation \(g\) such that for any \(r_{2}<R\) and \(k\geq 1\), we have_ \[\sup_{B_{\frac{r_{2}}{2}}(x_{0})}(|d^{k}A^{g}|^{2}+|d^{k}u^{g}|^{2})\leq C(k,r_ {2}).\] **Proof** Choose \(\epsilon_{1}=\min\{\epsilon_{0},\kappa^{2}\}\) and \(R_{1}<R_{0}\). By lemma 5.5, there exists a gauge transformation \(g\), such that \(d^{*}A^{g}=0\). For simplicity, we omit the superscript \(g\). Then the equations (2) are \[\Delta A+\Phi(A,u)=0, \tag{29}\] \[\Delta u+\Psi(A,u)=0,\] where \(\Delta\) is the covariant Laplacian operator on \(M\) and \[\Phi(A,u)=dA\#A+A\#A\#A+du\#u+A\#u,\] \[\Psi(A,u)=dA\#u+A\#du+A\#A\#u-\frac{\lambda}{2}(1-|u|^{2})u.\] By lemma 5.5, we have \(\|A\|_{L^{\infty}}\leq C(R_{1})\|R^{\triangledown}\|_{L^{\infty}}\) in \(B_{\frac{R_{1}}{2}}(x_{0})\). And by Lemma 5.4, we have \[\sup_{B_{\frac{1}{2}}B_{\frac{1}{2}}}(|R^{\triangledown}|^{2}+|d^{ \triangledown}u|^{2})\leq C_{1}.\] By \(L^{p}\)-estimate, for any \(R_{2}<R_{1}\) and \(p>1\), we have \(\|A\|_{W^{2,p}(B_{\frac{1}{2}R_{2}})}+\|u\|_{W^{2,p}(B_{\frac{1}{2}R_{2}})} \leq C(R_{1},R_{2},p)\). By Sobolev's embedding theorem, \(\|A\|_{C^{1}(B_{\frac{1}{2}R_{2}})}+\|u\|_{C^{1}(B_{\frac{1}{2}R_{2}})}\leq C( R_{1},R_{2})\). Differentiating the equation (29) and repeat the above process, we can prove that for any \(k\geq 1\) and \(R_{k}<R_{k-1}\), we have \(\|A\|_{C^{k}(B_{\frac{1}{2}R_{2}})}+\|u\|_{C^{k}(B_{\frac{1}{2}R_{2}})}\leq C( R_{1},R_{2},...,R_{k},k)\). 
For any \(r_{2}<R\), by choosing \(R>R_{1}>R_{2}>...>r_{2}\), the lemma is proved. \(\Box\) ### The proof of theorem 1.2 Assume \(\{(\nabla_{i},u_{i})\}\) is a sequence of Yang-Mills-Higgs pairs with uniformly bounded energy \(\mathscr{A}(\nabla_{i},u_{i})\leq K\). Define \[\Sigma=\{x\in M\mid\lim_{R\to 0}\liminf_{i\to\infty}\int_{B_{R}(x)}|R^{ \triangledown}|^{2}dV>\epsilon_{1}\}. \tag{30}\] \(\mathscr{A}(\nabla_{k},u_{k})\leq K\) implies that \(\Sigma\) is finite and the number of elements in \(\Sigma\) is no more than \(\frac{K}{\epsilon_{1}}\). The uniform \(W^{1,2}\) bounded of \((\nabla_{i},u_{i})\) implies there is a subsequence of \(\{(\nabla_{i},u_{i})\}\) which weakly converges to a Yang-Mills-Higgs pair \((\nabla_{\infty},u_{\infty})\) in \(\Omega^{1}(\mathfrak{g}_{E})\times\Omega^{0}(E)\). And by lemma 5.6, the convergence is smooth in \(M\backslash\Sigma\). For any \(x_{0}\in\Sigma\), choose \(R_{0}<i(M)\) such that \(B_{R_{0}}(x_{0})\cap\Sigma=\{x_{0}\}\). Assume \(\nabla_{i}=d+A_{i}\) on \(B_{R_{0}}(x_{0})\). Define \[\frac{1}{(r_{i}^{1})^{4}}=\sup_{B_{R_{0}}(x_{0})}(|R^{\triangledown}|^{2}+|d^ {\triangledown}u_{i}|^{2}),\] Then \(\lim_{i\to\infty}r_{i}^{1}=0\). Assume \[\tilde{A}_{1,i}(x)=\rho_{i}^{*}A_{i}(x)=r_{i}^{1}A_{i}(x_{0}+r_{ i}^{1}x),\] \[\tilde{u}_{1,i}(x)=\rho_{i}^{*}u_{i}(x)=u_{i}(x_{0}+r_{i}^{1}x),\] where \(\rho_{i}(x)=x_{0}+r_{i}^{1}x\) maps \(B_{R}(0)\) to \(B_{R_{1}^{1}}(x_{0})\) for any \(R>0\). The pairs \((\tilde{\nabla}_{1,i},\tilde{u}_{1,i})\) satisfy \[\delta^{\triangledown}{}_{1,i}R^{\triangledown}{}_{1,i}=-\frac{ (r_{i}^{1})^{2}}{2}(d^{\triangledown}{}_{1,i}\tilde{u}_{1,i}\otimes\tilde{u} _{1,i}^{*}-\tilde{u}_{1,i}\otimes d^{\triangledown}{}_{1,i}\tilde{u}_{1,i}),\] \[\delta^{\triangledown}{}_{1,i}d^{\triangledown}{}_{1,i}=\frac{ \lambda(r_{i}^{1})^{2}}{2}(1-|\tilde{u}_{1,i}|^{2})\tilde{u}_{1,i}.\] For any \(\delta<R_{0}\) and \(R>0\), \(i\) is large enough such that \(\delta>4Rr_{i}^{1}\). Note that \(\|R^{\triangledown}{}_{1,i}\|_{L^{\infty}(B_{R}(0))}\leq 1\). Applying theorem 5.3, lemma 5.4 and lemma 5.6, there is a subsequence of \((\tilde{A}_{1,i},\tilde{u}_{1,i})\) converges to \((\tilde{A}_{1,\infty,R},\tilde{u}_{1,\infty,R})\) on \(B_{R}(0)\) smoothly under some gauge transformations \(g_{i}\). For simplicity, we assume that the subsequence is \((\tilde{A}_{1,i},\tilde{u}_{1,i})\) itself. By choosing \(R_{1}<R_{2}<...\to\infty\) and subsequence repeatedly, we may assume \((\tilde{A}_{1,i}^{q_{i}},\tilde{u}_{1,i}^{q_{i}})\) converges to \((\tilde{A}_{1,\infty},\tilde{u}_{1,\infty})\) in \(C^{\infty}(\mathbb{R}^{4})\) satisfying \[\delta^{\triangledown_{1,\infty}}R^{\triangledown_{1,\infty}} =0,\] \[\delta^{\triangledown_{1,\infty}}d^{\triangledown_{1,\infty}} \tilde{u}_{1,\infty} =0,\] which implies \(\widetilde{\nabla}_{1,\infty}\) is a Yang-Mills connection and \(d^{\triangledown_{1,\infty}}\tilde{u}_{1,\infty}=0\). By Uhlenbeck's removable singularity theorem (see [18], corollary 4.3), \(\widetilde{\nabla}_{1,\infty}\) can be extended to a nontrivial Yang-Mills connection on \(S^{4}\) under some gauge transformation. Define \[\mathscr{A}(\nabla,u;\Omega)=\frac{1}{2}\int_{\Omega}|R^{\triangledown}|^{2} +|d^{\triangledown}u|^{2}+\frac{\lambda}{4}(1-|u|^{2})^{2}dV.\] The following lemma gives a sufficient condition for the energy on necks to vanish. **Lemma 5.7**: _Assume \((\nabla_{i},u_{i})\) are a sequence of Yang-Mills-Higgs pairs and \(\lim\limits_{i\to\infty}r_{i}^{1}=0\). 
There exists \(\epsilon\), such that for any \(R,\delta>0\), if_ \[\liminf_{i\to\infty}\sup_{r\in(\frac{R_{1}^{-1}}{2},2\delta)} \mathscr{A}(\nabla_{i},u_{i};B_{2r}(x_{0})\backslash B_{r}(x_{0}))<\epsilon, \tag{31}\] _then_ \[\lim_{R\to\infty}\lim_{\delta\to 0}\liminf_{i\to\infty} \mathscr{A}(\nabla_{i},u_{i};B_{\delta}(x_{0})\backslash B_{R_{r_{i}^{1}}}( x_{0}))=0.\] **Proof** For simplicity, we assume \(x_{0}=0\). For \(l\geq-1\), define \[\mathfrak{U}_{l} =\{x\mid 2^{-l-1}\leq|x|\leq 2^{-l}\},\] \[S_{l} =\{x\mid|x|=2^{-l}\}.\] Divide \(A_{i}\) into the radius part \(A_{i,r}\) and the sphere part \(A_{i,\theta}\), that is, \(A_{i}=A_{i,r}+A_{i,\theta}\). Recall that by Bochner's formula, the equation (2) and \(\|u_{i}\|_{L^{\infty}}\leq 1\), we have \[\Delta|R^{\triangledown_{i}}|^{2}\leq C(M)(|R^{\triangledown_{i}}|^{2}+|R^{ \triangledown_{i}}|^{3}+|R^{\triangledown_{i}}||D^{\triangledown_{i}}u_{i}|^ {2})-2|d^{\triangledown_{i}}R^{\triangledown_{i}}|^{2}.\] Note that \[\Delta(|R^{\triangledown_{i}}|^{2}+h)^{\frac{1}{2}}=\frac{1}{2}(|R^{\triangledown _{i}}|^{2}+h)^{-\frac{1}{2}}\Delta|R^{\triangledown_{i}}|^{2}+\frac{1}{4} \sum_{j}(|R^{\triangledown_{i}}|^{2}+h)^{-\frac{3}{2}}(e_{j}|R^{\triangledown _{i}}|^{2})^{2}\] and let \(h\) tends to \(0\), we have \[\Delta|R^{\triangledown_{i}}|\leq C(|R^{\triangledown_{i}}|+|R^{\triangledown _{i}}|^{2}+|d^{\triangledown_{i}}u_{i}|^{2}).\] Consider \(\tilde{A}_{i,l}(x)=2^{-l}A_{i}(2^{-l}x)\), \(\tilde{u}_{i,l}(x)=u_{i}(2^{-l}x)\) and assume \(\tilde{\nabla}_{i,l}=d+\tilde{A}_{i,l}\). Note that \[\Delta|R^{\triangledown_{i,l}}| =2^{-4l}\Delta|R^{\triangledown_{i}}|\] \[\leq 2^{-4l}C(|R^{\triangledown_{i}}|+|R^{\triangledown_{i}}|^{2} +|d^{\triangledown_{i}}u_{i}|^{2})\] \[\leq C(2^{-2l}|R^{\triangledown_{i,l}}|+|R^{\triangledown_{i,l} }|^{2}+2^{-2l}|d^{\triangledown_{i,l}}\tilde{u}_{i,l}|^{2}),\] and by Harnack's inequality, we have \[\sup_{\mathfrak{U}_{l}}|R^{\triangledown_{i}}| =2^{2l}\sup_{\mathfrak{U}_{0}}|R^{\triangledown_{i,l}}|\] \[\leq 2^{2l}C\int_{\mathfrak{U}_{-1\cup\mathfrak{U}_{0}\cup\mathfrak {U}_{1}}}2^{-2l}|R^{\triangledown_{i,l}}|+|R^{\triangledown_{i,l}}|^{2}+2^{-2 l}|d^{\triangledown_{i,l}}\tilde{u}_{i,l}|^{2}dV\] \[=C\int_{\mathfrak{U}_{l-1\cup\mathfrak{U}_{l}\cup\mathfrak{U}_{l+1 }}}2^{2l}|R^{\triangledown_{i}}|+|R^{\triangledown_{i}}|^{2}+2^{2l}|d^{\triangledown _{i}}u_{i}|^{2}dV\] \[\leq 2^{2l}C\epsilon\] Theorem 2.8 and corollary 2.9 in [18] show that **Lemma 5.8**: _Assume \(\nabla=d+A\) be a connection over \(\mathfrak{U}_{0}\), there exists \(\gamma\) such that if \(\|R^{\gamma}\|_{L^{\infty}(\mathfrak{U}_{0})}\leq\gamma\), then under a gauge transformation, we have \(\delta^{\gamma}A=0\) in \(\mathfrak{U}_{-1}\), \(\delta^{\gamma}_{\theta}A_{\theta}=0\) on \(S_{-1}\) and \(S_{0}\), and \(\int_{|x|=r}A_{r}dS=0\) for any \(1\leq r\leq 2\). 
Moreover, there is \(\|A\|_{L^{\infty}(\mathfrak{U}_{0})}\leq C\|R^{\gamma}\|_{L^{\infty}( \mathfrak{U}_{0})}\) and \(\|A\|_{L^{2}(\mathfrak{U}_{0})}\leq C\|R^{\gamma}\|_{L^{2}(\mathfrak{U}_{0})}\)._ By choosing \(\epsilon\) small enough, we may assume \(|R^{\widehat{\nabla}_{i,l}}|\leq\gamma\) on \(\mathfrak{U}_{0}\) and hence there exists a gauge transformation \(g_{i,l}\in\mathscr{G}(\mathfrak{U}_{l})\), if denote \(A_{i,l}=A_{i}^{g_{i,l}}\) and \(u_{i,l}=u_{i}^{g_{i,l}}\), we have \[(1)\ d^{*}A_{i,l}=0,\] \[(2)\ A_{i,l,\theta}\mid_{S_{l}}=A_{i,l-1,\theta}\mid_{S_{l}},\] \[(3)\ d^{*}_{\theta}A_{i,l,\theta}=0\ \text{on}\ S_{l}\ \text{and}\ S_{l+1},\] \[(4)\ \int_{S_{l}}A_{i,l,r}dS=\int_{S_{l+1}}A_{i,l,r}dS=0.\] Choosing \(\epsilon\) small enough such that \(\{\nabla_{i}\}\) satisfy the condition of lemma 5.5, we may assume \(\int_{\mathfrak{U}_{l}}|A_{i,l}|^{2}dV\leq 2^{-2l}C\int_{\mathfrak{U}_{l}}|R^{ \gamma_{i,l}}|^{2}dV\). Assume \[\bigcup_{l=l_{1}+1}^{l_{2}-1}\mathfrak{U}_{l}\subset B_{\delta} \backslash B_{R^{\gamma_{i}}_{l}}\subset\bigcup_{l=l_{1}}^{l_{2}}\mathfrak{U }_{l}.\] By Stokes' formula, we have \[\int_{\mathfrak{U}_{l}}|R^{\gamma_{i,l}}|^{2}dV=\int_{\mathfrak{U}_{l}}<\delta ^{\gamma_{i,l}}R^{\gamma_{i,l}},A_{i,l}>+A_{i,l}\#A_{i,l}\#R^{\gamma_{i,l}}dV+ \int_{S_{l}}A_{i,l}\wedge*R^{\gamma_{i,l}}-\int_{S_{l+1}}A_{i,l}\wedge*R^{ \gamma_{i,l}}.\] Let \(\hat{A}_{i,l}(x)=2^{-l}A_{i,l}(2^{-l}x)\) and assume \(\|R^{\widehat{\gamma}_{i,l}}\|_{L^{\infty}(\mathfrak{U}_{0})}=2^{-2l}\|R^{ \gamma_{i,l}}\|_{L^{\infty}(\mathfrak{U}_{l})}=2^{-2l}\|R^{\gamma_{i}}\|_{L^{ \infty}(\mathfrak{U}_{l})}\leq\gamma\) by choosing \(\epsilon\) small enough, where \(\gamma\) is the constant in lemma 5.8. Then we have \[\int_{\mathfrak{U}_{l}}A_{i,l}\#A_{i,l}\#R^{\gamma_{i,l}}dV \leq C\|R^{\gamma_{i,l}}\|_{L^{\infty}(\mathfrak{U}_{l})}\int_{ \mathfrak{U}_{l}}|A_{i,l}|^{2}dV\] \[\leq C\|R^{\gamma_{i,l}}\|_{L^{\infty}(\mathfrak{U}_{l})}2^{-2l} \int_{\mathfrak{U}_{0}}|\hat{A}_{i,l}|^{2}dV\] \[\leq C2^{2l}\epsilon\cdot 2^{-2l}\int_{\mathfrak{U}_{0}}|R^{ \gamma_{i,l}}|^{2}dV\] \[\leq C\epsilon\int_{\mathfrak{U}_{l}}|R^{\gamma_{i,l}}|^{2}dV.\] Choose \(\epsilon\) small enough such that \(C\epsilon\leq\frac{1}{2}\), then \[\sum_{l=l_{1}}^{l_{2}}\int_{\mathfrak{U}_{l}}|R^{\gamma_{i,l}}|^{ 2}dV\] \[= \sum_{l=l_{1}}^{l_{2}}\int_{\mathfrak{U}_{l}}\left(\left(\delta^ {\gamma_{i,l}}R^{\gamma_{i,l}},A_{i,l}\right)+A_{i,l}\#A_{i,l}\#R^{\gamma_{i,l }}\right)dV+\int_{S_{l}}A_{i,l}\wedge*R^{\gamma_{i,l}}-\int_{S_{l+1}}A_{i,l} \wedge*R^{\gamma_{i,l}}\] \[\leq \int_{S_{l_{1}}}A_{i,l_{1}}\wedge*R^{\gamma_{i,l_{1}}}-\int_{S_{l _{2}+1}}A_{i,l_{2}}\wedge*R^{\gamma_{i,l_{2}}}+\sum_{l=l_{1}}^{l_{2}}\int_{ \mathfrak{U}_{l}}\langle\delta^{\gamma_{i,l}}R^{\gamma_{i,l}},A_{i,l}\rangle dV +\frac{1}{2}\sum_{l=l_{1}}^{l_{2}}\int_{\mathfrak{U}_{l}}|R^{\gamma_{i,l}}|^{2}dV.\] Then we have \[\int_{B_{\delta}\backslash B_{R^{\gamma_{i}}_{l}}}|R^{\gamma_{i}}|^{ 2}dV \leq\sum_{l=l_{1}}^{l_{2}}\int_{\mathfrak{U}_{l}}|R^{\gamma_{i,l}}|^{ 2}dV\] \[\leq 2(\int_{S_{l_{1}}}A_{i,l_{1}}\wedge*R^{\gamma_{i,l_{1}}}-\int_{S_{l _{2}+1}}A_{i,l_{2}}\wedge*R^{\gamma_{i,l_{2}}}+\sum_{l=l_{1}}^{l_{2}}\int_{ \mathfrak{U}_{l}}\langle\delta^{\gamma_{i,l}}R^{\gamma_{i,l}},A_{i,l}\rangle dV).\] By equation (2) and Holder's inequality, we have \[\sum_{l=l_{1}}^{l_{2}}\int_{\mathbb{M}_{l}}\langle\delta^{\triangledown _{i,l}}R^{\triangledown_{i,l}},A_{i,l}\rangle dV \leq\sum_{l=l_{1}}^{l_{2}}\|d^{\triangledown_{i,l}}u_{i,l}\|_{L^{ 
2}(\mathbb{M}_{l})}\|u_{i,l}\|_{L^{\infty}(\mathbb{M}_{l})}\|A_{i,l}\|_{L^{2}( \mathbb{M}_{l})}\] \[\leq C\sum_{l=l_{1}}^{l_{2}}2^{-l}\|d^{\triangledown_{i,l}}u_{i,l }\|_{L^{2}(\mathbb{M}_{l})}\|R^{\triangledown}_{i,l}\|_{L^{2}(\mathbb{M}_{l})}\] \[\leq 2^{-l_{1}}C\epsilon,\] which tends to \(0\) as \(\delta\) tends to \(0\) since \(2^{-l_{1}}\leq 2\delta\). By Fatou's lemma, we have \[\int_{0}^{R_{0}}\liminf_{i\to\infty}\int_{|x|=r}|R^{\triangledown_{i}}|^{2} dSdr\leq K,\] hence \[\delta\liminf_{\delta\to 0}\int_{|x|=\delta}|R^{\triangledown_{i}}|^{2} dS=0\] Since \(\|A_{i,l}\|_{L^{\infty}(S_{l})}\leq 2^{-l}C\|R^{\triangledown_{i}}\|_{L^{ \infty}(S_{l})}\leq 2^{l}C\epsilon\), we have \[\int_{S_{l_{1}}}A_{i,l_{1}}\wedge*R^{\triangledown_{i,l_{1}}}\leq\|A_{i,l_{1} }\|_{L^{2}(S_{l_{1}})}\|R^{\triangledown_{i,l_{1}}}\|_{L^{2}(S_{l_{1}})}\to 0\] as \(\delta\to 0\). According to the same argument for \(S_{l_{2}}\), we conclude that \[\lim_{R\to\infty}\lim_{\delta\to 0}\liminf_{i\to\infty}\int_{B_{\delta} \backslash B_{R^{\tau_{i}}_{1}}}|R^{\triangledown_{i}}|^{2}dV=0.\] Since lemma 5.1 and lemma 5.2 shows that Higgs part \(|d^{\triangledown_{i}}u_{i}|^{2}+\frac{\lambda}{4}(1-|u_{i}|^{2})^{2}\) has no concentration point, we have \[\lim_{R\to\infty}\lim_{\delta\to 0}\liminf_{i\to\infty}\mathscr{A}(\nabla_{i },u_{i},B_{\delta}\backslash B_{R^{\tau_{i}}_{1}})=0.\qed\] For any \(\delta,R>0\), define \(E_{i}=\{r_{i}\in(\frac{R^{\tau_{i}}_{1}}{2},2\delta)\mid\mathscr{A}(\nabla_{i },u_{i};B_{2r_{i}}(x_{0})\backslash B_{r_{i}}(x_{0}))\geq\epsilon\}\). If \(\cup_{j}\cap_{i>j}E_{i}\neq\varnothing\), define equivalent classes of \(\{(r_{1},r_{2},...)\mid r_{i}\in E_{i}\}\) such that \(\{r_{i}\}\) equals to \(\{r^{\prime}_{i}\}\) if and only if \(\liminf\frac{r_{i}}{r^{\prime}_{i}}>0\) and \(\limsup\frac{r_{i}}{r^{\prime}_{i}}<\infty\). And define \(\{r_{i}\}>\{r^{\prime}_{i}\}\) if and only if \(\liminf\frac{r^{\prime}_{i}}{r_{i}}=0\). Similarly to the estimate of the number of element of \(\Sigma\), we have the estimate of the number of equivalent class \(L\leq\frac{K}{\epsilon}\). Assume the equivalent classes are \(\{r^{1}_{i}\}<\{r^{2}_{i}\}<...<\{r^{L}_{i}\}\). Note that \[\mathscr{A}(\nabla_{i},u_{i};B_{\delta}(x_{0})\backslash B_{R^{ \tau_{i}}_{1}}(x_{0}))\] \[= \sum_{j=2}^{L}\mathscr{A}(\nabla_{i},u_{i};B_{\delta r^{j}_{i}}(x _{0})\backslash B_{R^{\tau_{i}-1}_{1}}(x_{0}))+\mathscr{A}(\nabla_{i},u_{i};B_ {R^{\tau_{i}}_{1}}(x_{0})\backslash B_{\delta r^{j}_{i}}(x_{0}))\] \[+\mathscr{A}(\nabla_{i},u_{i};B_{\delta}(x_{0})\backslash B_{R^{ \tau_{i}}_{1}}(x_{0})).\] For \(j=1,...,l\), define \[\tilde{A}_{j,i}(x) =r^{j}_{i}A_{i}(x_{0}+r^{j}_{i}x),\] \[\tilde{u}_{j,i}(x) =u(x_{0}+r^{j}_{i}x).\] Since \(\mathscr{A}(\nabla_{i},u_{i};B_{\delta r^{j}_{i}}(x_{0})\backslash B_{R^{j-1}_ {i}}(x_{0}))=\mathscr{A}(\widetilde{\nabla}_{j,i},\tilde{u}_{j,i};B_{\delta}(0 )\backslash B_{R^{j-1}_{i}/r^{j}_{1}}(0))\), where \(\lim_{i\to\infty}\frac{r^{j-1}_{i}}{r^{j}_{i}}=0\) and for any \(r\in(\frac{R^{r^{j-1}_{i}}_{1}}{2r^{j}_{1}},2\delta)\), we have \[\liminf_{i\to\infty}\mathscr{A}(\widetilde{\nabla}_{j,i},\tilde{u}_{j,i};B_{2 r}(0)\backslash B_{r}(0))<\epsilon\] (otherwise there exists an equivalent class between \(\{r_{i}^{j-1}\}\) and \(\{r_{i}^{j}\}\)). 
By lemma 5.7, we have \[\lim_{R\rightarrow\infty}\lim_{\delta\to 0}\liminf_{i\rightarrow\infty} \mathscr{A}(\nabla_{i},u_{i};B_{\delta r_{i}^{j}}(x_{0})\backslash B_{Rr_{i}^{j- 1}}(x_{0}))=0.\] Similarly, \(\{r_{i}^{L}\}\) is the largest equivalent implies \(\{(\nabla_{i},u_{i})\}\) satisfy (31) by replacing \(\{r_{i}^{1}\}\) with \(\{r_{i}^{L}\}\) and thus \[\lim_{R\rightarrow\infty}\lim_{\delta\to 0}\liminf_{i \rightarrow\infty}\mathscr{A}(\nabla_{i},u_{i};B_{\delta}(x_{0})\backslash B _{Rr_{i}^{L}}(x_{0}))=0.\] The uniformly \(W^{1,2}\) bound of \(\widetilde{\nabla}_{j,i}\) implies that there is a weakly limit \(\widetilde{\nabla}_{j,\infty}\) in \(\mathbb{R}^{4}-\{0\}\) and the removable singularity theorem shows that it could be extended to a Yang-Mills connection over \(S^{4}\). The convergence may be not smooth in \(S^{4}\), and we can repeat the bubble-neck decomposition at each blow-up point as above. This process must stop after finite steps by the uniform energy bound. Hence \[\lim_{R\rightarrow\infty}\lim_{\delta\to 0}\liminf_{i \rightarrow\infty}\mathscr{A}(\nabla_{i},u_{i};B_{Rr_{i}^{j}}(x_{0})\backslash B _{\delta r_{i}^{j}}(x_{0}))=YM(\widetilde{\nabla}_{j,\infty})+\sum_{k=1}^{K_{ j}}YM(\widetilde{\nabla}_{j,\infty,k}),\] where \(\widetilde{\nabla}_{j,\infty,k}\) are the bubbles of \(\widetilde{\nabla}_{j,\infty}\). For simplicity, we assume \(\Sigma=\{x_{0}\}\). Finally, by choosing a subsequence, we have \[\lim_{i\rightarrow\infty}\mathscr{A}(\nabla_{i},u_{i})= \lim_{\delta\to 0}\lim_{i\rightarrow\infty}\mathscr{A}( \nabla_{i},u_{i};M\backslash B_{\delta}(x_{0}))+\lim_{R\rightarrow\infty} \lim_{i\rightarrow\infty}\mathscr{A}(\nabla_{i},u_{i};B_{Rr_{i}^{1}}(x_{0}))\] \[+\lim_{R\rightarrow\infty}\lim_{\delta\to 0}\lim_{i \rightarrow\infty}\mathscr{A}(\nabla_{i},u_{i};B_{\delta}(x_{0})\backslash B _{Rr_{i}^{1}}(x_{0}))\] \[= \mathscr{A}(\nabla_{\infty},u_{\infty})+YM(\widetilde{\nabla}_{1,\infty})+\sum_{j=1}^{L}(YM(\widetilde{\nabla}_{j,\infty})+\sum_{k=1}^{K_{j}} YM(\widetilde{\nabla}_{j,\infty,k})).\] Then we finish the prove. **Remark 5.2**: If we consider the Higgs fields taking values in \(\Omega^{0}(\mathfrak{g}_{E})\), we can get the similar energy identity. **Theorem 5.9**: _Assume \(\{(\nabla_{i},\Phi_{i})\}\) is a family of Yang-Mills-Higgs pairs and \(\mathscr{A}(\nabla_{i},\Phi_{i})\leq K\), where \(\Phi\in\Omega^{0}(\mathfrak{g}_{E})\). Then there is a finite subset \(\Sigma=\{x_{1},...,x_{l}\}\subset M\), a Yang-Mills-Higgs pair \((\nabla_{\infty},\Phi_{\infty})\) on \(M\backslash\Sigma\) and Yang-Mills connections \(\{\widetilde{\nabla}_{jk}\mid 1\leq j\leq l,1\leq k\leq K_{j}\}\) over \(S^{4}\), such that there is a subsequence of \(\{(\nabla_{i},\Phi_{i})\}\) converges to \((\nabla_{\infty},\Phi_{\infty})\) in \(C^{\infty}_{loc}(M\backslash\Sigma)\) under gauge transformations and_ \[\lim_{i\rightarrow\infty}\mathscr{A}(\nabla_{i},\Phi_{i})= \mathscr{A}(\nabla_{\infty},\Phi_{\infty})+\sum_{j=1}^{l}\sum_{k=1}^{K_{j}} YM(\widetilde{\nabla}_{jk}). \tag{32}\] ## Conflict interests There is no conflict of interest. ## Acknowledgment All authors would like to thank Prof. Jiayu Li for his encouragement and constant help. The first author is supported by National Key R&D Program of China 2022YFA1005400 and NFSC No.12031017 and the second author is supported by NSFC No.12001532.
2310.18507
Differentiable Simulator For Dynamic & Stochastic Optimal Gas & Power Flows
In many power systems, particularly those isolated from larger intercontinental grids, operational dependence on natural gas becomes pivotal, especially during fluctuations or unavailability of renewables coupled with uncertain consumption patterns. Efficient orchestration and inventive strategies are imperative for the smooth functioning of these standalone gas-grid systems. This paper delves into the challenge of synchronized dynamic and stochastic optimization for independent transmission-level gas-grid systems. Our approach's novelty lies in amalgamating the staggered-grid method for the direct assimilation of gas-flow PDEs with an automated sensitivity analysis facilitated by SciML/Julia, further enhanced by an intuitive linkage between gas and power grids via nodal flows. We initiate with a single pipe to establish a versatile and expandable methodology, later showcasing its effectiveness with increasingly intricate examples.
Criston Hyett, Laurent Pagnier, Jean Alisse, Igal Goldshtein, Lilah Saban, Robert Ferrando, Michael Chertkov
2023-10-27T21:53:04Z
http://arxiv.org/abs/2310.18507v3
# Differentiable Simulator For Dynamic & Stochastic Optimal Gas & Power Flows ###### Abstract In many power systems, particularly those isolated from larger intercontinental grids, operational dependence on natural gas becomes pivotal, especially during fluctuations or unavailability of renewables coupled with uncertain consumption patterns. Efficient orchestration and inventive strategies are imperative for the smooth functioning of these standalone gas-grid systems. This paper delves into the challenge of synchronized dynamic and stochastic optimization for independent transmission-level gas-grid systems. Our approach's novelty lies in amalgamating the staggered-grid method for the direct assimilation of gas-flow PDEs with an automated sensitivity analysis facilitated by SciML/Julia, further enhanced by an intuitive linkage between gas and power grids via nodal flows. We initiate with a single pipe to establish a versatile and expandable methodology, later showcasing its effectiveness with increasingly intricate examples. ## 1 Introduction & Background The increasing penetration of renewable energy sources has amplified unpredictable fluctuations, leading to more severe and uncertain ramps in the duck curve associated with power demand. Simultaneously the transition from coal to cleaner "bridge fuels" such as natural gas, shift even more responsibility to the natural gas system. Beyond power generation, transmission level gas systems face potential stressors from factors such as residential and commercial distribution, and exports. The differing timescales between the gas and power networks -- power systems stabilize within seconds and gas systems might take days -- complicate coordination efforts in both real-time operations and day-ahead planning across sectors. Previous studies, such as [1] and [2], integrated gas dynamics into day-ahead plans using an optimization framework. These optimizations incorporated constraints for the gas network arising from either steady-state approximations or a coarse discretization of the elliptic approximation to the isothermal gas equations. Recent work has designed linear approximations for pipe segments, trading an increase in computational efficiency with a decrease in fidelity; suitable for integration into optimization frameworks [3]. However, addressing the intrinsic nonlinearity of gas system dynamics, particularly under stressed and uncertain conditions, remains a substantial challenge. The challenge at hand can be formally articulated as the solution to a PDE-constrained optimization problem, depicted schematically as: \[\min_{\{u^{(s)}(t),q^{(s)}(t)\}}\sum_{s\in\mathcal{S}}\int_{0}^{T}C^{( s)}(u^{(s)}(t),q^{(s)}(t))\,dt, \tag{1}\] \[\text{s.t.}\ \forall s,\ \forall t:\ \text{PDE}^{(s)}(u^{(s)}(t),q^{(s)}(t))=0,\] where \(u^{(s)}(t)\) and \(q^{(s)}(t)\) signify the time-evolving state space and control degrees of freedom for scenarios or samples \(s\in\mathcal{S}\) respectively. The term \(C^{(s)}(u^{(s)}(t),q^{(s)}(t))\) denotes the cumulative cost. In our chosen framework: \(q^{(s)}(t)\) embodies the gas extraction from the system, which can be redistributed across various nodes of the gas-grid where gas generators are positioned; \(u^{(s)}(t)\) represents the gas flows, gas densities, and, indirectly via the gas equation of state, pressures over the gas-grid pipes. 
The cost function \(C^{(s)}(u^{(s)}(t),q^{(s)}(t))\) encapsulates the discrepancy between aggregated energy generation (directly related to gas extraction at nodes) and demand, operational costs of gas generators, and pressure constraints at the gas-grid nodes. The equation \(\text{PDE}^{(s)}(u^{(s)}(t),q^{(s)}(t))=0\) characterizes the gas-flow equations, elucidating for each scenario \(s\) how gas flows and densities are spatially (across the gas-grid network) and temporally distributed, contingent on the profile of gas extraction and injection. A detailed explanation is provided in Section 2. In this paper, we propose a novel approach to solving Eq. (1), aiming to enhance the fidelity of gas accounting in day-ahead planning of power generation in a computationally efficient manner. Our solution crafts a differentiable simulator by leveraging the principles of differentiable programming (DP) [4], combined with an efficient explicit staggered-grid method [5], and the robust capabilities of the SciML sensitivity ecosystem [6]. As we delve further, it will become evident that our approach adeptly addresses the intertwined challenges of nonlinearity, dimensionality, and stochastic modeling. In the proposed framework, differentiable programming facilitates the calculation of gradients by seamlessly solving the gas-flow PDE across a network. This is realized by auto-generating the corresponding adjoint equations, providing flexibility in formulating the forward pass. The approach not only supports sensitivity analysis but, with a judicious selection of algorithms, proficiently manages scalability issues in parameter spaces, all while preserving the intricate nonlinear dynamics. Driven by the everyday operational challenges characteristic of Israel's power system, as expounded in [7] and its associated references, we design and solve a dynamic, stochastic problem that integrates power and gas flows over an operational timeframe ranging from several hours to an entire day. We predominantly target smaller, transmission-level systems akin to Israel's, characterized by: 1. Limited availability or operational restrictions of gas compressors; 2. Notable fluctuations in renewable resources and power loads, with curtailment being inadmissible under the normal operational paradigms assumed in this research; 3. An intentionally overengineered power system, ensuring power lines remain within thermal boundaries during standard operations. To demonstrate the efficacy of our methodology, we initiate with a single-pipe scenario, advancing thereafter to a more intricate, albeit representative, network. The remainder of the manuscript is structured as follows: In Section 2, we elucidate our gas modeling methodology, starting with a single pipe before extending to a broader network. Within this section, we also elaborate on our fundamental optimization problem and delineate our strategy for its resolution. Subsequent discussions of our experimental results are presented in Section 3. Finally, the manuscript culminates with conclusions and suggested future directions in Section 4. Methodology ### Gas-Flow Equations We begin by discussing the dynamics of a single pipe. 
The governing partial differential equations (PDEs) for the Gas Flow (GF), describing the dynamics of density \(\rho(x,t)\) and mass-flux \(\phi(x,t)\) along the pipe coordinate \(x\) with respect to time \(t\), are provided as follows [8],[9]: \[\partial_{t}\rho+\partial_{x}\phi =0, \tag{2}\] \[\partial_{t}\phi+\partial_{x}p =-\frac{\lambda}{2D}\frac{\phi|\phi|}{\rho}. \tag{3}\] These equations are valid under the assumption that the gas velocity is much smaller than the speed of sound in the gas (\(\phi/\rho\ll a\)). This is a reasonable approximation for the typical flows we consider. To provide a complete description, it is necessary to relate the pressure \(p(x,t)\) and density \(\rho(x,t)\) using an equation of state: \[p=Z(\rho,T)RT\rho, \tag{4}\] where \(Z(\rho,T)\) denotes the compressibility factor. For clarity, we adopt the ideal gas law to model the equation of state, where \(Z(\rho,T)RT\) is replaced by a constant, \(a^{2}\), with \(a\) representing the speed of sound in the gas. Notably, there are more accurate models available (e.g., CNGA [10]), and the methodology we present here is agnostic to the specific choice of model. The system of Eqs. (2,3,4) is also supplemented by the boundary conditions, for example given profile of injection/consumption, at both ends of the pipe of length \(l\), \(q(0,t)\) and \(q(L,t)\). To extend the described equations from a single pipe to a network, we must augment the PDEs (2), (3) and equation of state (4) for each pipe with boundary conditions that couple multiple pipes, and work with prescribed injection/consumption at nodes of the network. These numerical details are described in the next section. ### Explicit Numerical Method for the Forward Path To solve Eqs. (2,3,4) in the case of a single pipe and their network generalizations in more general setting of the PDEs over networks, we use an explicit, second-order, staggered-grid method introduced by Gyrya & Zlotnik [5]. This method applied to the interior of a pipe is shown schematically in Fig. (1). First-order differences in space and time become centered differences due to the variable staggering. In particular, the states \(\rho_{i}^{n}:=\rho(x_{i},t_{n})\), \(\phi_{j}^{m}:=\phi(x_{j},t_{m})\) (where \(i\) and \(n\) stand for discretization indexes in time and space, respectively, \(m=n+1/2\) and \(j=i+1/2\)) are advanced in time and space following \[\rho_{i}^{n+1} =\rho_{i}^{n}-\frac{\Delta t}{\Delta x}\left(\phi_{j}^{m}-\phi_{ j-1}^{m}\right), \tag{5}\] \[\phi_{j}^{m+1} =\phi_{j}^{m}-\Delta t\left(\frac{p_{i+1}^{n+1}-p_{i}^{n+1}}{ \Delta x}+\frac{\lambda}{2D}\frac{\phi_{j}^{n+1}|\phi_{j}^{n+1}|}{\rho_{j}^{n +1}}\right). \tag{6}\] Here Eq. (6) is in fact implicit in \(\phi_{j}^{m+1}\) due to averaging required to approximate \(\phi_{j}^{n+1}\), but an exact solution is available, (see [5] for details). As we are interested in integration in day-ahead planning of energy generation, we control Dirichlet boundary conditions on nodal mass flows \(\{q_{i}\}_{i\in\text{nodes}}\) (directly relating to generated power). These boundaries are resolved according to the numerical method using a boundary discretization shown in Fig. (2). 
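To make the interior update (5)-(6) concrete before turning to junctions, the following is a minimal Python sketch for a single pipe; it assumes the ideal-gas closure \(p=a^{2}\rho\) and a uniform grid, and, for brevity, lags the friction term explicitly instead of using the exact implicit solve of [5]. All names and default values are illustrative rather than taken from the reference implementation.

```python
import numpy as np

def step_pipe_interior(rho, phi, dx, dt, a=350.0, lam=0.01, D=0.5):
    """One staggered-grid step, Eqs. (5)-(6), for a single pipe (interior only).

    rho : densities at cell centers x_i               (length N)
    phi : mass fluxes at staggered faces x_{i+1/2}    (length N-1)
    a   : speed of sound, so that p = a**2 * rho (ideal-gas closure)
    Stability requires dt <= dx / a (CFL). The friction term is lagged
    explicitly here for brevity; [5] solves the implicit average exactly.
    """
    # mass conservation (5): centered flux difference at interior cells;
    # end-cell densities are set by the junction/boundary treatment.
    rho_new = rho.copy()
    rho_new[1:-1] = rho[1:-1] - dt / dx * (phi[1:] - phi[:-1])

    # equation of state
    p_new = a**2 * rho_new

    # momentum balance (6): centered pressure gradient plus friction,
    # with the face density taken as the average of adjacent cells.
    rho_face = 0.5 * (rho_new[:-1] + rho_new[1:])
    phi_new = phi - dt * ((p_new[1:] - p_new[:-1]) / dx
                          + lam / (2.0 * D) * phi * np.abs(phi) / rho_face)
    return rho_new, phi_new
```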
The pressure updates for these junctions are evaluated using conservation of mass and the Kirchhoff-Neumann conditions, giving for boundary node \(\ell\) the update rule \[\sum_{k\in\partial\ell}\frac{\Delta x}{\Delta t}S_{kl}\rho_{\ell}^{n+1}=q_{l}+\sum_{k\in\partial\ell}\frac{\Delta x}{\Delta t}S_{kl}\rho_{kl}^{n}-\sum_{k\in\partial\ell}\text{sgn}_{kl}S_{kl}\phi_{kl}^{m}, \tag{7}\] where \(S_{kl}\) is the cross-section area of the pipe from node \(k\) to node \(l\), and \(\text{sgn}_{kl}\) keeps track of the directionality of the mass flux. \(\rho_{kl},\phi_{kl}\) denote the \(l\)-side boundary values of density and mass flux for the pipe from node \(k\) to node \(l\). After solving for the density at the node, the flux update at the ends of the pipes can proceed using the momentum equation (6). Figure 1: The staggered grid method of [5] uses centered differences on grids offset by \(\Delta x/2\) and \(\Delta t/2\) to achieve a conservative 2nd-order numerical method for the effective gas-flow equations Eqs. (2-3). Figure 2: An example of a 3-pipe junction discretization, with a fictitious pipe setting the nodal flow boundary condition. ### Optimization Formulation: Cost Function In our pursuit to devise a scalable framework that aptly accommodates optimization challenges akin to the archetype presented in Eq. (1), we pivot our attention to a paradigmatic problem: the minimization of an integrated objective spanning time and evaluated under the cloak of uncertainty. This uncertainty, reflected through diverse scenarios \(s\in\mathcal{S}\), pertains to the gas injection/consumption \(q^{(s)}(t):=\{q^{(s)}_{i}(t)\}_{\forall i\in\text{nodes}}\), influenced possibly by variable renewable generation. The time interval \(t\in[0,T]\) typically encapsulates a pre-established performance window, like 24 hours. Our control parameters, symbolized by nodal flows \(q^{(s)}(t)\), permit adjustments within our forthcoming dynamic and stochastic optimization context. By solving Eqs. (2-3) for each scenario, we determine the network's state \(u^{(s)}(t):=\{\rho^{(s)}(x,t),\phi^{(s)}(x,t)\}_{\forall x\in\text{nodes \& pipes}}\). To streamline notation, we represent aggregated degrees of freedom over time and scenarios by \(u(t):=\{u^{(s)}(x,t)\}_{\forall x\in\text{pipes};\forall s}\) and \(q(t):=\{q^{(s)}_{i}(t)\}_{\forall i\in\text{nodes};\forall s}\). Additionally, we define \(u:=\{u(t)\}_{\forall t}\) and \(q:=\{q(t)\}_{\forall t}\) to aggregate over time. Our primary optimization task is delineated as minimization of \[O(u,q)=\sum_{s\in\mathcal{S}}\int_{0}^{T}C^{(s)}(u^{(s)}(t),q^{(s)}(t))\,dt, \tag{8}\] where the specific per-time and per-scenario cost is expanded as: \[C^{(s)}(u^{(s)}(t),q^{(s)}(t))=\alpha\left(D^{(s)}(t)-\sum_{i\in\text{nodes}}G_{i}(q^{(s)}_{i}(t))\right)^{2}\\ +\beta\sum_{i\in\text{nodes}}E^{(s)}_{i}(q^{(s)}_{i}(t))+\gamma\sum_{x\in\text{nodes}}V(p^{(s)}(x,t)), \tag{9}\] constrained by the gas-flow PDEs and associated boundary conditions over the gas-grid network detailed earlier. The first term in Eq. (9) aims to minimize the cumulative mismatch between energy demand \(D^{(s)}(t)\) and the sum of generation at each node \(i\) and at each moment of time \(t\), \(G_{i}(q^{(s)}_{i}(t))\), with \(q^{(s)}_{i}(t)\) representing the nodal flows, which are our control variables (the ones we are optimizing over). \(G_{i}(q^{(s)}_{i}(t))\) is an efficiency function, mapping mass flow (in \(kg/s\))
Here the assumption is that any residual mismatch, if not optimal, can be adjusted by either shedding demand or introducing a generation reserve, at a certain cost. The second term in Eq. (9), \(E_{i}^{(s)}(q_{i}(t))\), stands for the cost of operating power generator run on gas and located at the node \(i\) at the gas withdrawal rate \(q_{i}(t)\). The third term in Eq. (9), \(\sum_{x\in\text{nodes}}V(p^{(s)}(x,t))\), is chosen to be a quasi-quadratic cost (regularized by the relu function) to penalize pressure constraint violations across the network (refer to Fig. 3): with \(p_{\min}(x)\) and \(p_{\max}(x)\) denoting pre-set pressure boundaries at system nodes. The influence of \(C^{(s)}\)'s components can be modulated using the hyperparameters \(\alpha\), \(\beta\), and \(\gamma\). ### Solving PDE Constrained Optimization In this Section, we elucidate our strategy to address Eq. (1). Essentially, two predominant methodologies emerge for tackling the PDE-constrained optimization challenge: 1. **Constraint Matrix Encoding:** This method integrates the PDE into a constraint matrix that grows as discretization becomes finer. A notable merit of this approach is its flexibility in harnessing advanced optimization techniques. On the flip side, the methodology grapples with potential pitfalls such as the emergence of unphysical solutions, non-adherence to constraints during intermediary timeframes, and the curse of dimensionality, manifesting as an exponential surge in complexity with the growth of the problem. 2. **Solving via the Adjoint Method:** This strategy employs Lagrangian multipliers for the PDEs and supplementary constraints, subsequently seeking the stationary point of the augmented Lagrangian concerning control degrees of freedom \(u\), exogenous degrees \(q\), and the associated adjoint variables (Lagrangian multipliers). Detailed discussions on this standard material provided for completeness are available in Appendix 4.1. This method ensures that \(u\) remains physically valid throughout the optimization process. Further, \(u\) converges to a well-defined solution as the grid undergoes refinement (\(\Delta x,\Delta t\to 0\)). The challenge arises in the calculation of gradients through the PDE solver. Moreover, the induced ODE system's dimensionality from the discretized PDEs scales as \(\mathcal{O}(N_{x})\), and due to the Courant-Friedrichs-Lewy (CFL) condition for hyperbolic PDEs[11], the number of necessary timesteps increase as a function of \(N_{x}\); in our instance, \(N_{t}\sim N_{x}\). In the present study, we adopt the second approach. The aforementioned challenges are addressed by adeptly solving the forward problem through an explicit staggered grid method and ascertaining the gradient of the objective with respect to the parameters \(q\) by tackling the adjoint equation, as expounded in Section 4.1. Notably, the deployment of the adjoint method is streamlined and automated with the assistance of an auto-differentiation software package. Figure 3: Quasi-quadratic penalty for violating pressure constraints, with \(40-80\) bar shown here, but configurable on a per-node basis. Results Our eventual goal is full integration of networked transient gas-flows with day-ahead/real-time unit commitment. Thus, we are interested in control of the mass-flows - and via a conversion through efficiency curves, energy generation - at the boundaries or nodes of our network. 
Therefore, the control variables in our optimization are the time-series nodal flows \(\{q_{i}(t)\}_{i\in\text{nodes}}\). Further parameterization (e.g., of compressor settings) is possible but delegated to future work. We first benchmark the methodology on a toy optimization of a single pipe before performing a more realistic optimization on a small network.

### Single Pipe

Before approaching meaningful gas/power-flow optimizations, we test the methodology for performance and convergence on a single pipe, elucidating a few key aspects using the simplicity of the example to benchmark the method. As we only have two nodes, our control parameters are simply \(q=\{q_{1}(t),q_{2}(t)\}_{t\in[0,T]}\). In particular, we solve the toy optimization
\[\min_{q}\int_{t}\int_{x}\left[(1-\alpha)\left\|u(x,t,q)-U(x,t)\right\|_{2}^{2}+\alpha\left\|q_{1}(t)\right\|_{2}^{2}\right]dx\,dt \tag{10}\]
where \(u(x,t,q)\) is the output of the staggered grid method using parameters \(q\), and \(U(x,t)\) is a reference solution. This dual objective function seeks to recreate the pressures and mass flows of the reference system, while being penalized for any mass-flow through node 1. This toy example shows the ability of the optimization to quickly converge to a solution, despite transient dynamics being present in the initial condition to the forward solve. The results are shown in Fig. (4). The top panel shows results for \(\alpha=0\), where, without the nodal flow penalty, the network converges to the parameters used in the reference solution. The middle panel shows results for \(\alpha=1/2\), where the optimization was able to reduce the magnitude of the flow through the left node and modify the flows through the right node to achieve a minimum. Finally, with \(\alpha=1\), shown in the bottom panel, the optimization easily finds the minimum, and the graph shows the (unpenalized) differences in pressures over time in the pipe.

We also use this simple example to benchmark the computational complexity of the method. As shown in Fig. (5), we achieve computational complexity of \(\mathcal{O}(N_{x}\cdot N_{t})\) for the forward and gradient calculations, with \(N_{x}\) the number of spatial discretization points and \(N_{t}\) the number of timesteps. Of particular note is the absence of sensitivity of computation time to the number of parameters \(N_{p}\); this suggests desirable scaling when extending this method to more complex parameterizations.

### Small Network

We now apply the method to optimize the meaningful objective Eq. (9) on the 4-pipe network shown in Fig. (6). We use the artificial demand curve
\[D(t)=200\cdot\sin\left(2\pi\frac{t}{T}\right)+400 \tag{11}\]
with linear gas withdrawal cost, \(E(\psi)=\max(0,\psi)\), where \(\psi\) is positive if gas is being withdrawn from the network, and negative if gas is being injected into the network. Our network has one supply node (node 1), and three power plants (PP1, PP2, PP3), at nodes 2, 3, and 4, respectively. PP1 and PP3 are about 30% more efficient than PP2, and we thus expect the network to use their capacity first. The results of the optimization are shown in Fig. 6, where the top panel shows the energy demand and production, as well as contributions from the individual power plants. The bottom two panels show the pressures throughout the network at the start and end (animations showing the full temporal evolution of the pressure are available online at [https://github.com/cmhyett/DiffGasNetworks](https://github.com/cmhyett/DiffGasNetworks)).
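To make the structure of this objective concrete, the following minimal Python sketch evaluates the per-scenario cost terms of Eq. (9) with the demand curve of Eq. (11). It is illustrative only: the relu-squared form of the pressure penalty, the linear efficiency factors, and the hyperparameter values are our assumptions (the paper specifies a relu-regularized quasi-quadratic penalty and efficiency curves but not their exact parameters), and this is not the authors' Julia implementation.

```python
import numpy as np

def demand(t, T=24 * 3600.0):
    """Artificial demand curve of Eq. (11): D(t) = 200*sin(2*pi*t/T) + 400 (MW)."""
    return 200.0 * np.sin(2.0 * np.pi * t / T) + 400.0

def pressure_penalty(p, p_min=40e5, p_max=80e5):
    """Assumed relu-squared stand-in for the quasi-quadratic penalty V(p) in Eq. (9).

    Bounds correspond to the 40-80 bar window of Fig. 3 (here in Pa)."""
    return np.maximum(0.0, p_min - p) ** 2 + np.maximum(0.0, p - p_max) ** 2

def withdrawal_cost(q):
    """Linear gas withdrawal cost E(q) = max(0, q) used in the small-network example."""
    return np.maximum(0.0, q)

def scenario_cost(t, q_nodes, p_nodes, efficiency, alpha=1.0, beta=0.1, gamma=100.0):
    """Per-time, per-scenario cost C^(s) of Eq. (9).

    q_nodes:    nodal gas withdrawals (kg/s), one entry per node
    p_nodes:    nodal pressures (Pa)
    efficiency: assumed linear gas-to-power factors (MW per kg/s), one per node
    Default hyperparameters follow the ordering gamma >> alpha > beta discussed below.
    """
    generation = np.sum(efficiency * np.maximum(0.0, q_nodes))  # G_i(q_i), assumed linear
    mismatch = (demand(t) - generation) ** 2
    gas_cost = np.sum(withdrawal_cost(q_nodes))
    pressure = np.sum(pressure_penalty(p_nodes))
    return alpha * mismatch + beta * gas_cost + gamma * pressure
```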
In practice, we want first and foremost to ensure the network does not violate pressure constraints (minimum pressure crossings can lead to loss of generators and thus outages), second to ensure generated power meets demand, and third to provide power at the lowest cost. This leads to the ordering of hyperparameters \(\gamma\gg\alpha>\beta\) in Eq. (9). During our optimization, pressure falls, and PP3, being at the end of the network, is most vulnerable to a low pressure crossing; thus PP2, despite having a higher generation cost, supplements the generation during hours 5 and 6.

Figure 4: Results for the optimization in Eq. (10) for a pipe of 200mi over a 24hr forecast. The panels show (top) \(\alpha=0\), (middle) \(\alpha=0.5\), and (bottom) \(\alpha=1\). The left plots show the reference and optimized boundary conditions, while the right plots show the first hours of the pressure evolution along the pipe. The initial condition is a constant pressure of 10bar, with a Gaussian bump at \(x=100m\).

Figure 5: Computational cost of the forward and gradient calculations, scaling as \(\mathcal{O}(N_{x}\cdot N_{t})\) and insensitive to the number of parameters \(N_{p}\).

Figure 6: Results for the optimization of Eq. (9) for a network of 4 pipes, each 70km long, and with varying widths and friction factors. Power plants 1 and 3 are about 30% more efficient than power plant 2. At late times, the optimization prefers the higher cost of running plant 2 to incurring a pressure penalty at plant 3 (node 4).

## 4 Conclusion & Path Forward

The primary technical advancement of this manuscript lies in the harmonization of three distinct components for resolving the stochastic optimal gas flow problem, where stochasticity is incorporated via samples while seeking the stationary point of the respective augmented Lagrangian:

1. **Efficient Gas-Flow Implementation:** Our approach leverages the explicit staggered-grid method for forward-in-time primal equations, streamlining the computational treatment of the gas flows.

2. **Integration with SciML/Julia Ecosystem:** By integrating our forward framework into the SciML/Julia ecosystem, we seamlessly gain access to automatic sensitivity analysis. This, in turn, facilitates the handling of adjoint (backward-in-time) equations automatically.

3. **Simple Gas-Power Coupling:** The inter-dependency between gas and power is accounted for through nodal flows, providing a straightforward formalization of the interdependency of the two energy infrastructures.

We demonstrated that the method achieves optimal computational scaling in both the forward and gradient calculations. The method was applied to solve optimizations on a single pipe system and subsequently on a more intricate four-node system, each containing nontrivial transient dynamics. Should our manuscript be accepted for PSCC, we plan to augment the final version with additional experimental data. Specifically, we aim to test our proposed methodology on a realistic 11-node representation of Israel's gas system. Looking ahead, our future objectives encompass:

* **Enhanced Power Systems Modeling:** We plan to bolster our already sufficiently exhaustive gas-grid network model by integrating a more comprehensive representation of power systems. This enhancement will address aspects like power losses and will allow the optimization to be extended to other resources on the power system side beyond just the gas-power plants.

* **Accounting for Emergencies:** We plan to integrate this research with a related study, which our team has also submitted to PSCC. This partner study delves deeper into emergency scenarios, particularly focusing on more challenging operational conditions.
* **Long-Term Adaptations for Israel's Gas-Grid System:** Our ultimate ambition is to tailor this framework not only for operations but also for operations-aware planning of Israel's gas-grid system. Among other facets, this will involve the evaluation and comparison of potential extensions, like the inclusion of gas storage, compressors, and batteries, and the evaluation of various energy export options.

## Appendices

### Adjoint Method

In order to utilize gradient descent algorithms to optimize Eq. (8), we must compute \(d_{q}O(u,q)\) with
\[O(u,q)=\int_{0}^{T}C(u,q,t)\,dt \tag{12}\]
and \(C:=\text{Cost}\). By construction, we have that \(g(u,\dot{u},q,t)=0\) specifying the differential equation, and \(h(u(0),q)=0\) specifying the initial condition. Thus, we can rephrase this optimization using the Lagrangian formulation
\[\mathcal{L}:=\int_{0}^{T}\left[C(u,q,t)+\lambda^{T}g(u,\dot{u},q,t)\right]dt+\mu^{T}h(u(0),q), \tag{13}\]
where \(\lambda\) and \(\mu\) are the Lagrangian multipliers associated with the dynamics and initial condition, respectively. Notice that \(d_{q}O=d_{q}\mathcal{L}\) since \(g(u,\dot{u},q,t)=h(u(0),q)=0\) everywhere by construction. This equality additionally gives us complete freedom to choose \(\lambda,\mu\) (with \(\lambda\) dependent on time). Then we compute
\[d_{q}\mathcal{L}=\int_{0}^{T}\left[\partial_{u}C\,d_{q}u+\partial_{q}C+\lambda^{T}\left(\partial_{u}g\,d_{q}u+\partial_{\dot{u}}g\,d_{q}\dot{u}+\partial_{q}g\right)\right]dt+\mu^{T}\left(\partial_{u(0)}h\,d_{q}u(0)+\partial_{q}h\right) \tag{14}\]
We can use integration by parts to express \(d_{q}\dot{u}\) in terms of \(d_{q}u\)
\[\int_{0}^{T}\lambda^{T}\partial_{\dot{u}}g\,d_{q}\dot{u}\,dt=\left.\lambda^{T}\partial_{\dot{u}}g\,d_{q}u\right|_{0}^{T}-\int_{0}^{T}\left(\lambda^{T}d_{t}\partial_{\dot{u}}g+\dot{\lambda}^{T}\partial_{\dot{u}}g\right)d_{q}u\,dt \tag{15}\]
Substituting Eq. (15) into Eq. (14) and collecting terms in \(d_{q}u\) and \(d_{q}u(0)\),
\[d_{q}\mathcal{L}=\int_{0}^{T}\Big[\left(\partial_{u}C+\lambda^{T}(\partial_{u}g-d_{t}\partial_{\dot{u}}g)-\dot{\lambda}^{T}\partial_{\dot{u}}g\right)d_{q}u+\partial_{q}C+\lambda^{T}\partial_{q}g\Big]dt+\left.\lambda^{T}\partial_{\dot{u}}g\,d_{q}u\right|_{T}+\left(\mu^{T}\partial_{u(0)}h-\left.\lambda^{T}\partial_{\dot{u}}g\right|_{0}\right)d_{q}u(0)+\mu^{T}\partial_{q}h. \tag{16}\]
We now begin exploiting the freedom in \(\lambda,\mu\) to avoid calculation of \(d_{q}u\). Set
\[\lambda(T)=0\implies\left.\lambda^{T}\partial_{\dot{u}}g\,d_{q}u\right|_{T}=0 \tag{17}\]
\[\mu^{T}=\left.\lambda^{T}\partial_{\dot{u}}g\right|_{0}\left(\partial_{u(0)}h\right)^{-1}\implies\mu^{T}\partial_{u(0)}h-\left.\lambda^{T}\partial_{\dot{u}}g\right|_{0}=0 \tag{18}\]
Then we have
\[d_{q}\mathcal{L}=\int_{0}^{T}\Big[\left(\partial_{u}C+\lambda^{T}\left(\partial_{u}g-d_{t}\partial_{\dot{u}}g\right)-\dot{\lambda}^{T}\partial_{\dot{u}}g\right)d_{q}u+\partial_{q}C+\lambda^{T}\partial_{q}g\Big]dt+\left.\lambda^{T}\partial_{\dot{u}}g\right|_{0}\left(\partial_{u(0)}h\right)^{-1}\partial_{q}h \tag{19}\]
We still have freedom to set \(\lambda(t)\) for \(t\in[0,T)\).
Thus, once again to avoid computing \(d_{q}u\), we solve for \(\lambda\) backward in time from
\[\partial_{u}C+\lambda^{T}\left(\partial_{u}g-d_{t}\partial_{\dot{u}}g\right)-\dot{\lambda}^{T}\partial_{\dot{u}}g=0,\qquad\text{with }\lambda(T)=0. \tag{20}\]
We then have
\[d_{q}\mathcal{L}=d_{q}O=\int_{0}^{T}\left(\partial_{q}C+\lambda^{T}\partial_{q}g\right)dt+\left.\lambda^{T}\partial_{\dot{u}}g\right|_{0}\left(\partial_{u(0)}h\right)^{-1}\partial_{q}h \tag{21}\]
Thus, solving Eq. (20) allows performing the integration in Eq. (21), at which point we have the desired gradient \(d_{q}O\) and can take an optimization step. Notice that we still have functional forms to determine, and these functions depend on the solved state \(u\), e.g., \(\partial_{q}C(u,q,t),\partial_{u}g(u,\dot{u},q,t)\), etc. We use source-to-source AD to determine and evaluate these functional forms.

### Differentiable Programming

Source-to-source differentiation, particularly from Zygote.jl [12], is a transformational capability that allows reverse-mode automatic differentiation (AD) through programming language constructs, enabling optimized adjoint function evaluation without the need to write the derivatives by hand. This freedom ensures correctness and allows for generality in the construction of the forward pass [13]. In order to compute the integral Eq. (21), the adjoint ODE Eq. (20) is solved for \(\lambda(t)\), and the term \(\partial_{q}g\) is found via source-to-source reverse-mode AD. This method of computing the adjoint has computational cost that scales linearly with the forward pass and with the number of parameters [14]. Thus, while other methods of gradient calculation are more efficient for small numbers of parameters, we choose the adjoint using AD in anticipation of extending the optimization problem to large networks with varying configurations.
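To illustrate the gradient-through-the-solver idea in a self-contained way, the sketch below differentiates a toy explicit time-stepping loop with respect to its boundary controls using reverse-mode AD. It is a hypothetical stand-in written with PyTorch, not the authors' Zygote.jl/SciML implementation, and it takes the discretize-then-differentiate route rather than the continuous adjoint of Eqs. (20)-(21); the toy dynamics, parameters, and reference trajectory are ours.

```python
import torch

def forward_rollout(q, u0, dt=0.1, c=0.05):
    """Toy explicit time stepping: u_{n+1} = u_n + dt * (q_n - c * u_n).

    q:  per-timestep boundary control (the optimization variables)
    u0: initial state
    Stands in for the staggered-grid forward solve; the dynamics are illustrative.
    """
    u = u0
    states = []
    for n in range(q.shape[0]):
        u = u + dt * (q[n] - c * u)
        states.append(u)
    return torch.stack(states)

# Reference trajectory to track (analogous to U(x, t) in the single-pipe test).
N_t = 240
target = torch.linspace(10.0, 12.0, N_t)

q = torch.zeros(N_t, requires_grad=True)           # boundary controls
opt = torch.optim.Adam([q], lr=0.05)

for step in range(200):
    opt.zero_grad()
    traj = forward_rollout(q, u0=torch.tensor(10.0))
    objective = torch.mean((traj - target) ** 2)    # time-integrated tracking cost
    objective.backward()                            # reverse-mode AD through the solver
    opt.step()

print(f"final objective: {objective.item():.4e}")
```

The gradient call costs roughly one extra pass through the time loop per iteration, which mirrors the linear-in-the-forward-pass scaling discussed above.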
2302.08505
Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper Learning
Objective: The coordination of human movement directly reflects the function of the central nervous system. Small deficits in movement are often the first sign of an underlying neurological problem. The objective of this research is to develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT), that can track the fastest human movement accurately when webcams or laptop cameras are used. Materials and Methods: We applied RMT to finger tapping, a well-validated test of motor control that is one of the most challenging human motions to track with computer vision due to the small keypoints of digits and the high velocities that are generated. We recorded 160 finger tapping assessments simultaneously with a standard 2D laptop camera (30 frames/sec) and a high-speed wearable sensor-based 3D motion tracking system (250 frames/sec). RMT and a range of DLC models were applied to the video data with tapping frequencies up to 8Hz to extract movement features. Results: The movement features (e.g. speed, rhythm, variance) identified with the new RMT system exhibited very high concurrent validity with the gold-standard measurements (97.3\% of RMT measures were within +/-0.5Hz of the Optotrak measures), and outperformed DLC and other advanced computer vision tools (around 88.2\% of DLC measures were within +/-0.5Hz of the Optotrak measures). RMT also accurately tracked a range of other rapid human movements such as foot tapping, head turning and sit-to-stand movements. Conclusion: With the ubiquity of video technology in smart devices, the RMT method holds potential to transform access and accuracy of human movement assessment.
Renjie Li, Chun Yu Lao, Rebecca St. George, Katherine Lawler, Saurabh Garg, Son N. Tran, Quan Bai, Jane Alty
2023-01-18T22:57:34Z
http://arxiv.org/abs/2302.08505v1
# Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper Learning

###### Abstract

**Objective**: The coordination of human movement directly reflects the function of the central nervous system. Small deficits in movement are often the first sign of an underlying neurological problem. The objective of this research is to develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT), that can track the fastest human movement accurately when webcams or laptop cameras are used. **Materials and Methods**: We applied RMT to finger tapping, a well-validated test of motor control that is one of the most challenging human motions to track with computer vision due to the small keypoints of digits and the high velocities that are generated. We recorded 160 finger tapping assessments simultaneously with a standard 2D laptop camera (30 frames/sec) and a high-speed wearable sensor-based 3D motion tracking system (250 frames/sec). RMT and a range of DLC models were applied to the video data with tapping frequencies up to 8Hz to extract movement features. **Results**: The movement features (e.g. speed, rhythm, variance) identified with the new RMT system exhibited very high concurrent validity with the gold-standard measurements (97.3% of RMT measures were within +/-0.5Hz of the Optotrak measures), and outperformed DLC and other advanced computer vision tools (around 88.2% of DLC measures were within +/-0.5Hz of the Optotrak measures). RMT also accurately tracked a range of other rapid human movements such as foot tapping, head turning and sit-to-stand movements. **Conclusion**: With the ubiquity of video technology in smart devices, the RMT method holds potential to transform access and accuracy of human movement assessment.

## 1 Background and Significance

Assessment of human movement is critical to neuroscience as well as a broad range of other fields including sport science and rehabilitation medicine, as it allows interrogation of brain networks and measurement of the response to interventions [1, 2, 3]. Currently, movement features are evaluated by a clinician or researcher, using validated rating scales that provide ordinal scores for individual components such as speed and amplitude, or composite scores of these combined together [4, 5, 6]. These methods are subjective, expensive (as they require at least one rater, often a medical specialist, per participant) and imprecise, with considerable inter-rater variability [7]. To reduce bias, multiple raters are recommended to rate one participant's movement [7], but this is labor-dependent, time-consuming, and lacks the granularity of a continuous measure. Wearable sensors provide accuracy, but they require specialist equipment and tend to be expensive. Furthermore, they cannot be applied at a population level for epidemiological or health screening purposes, and are largely inaccessible for people in remote areas. No wearable sensor methods have yet been routinely incorporated into standard clinical practice.

Due to the ubiquity of cameras in personal smart devices worldwide, deep learning-based computer vision methods applied to video recordings of human movement are a promising solution to overcome these limitations. However, current computer vision methods require cameras with a high sampling frequency that produce high-resolution videos, and this limits the use of computer vision techniques with standard cameras in laptops or desktop webcams that have relatively low sampling rates.
This is problematic as laptops and desktops are the most common method used in telemedicine [8], which has boomed in use since the COVID-19 pandemic. The Finger Tapping Test (FTT) is a well-validated test of human movement function. It is used clinically and in neuroscience research studies as a measure of fine motor control. Participants are required to tap their index finger against their thumb repeatedly and are usually instructed to tap 'as big and fast as possible'. The FTT is used to assess motor function across neuroscience research and an array of neurological disorders, including the two most common neurodegenerative disorders: Parkinson's disease [9] and Alzheimer's disease [10]. With ageing populations, these two conditions already affect 61 million people worldwide and are predicted to affect 165 million by 2050 [11], so there is a growing need for objective methods to evaluate the FTT.

Our recent work was the first to assess the performance of currently available cutting-edge computer vision models for extracting FTT movement features from relatively low frame per second (fps) web-cameras. This demonstrated that DeepLabCut (DLC) and other similar computer vision models (8 were assessed in total) [8] were able to reliably track finger tapping movements up to 4Hz. However, when FTT frequencies were above 4Hz, motion blur on the videos prevented accurate keypoint detection and rendered DLC and other computer vision methods inaccurate. This means that whilst DLC techniques can track the slower tapping speeds of patients with obvious motor impairment (e.g. in more advanced Parkinson's), they would not be able to measure subtle changes in fast movements that are generated by young adult participants or by those in the pre-motor, or early, stages of Parkinson's and other neurological disorders, where movements are generally above 4Hz. This severely limits the application of computer vision methods in real world settings. Thus, there remains an urgent unmet need for a computer vision method that can accurately measure human hand movements, and other fast human motions, using standard webcam technologies. This would transform our ability to remotely assess humans in their own homes for both clinical and research purposes. With the global COVID-19 pandemic, this need is even more pressing.

To overcome these shortcomings of DLC and other advanced computer vision techniques, we have developed a new system, called Rapid-Motion-Track (RMT), that can extract accurate features of fast human movements from standard (relatively low) 30fps laptop cameras. We designed and conducted a number of experiments whereby participants completed the FTT at a range of metronome- and self-paced frequencies. Tapping movements were simultaneously recorded with a standard laptop camera and high-speed 3D wearable sensors (Optotrak, 250 fps). Features determined through RMT applied to the video recordings were validated against the gold-standard wearable sensor method and compared to DLC and other deep learning methods. Our main contribution is that the new RMT system can extract valid and accurate fast human movement features using relatively low fps cameras, outperforming current DLC and other computer vision models. Sub-contributions are:

* We show RMT outperforms DLC and other state-of-the-art computer vision methods in two challenging cases: 1) tapping frequencies from 0.5Hz to 6Hz and 2) videos in low resolution (\(256\times 256\)) and low frame rates (30 fps).
* We demonstrate that RMT has robust tracking across a range of human movements used in motor control assessments, including sit-to-stand, head turning, foot tapping and leg agility tests.

## 2 Materials and Methods

### Data Collection

The data collection process has been described in detail previously [8]. In brief, participants sat facing a laptop camera with an Optotrak high-speed 3D camera behind them. The Optotrak system (Northern Digital Inc.) sensors (infrared Light Emitting Diodes) were attached to each participant's index fingertip and thumb-tip (both on the back of the hand). Sixteen participants performed the FTT for 20 seconds under 5 conditions (0.5 Hz, 1Hz, 2Hz, 3Hz and maximal speed (which was typically in the range 5-8 Hz)). The laptop camera recorded 2D videos and the Optotrak system recorded the real-life positions of the two sensors. Further details can be found in our previous work [8]. Each video contains around 600 frames (\(20\times 30\)), and 20 frames were extracted from each video based on the K-means clustering algorithm (K=10) for manually labelling the index fingertip and thumb-tip positions. A total of 4,400 frames (with finger digit positions labelled) were used for evaluating the digit tracking result. The hand movement features calculated from the Optotrak sensors were used as the feature ground truths for evaluating the feature extraction result.

### RapidMotionTrack System

RapidMotionTrack takes 2D FTT videos as input and outputs hand kinematic features. This system consists of three modules, i.e. Fingertip Tracking, Adaptive Vertex Recognition and Feature Extraction, shown in Figure 1. The Fingertip Tracking module adapts our previous work P-MSDSNet [12] to track thumb-tip and index fingertip positions on the 2D finger tapping videos. The role of the Adaptive Vertex Recognition module is to smooth the distance-versus-time curve adaptively, based on the dominant tapping frequency of a participant, to accurately localize the time points of tapping peaks and tapping valleys. The Feature Extraction module is responsible for extracting different hand movement features that would be useful for neuroscience research.

#### 2.2.1 Fingertip Tracking Module

The Fingertip Tracking module takes 2D finger tapping videos as input and outputs a distance-versus-time curve graph. The distance is measured between the thumb-tip and the index fingertip. We applied and adjusted our previous work P-MSDSNet [12] to track thumb-tip and index fingertip positions on each frame of the video. P-MSDSNet is a multi-scale neural network that learns rich features across high- and low-resolution feature maps in a cyclical, cascaded pattern. Due to P-MSDSNet's deep supervision-based spatial attention mechanism on different scale levels of the input images, the network is able to extract discriminable features to effectively predict the locations of the thumb-tip and the index fingertip. In the Fingertip Tracking module, we employed a P-MSDSNet with a stack of neural network blocks, each consisting of 5 different scaling operators. The multi-scale features are fused together in an up-down manner and are propagated to deeper levels under a deep supervision mechanism to increase the prediction accuracy. For the FTT, some frames may have motion blur areas due to the very quick finger tapping.
To mitigate the impact of motion blur on fingertip detection, we improve the final prediction stage network by taking into account the learned feature maps from different scales at the last neural network block in the stack (Figure 2).

Figure 1: An illustration of the RapidMotionTrack system with three modules, including the Fingertip Tracking Module, Adaptive Vertex Recognition Module and Feature Extraction Module.

In the network forward propagation process, the original frame is transformed into 5 (\(S=5\)) different scale feature maps (from \(X^{(1,0)}\) to \(X^{(5,0)}\)) through convolutional operators. Then, along each scale, feature maps propagate information forward in parallel while fusing information through down-up scale connection blocks between adjacent scales. For example, the \(X^{(1,1)}\) feature map not only propagates forward along its own scale path but is also downscaled and fused with the \(X^{(2,1)}\) feature map. The \(X^{(2,1)}\) feature map then propagates forward after fusing the downscaled information from \(X^{(1,1)}\). In this way, information spreads not only along its own scale path but also across different scale paths in a downscale-upscale fusing mechanism. We apply 3 downscale-upscale blocks (\(M=3\)) and add a deep supervision module at the end of each block [12]. In the final prediction stage, we fuse the final feature maps from the smallest scale to the largest scale step by step using convolution and concatenation operators. Finally, the predicted heatmap \(\hat{Y}\) is inferred at the largest scale (the same scale as the original frame).

In the network training process, twenty frames were selected from each of the 220 videos by the K-means (\(K=10\)) clustering algorithm [13] for manually annotating the positions of the thumb-tip and index fingertip. Then, 4,400 annotated frames (\(20\text{ frames}\times 220\text{ videos}\)) were randomly split into a training dataset (95%) and a testing dataset (5%). We trained different deep learning models on the training dataset and compared their performance on the testing dataset. The annotated fingertip positions on each frame are transformed into an \(H\times W\times 2\) heatmap, where \(H\) and \(W\) refer to the height and width of the frame and 2 refers to the number of fingertips (in this case, thumb-tip and index fingertip). The heatmap for each fingertip is generated by a Gaussian function with \(\boldsymbol{\mu}=\begin{bmatrix}x\\ y\end{bmatrix}\) and \(\boldsymbol{\Sigma}=\begin{bmatrix}3&3\\ 3&3\end{bmatrix}\), where \((x,y)\) is the position of the fingertip. For training, the Mean Square Error (MSE) between the true fingertip position heatmap and the predicted heatmap is used as the loss function, and the Adam algorithm [14] is used as the optimizer. The trained model is then applied to each frame of each video to track the positions of the thumb-tip and index fingertip. After obtaining the positions of the thumb-tip and index fingertip on each frame, we calculate the Euclidean distance between them (in pixels) on each frame and draw the distance-versus-time curve graph.

Figure 2: An illustration of the adjusted P-MSDSNet used in RapidMotionTrack. RapidMotionTrack applies 3 up-downscale blocks (\(M=3\)) and 5 different scale sizes (\(S=5\)). It also adds a final prediction stage to fuse the outputs from different scale feature maps together to make the final prediction.
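As a minimal sketch of the heatmap-target construction and MSE loss described above (not the RMT training code): the frame size, keypoint coordinates and the isotropic Gaussian spread below are illustrative assumptions standing in for the covariance given in the text.

```python
import numpy as np

def gaussian_heatmap(h, w, keypoints, sigma=3.0):
    """Build an H x W x K target heatmap with one Gaussian peak per keypoint.

    keypoints: list of (x, y) pixel coordinates, e.g. [thumb_tip, index_tip].
    An isotropic sigma (pixels) is used here as a simple stand-in.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    maps = []
    for (x, y) in keypoints:
        d2 = (xs - x) ** 2 + (ys - y) ** 2
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps, axis=-1)

def mse_loss(pred, target):
    """Mean squared error between predicted and target heatmaps."""
    return np.mean((pred - target) ** 2)

# Example: a 256 x 256 frame with annotated thumb-tip and index-fingertip positions.
target = gaussian_heatmap(256, 256, [(120, 140), (135, 90)])
pred = np.zeros_like(target)            # placeholder for the network output
print(target.shape, mse_loss(pred, target))
```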
#### 2.2.2 Adaptive Vertex Recognition Module

Since the predicted fingertip position covers only a one-pixel area of the frame, even when the fingertip tracking performance is good, it is inevitable that the distance vs time curve will not be smooth (this can also be caused by slight tremor when the participant performs the finger tapping). An unsmooth curve increases the difficulty of identifying peaks and troughs, which further leads to inaccurate feature extraction. To solve this issue, we developed an Adaptive Vertex Recognition algorithm to identify peaks and troughs along the distance-versus-time curve adaptively by filtering out rough areas. Firstly, we use the distance difference of adjacent frames \(\Delta S\) as a criterion to remove the fluctuation of the signal:
\[\Delta S^{\prime}=\begin{cases}0&\text{when }\Delta S<\gamma_{flatness}R\\ \Delta S&\text{when }\Delta S\geq\gamma_{flatness}R\end{cases} \tag{1}\]
where \(S\) is the distance signal after time-averaged mean removal, \(S^{\prime}\) is the curve after fluctuation removal, \(\Delta\) represents the difference between adjacent frames of the specified signal, \(\gamma_{flatness}\) is the threshold (set as 0.1 in the present study), and \(R\) is the range of the signal of interest. The distance signal before (\(\Delta S\)) and after fluctuation removal (\(\Delta S^{\prime}\)) are plotted side-by-side in Figure 3 (left). Secondly, we reconstruct the signal based on \(\Delta S^{\prime}\) and \(S\). The former provides the sharp changes between sections (where \(\Delta S^{\prime}=0\), i.e., finger open or close), while the latter guides the magnitude of the reconstructed signal. Moreover, the existence of peaks and troughs can be determined by calculating the moving average value \(\mu\) over the signal. Peaks are determined when the local section is above \(\mu_{i}\); in contrast, troughs are recognised when the local section is below it.
\[\mu_{i}=\frac{1}{n}\sum_{j=i-n/2}^{i+n/2}S^{\prime}_{j}\qquad\text{with }n=\gamma_{window}\times N \tag{2}\]
where \(\gamma_{window}\) is the relative window size (set as 0.1 in the present study) and \(N\) is the total number of frames in the signal. Please note that the first and last platforms are omitted, since they often correspond to a meaningless waiting period. Figure 3 (center) shows the original signal \(S\), the reconstructed signal \(S^{\prime}\), and the moving mean \(\mu\). The features of every signal, such as the time between each tap and the open duration of the fingers, vary significantly. Thus, the cases of short and long opening both need to be considered and processed separately to recognise the vertexes reliably throughout all videos; the following formulas are applied:
\[t_{vertex}=\begin{cases}t_{localvertex}&\text{when }l_{section}\leq\gamma_{platform}N\\ t_{median}&\text{when }l_{section}>\gamma_{platform}N\end{cases} \tag{3}\]
where \(t_{localvertex}\) is the moment of the local maximum or minimum of the section, \(t_{median}\) is the central time of the local section, \(l_{section}\) is the duration of the local section, and \(\gamma_{platform}\) is a constant (set as 0.01 in the present study).

Figure 3: Adaptive vertex recognition process.
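For illustration, the fluctuation-removal rule of Eq. (1) and the moving-average test of Eq. (2) could be sketched as follows; the boxcar average, the use of the absolute frame-to-frame change, and the synthetic signal are our simplifications rather than the exact RMT implementation.

```python
import numpy as np

def remove_fluctuations(S, flatness=0.1):
    """Eq. (1): zero out frame-to-frame changes smaller than flatness * range(S).

    The absolute change is thresholded here, which is our interpretation.
    """
    dS = np.diff(S)
    R = S.max() - S.min()
    return np.where(np.abs(dS) < flatness * R, 0.0, dS)

def moving_mean(S, window=0.1):
    """Eq. (2): boxcar moving average with window size = window * len(S) frames."""
    n = max(1, int(window * len(S)))
    kernel = np.ones(n) / n
    return np.convolve(S, kernel, mode="same")

# Synthetic noisy tapping-like signal: sections above the moving mean are
# candidate peak sections; sections below are candidate trough sections.
S = np.sin(np.linspace(0, 10 * np.pi, 600)) + 0.05 * np.random.randn(600)
dS_clean = remove_fluctuations(S)
mu = moving_mean(S)
is_peak_section = S > mu
```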
Once the times of the local vertexes \(t_{vertex}\) are located, the local height \(S_{vertex}\) can be found as below:
\[S_{vertex}=\begin{cases}S_{localsection}&\text{when }\Delta S^{\prime}(t_{vertex})=0\\ S(t_{vertex})&\text{when }\Delta S^{\prime}(t_{vertex})>0\end{cases} \tag{4}\]
where \(S_{localsection}\) is the maximum or minimum of the local section. The sample signal, processed signal, and recognised vertexes are demonstrated in Figure 3 (right). The curve of recognised vertexes (\(t_{vertex}\), \(S_{vertex}\)) is ready for the next step: feature extraction, described in the following section.

#### 2.2.3 Feature Extraction

The Feature Extraction module takes the reconstructed distance-versus-time data (distance between thumb-tip and index fingertip) as input and outputs different hand kinematic features. Based on key neuroscience measures, we summarized speed-, amplitude- and rhythm-related features of the FTT and their calculations. These features are known to deteriorate in Parkinson's and other neurological disorders. Speed-related features include Mean Tapping Frequency (M-TF), Total Tapping Count (TTC), Maximum Speed (MS), Mean Inter Tap Interval (M-ITI) and Decrements on Speed (DoS). Amplitude-related features include Coefficient of Variance of Amplitude (COV-A) and Decrements on Amplitude (DoA). Rhythm-related features include Coefficient of Variance of Tapping Frequency (COV-TF) and Intra Individual Variability (IIV). Detailed validation results on all 9 features are presented in the supplementary document.
\[\text{M-TF}=\frac{1}{K_{p}-1}\sum_{k=2}^{K_{p}}\frac{1}{t_{(k)}-t_{(k-1)}} \tag{5}\]
\[\text{TTC}=\min(K_{p},K_{v}) \tag{6}\]
\[\text{MS}=1\Big/\min_{k=2,3,...,K_{p}}\left(t_{(k)}-t_{(k-1)}\right) \tag{7}\]
\[\text{M-ITI}=\frac{1}{K_{v}-1}\sum_{k=2}^{K_{v}}(t_{(k)}-t_{(k-1)}) \tag{8}\]
\[\text{DoS}=\frac{1}{K_{p}-1}\ln\left(\frac{1}{t_{(2)}-t_{(1)}}\Big/\frac{1}{t_{(K_{p})}-t_{(K_{p}-1)}}\right) \tag{9}\]
\[\text{COV-A}=\sqrt{\frac{1}{K_{p}}\sum_{k=1}^{K_{p}}\left(a_{(k)}-\bar{a}\right)^{2}}\Big/\,\bar{a},\qquad\bar{a}=\frac{1}{K_{p}}\sum_{k=1}^{K_{p}}a_{(k)} \tag{10}\]
\[\text{DoA}=\frac{1}{K_{p}}\ln\frac{a_{(1)}}{a_{(K_{p})}} \tag{11}\]
\[\text{COV-TF}=\sqrt{\frac{\sum_{k=2}^{K_{p}}\left(\frac{1}{t_{(k)}-t_{(k-1)}}-\text{M-TF}\right)^{2}}{K_{p}-1}}\Big/\,\text{M-TF} \tag{12}\]
\[\text{IIV}=\sqrt{\frac{1}{K_{p}-1}\sum_{k=2}^{K_{p}}\left[(t_{(k)}-t_{(k-1)})-\text{M-ITI}\right]^{2}} \tag{13}\]
where \(K_{p}\) refers to the number of peaks, \(K_{v}\) refers to the number of valleys, \(t_{(k)}\) refers to the time point of the \(k^{th}\) peak (or valley) and \(a_{(k)}\) refers to the normalized amplitude.

### Evaluation Method

We compare the performance of RMT with other state-of-the-art methods, including DLC, in precisely tracking the thumb-tip and index fingertip during the FTT. The Percentage of Correct Keypoints (PCK) measures the percentage of correctly localized thumb-tip and index fingertip detections. A correctly localized keypoint is confirmed if the distance (in pixels) between the predicted position and the true position is within a pre-set threshold (see equation 14). The Mean of Per Joint Position Error (MPJPE) measures the average distance between the predicted and true fingertip positions (see equation 15). Figure 4 shows the PCK of the different methods and Table 1 shows their MPJPE. RMT achieves the best performance on both metrics.
\[\text{PCK}_{@T}=\frac{1}{N\times J}\sum_{j=1}^{J}\sum_{n=1}^{N}\left(||P_{n}^{(j)}-Y_{n}^{(j)}||_{2}<T\right) \tag{14}\]
\[\text{MPJPE}=\frac{1}{N\times J}\sum_{j=1}^{J}\sum_{n=1}^{N}||P_{n}^{(j)}-Y_{n}^{(j)}||_{2} \tag{15}\]
where \(N\) is the number of frames and \(J\) is the number of fingertips to be detected (in this case, \(J=2\)), \(P_{n}^{(j)}\) is the predicted position of the \(j^{th}\) fingertip on the \(n^{th}\) frame, \(Y_{n}^{(j)}\) is the true position of the \(j^{th}\) fingertip on the \(n^{th}\) frame and \(T\) is the pre-set threshold.

## 3 Results

In this section, we show that RMT achieves accurate digit tracking from 2D videos and accurate movement feature extraction.

### Digit Tracking Results

Digit tracking by RMT is more accurate than other available state-of-the-art methods. This is reflected in a higher Percentage of Correct Keypoints (PCK) and a lower Mean of Per Joint Position Error (MPJPE) on the testing dataset, shown in Figure 4 and Table 1. The calculations of PCK and MPJPE are described in the Online Method section. The distance between the detected index fingertip and thumb-tip positions, normalized to maximum aperture, at each frame over time is plotted in Figure 5. A visual comparison to the wearable sensor method demonstrates that, when applied to the unlabelled videos, RMT provides good tracking performance in the 'maximal speed' condition (approximately 6Hz).

\begin{table} \begin{tabular}{l c c c} \hline **Methods** & **Thumb-tip** & **Index Fingertip** & **Average** \\ \hline \hline **RapidMotionTrack** & **1.10** & **1.32** & **1.21** \\ DLC-ResNet50 [9] & - & - & 3.00 \\ DLC-MobileNet [9] & - & - & 2.30 \\ HigherHRNet [15] & 1.47 & 1.71 & 1.59 \\ Hourglass [16] & 1.37 & 1.82 & 1.60 \\ \hline \end{tabular} \end{table} Table 1: Different computer vision models’ Mean of Per Joint Position Error (in pixels) on the FTT testing dataset

Figure 4: Different models’ PCK on the FTT testing dataset (the DLC software does not report PCK). This shows that RMT has the highest percentage of detected correct keypoints at different thresholds when compared to the wearable sensor data (ground truth).

Figure 5: Normalized sample distance vs time graphs calculated from computer vision methods in one individual performing the FTT. The blue graphs show the results from each computer vision method compared to the wearable sensor gold standard method, Optotrak (black graph). The X-axis shows the time (frame by frame, with 300 frames for computer vision methods and 2,500 frames for the Optotrak method) and the Y-axis shows normalized distance (measured in pixels for computer vision methods and in mm for the Optotrak method). Each row denotes a different FTT condition, where Row 1 is the self-paced maximal speed condition and Rows 2, 3, 4 and 5 are the 3Hz, 2Hz, 1Hz and 0.5Hz metronome-paced conditions respectively.

### Feature Extraction Results

The RMT system, HigherHRNet and Hourglass models can extract valid hand movement features even when participants tap in a high frequency range (\(>\)4Hz). In comparison, hand movement speed, amplitude, and rhythm related features calculated from the DLC system's digit tracking results are invalid when participants tap more quickly than 4Hz. These are reflected in the comparison between the different computer vision methods and the Optotrak in terms of Bland-Altman plots, X-Y plots (Figure 6) and statistical tests (Table 2). Details of the movement features are explained in the Online Method section.
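To make the feature definitions of Eqs. (5)-(13) concrete, the sketch below computes a few of the speed and rhythm features from a sequence of recognised peak times; it is a minimal illustration with made-up example data, not the RMT code.

```python
import numpy as np

def tapping_features(peak_times):
    """Compute speed and rhythm features from recognised peak times (seconds).

    Implements M-TF (Eq. 5), MS (Eq. 7), COV-TF (Eq. 12) and IIV (Eq. 13) for the
    peak sequence; amplitude features would use the per-tap amplitudes similarly.
    """
    iti = np.diff(peak_times)          # inter-tap intervals between consecutive peaks
    freqs = 1.0 / iti                  # instantaneous tapping frequencies
    m_tf = freqs.mean()                # Mean Tapping Frequency
    ms = freqs.max()                   # Maximum Speed
    cov_tf = freqs.std() / m_tf        # Coefficient of Variance of Tapping Frequency
    iiv = iti.std()                    # Intra Individual Variability
    return {"M-TF": m_tf, "MS": ms, "COV-TF": cov_tf, "IIV": iiv}

# Example: peaks detected at roughly 3 Hz tapping.
print(tapping_features(np.array([0.00, 0.34, 0.67, 1.01, 1.33, 1.68])))
```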
### Generalisability of RMT to other fast human movements

RMT was also able to accurately track a range of other human movements when applied to webcam videos of head turning, toe tapping [17], leg agility, and sit-to-stand movements [18, 19], as shown in Figure 7. The tracking videos are provided in the supplementary materials.

## 4 Discussion and Conclusion

### Precise Digit Tracking from 2D Laptop Camera Videos

Digit tracking by RMT is more accurate than available state-of-the-art methods. When participants tap in a fast range (e.g. higher than 4Hz), DLC cannot accurately and reliably track the digits [8], which further leads to an incorrect digit distance vs time graph. Figure 5 shows an individual participant's distance (between index fingertip and thumb-tip) vs time graphs under the 5 finger tapping conditions, calculated from the different computer vision methods and the Optotrak method. In the maximal speed condition, motion blur frames were very common. Compared with the other keypoint-detection deep learning networks, the distance vs time graph obtained from the RMT system is more stable and closer to the gold-standard Optotrak method. This is due to P-MSDSNet fusing multi-scale features from both the same and different depth levels in a parallel manner, which gradually refines the features and the detected fingertip positions layer by layer and scale by scale, even on a blurred frame. Accurate digit tracking lays the foundation for extracting hand movement features.

### Feature Extraction Discussion

Despite the accurate digit tracking results, an unsmooth distance vs time curve may still affect the accuracy of the final feature extraction. To overcome this issue, the proposed Adaptive Vertex Recognition algorithm helps the RMT system correctly identify the peaks and troughs of a particular distance vs time graph without being affected by noise or unsmooth parts.

\begin{table} \begin{tabular}{l l l l l} \hline **Condition** & **Computer Vision Method** & **Speed** & **Amplitude** & **Rhythm** \\ \hline \multirow{3}{*}{Maximal speed (\(<\)4Hz)} & RMT & accept & **accept** & accept \\ & DLC-ResNet50 & accept & reject & accept \\ & DLC-MobileNet & accept & reject & accept \\ \hline \multirow{3}{*}{Maximal speed (\(>\)4Hz)} & RMT & **accept** & **accept** & accept \\ & DLC-ResNet50 & reject & reject & accept \\ & DLC-MobileNet & reject & reject & accept \\ \hline \multirow{3}{*}{3Hz} & RMT & accept & accept & accept \\ & DLC-ResNet50 & accept & accept & accept \\ & DLC-MobileNet & accept & accept & accept \\ \hline \multirow{3}{*}{2Hz} & RMT & accept & accept & accept \\ & DLC-ResNet50 & accept & accept & accept \\ & DLC-MobileNet & accept & accept & accept \\ \hline \multirow{3}{*}{1Hz} & RMT & accept & accept & accept \\ & DLC-ResNet50 & accept & accept & accept \\ & DLC-MobileNet & accept & accept & accept \\ \hline \multirow{3}{*}{0.5Hz} & RMT & accept & accept & accept \\ & DLC-ResNet50 & accept & accept & accept \\ \hline \end{tabular} \end{table} Table 2: Welch’s t test (at the 0.05 significance level) results of different computer vision methods compared with the Optotrak measures on speed, amplitude and rhythm. Results from two methods have no significant difference when \(P>0.05\) (accept null hypothesis), and have a significant difference when \(P<0.05\) (reject null hypothesis).
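The accept/reject entries of Table 2 correspond to an unequal-variance (Welch's) t-test against the Optotrak measures; a sketch of such a comparison with SciPy is shown below, using placeholder per-participant values rather than the study's data.

```python
import numpy as np
from scipy import stats

def compare_to_optotrak(cv_values, optotrak_values, alpha=0.05):
    """Welch's t-test between a computer-vision feature and the Optotrak ground truth.

    Returns 'accept' if the null hypothesis of equal means is not rejected at the
    given significance level, matching the accept/reject entries of Table 2.
    """
    t_stat, p_value = stats.ttest_ind(cv_values, optotrak_values, equal_var=False)
    return ("accept" if p_value > alpha else "reject"), p_value

# Placeholder data: per-participant mean tapping frequency (Hz) from RMT vs Optotrak.
rmt_mtf = np.array([5.8, 6.1, 5.5, 6.3, 5.9, 6.0])
opto_mtf = np.array([5.9, 6.0, 5.6, 6.2, 5.8, 6.1])
print(compare_to_optotrak(rmt_mtf, opto_mtf))
```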
### Strengths, Limitations and Future Direction

A particular strength of RMT is that it takes original finger tapping videos as input and, through the Fingertip Tracking Module and Adaptive Vertex Recognition Module, outputs accurate and valid hand movement features for both assessment and research. This is an additional output compared with DLC, which only tracks digits (RMT not only tracks digits but also produces valid movement feature data). In addition, both the digit tracking and the feature extraction achieve the best results among all currently available state-of-the-art computer vision deep learning models, and the system can serve not only FTT analysis but also other movement tests, including the sit-to-stand movement test [18, 19] and the foot-tapping test [17], from recorded 2D videos. We showed a comparison of RMT with DLC and the Optotrak system in Table 3; this shows that the features extracted using RMT are equivalent to wearable sensor methods but, in addition, have the potential to be obtained remotely (e.g., for telemedicine) and at the population level (due to the wide reach of webcams). Unlike the Optotrak system and DLC, whose output is fingertip positions, the RMT system can directly output validated finger tapping features.

Currently, RMT is offline, but future work will develop it into an online system. Additionally, we will extend RMT so that it can be used for general human movement feature extraction. First, we will build a model zoo consisting of different models for different movements, and keep updating the zoo. Second, we will make RMT a Windows-based system that can be used more efficiently for neuroscience. RMT will also be used in the TAS Test project [20], which aims to detect the earliest stages of Alzheimer's disease using hand movement analysis.